Development and Validation of a Test for Competence in Evidence-Based Medicine. Academic Article

Overview

abstract

  • BACKGROUND: Medical educators need valid, reliable, and efficient tools to assess evidence-based medicine (EBM) knowledge and skills. Available EBM assessment tools either do not assess skills or are laborious to grade. OBJECTIVE: To validate a multiple-choice-based EBM test, the Resident EBM Skills Evaluation Tool (RESET). DESIGN: Cross-sectional study. PARTICIPANTS: A total of 304 medicine residents from five training programs and 33 EBM experts comprised the validation cohort. MAIN MEASURES: Internal reliability, item difficulty, and item discrimination were assessed. Construct validity was assessed by comparing mean total scores of trainees to those of experts. Experts were also asked to rate the importance of each test item to assess content validity. KEY RESULTS: Experts had higher total scores than trainees (35.6 vs. 29.4, P < 0.001) and also scored significantly higher than residents on 11 of 18 items. Cronbach's alpha was 0.6 (acceptable), and no items had a low item-total correlation. Item difficulty ranged from 7% to 86%. All items were deemed "important" by > 50% of experts. CONCLUSIONS: The proposed EBM assessment tool is a reliable and valid instrument for assessing competence in EBM. It is easy to administer and grade and could be used to guide and assess interventions in EBM education.
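As a point of reference for the reliability statistics reported in the abstract, the following is a minimal sketch of how Cronbach's alpha, item difficulty, and the corrected item-total correlation are conventionally computed from an examinees-by-items score matrix. The function names, matrix shape, and simulated data are illustrative assumptions only; this is not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = scores.shape[1]                          # number of items (18 in RESET)
    item_var = scores.var(axis=0, ddof=1)        # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinee totals
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

def item_difficulty(scores: np.ndarray) -> np.ndarray:
    """Mean score per item; for 0/1 items this is the proportion correct
    (the abstract reports a 7% to 86% range)."""
    return scores.mean(axis=0)

def item_total_correlation(scores: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation: each item against the total of the
    remaining items, so the item is not correlated with itself."""
    totals = scores.sum(axis=1, keepdims=True)
    rest = totals - scores                       # total excluding item j, per column
    return np.array([np.corrcoef(scores[:, j], rest[:, j])[0, 1]
                     for j in range(scores.shape[1])])

# Illustrative use with simulated 0/1 responses (304 examinees, 18 items);
# the cohort sizes mirror the study, the data do not.
rng = np.random.default_rng(0)
scores = (rng.random((304, 18)) < 0.6).astype(float)
print(cronbach_alpha(scores))
print(item_difficulty(scores))
print(item_total_correlation(scores))
```

A "low item-total correlation" flags an item that does not track overall performance; the abstract's finding that no RESET item fell in that range is what supports retaining all 18 items.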

publication date

  • December 17, 2019

Research

keywords

  • Clinical Competence
  • Educational Measurement

Identity

PubMed Central ID

  • PMC7210361

Scopus Document Identifier

  • 85076877410

Digital Object Identifier (DOI)

  • 10.1007/s11606-019-05595-2

PubMed ID

  • 31848856

Additional Document Info

volume

  • 35

issue

  • 5