Abstract
Background: Anatomy is a cornerstone of medical education, and assessing anatomy competency is essential. Assessment tools should be objective, reliable, and valid, and should test higher cognitive skills. To judge learners' knowledge and skills correctly, an assessment must be valid and must target the appropriate levels of cognition. Multiple-choice questions (MCQs) allow a large portion of the curriculum to be assessed in a short time with relatively little effort from the student, although constructing high-quality MCQs demands considerable time and effort from the examiner. Properly constructed MCQs assess higher-order cognitive processes of Bloom's taxonomy, such as interpretation, synthesis, and application of knowledge, rather than merely testing recall of isolated facts.
Methods: One hundred MCQs from internal examination anatomy papers were analyzed for difficulty index, discrimination index (DI), and distractor efficiency (DE).
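For reference, these indices follow the conventional item-analysis definitions; the formulas below are a sketch of the standard forms, and the exact cut-offs applied in this study may differ.

Difficulty index (percentage of students answering the item correctly):
$P = \dfrac{H + L}{N} \times 100$, where $H$ and $L$ are the numbers of correct responses in the high- and low-scoring groups and $N$ is the total number of students in the two groups.

Discrimination index:
$\mathrm{DI} = \dfrac{2(H - L)}{N}$.

Distractor efficiency is determined from the number of non-functional distractors, i.e., options selected by fewer than 5% of examinees; for a four-option item, DE is 100%, 66.6%, 33.3%, or 0% when 0, 1, 2, or 3 distractors are non-functional, respectively.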
Results: In the present study, of the 100 MCQs, 19 were difficult, 60 were acceptable, and 21 were easy. Twenty-five items did not discriminate between high and low achievers, 37 showed acceptable discrimination, and 38 were highly discriminating. There was a low positive correlation between the difficulty index and the discrimination index in tests two and three, and a negligible correlation in tests one, four, and five.
Conclusion: Item analysis of MCQs performed after the assessment provides insight into the reliability and validity of test items. In addition, it indicates how difficult or easy the questions were.