Keywords
difficulty index, discrimination index, distractor efficiency, mathematics grade examination
Document Type
Article
Abstract
The multiple-choice test is a common test format in education. One of its purposes is to evaluate the success of the learning process in a particular subject, so the effectiveness of the evaluation depends on the quality of the test items used. This research was conducted to reveal the statistical quality of the final mathematics examination items. It was descriptive quantitative research employing the two-parameter logistic (2PL) model of Item Response Theory (IRT). The data were obtained from a sample of 353 students selected using the purposive sampling technique. The findings show that of the 35 items tested, 40% are very difficult, 60% are of medium difficulty, and none is easy. The most difficult material is trigonometric calculation. The item discrimination indices are distributed as follows: 8.57% of the items are categorized as very low, 51.43% as low, 31.43% as medium, 5.71% as high, and 2.86% as very high. Moreover, the research found that all distractors functioned well. The test information function reaches its maximum at ability θ = 0.4, with an information value of 5.38 and SEM = 0.6. The test is therefore most suitable for students with abilities in the range -1.42 < θ < 2.65.
Page Range
70-78
Issue
1
Volume
4
Digital Object Identifier (DOI)
10.21831/reid.v4i1.20202
Source
https://journal.uny.ac.id/index.php/reid/article/view/20202
Recommended Citation
Kusumawati, M., & Hadi, S. (2018). An analysis of multiple choice questions (MCQs): Item and test statistics from mathematics assessments in senior high school. REID (Research and Evaluation in Education), 4(1), 70-78. https://doi.org/10.21831/reid.v4i1.20202