

L. Reddy

University of Johannesburg (SOUTH AFRICA)
Designing examination questions that are fair, reliable and of an equitable level of complexity, as prescribed by Bloom's taxonomy, is a challenge for most test writers or examiners. In this respect, test writers need to control variables that may influence the complexity of the questions. Previous National Senior Certificate (NSC) examinations in South Africa revealed that an undesirable number of questions were pitched at higher levels of complexity on Bloom's taxonomy, in particular in the 2015 examinations, since the inception of the Curriculum and Assessment Policy Statement (CAPS) by the Department of Basic Education in South Africa. A number of factors could be contributing to the complexity of the examinations. The purpose of this paper is to trial the original questions with 30 first-year university students who are engaged in physics studies and who recently completed their grade 12 examinations. Five questions taken from the NSC examinations, which in our opinion needed remediation, were given to the students. These questions, out of a total of 10, had the effect of increasing the overall difficulty of the examinations. Furthermore, it was noted that students performed appallingly badly on these questions in their grade 12 examinations. From the students' performances in this research, we then tried to identify the sources of difficulty (SOD) in these examination questions so that remedial measures could be undertaken. Once the SOD were identified, we took the opportunity to manipulate the original questions to make them clearer and more understandable without necessarily losing the level of difficulty intended by the examiners.

The manipulated questions were then re-trialled with the same students to see whether the subtle changes we made to these questions had any effect on their performance. The theoretical framework used in this study is the one developed by Ferrara et al. (2011). This framework is ideal, as it was used to determine whether the factors identified in this assessment, such as the language and cognitive demand of the questions, had an impact on students' performance. The results revealed that the manipulated questions led to an improvement in test performance. This research has been useful in the sense that it could inform test writers of the pitfalls to avoid when designing examination questions, so that in future students may have a better chance of passing.

[1] Ferrara, S., Svetina, D., Skucha, S., & Davidson, A. H. (2011). Test development with performance standards and achievement growth in mind. Educational Measurement: Issues and Practice, 30(4), 3-15.