ASSESSMENT OF EVALUATION CRITERIA USING MULTIVARIATE STATISTICS
Universidad Politécnica de Valencia (SPAIN)
About this paper:
Appears in: ICERI2010 Proceedings
Publication year: 2010
Conference name: 3rd International Conference of Education, Research and Innovation
Dates: 15-17 November, 2010
Location: Madrid, Spain
Abstract: The assessment of academic performance at universities has traditionally been conducted by means of a final exam, but this scenario is changing. Nowadays, continuous assessment is applied in many centres, as it is a valuable and reliable tool that provides varied and plentiful information. As a result, university students are asked to complete different kinds of assignments or partial exams throughout the year, which the lecturer assesses in order to obtain partial scores that contribute to the final mark. Thus, when evaluating academic performance, the lecturer may have a set of partial marks available for each student that, properly weighted, lead to the final mark. Nevertheless, most lecturers do not assess the suitability of the proposed exam or continuous evaluation programme in light of the complete set of partial results achieved by the students. In these cases, any analytic tool that provides complementary information about the quality of the evaluation instruments would be of interest.
Principal component analysis (PCA) is a useful technique for explaining the variability of a data matrix, as well as for interpreting the relationships among observations and variables. This paper describes the application of PCA to a real data set of academic marks and analyzes whether the evaluation procedure used to assess students’ performance was suitable, detecting possible weak points that might have been improved.
The data set was obtained from the partial marks of 79 students in the subject ‘Statistics’ of the Civil Engineering degree at the Universidad Politécnica de Valencia in the academic year 2002-2003. In this case, only a final exam was set, consisting of 6 questions with different sections; as a result, 17 marks were available for each student. PCA was applied to this matrix using the program SIMCA-P (www.umetrics.com) after centering the data and scaling to unit variance. The cross-validation criterion was used to determine how many principal components account for the relevant information in the data set.
PC1 and PC2 explain 32.9% and 12.4% of the total data variance, respectively, although PC2 falls slightly below the significance threshold according to the cross-validation criterion. Nevertheless, considering both components, it is possible to draw, in the PC1-PC2 score plot, a borderline between students who passed the subject and those who failed. The variables with the highest loadings on PC1 correspond to the final mark, followed by questions 4b, 6, 4c, and 1c, respectively. The analysis of the loading plot allows the detection of questions whose contribution to the final mark should be revised. The results also reveal possible trends relating students who passed or failed the exam to their performance in the different questions. The distance to the model was also checked in order to identify students with a slightly atypical performance. The proposed methodology can be used as a complementary tool to assess the suitability of a proposed exam. Generally speaking, the results illustrate that applying efficient statistical tools to academic data sets can provide complementary information about the quality of the evaluation criteria used by the lecturer.
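The analysis described above was carried out in SIMCA-P, but the same pipeline (column-wise centering, scaling to unit variance, then PCA, with scores for the students and loadings for the exam questions) can be sketched in a few lines of NumPy. The sketch below uses randomly generated marks as a stand-in for the real 79 × 17 matrix, which is not published in this paper; the variable names and the simulated data are illustrative assumptions, not the authors’ code.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the real 79 x 17 matrix of partial marks
# (79 students, 17 question/section marks); random data for illustration.
X = rng.uniform(0, 10, size=(79, 17))

# Center each column and scale to unit variance, as done in the paper.
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# PCA via singular value decomposition of the preprocessed matrix.
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = U * S                    # student coordinates (score plot axes)
loadings = Vt.T                   # variable weights (loading plot axes)
explained = S**2 / np.sum(S**2)   # fraction of variance per component

print(f"PC1 explains {explained[0]:.1%} of the total variance")
```

Plotting the first two columns of `scores` against each other gives the PC1-PC2 score plot used to separate passing from failing students, and the first two columns of `loadings` give the corresponding loading plot; the component-selection step (cross-validation in SIMCA-P) is not reproduced here.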
Keywords: Evaluation criteria, multivariate statistics, principal component analysis.