
IS PEER ASSESSMENT RELIABLE OR VALID IN A WEB-BASED PORTFOLIO ENVIRONMENT FOR HIGH SCHOOL STUDENTS IN A COMPUTER COURSE?

C.C. Chang, Y.H. Chen

National Taiwan Normal University (TAIWAN)
This study examined the reliability and validity of Web-based portfolio peer assessment. The participants were 79 students in a computer course at a senior high school; 72 of them (34 males and 38 females) completed portfolios suitable for statistical analysis. Some participants had previous experience creating paper portfolios, but none had used Web-based portfolios or portfolio assessment. Peer assessment was carried out anonymously so that raters did not know whose portfolios they were scoring. The study lasted 12 weeks, with 3 hours of class each week.
The participants created, inspected, and peer-assessed portfolios via the Web-based portfolio assessment system. The two-unit computer course covered “Animation Creation” and “Website Creation”, addressing design skills and web page creation abilities. Because students in this digital setting were already required to save and showcase their work electronically, the course was well suited to Web-based portfolio assessment. To obtain more accurate data and avoid the Hawthorne and John Henry effects, the participants were not told that they were taking part in an experiment.
The results indicated that: 1) ratings of the same portfolio were not consistent across different student raters, i.e. inter-rater reliability was lacking; 2) two-thirds of the raters were inconsistent when assessing different portfolios, i.e. intra-rater reliability was lacking; 3) peer-assessment scores did not agree with teacher-assessment scores (criterion-related validity); 4) significant differences were found between peer-assessment scores and end-of-course examination scores, implying that Web-based portfolio peer assessment failed to reflect learning achievement (criterion-related validity). In short, Web-based portfolio peer assessment was neither a reliable nor a valid method in this setting.
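
For readers unfamiliar with how such reliability and validity checks are typically quantified, the following is a minimal illustrative sketch using hypothetical score data; it is not the study's data, and the abstract does not specify the authors' actual statistical procedures. It shows one common approach: pairwise rater correlations for inter-rater consistency, a peer-teacher score correlation for criterion-related validity, and a paired comparison of peer scores against examination scores.

```python
# Illustrative sketch only: hypothetical scores, not the study's data or methods.
import numpy as np
from scipy import stats

# Hypothetical peer-assessment matrix: rows = portfolios, columns = peer raters.
rng = np.random.default_rng(0)
peer_scores = rng.integers(60, 100, size=(10, 4)).astype(float)

# Inter-rater consistency: average pairwise Spearman correlation between raters.
rho_matrix, _ = stats.spearmanr(peer_scores)  # rater-by-rater correlation matrix
upper = rho_matrix[np.triu_indices_from(rho_matrix, k=1)]
print("mean pairwise rater correlation:", upper.mean())

# Criterion-related validity: correlation of mean peer scores with teacher scores.
teacher_scores = rng.integers(60, 100, size=10).astype(float)  # hypothetical
r, p = stats.pearsonr(peer_scores.mean(axis=1), teacher_scores)
print(f"peer vs. teacher: r = {r:.2f}, p = {p:.3f}")

# Agreement with achievement: paired t-test of mean peer scores vs. exam scores.
exam_scores = rng.integers(60, 100, size=10).astype(float)  # hypothetical
t, p = stats.ttest_rel(peer_scores.mean(axis=1), exam_scores)
print(f"peer vs. exam: t = {t:.2f}, p = {p:.3f}")
```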