

P. DeCarlo, N. Rizk, Z. Mughal

University of Houston (UNITED STATES)
This paper discusses the overall design and effectiveness of an expert system in use at colleges and universities. Expert systems are a type of software designed to resolve uncertainty in domains that typically require a human expert. Instructors usually decide which classroom materials, assignments, and teaching methods to adopt for their courses. However, they may find it difficult to determine whether their choices are effective or work well together. Our work aims to produce an artificially intelligent evaluation system that finds inferences in student survey data using concepts from machine learning and data mining, and reports those findings in a human-readable format to aid decision makers.

The system we propose works dynamically, taking class-specific data at exam intervals and processing it to produce a course-specific evaluation. We achieve this by obtaining metrics on assignments, study materials, textbooks, student perceptions, and course quality via a custom-developed online survey system that generates itself from instructor input. After the survey is administered, students' grades are uploaded and appended to our dataset. Next, we apply Agrawal's Apriori algorithm to the data to produce association rules with a support of 25% and a confidence of 90% or higher. These rules are then processed through a Prolog system to generate inference rules. Finally, we transform this inference data into a format that is easy for a human user to interpret, and the results are delivered. Findings appear in a format similar to "Students who completed assignment 1 and studied primarily using the textbook received a B or above" and "Students who do not own the textbook received a C or above". From this example, an instructor could be guided toward ensuring that students purchase and use the textbook in preparing for exams, as doing so appears to create a difference of one to two letter grades.
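To illustrate the rule-mining step, the following is a minimal sketch of Apriori-style association rule mining with the thresholds named above (25% support, 90% confidence). The survey items, dataset, and function names are hypothetical, chosen only for illustration; the actual system's data schema and implementation may differ.

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.25, min_confidence=0.90):
    """Mine association rules from a list of item sets (Apriori-style).

    Brute-force candidate generation; adequate for small survey datasets.
    Returns (antecedent, consequent, support, confidence) tuples.
    """
    n = len(transactions)
    support = {}                      # frozenset -> fraction of transactions
    items = sorted({i for t in transactions for i in t})
    current, k, frequent = [frozenset([i]) for i in items], 1, []
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = [c for c, cnt in counts.items() if cnt / n >= min_support]
        for c in level:
            support[c] = counts[c] / n
        frequent.extend(level)
        # Join frequent k-itemsets to form (k+1)-itemset candidates.
        current = list({a | b for a in level for b in level if len(a | b) == k + 1})
        k += 1
    rules = []
    for itemset in frequent:
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                # Antecedents of frequent itemsets are frequent (downward closure),
                # so their support is already recorded.
                conf = support[itemset] / support[lhs]
                if conf >= min_confidence:
                    rules.append((set(lhs), set(itemset - lhs),
                                  support[itemset], conf))
    return rules

# Hypothetical survey records: one set of attributes per student.
surveys = [
    {"did_hw1", "used_textbook", "grade_B_plus"},
    {"did_hw1", "used_textbook", "grade_B_plus"},
    {"did_hw1", "grade_B_plus"},
    {"no_textbook", "grade_C"},
]

rules = apriori_rules(surveys)
for lhs, rhs, sup, conf in rules:
    print(f"{sorted(lhs)} => {sorted(rhs)}  (support={sup:.2f}, confidence={conf:.2f})")
```

Each surviving rule (e.g. `{'used_textbook'} => {'grade_B_plus'}`) corresponds to one of the human-readable findings described above once the item names are mapped back to survey questions and grade bands.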

Instructors from several institutions have used our software and reported their experience to us to help guide further development. Our findings give us confidence that a system of this type can benefit educators and students in decision making. For example, by being informed of positive and negative relations that arise from our class measures, instructors can consider adopting a new textbook, using a new classroom technology, or giving more of one type of assignment over another. Students can use this information to decide which course materials lead to higher exam performance and to find optimal ways to study. This work falls under the emerging field of educational data mining, as it provides new information to educators and students by examining learning environments, teaching methods, and student factors to formulate insights into the overall curriculum. Our platform is web-based and can thus be accessed and used remotely by instructors worldwide. The goal is to provide instructors with an easy-to-administer, easy-to-understand evaluation system that can be deployed anywhere at any time.