University of Granada (SPAIN)
About this paper:
Appears in: ICERI2017 Proceedings
Publication year: 2017
Pages: 3058-3064
ISBN: 978-84-697-6957-7
ISSN: 2340-1095
doi: 10.21125/iceri.2017.0855
Conference name: 10th annual International Conference of Education, Research and Innovation
Dates: 16-18 November, 2017
Location: Seville, Spain
Statistical inference is considered a fundamental tool in the development of other sciences, since it makes it possible to answer problems in biology, medicine, psychology and economics. Batanero, Díaz and López-Martín (2017) summarize the approaches to statistical testing used in statistics today (the frequentist methodologies of Fisher and of Neyman-Pearson, the Bayesian approach, and informal inference) and reveal the conceptual and procedural differences between them. In this work, we are interested in the procedures (defined by Godino, Batanero and Font, 2008, as the algorithms or strategies that allow operating with the data to solve the problem or generalize it) needed to solve hypothesis-testing problems. In Fisher's methodology it is necessary first to state the null hypothesis (H_0) and then to compute the probability of obtaining the observed value, or a more extreme one, under the assumption that the null hypothesis is true. In Neyman and Pearson's methodology, we state the null hypothesis, calculate the value of the statistic for the sample, define the critical region, and then reject the null hypothesis if and only if the statistic falls in the critical region. The maximum-likelihood approach, in turn, requires defining the likelihood function and evaluating the likelihood of the observed result under the null and alternative hypotheses; the ratio of the two likelihoods is then calculated in order to determine the critical region of the likelihood-ratio test. Finally, another procedure is to build a confidence interval from the sample statistic and check whether the value stated in the null hypothesis lies inside the interval.
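The four procedures above can be sketched on a typical UAT-style exercise: a two-sided test of a normal mean with known standard deviation. The data (H_0: mu = 100, sigma = 15, n = 36, sample mean 106) are hypothetical and not taken from the paper; the sketch only illustrates how the procedures differ computationally.

```python
from math import sqrt
from statistics import NormalDist

std_normal = NormalDist()                      # N(0, 1)

# Hypothetical problem data (not from the paper)
mu0, sigma, n, xbar = 100.0, 15.0, 36, 106.0   # H0: mu = mu0, known sigma
alpha = 0.05
se = sigma / sqrt(n)                           # standard error of the mean
z = (xbar - mu0) / se                          # standardized test statistic

# 1) Fisher: compute the p-value, i.e. the probability under H0 of obtaining
#    the observed value or a more extreme one (two-sided here).
p_value = 2 * (1 - std_normal.cdf(abs(z)))

# 2) Neyman-Pearson: fix alpha in advance, define the critical region
#    |z| > z_crit, and reject H0 iff the statistic falls inside it.
z_crit = std_normal.inv_cdf(1 - alpha / 2)
reject_h0 = abs(z) > z_crit

# 3) Likelihood ratio: compare the likelihood of the observed mean under H0
#    with its likelihood under the alternative value that maximizes it
#    (mu = xbar); small ratios lead to rejection. For this model the ratio
#    equals exp(-z**2 / 2).
lik_h0 = NormalDist(mu0, se).pdf(xbar)
lik_h1 = NormalDist(xbar, se).pdf(xbar)
lik_ratio = lik_h0 / lik_h1

# 4) Confidence interval: build a (1 - alpha) interval for mu from the
#    sample statistic and check whether mu0 lies inside it.
ci = (xbar - z_crit * se, xbar + z_crit * se)
contains_mu0 = ci[0] <= mu0 <= ci[1]

print(f"z = {z:.3f}, p-value = {p_value:.4f}")
print(f"Neyman-Pearson: reject H0 = {reject_h0}")
print(f"Likelihood ratio = {lik_ratio:.4f}")
print(f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), contains mu0 = {contains_mu0}")
```

On these numbers all four routes agree (z = 2.4, p-value below alpha, the statistic in the critical region, a small likelihood ratio, and a confidence interval that excludes mu0), but the objects each procedure manipulates (p-value, critical region, likelihood ratio, interval) are different, which is precisely the procedural distinction the paper examines.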

In recent decades, research in statistics education has shown special interest in studying how students learn, in order to improve the teaching and learning process. Without a doubt, teachers and prospective teachers play a fundamental role in this process. For this reason, this work focuses on analysing prospective mathematics teachers' knowledge of statistical test problems. Specifically, we analyse which types of procedures prospective teachers use most and how these correspond to the curricular guidelines and textbooks. The University Access Test (UAT) has a strong influence on the second year of secondary high school; for this reason, prospective teachers were given a statistical test problem with characteristics similar to those included in the UAT.

The initial results show greater use of Fisher's methodology (70%) than of Neyman and Pearson's methodology (21%). The remaining prospective teachers used other procedures (maximum likelihood and confidence interval, 6% each). However, the analysis of the participants' responses shows that, although Fisher's methodology was the most used, only 7% of prospective teachers correctly described and calculated the p-value. Furthermore, the participants who used maximum likelihood solved the problem correctly.
Keywords: Statistical test problem, Fisher's methodology, Neyman and Pearson's methodology, p-value.