Categorizing Students' Response Patterns using the Concept of Fractal Dimension

InProceedings

We show how students’ response patterns can be quantified both globally and locally using the concept of fractal dimension. This metric allows us to identify students who respond to a series of questions and problems in a persistent or anti-persistent manner with implications for personalized just-in-time teaching and learning.

"1. INTRODUCTION. The motivation behind the present study lies in our observation that students’ responses considered as a time series exhibit random walk- or Brownian motion-like characteristics [1]. This observation naturally led to the question of the quantification of such behaviors. Since fundamentally, a random walk-like behavior exhibits irregularities or fluctuations about the expectations, a concept that attempts to look at such irregularities was deemed necessary. One such concept is the fractal dimension [2]. In this paper we show how students’ response patterns can be categorized using the concept of fractal dimension, thereby identifying students who do not exhibit persistent response patterns, and hence most likely are struggling with a concept domain. 2. THE DATA AND THE METHOD. The response data analyzed in this study originated from a class of about 250 students using MasteringChemistry (www.masteringchemistry.com) in an introductory chemistry class at a large public university in the United States. MasteringChemistry is an online Socratic homework tutor which allows instructors to assign homework for their students, which are then automatically graded by the platform.1 The homework problems of tutorial nature in the Mastering system provides students with automated feedback, followup comments, and the opportunity to request declarative and procedural hints at impasses. The correct (graded 1) or incorrect (graded 0) first attempt responses by students were tracked at the part level (e.g., part A, part B, etc.) of a given online homework question or problem (we will collectively call these items from now on). The “first attempt” was defined as a correct or an incorrect response by a given student to a given part of an item without requesting any hints beforehand. Such interactions were then tracked throughout the semester resulting in about 550 first attempt interactions per student on average, which is the starting point of the retrospective data mining and analysis task that we describe in this paper. From the first attempt responses described above we derived the “net-score”, which is the difference between the number of correct and the number of incorrect first attempt responses at any given instance. The net-score can thus be considered as the displacement from the origin for a one dimensional walker with a step to the right considered as a correct response and a step to the left considered as an incorrect response. If the walker is random walking, then the fluctuations in the displacements against the number of steps, and hence the fluctuations in the net-score against the number of first attempt interactions – the response pattern – would be rough or irregular. 2.1 Response Patterns & Fractal Dimension. How can we quantify the differences in the response patterns? The concept of fractal dimension can be used to quantify the degree of regularity or roughness of a student’s response pattern, which is the variation of the net-score against the number of first attempt interactions (which we will call the net-score space) [3]. Thus, we use fractal dimension as a measure of the roughness of a curve rather than its degree of self-similarity. A student having a perfect netscore (i.e., all correct first attempt responses) would show a straight line in the net-score space, and hence would have a fractal dimension of 1 (this is equally applicable to a student who has all incorrect first attempt responses). 
2.1 Response Patterns & Fractal Dimension

How can we quantify the differences in the response patterns? The concept of fractal dimension can be used to quantify the degree of regularity or roughness of a student's response pattern, which is the variation of the net-score against the number of first attempt interactions (which we will call the net-score space) [3]. Thus, we use fractal dimension as a measure of the roughness of a curve rather than of its degree of self-similarity. A student having a perfect net-score (i.e., all correct first attempt responses) would show a straight line in the net-score space, and hence would have a fractal dimension of 1 (this is equally applicable to a student who has all incorrect first attempt responses). In contrast, a student who is randomly responding would show a very irregular pattern, which would ideally cover the two-dimensional net-score space, and hence would have a fractal dimension of 2. Thus, the fractal dimension values range from 1 to 2, with lower values corresponding to regular, and higher values corresponding to irregular, response patterns. Simulation studies that we have conducted show that it is reasonable to categorize a student as random walking when their fractal dimension reaches a value of 1.8 or above (typical errors are of the order of 0.1).

2.2 Global & Local Estimates

The fractal dimension characterization of a student's response pattern can be obtained either globally or locally. The former means that we characterize the response pattern of the entire semester; the latter means that we characterize subsets of interactions within the semester. We have found that 16% of the students in the class under consideration were random walking globally; that is, these students were responding in a random fashion throughout the semester. The local characterization was done by choosing a first attempt interaction window of length 33 (which roughly corresponds to about 10 items, the typical number of items within an assignment) and shifting this window towards higher interaction values whenever 4 new interactions (which roughly corresponds to the number of first attempt interactions within an item) become available. The time series of a student, with the local fractal dimension estimates and the net-score (i.e., the response pattern) superimposed on the same graph, is shown in Figure 1.

Figure 1: The changes in the fractal dimension and the net-score (blue) along the response pattern for a student having a global fractal dimension of 1.74. Only the first attempt interactions up to mid-semester are shown. The scale (window of length 33) is also shown.

The particular student shown has a response pattern that can be globally quantified as having a fractal dimension of 1.74, and hence this student cannot be considered as random walking overall throughout the semester. However, the response pattern locally shows interesting features, the most prominent being the onset (around the 150th interaction) of an increasing trend in the fractal dimension, and hence an increasing trend in the irregularity of the response pattern. This shows that the student is struggling around this time. It is worth noting that this difficulty begins as the student encounters a new concept (stoichiometry), with signs of random response behavior (a fractal dimension of 1.8) appearing only 5 items later. The concept areas where students act as random walkers can then be clearly distinguished for just-in-time teaching and learning.
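The paper does not state which estimator is used to compute the fractal dimension of the net-score curve (reference [3] concerns such estimators). As a purely illustrative sketch, the code below uses Higuchi's method, one standard roughness estimator for time series, combined with the window-of-33, stride-of-4 scheme described in Section 2.2. The function names, the choice of estimator, and k_max are our assumptions, and the values this estimator returns are not guaranteed to sit on the same scale as the authors', so the 1.8 cutoff would need to be recalibrated against simulations for whichever estimator is used.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Estimate the fractal dimension of a 1-D series with Higuchi's method.

    Values near 1 indicate a smooth, line-like series; larger values
    indicate an increasingly irregular, rough series.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)            # sub-sampled indices m, m+k, m+2k, ...
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()            # curve length at scale k
            norm = (n - 1) / ((len(idx) - 1) * k)           # correct for uneven coverage
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    # L(k) ~ k^(-D): the slope of log L(k) versus log k gives -D
    slope, _ = np.polyfit(np.log(ks), np.log(lengths), 1)
    return -slope

def local_fd(net_scores, window=33, stride=4, k_max=8):
    """Sliding-window (local) estimates: a window of 33 first attempt
    interactions advanced whenever 4 new interactions become available."""
    out = []
    for start in range(0, len(net_scores) - window + 1, stride):
        out.append(higuchi_fd(net_scores[start:start + window], k_max))
    return np.array(out)
```

A global estimate is obtained by applying the estimator to a student's full net-score series, and the local profile (the curve superimposed in Figure 1) by scanning local_fd over the semester and flagging windows whose estimate crosses the calibrated random-walking threshold.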
3. CONCLUSIONS

We have shown that students' response patterns can be quantified using the concept of fractal dimension in a net-score space, either locally or globally. We are then able to identify instances where students are in effect responding randomly to a set of items, imitating a random walk in one dimension.

It can be asked whether the net-score or a traditional score alone would be sufficient to identify students who are struggling. In this context we note that a typical net-score can be reached in many ways, leading to different response patterns and hence different fractal dimensions. Similarly, a traditional (say, percentage) score provides only a point estimate and also raises the issue of what score would indicate mastery. The fractal dimension of a response pattern encodes how a student has achieved that score, which provides finer-level information than a single score. (The fractal dimension alone would not suffice to identify a struggling student when that student has a consistent but decreasing net-score; the combination of the two would accomplish this task.)

The concept and method we have described and demonstrated above can thus be used as an alert system to identify students who are at risk and struggling within a given concept domain. The method is easily scalable since it relies only on students' response patterns, and hence a specific student learning model or instructional model is not needed. Since the method is fundamentally reliant on the responses, it is important that careless errors and lucky guesses are accounted for. Given the complex, non-multiple-choice nature of the items with which the students interacted in this study, the likelihood of lucky guesses can be considered negligible. Although the careless error rate may not be as negligible, the correlations that we have investigated with the end-of-term examination show that we are not tapping into noise and that the responses we have considered are valid. In the future we hope to understand how to correct for careless errors and lucky guesses, and the effect of factoring in second responses on the fractal dimension of a student's response pattern.

4. ACKNOWLEDGEMENTS

We thank Prof. Randall Hall and Prof. Leslie Butler of Louisiana State University, USA, for permission to use their class's data (while preserving its anonymity), and Dr. David Kokorowski for providing us the opportunity to work on this research. We also thank Dr. John Behrens for encouraging us to submit this short paper to the EDM 2012 conference, and Dr. John Stamper for accommodating it.
