Identification of student learning behaviors, especially those that characterize or distinguish students, can yield important insights for the design of adaptation and feedback mechanisms in Intelligent Tutoring Systems (ITS). In this paper, we analyze trace data to identify distinguishing patterns of behavior in a study of 51 college students learning about a complex science topic with an agent-based ITS that fosters self-regulated learning (SRL). Preliminary analysis with an Expectation-Maximization clustering algorithm revealed the existence of three distinct groups of students, distinguished by their test and quiz scores (low for the first group, medium for the second group, and high for the third group), their learning gains (low, medium, high), the frequency of their note-taking (rare, frequent, rare) and note-checking (rare, rare, frequent), the proportion of sub-goals attempted (low, low, high), and the time spent reading (high, high, low). In this paper, we extend this analysis to identify characteristic learning behaviors and strategies that distinguish these three groups of students. We employ a differential sequence mining technique to identify differentially frequent activity patterns between the student groups and interpret these patterns in terms of relevant learning behaviors. The results of this analysis reveal that high-performing students tend to be better at quickly identifying the relevance of a page to their subgoal, are more methodical in their exploration of the pedagogical content, rely on system prompts to take notes and summarize, and are more strategic in their preparation for the post-test (e.g., using the end of their session to briefly review pages). These results provide a first step toward identifying the group to which a student belongs during the learning session, thus making real-time adaptation of the system possible.
"1. INTRODUCTION. Use of metacognition and self-regulated processes has been identiï¬ed as a key element for successful learning in general [2; 19; 20; 22]. In the particular context of an intelligent tutoring system (ITS), it means it is crucial to ensure that students are actively using key self-regulated learning (SRL) processes, which can be achieved through prompts, scaffolding, and feedback. A major challenge is to make the ITS more adaptive to individual learning characteristics, such as browsing behavior and initiative in performing appropriate SRL processes. Using MetaTutor, an agent-based ITS that fosters the use of SRL processes, we have collected a large amount of data from students interacting with the system while they were learning about the human circulatory system. In this paper, our goal is to answer two questions: (1) how can students be grouped according to their performance and their type of interaction with the system? and (2) how do speciï¬c learning behaviors of high- and low-performing students differ, in particular regarding their use of SRL processes in MetaTutor? In this paper, we propose to answer the ï¬rst question using a clustering approach that groups students with similar performance and scores on other system interaction metrics. For the second question, we analyze members of the three clusters (especially comparing high- and low-performing students) with a differential sequence mining method [11], which identiï¬es statistically signiï¬cant differences in frequent behaviors between clusters. This paper is organized as follows. In section 2, we start by discussing related work that combines clustering and pattern mining techniques for analysis of data from computerbased learning environments. In section 3, we introduce the ITS used for data collection, MetaTutor, as well as theoretical grounding of its key features, which encourage learners to perform self-regulation monitoring and strategy as they learn with the system. Section 4 describes the data collected and the relevant events encoded as actions, as well as the clustering performed to distinguish different types of students. Section 5 presents the principles of the method of differential sequence mining, its application to the data, and the results obtained in terms of patterns of actions that distinguish students from different clusters. Section 6 then discusses the practical implications of those ï¬ndings in terms of potential modiï¬cations to the ITS, before concluding in section 7. 2. RELATED WORK. Analysis of trace log data from users’ interactions to better understand their learning process and distinguish groups of learners (e.g., efficient versus inefficient ones) has been an important area of research in educational data mining. For example, Perera et al. [15] follow a 2-step methodology like ours, as they start by using a clustering algorithm (k-means) to identify strong groups of students collaborating in a software development task using an open environment (TRAC). The students are ï¬rst clustered according to a set of attributes extracted a posteriori, and then they use a modiï¬ed version of the Generalized Sequential Pattern mining algorithm [17] to identify frequent sequences of actions that characterize the most successful groups. In [16], Romero et al. 
also use a combination of clustering and sequential pattern mining to identify different kinds of browsing behavior that students exhibit in their learning environment, "AHA!", in order to provide them with links to the most appropriate pages. With gStudy, Nesbit et al. [14] are interested in the use of self-regulation by students learning from multimedia documents. They apply sequential pattern mining to find common subsequences between groups of students, although they do not perform any clustering beforehand. Martinez et al. [13] pursue a similar approach and objective, as they aim to discover frequent sequences of actions that distinguish a group of students with high achievement from one with low achievement. They use a combination of pattern mining and clustering techniques to identify the most successful strategies in the context of a collaborative learning tool on a tabletop device. However, they first extract frequent patterns of actions and then cluster them in order to examine clusters of patterns associated with each group. Tang and McCalla [18] also use sequence mining followed by clustering in their web learning environment, to facilitate instructional planning and diagnose students' behaviors.

3. METATUTOR ENVIRONMENT.

3.1 General overview.

MetaTutor is a multi-agent, adaptive hypermedia learning environment which presents challenging human biology science content. The primary goal underlying this environment is to investigate how a multi-agent system can adaptively scaffold SRL and metacognition within the context of learning about complex biological content. MetaTutor is grounded in a theory of SRL that views learning as an active, constructive process whereby learners set goals for their learning and then attempt to monitor, regulate, and control their cognitive and metacognitive processes in the service of those goals [6; 3; 2]. More specifically, MetaTutor is based on several theoretical assumptions of SRL that emphasize the role of cognitive, metacognitive (where metacognition is conceptualized as being subsumed under SRL), motivational, and affective processes [19; 22]. Moreover, learners must regulate their cognitive and metacognitive processes in order to integrate the multiple informational representations available from the system. While all students have the potential to regulate, few do so effectively, possibly due to inefficient use, or outright lack, of cognitive or metacognitive strategies, knowledge, or control.

As a learning tool, MetaTutor has a multitude of features that embody and foster self-regulated learning (cf. Figure 1). These include four pedagogical agents which guide students through the learning session and prompt them to engage in planning, monitoring, and strategic learning behaviors. In addition, the agents can provide feedback and engage in a tutorial dialogue in an attempt to scaffold students' selection of appropriate sub-goals, the accuracy of their metacognitive judgments, and their use of particular learning strategies. The system also uses natural language processing to allow learners to express metacognitive monitoring and control processes. For example, learners can type that they do not understand a paragraph, and can also use the interface to summarize a static illustration related to the circulatory system. Additionally, MetaTutor collects information from user interactions to provide adaptive feedback on the deployment of students' SRL behaviors.
For example, students can be prompted to self-assess their understanding (i.e., a system-initiated judgment of learning [JOL]) and are then administered a brief quiz. Results from the self-assessment and quiz allow pedagogical agents to provide adaptive feedback according to the calibration between students' confidence in their comprehension and their actual quiz performance. During learning, MetaTutor is capable of measuring the deployment of self-regulatory processes by allowing us to collect rich, multi-stream data, including: self-report measures of SRL; on-line measures of cognitive and metacognitive processes (e.g., concurrent think-alouds); dialogue moves regarding agent-student interactions; natural language processing of help-seeking behavior; physiological measures of motivation and emotions; emerging patterns of effective problem-solving behaviors and strategies; facial data on both basic (e.g., anger) and learning-centered emotions (e.g., boredom); and eye-tracking data regarding the selection, organization, and integration of multiple representations of information (e.g., text, diagrams). The collection of these various data streams is critical to enhancing our understanding of when, how, and why students regulate or do not regulate their learning and adapt their regulatory behaviors. These data are then used to develop computational models designed to detect, track, model, and foster students' SRL processes during learning.

3.2 Self-Regulated Learning with MetaTutor.

This paper is theoretically guided by contemporary models of SRL that emphasize the temporal deployment of these processes during learning [6]. As such, the goal is to use multiple measures to detect, track, and model learners' use of cognitive, affective, and metacognitive (CAM) processes during learning. We use Winne and Hadwin's model [20; 21] because it proposes that learning occurs in four basic phases: (1) task definition, (2) goal-setting and planning, (3) studying tactics, and (4) adaptations to metacognition. Their model emphasizes the role of metacognitive monitoring and control as the central aspects of learners' ability to learn complex material across different instructional contexts (e.g., using a multi-agent system to track and foster SRL), in that information is processed and analyzed within each phase of the model. Recently, Azevedo and colleagues [7; 6; 4; 5; 2] extended this model and provided extensive evidence regarding the role and function of several dozen CAM processes during learning with student-centered learning environments (e.g., multimedia, hypermedia, simulations, intelligent tutoring systems).
In brief, our model makes the following assumptions: (1) successful learning involves having learners monitor and control (regulate) key CAM processes during learning; (2) SRL is context-specific, and therefore successful learning may require a learner to increase or decrease the use of certain key SRL processes at different points in time during learning; (3) a learner's ability to monitor and control both internal (e.g., prior knowledge) and external factors (e.g., changing dynamics of the learning environment; relative utility of an agent's prompt) is crucial for successful learning; (4) a learner's ability to make adaptive, real-time adjustments to internal and external conditions, based on accurate judgments of their use of CAM processes, is fundamental to successful learning; and (5) certain CAM processes (e.g., interest, self-efficacy, task value) are necessary to motivate a learner to engage and deploy appropriate CAM processes during learning and problem solving.

Figure 1: Annotated screenshot of MetaTutor (A: time remaining in the session, B: table of contents, C: current subgoals and progression, D: embodied pedagogical agent, E: palette of monitoring and strategy actions).

This model is best suited for this project since it deals specifically with the person-in-context perspective and postulates that CAM processes occur during learning with a multi-agent system, which is useful in examining when and how learners regulate their learning about the human circulatory system. As such, the macro-level processes used in this paper are reading, metacognitive monitoring, and learning strategies. Reading behavior is critical since it is the most important activity related to acquiring, comprehending, and using content knowledge related to the science topic. During reading, learners need to monitor and regulate several key processes, such as: (1) selecting relevant content (i.e., text and diagrams) based on their current sub-goal; (2) spending appropriate amounts of time on each page, depending on its relevance to their current sub-goal; (3) deciding when to switch to or create a new sub-goal; (4) making accurate assessments of their emerging understanding; (5) conceptually connecting content with prior knowledge; (6) adaptively selecting, using, and assessing the effective use of several learning strategies, including re-reading, coordinating informational sources, summarizing, and making inferences, in order to comprehend the material at various levels (i.e., declarative, procedural, and conceptual knowledge); and (7) making adaptive changes to behavior based on a variety of external (e.g., quiz scores, quality and timing of agents' prompts and feedback) and internal sources (e.g., affective experiences including both positive and negative affective states, perception of task difficulty). In sum, SRL involves the continuous monitoring and regulation of CAM processes during learning with MetaTutor.

3.3 Participants and data collection.

While data has been collected from a sample of 148 undergraduate students at two large public universities in North America, we consider for this study only a sub-sample of 51 participants from the experimental condition that included the most prompts from the pedagogical agents to perform SRL actions, and in which students were given adaptive feedback after having performed those actions. Participants from other conditions did not have a similar experience with the system, and therefore the values of the variables considered (cf.
section 4.2.1) were completely different for them (e.g., they took fewer quizzes, as they were not prompted to self-regulate their learning). The logs considered contained an average of 1072 events per session (σ = 255).

4. PRELIMINARY STEPS.

4.1 Data preparation, coding and extraction.

For the analysis performed here, as justified in section 3.2, we abstracted the set of collected interactions into three broad categories: reading, monitoring, and strategy (cf. Table 1 for the detailed list of actions extracted from the data).

Table 1: List of actions extracted from MetaTutor interaction logs.

4.1.1 Reading.

A reading action (Read) is coded each time the student clicks to display a new page of content to read. Reading actions can be split according to two combinatorial criteria, r and t, and are then written Read_r^t, where:
• r stands for the relevance of the page with regard to the student's current subgoal (+ for a relevant page, − for an irrelevant page, ∅ if no subgoal is currently set and relevance cannot be determined);
• t stands for the time the student spent reading the page (s if they remain less than 15 seconds, the threshold under which no SRL prompt can be triggered, l otherwise).

4.1.2 Monitoring.

A monitoring action (Mon) is coded when the student performs, or is prompted to perform, a monitoring action with respect to their learning. This monitoring action can be a judgment of learning (JOL) about what they have just read, a feeling of knowing (FOK) regarding the content of the page, an evaluation of the content's relevance (CE) with respect to their current subgoal, or an assessment of their progress towards their current subgoal (MPTG). Monitoring actions can also be split according to two combinatorial criteria, e and i, and are then written Mon_e^i, where:
• e ∈ {+, −, ∅} stands for the correctness of the monitoring evaluation performed by the student (+ if the evaluation is right, − if it is wrong, ∅ if no direct evaluation is possible for the monitoring process);
• i ∈ {u, a} stands for the initiator of the action (u for the user, a for the agent).

Following FOKs and JOLs, as well as when the student claims to have finished a subgoal, students are asked to answer a short quiz (of 3 to 10 questions). Those actions, coded as Quiz, can be split along one dimension and are then written Quiz_s, where s ∈ {+, −} stands for the success or failure to pass the test (+ if the student obtained more than 66% of correct answers, − otherwise).

4.1.3 Strategy.

A strategy action (Str) is coded when the student uses a strategy to self-regulate their learning, both when the strategy is prompted by the agent and when the user independently decides to perform it. Strategy actions include a summarization (SUMM) of the page, a coordination of information sources (COIS) by viewing a related image, an inference (INF) regarding the reading material, a re-reading (RR) of a paragraph that was not well understood, or notes taken about the reading material. This action can also be split depending on the initiator, and is then written Str_i, where i ∈ {u, a} as defined in 4.1.2. Moreover, we distinguish the particular strategy consisting of taking or checking notes in the embedded note interface or using the electronic paper-based notepad provided next to the workstation. These note actions are coded as Notes.
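To make this coding scheme concrete, the following minimal sketch shows how raw log events might be mapped to the action alphabet above. It is an illustration under assumptions: the event fields (relevance, seconds, correctness, initiator, quiz score) are hypothetical stand-ins, since the actual MetaTutor log schema is not detailed here.

```python
# Sketch of the action-coding step of section 4.1 (hypothetical event fields).

SHORT_READ_THRESHOLD = 15  # seconds; below this, no SRL prompt can be triggered

def code_read(relevance: str, seconds: float) -> str:
    """Code a page view as Read_r^t, with r in {+, -, 0} (0 = no subgoal set)
    and t in {s, l} (short vs. long read)."""
    t = "s" if seconds < SHORT_READ_THRESHOLD else "l"
    return f"Read_{relevance}^{t}"

def code_monitoring(correctness: str, initiator: str) -> str:
    """Code a JOL/FOK/CE/MPTG event as Mon_e^i, with e in {+, -, 0} (correct,
    incorrect, not evaluable) and i in {u, a} (user- vs. agent-initiated)."""
    return f"Mon_{correctness}^{initiator}"

def code_quiz(proportion_correct: float) -> str:
    """Code a quiz as Quiz_s: '+' if more than 66% of the answers are correct,
    '-' otherwise."""
    return f"Quiz_{'+' if proportion_correct > 0.66 else '-'}"

def code_strategy(initiator: str, is_note_action: bool = False) -> str:
    """Code a strategy event (SUMM, COIS, INF, RR) as Str_i; taking or
    checking notes is coded separately as Notes."""
    return "Notes" if is_note_action else f"Str_{initiator}"
```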
4.2 User clustering.

4.2.1 Methodology.

In a previous study [8], we ran a cluster analysis over a subset of 13 variables extracted from the interaction log after the end of the student's learning session: pretest and posttest scores, number of subgoal and page quizzes, mean first score on subgoal and page quizzes, proportion of subgoals attempted among the 7 possible, number of subgoal changes, total session duration, time spent reading content, number of times the student took notes and checked notes, and the duration of the note-taking episodes.

This analysis employed the Expectation-Maximization (EM) algorithm as implemented in the Weka data mining package [10]. Since the number of clusters to find was undetermined a priori, we used 10-fold cross-validation, during which we incremented the number of clusters (starting with 1) as long as the log-likelihood averaged over the 10 folds kept increasing (i.e., we stopped as soon as the log-likelihood with N+1 clusters was lower than with N clusters). We used 1000 different initialization seeds for the EM algorithm, in order to compensate for its tendency to get stuck in local optima, and selected, among the 1000 partitions of students generated, the most frequent one among the most frequently obtained number of clusters (3).

4.2.2 Results.

Three clusters were obtained, whose characteristics are summarized in Table 2; clusters 0, 1 and 2 comprise 21, 14 and 16 students, respectively. Generally, students from cluster 2 scored high on the pretest, posttest and intermediary quizzes, spent less time than the others reading while attempting more subgoals, and took fewer notes and less time taking them. In contrast, students from cluster 1 scored low on the pretest, posttest and intermediary quizzes, attempted fewer subgoals, and took few notes and little time to take them. Students from cluster 0 generally occupied an intermediate position in terms of performance and subgoal use, but overall took more notes and more time taking them. When using a formula derived from [9] to evaluate learning gains (cf. [8] for more details), we also found that students from cluster 2 had the most significant knowledge acquisition, as opposed to those in cluster 1. For all those reasons, cluster 1 will be referred to as cluster L (for low), cluster 2 as cluster H (for high) and cluster 0 as cluster M (for medium). The fact that exactly three clusters (as opposed to any other number) were extracted might seem unsurprising, but it comes from the fact that this was the best partition of the subjects in the 13-dimensional space considered.

Table 2: Synthesis of cluster differences (italics mean clusters were not significantly different from one another on that variable according to an ANOVA with p < 0.05).
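The model-selection loop described in section 4.2.1 can be sketched as follows. This is a simplified, single-seed illustration using scikit-learn's GaussianMixture as a stand-in for Weka's EM implementation [10]; the actual analysis additionally repeated the procedure with 1000 seeds and kept the most frequent partition.

```python
# Increase the number of clusters while the 10-fold cross-validated
# log-likelihood keeps improving; X is the (students x 13 variables) matrix.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

def select_num_clusters(X: np.ndarray, seed: int = 0) -> int:
    prev_ll, n = -np.inf, 1
    while True:
        fold_lls = []
        kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
        for train_idx, test_idx in kfold.split(X):
            gm = GaussianMixture(n_components=n, random_state=seed)
            gm.fit(X[train_idx])
            fold_lls.append(gm.score(X[test_idx]))  # mean log-likelihood/sample
        mean_ll = np.mean(fold_lls)
        if mean_ll < prev_ll:  # stop as soon as N+1 clusters score lower than N
            return n - 1
        prev_ll, n = mean_ll, n + 1
```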
5. DIFFERENTIAL SEQUENCE MINING.

5.1 Method principles.

To identify important activity patterns in a comparison between student clusters, we employ a differential sequence mining technique [11]. This technique uses sequence mining and two different measures of pattern frequency to identify differentially frequent patterns between two sets of action sequences. Differential sequence mining combines frequency measures and techniques from sequential pattern mining [1], which determines the most frequent action patterns across a set of action sequences, and episode mining [12], which determines the most frequently used action patterns within a given sequence.

The sequential pattern mining frequency measure (i.e., how many sequences/students exhibit the given pattern) is used to identify patterns common to a group of students. We refer to this as the "sequence support" (s-support) of the pattern, and we call patterns meeting a given s-support threshold s-frequent. The episode mining frequency (i.e., the frequency with which the pattern is repeated within an action sequence) is important for assessing the extent to which a student relies on a particular pattern of activities. For a given student, we refer to this as the "instance support" (i-support), and we call patterns meeting a given i-support threshold i-frequent. To calculate the i-support of a pattern for a group of students, we use the mean of the pattern's i-support values across all traces in the group.

The differential sequence mining technique first uses a sequential pattern mining algorithm to identify the patterns that meet a minimum s-support constraint within each group [11]. To compare the identified frequent patterns across groups, we calculate the i-support of each pattern for each student (in each group). Using a t-test, we filter the s-frequent patterns to identify those for which there is a statistically significant difference in i-support values between the groups. Comparing the mean i-support value of each pattern between groups then allows us to focus the comparison on patterns that are employed significantly more often by one group than the other. This comparison produces four distinct categories of frequent patterns: two categories where the patterns are s-frequent in only one group, illustrating patterns primarily employed by the respective groups, and two categories where the patterns are common to both groups but used significantly more often in one group than the other. The patterns in each of these qualitatively distinct categories are (separately) sorted by the difference in mean group i-support{1} to focus the analysis on the most differentially frequent patterns [11].

{1} Even though a pattern may not be s-frequent in a group of action sequences, it can still occur in some sequences of the group, so an i-support value can still be calculated (the i-support is 0 if the pattern does not occur in any trace in the group).
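To make the two support measures and the filtering step concrete, here is a minimal sketch of the comparison described above. It is a simplified illustration rather than the implementation from [11]: sequences and patterns are plain lists of action codes, occurrences are counted as non-overlapping contiguous matches, and the upstream sequential pattern mining step is assumed to have already produced the candidate patterns.

```python
# Differential comparison of candidate patterns between two groups of traces.
from statistics import mean
from scipy.stats import ttest_ind

def i_support(pattern, sequence):
    """Number of non-overlapping occurrences of `pattern` in one student's trace."""
    count, i, n = 0, 0, len(pattern)
    while i <= len(sequence) - n:
        if sequence[i:i + n] == pattern:
            count, i = count + 1, i + n
        else:
            i += 1
    return count

def s_support(pattern, group):
    """Fraction of traces in the group exhibiting the pattern at least once."""
    return mean(1 if i_support(pattern, seq) > 0 else 0 for seq in group)

def differential_patterns(patterns, group_a, group_b, s_min=0.5, alpha=0.05):
    """Keep patterns that are s-frequent in at least one group and whose
    per-student i-support differs significantly between groups (t-test),
    sorted by the difference in mean group i-support."""
    results = []
    for p in patterns:
        if max(s_support(p, group_a), s_support(p, group_b)) < s_min:
            continue
        ia = [i_support(p, seq) for seq in group_a]
        ib = [i_support(p, seq) for seq in group_b]
        _, pval = ttest_ind(ia, ib)
        if pval < alpha:
            results.append((p, mean(ia) - mean(ib), pval))
    return sorted(results, key=lambda r: -abs(r[1]))
```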
5.2 Application to the data.

In order to identify patterns more closely related to changes in students' knowledge and understanding, we decided to focus mainly on clusters H and L, as defined in section 4.2.2. Moreover, to further identify the patterns most characteristic of students in cluster H (resp. L), we ran a secondary analysis identifying differentially frequent patterns between that cluster and the other two clusters M and L (resp. M and H) merged together. We employed an s-support threshold of 50%, to consider all the patterns that were exhibited by at least half of the students in a given cluster, and a standard value of 0.05 as the t-test cutoff p-value. We tried preliminarily grouping sequences of identical actions together, but the results obtained were not very different from those without grouping, as the data extracted do not display long sequences of similar actions; therefore, those results are not reported here. Similarly, although we also considered allowing gaps of one or more actions when identifying patterns, we discarded this analysis because the frequency of events collected in the log is low, which means that even a gap of a single action could mean that two actions of a pattern are actually separated by a rather long period of inactivity.

5.3 Results.

Table 3 displays the patterns with the highest difference of s-support between clusters H and L (positive values in column 3) as well as between clusters L and H (negative values in column 3), provided that the difference is statistically significant (i.e., a t-test p-value below 0.05 in column 4). It also displays a selection of interesting patterns which differed in a statistically significant way between the two clusters. Columns 6 to 11 provide the results obtained for that selection of patterns using two different samples of students: first (columns 6 to 8), cluster H alone versus a merge of clusters L and M, and then (columns 9 to 11), cluster L alone versus a merge of clusters H and M. Columns 5, 8 and 11 show, for the two samples considered, whether only one or both of them had an s-support above 50% for the considered pattern. N/A values are used when the pattern is not statistically significant for the two samples considered.

Table 3: Significant and most frequent patterns differentiating clusters.

The following observations can be made:

– According to pattern 1, when prompted to use a strategy (regardless of the one suggested by the agent), students in cluster H reacted by taking notes more often than students in cluster L. We already knew that students in cluster H had received significantly more prompts from the system, and taken fewer notes overall than those in cluster L (but checked them more often). This pattern suggests that the reason might be that the notes they were taking mainly came from prompts from the agents. Moreover, since students are offered the possibility to add a typed summary to their notes, it appears that students from cluster H must have preferred that strategy, which would also explain why they spent less time with the note-taking interface open (since the summary is typed in a different text box, and the note-taking interface is opened only to add the already-typed text). Finally, the fact that the difference for this pattern is significant for cluster H vs. L, H vs. M&L and H&M vs. L indicates that the degree to which students rely on the prompts for notes or summaries to take notes is directly correlated with cluster membership (i.e., this behavior is observed more in cluster H than in M, and more in M than in L). Similarly, pattern 3 indicates that after a note-taking event, students from cluster H often moved on to another relevant page, which they read for an extended period. Pattern 5, which is a combination of patterns 1 and 3, confirms the idea that students from cluster H had a very methodical approach to navigating through the content: they selected a relevant page, read it until being prompted by the agent to take notes or summarize it, performed that action, and then moved on to a new relevant page. Incidentally, it also indicates their effectiveness in identifying a page relevant to their current subgoal simply from its title (since that is all they can see before opening it).
This latter hypothesis is itself reinforced by the observation that patterns 10 and 11, relative to a brief visit to an irrelevant page or a succession of brief visits to irrelevant pages, are characteristic of students from clusters M and L, as opposed to students from cluster H, who seem not to even need to open the pages to figure out they are irrelevant to their current subgoal.

– Pattern 2 simply confirms what we already knew about the tendency of students in cluster H to have answered intermediate quizzes (for a page or a subgoal) correctly more often. It also significantly distinguishes members of cluster H from those in clusters M&L considered together.

– Patterns 4 & 7 relate to pages viewed when the students did not have any active subgoal set. Pattern 4 indicates that students in cluster H visited more pages for a long time without having a subgoal set, which is confirmed by pattern 7, which also indicates an alternation between short and long reads when no subgoal was set. As we also know that students from cluster H attempted more subgoals overall than students in cluster L, this cannot mean that they simply refused to set additional subgoals once they had finished their original ones (e.g., in an attempt to get rid of the system prompts and feedback); rather: a) they might have spent some time reviewing pages already read before taking the posttest, and/or b) instead of setting a final subgoal when they did not have much time left, they took some time to review the pages they had not yet explored. This hypothesis can be confirmed by looking at the temporal distribution of those two patterns (see the sketch after this list): for students in cluster H, the median times are 108 and 112 minutes (for an overall session of approximately 120 minutes), which means that it is during the last 15 minutes of their learning session that students displayed that kind of browsing behavior, clearly distinct from the ones they had displayed earlier in the session.

– Pattern 6 indicates that students in cluster H seemed to more often properly estimate their level of understanding of the content, or the relevance of the page they were visiting, when it was relevant to their current subgoal. While this pattern is only marginally significant when comparing clusters H and L, it is statistically significant when comparing H to M&L, confirming that it is specific to students in cluster H. It tends to show that not only did other students have difficulty identifying the relevance of a page from its title, but even once they had spent some time reading its content, they were less prone to correctly evaluate its relevance or their understanding of it. This hypothesis seems to be confirmed by the complementary pattern 8, which indicates that students from cluster L, when they had been on a page irrelevant to their subgoal for a long time and got prompted to evaluate its relevance (the only prompt they can get on a non-relevant page), tended to be wrong in their evaluation. If we consider again the temporal distribution of those two patterns, we can notice that the median times, for students in cluster L, are 50 and 45 minutes, i.e., earlier than the midpoint of the session (60 minutes). We can therefore assume that, at the least, students from cluster L were slightly improving their capacity to evaluate their learning and the relevance of a page over time.

– Pattern 9 confirms the previous observation that students in cluster L really had trouble seeing the relevance of a page with regard to their subgoal: they did not simply end up going to random pages that were irrelevant to their subgoal, or ignore the subgoal they had set; instead, they appeared to sometimes skim through a relevant page, miss its relevance, and end up spending a long time on a page that was irrelevant to their subgoal. This tendency is shared, to some extent, with students from cluster M, as the results for cluster H vs. M&L are also statistically significant.

– A final observation can be made regarding the tendency of a student to obey system prompts: if we run the same analysis without distinguishing the correctness of students' monitoring evaluations (i.e., by considering the merged actions Mon^a and Mon^u, obtained by collapsing the correctness criterion e), we observe that the pattern Mon^a Mon^u is significantly more frequent for students in cluster H, which tends to indicate that when prompted to perform an optional monitoring action (most likely an MPTG, since otherwise there should be a Quiz action following the Mon^a), they are more likely to perform the suggested action.
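The temporal-distribution checks used in the observations above (the median session times at which patterns 4, 7, 6 and 8 occur) can be computed as in the sketch below, assuming each action in a trace carries a timestamp in minutes from the start of the session; this representation is a hypothetical stand-in, as the exact log format is not detailed here.

```python
# Median session time (in minutes) at which a pattern occurs within a group.
from statistics import median

def occurrence_times(pattern, sequence, timestamps):
    """Start times of every occurrence of `pattern` in one student's trace;
    `timestamps[i]` is the session time of the i-th action."""
    n = len(pattern)
    return [timestamps[i] for i in range(len(sequence) - n + 1)
            if sequence[i:i + n] == pattern]

def median_occurrence_time(pattern, group_sequences, group_timestamps):
    """Median start time of a pattern, pooled over all students in a group
    (None if the pattern never occurs)."""
    times = [t for seq, ts in zip(group_sequences, group_timestamps)
             for t in occurrence_times(pattern, seq, ts)]
    return median(times) if times else None
```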
6. DISCUSSION.

To summarize the results obtained in the previous section, we can conclude that students from cluster H are more inclined to comply with system prompts and to follow suggestions to take notes or summarize what they have just learned. Further, they are more prone to keep applying the same method for each page they read, are better at identifying a page relevant to their subgoal from its title, and are more strategic in their preparation for the posttest (e.g., they usually use their last 10 to 15 minutes to briefly review various pages). From an ITS design point of view, the fact that these students used system prompts to effectively regulate their learning tends to indicate that the frequency of Strategy prompts should probably not be reduced. However, as they seem good at distinguishing relevant pages from irrelevant ones, they might need less scaffolding regarding Monitoring processes. On the other hand, students from cluster L appear particularly unable to identify pages relevant to their subgoal, which is probably linked to their lower prior knowledge. For them, additional scaffolding from the system would certainly be beneficial. However, even when prompted to monitor their learning, they tend to be mistaken in their evaluation. Therefore, it could be necessary to go further than the methods currently employed and suggest ways in which they can better evaluate the relevance of a page.

7. CONCLUSION, FUTURE DIRECTIONS.

In this paper, we ha