Revisiting Formative Evaluation: Dynamic Monitoring for the Improvement of Learning Activity Design and Delivery


Distance education courses have a tradition of a formative evaluation cycle that takes place before a course is formally delivered. This paper discusses opportunities for improving online and blended learning by collecting formative data during course presentation. With a goal of overall improvement in instructional effectiveness and identification of promising practices for inclusion in a learning activities design library, we propose the immediate and ongoing monitoring of the effectiveness of learning activities, tutor facilitation and learner satisfaction during the course presentation. This has implications for constructively involving learners and facilitators in the course improvement process. While originally conceived to reduce the time for pilot evaluation of new courses and learning activities, the proposed system could also be extended to individualized and blended learning environments and, if implemented using semantic web technologies, could support research into the effectiveness of learning activity patterns.

"1 Introduction. Distance education has a long tradition of conducting formative evaluation of instructional materials and learning activities before the ongoing delivery of a course. The feedback from pilot testing and expert evaluation enables course designers to catch and correct any weaknesses detected. The lessons learned can be incorporated into the professional design heuristics of the course designers enabling promising design practices to be reused in new courses, and disappointing practices to be redesigned or rejected. In recent years there has been an influx of traditional “face to face delivery” institutions to the online environment. Sometimes, indeed often, they expect the instructor of a face to face course to convert their course (or certain activities of their “blended” courses) to on-line delivery with a minimum of instructional design support, and there is little provision for observing which learning activities work and which require improvement. Typically course evaluation takes place at the end of the course, after the final marks have been submitted, but before learners receive grades. This delayed process does not capture immediate responses and reflections in time to provide meaningful formative evaluation that might enhance the learning experience of a course in session. Thus for both blended and distance courses a case can be made for a system for improving formative evaluation. This paper looks at the potential for embedding formative evaluation tools in both online and blended course delivery. Our goal is to improve the quality of online and blended learning experiences, along with facilitation and instructional design practice, by stimulating ongoing reflective practices among course designers and course facilitators such as professors, instructors, mentors or tutors. We recognize that there may be pitfalls to openly soliciting feedback from learners during a course, and there may be governance and collective agreement issues arising from providing feedback on the effectiveness of learning activity facilitation. The proposals contained here are work in progress, and an opportunity to open dialogue and critical reflection on this topic. We are fully aware that every on-line cohort establishes informal back channels where the learners actively blog their opinions – possibly the only ones not in the conversation on instructional effectiveness are the faculty presenting the course. 2 Formative Evaluation. Scrivens [1] coined the terms “formative” and “summative” evaluation to distinguish between evaluation of educational materials during their development and at the end of the instructional cycle. Formative evaluation is intrinsic to instructional systems design models [2] [3] and it has been ingrained into the development cycle of most distance learning organizations that produce instructional media or course packages. During the “big media” phase [4] when distance education was dominated by centralized production facilities turning out television shows and print packages, formative evaluation was a key part of ensuring quality before printing hundreds of copies for the warehouse or broadcasting on television. Distance learning was in a sense asserting its rightful place, and the best way to counter criticism of traditional universities was to demonstrate the quality of the courses was as good as if not better than the traditional offerings. 
Indeed, what most distance learning courses lost in presence they more than made up for in a systematic approach to development, the alignment of course materials to instructional objectives, and the thoroughness of content delivery and student assessment [5]. Formative evaluation was also an essential part of multimedia development [6] and carried into web site development [7]. It was evident in early online course development, again in response to concerns about the quality of courses that simply shovelled content onto the web [8] [9]. Various formative evaluation approaches were suggested by Reigeluth and Frick [10] with the intent of improving instructional design theories and, through them, instructional design practice.

However, as online delivery became mainstream and blended with classroom instruction, formative evaluation seemed to lose its earlier attention in the literature. Perhaps course designs and instructional activities became somewhat standardized, but probably the real reason was that, increasingly in the 2000s, web delivery had become accepted as a credible, indeed essential, extension of the academy. With the volume of courses to be transferred to the web, there were insufficient instructional design resources to conduct formative evaluations. This period of adjustment was characterized by the downsizing of resources for centralized distance learning departments as faculties set up their own distance programs, the growth of learning management systems making it easier for individual instructors to load content online, the rise of the dual-mode university and, in Canada, the reduction in the number of single-mode distance universities [11] [12].

Traditionally, neither formative nor summative evaluation has seen a comfortable fit in the face-to-face classroom [13]. Courses were taught by faculty who were expected to “get the bugs out” in two or three terms. As this same expectation creeps into the practice of online education, formative evaluation in online learning has not seen a high profile in practice over the past decade. Yet formative evaluation can strengthen both the implementation of a program and the knowledge gained within it [14].

3 Challenges in Online Educational Practice

In addition to changes in instructional development models for online courses, the past twenty years of educational practice have seen a change from objectivist philosophies and paradigms to increasingly constructivist views [15] [16]. The internet is increasingly seen less as a medium of delivery and more as a medium of communication in which interactions can take place among learners and instructors, and between learners and the content [17]. Traditional models of distance education offered individual delivery of content-based instructional materials. Alongside the growth of channels for interaction, cohort-paced courses have been implemented. These require learners to interact in many ways and to create new knowledge together. The resulting learner engagement can promote both achievement and retention. The need for formative evaluation of learning activities in online, paced cohort courses is important in view of this shifting role of the learner. Learners are active participants with rich and complex experiences. The learners’ engagement in the collaborative activities places them in the position of co-creators of knowledge within the learning environment, as well as self-organizers of their learning [18].
As described by Parrish, “While IDs [instructional designers] work to tame instruction into a manageable, replicable process that begins by predetermining outcomes to be measured through properly aligned assessments, engagement describes that wild aspect of the process in which the learner is as much or more in control of the activities as the ID” [19]. The situatedness of the learners and the contexts in which they find themselves become meaningful realities in the learning environment [20]. Development of community, shared practices and reflection are important parts of learning activities. The application of cooperative learning techniques to the design of learning activities for cohort-paced e-learning can produce engaging discussion, reflection and deeper processing of the content.

With instructor-facilitated cohort/collaborative approaches providing such positive results, distance learning course providers are abandoning investments in comprehensively detailed content packages and elaborate instructional designs. Institutions notice that these changes make a difference, and cohort-paced distance learning courses have lower drop-out rates than their self-paced counterparts, about 85% retention versus 65% for individualized delivery [21] [22]. In a review of the literature, Means, Toyama, Murphy et al. [23] noted significant effect sizes for facilitated and collaborative online learning when compared with individualized delivery for the same content, although they were careful not to attribute this as a media effect, noting that the cohort modes often involve different activities and increased time on task.

At the same time, it is clear from the research that many of the types of activities included may be of little value. For instance, they also noted that the provision of extra video clips and chapter quizzes contributed little to student achievement, while activities that provoke reflection and engage the learners’ metacognitive processes can yield improvements in learning. Richards [24] observed that trivial learning activities, such as knowledge-level multiple choice quizzes or forum directions to “post your thoughts and reply to the thoughts of two other learners”, led to a superficial understanding of the course content. It is therefore important to continue formative evaluation in these dynamic new learning environments, in order to determine which activities are both valued and valuable and which are little more than “make work” projects. Feedback from learners in these environments is necessary to gain a better understanding of these activities.

In this paper we strongly advocate for careful design of such activities for cohort-paced e-learning, and suggest that if formative evaluation is no longer conducted before the delivery of a course, then it should be embedded into the course delivery. This should be simple to implement. Finally, since the purpose of formative evaluation is to inform practice and improve delivery, the process should promote reflection on the part of learners, instructors and designers, as all have a role to play within the learning experience and all might benefit from an open discussion on improving the learning environment.

4 Other Benefits of Formative Evaluation

Eijkman raises a series of questions we as educators need to consider in our use of web-based learning and social environments.
For instance, what “practices, habits, and patterns of use emerge?” and “What changes need to occur in institutional policies and technological practices in order to integrate the social Web effectively into the educational mainstream?” [26]. Documentation or other forms of visualization of learning activities and designs can help to capture emerging innovative and expert practices [26] [27], and to gain a deeper understanding of the user experience. While not a primary focus of this paper, research and development around reusability or adaptation of learning activities and designs, along with educational policy and practice, can also benefit from formative evaluation. If formative evaluation of activities leads to their improvement over time through the use of this feedback in updating and maintaining courses, these activities can be added to design libraries for re-use and sharing. Further, analysis of the broader emerging patterns may be incorporated into strategic and operational planning. The end goal of improving learning activities and designs is improved quality of instruction [28].

5 A Simple Micro Model of an Online Learning Activity

Fig. 1 diagrams three nested levels of the design and delivery of an online learning activity. Level 1 is the Instructional Design Level, the level at which instructional goals are aligned with learning activities that are appropriate for the learners. Level 2 is the Facilitation Level, which encompasses those roles, activities and resources that come together during the conduct of a learning activity. Level 3 represents the Learner Experience. Note that each level has been allocated three phases of preparation, enactment, and reflection. It is our belief that this is the simplest depiction possible for our purposes, and we recognize that learning environments and learning activities may become extremely crowded with multiple roles, players and resources. We fully anticipate that other evaluators may want to expand this depiction to be more explicit or to compact the phases to be more specific. In some settings the design and facilitation roles may involve the same individual(s). In some settings the facilitator may also be consulted in the design process, while in others the facilitator may become involved years after the initial design, after a course has run several times.

Fig. 1. A Conceptual Model for Dynamic Evaluation of Learning Activities.

Online learning activities evolve to meet the needs of content, audience and the constraints of the instructional system. The model looks at a single activity, whereas a “course” is a strategy of intentionally sequenced progression through a series of learning activities. Some activities, such as a lecture, are well structured, and others, like a reading assignment, are loosely structured. It is also possible that parallel learning activities such as study groups may be autonomously initiated and conducted by a learner or group of learners as they form a learning community. Whether these should be included in the scope of the Dynamic Evaluation Model is left to the discretion of those conducting the evaluation. Similarly, there may be others external to the instructional process having a bearing on the results. While Garrison and Anderson [29] identify only instructors, peers and content in their interaction model for online learning, the actual educational environment may include professional faculty developers, mentors, peers, friends, family and others: anyone who influences the decisions and performance of any of the key roles. The sketch below illustrates the level-and-phase structure of the model.
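As a concrete illustration, the following minimal Python sketch encodes the three levels and three phases of Fig. 1 as enumerations and attaches an evaluation prompt to a (level, phase) cell. The class names and the sample prompts are our own illustrative choices, not part of the model itself.

```python
from enum import Enum

class Level(Enum):
    """The three nested levels of the model in Fig. 1."""
    INSTRUCTIONAL_DESIGN = 1   # goals aligned with appropriate activities
    FACILITATION = 2           # roles, activities and resources during conduct
    LEARNER_EXPERIENCE = 3     # the learner's lived experience of the activity

class Phase(Enum):
    """The three phases allocated to each level."""
    PREPARATION = "preparation"
    ENACTMENT = "enactment"
    REFLECTION = "reflection"

# Each (level, phase) cell can anchor its own evaluation prompt.
# These prompts are illustrative placeholders, not taken from the paper.
PROMPTS = {
    (Level.FACILITATION, Phase.PREPARATION):
        "Was the facilitator adequately prepared for this activity?",
    (Level.LEARNER_EXPERIENCE, Phase.REFLECTION):
        "Did this activity help you learn the intended material?",
}

for (level, phase), question in PROMPTS.items():
    print(f"{level.name} / {phase.value}: {question}")
```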
6 Aligning the Model with Instructional Design Methodology

As discussed earlier, in a cohort-paced constructivist learning activity not all learner activity is predictable, since learners bring their own experiences and contexts to the learning situation. While situated in the design and execution of intentional learning activities, the model also takes into account learners’ own experiences of the activity. We use the term “learning activity” to avoid confusion with the more technical terms “learning design” and “lesson plan”, which are expressions of learning activities. The term “learning activity” encompasses any activity that brings learners into planned contact with content, other learners, and experiences that promote acquisition of skills, knowledge and attitudes. This broad definition is congruent with similar definitions [23]. While traditionally instructional design does not include accidental or incidental learning activities, in more open-ended learning environments learners might influence the learning environment in unpredictable ways, and in their search for alternate explanations may discover materials useful to others.

To the extent that instructional design is an intentional and iterative process, we look at preparation (planning and alignment of goals with activities), the design itself, and reflection on the outcomes of the design. Preparation is included as part of facilitation because so much success depends on the facilitators’ skills and knowledge of facilitation techniques, their understanding of the activity, and their role in and commitment to its success. Preparation is also important for learners, in terms of both prerequisite skills and knowledge and adequate direction to participate in the learning activity. We believe that reflection is a part of all processes, and in terms of improving the system, early reflection catches errors before they can become deeply embedded in the teaching-learning system.

7 Practical Issues

The goal of formative evaluation is to improve the learning experience. If evaluation of the learning activities is not conducted until the end of the course or beyond, then no remediation can take place if there is a problem. We propose the following guidelines (a minimal data-model sketch follows the list):

1. Formative evaluation should take place during or at the end of each learning activity.
2. Formative evaluation should seek data and reflections from learners, facilitators and designers.
3. Formative evaluation should seek both quantitative and qualitative data.
4. The results of the formative evaluation should be open to all participants.
5. If error correction is required, it should be considered immediately.
6. If activity redesign is required, it should be embarked upon so that the activity can be revised for the next course offering.
7. If learning activities are to be evaluated across several courses, then investment in an evaluation system to gather and analyze the data should be considered.
8. The results of dynamic formative evaluations may have value in explaining the findings of end-of-course evaluations, and in the evaluation of generic learning activity designs, including the training of facilitators and the directions provided to learners.
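To make guidelines 1–4 concrete, here is a minimal Python sketch of what one evaluation record and an open aggregation step could look like. The field names and role labels are our own assumptions; nothing here is prescribed by the guidelines beyond capturing timely, multi-role, mixed quantitative/qualitative data and sharing the results.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ActivityEvaluation:
    """One formative-evaluation response, captured during or at the end
    of a learning activity (guideline 1)."""
    activity_id: str               # the learning activity being evaluated
    respondent_role: str           # "learner", "facilitator" or "designer" (guideline 2)
    ratings: dict[str, int]        # question id -> 1..5 Likert rating (guideline 3, quantitative)
    comment: Optional[str] = None  # free-text reflection (guideline 3, qualitative)
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def open_results(responses: list[ActivityEvaluation]) -> dict[str, float]:
    """Mean rating per question, shareable with all participants (guideline 4)."""
    totals: dict[str, list[int]] = {}
    for response in responses:
        for question, rating in response.ratings.items():
            totals.setdefault(question, []).append(rating)
    return {question: sum(r) / len(r) for question, r in totals.items()}
```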
For purposes of brevity, we have not described the importance of linking such a system to descriptive ontologies such as the Learning Object Context Ontology [30]. However, we believe that semantic tagging will enhance the ability of researchers and designers to better understand the patterns that may emerge from the data collected, and raise the importance of both instructional activity design and evaluation as part of organizational learning.

8 Proposed Implementation

Richards [24] embedded questions on the efficacy of cohort learning activities in a graduate distance course in Instructional Design at Athabasca University. For each activity, the learners were asked eight questions to rate their experience (along a five-point Likert scale) and provided an opportunity to comment. This rating has been conducted for a number of years, and Fig. 2 shows a typical result. As the evaluation was constrained to a single course, a Moodle questionnaire was used to present the questions and collate the data. Unfortunately, the raw data are not available, so neither is further analysis; even simple statistics such as the standard deviation or the maximum and minimum values are unavailable. For a more robust system capable of handling multiple courses, we propose to implement the dynamic evaluation system external to the learning management system, so that we can have greater control over the data; the results would then be returned to the course participants through a web service. The course instructors and course designers would also have access to the participant comments. The sketch below shows the kind of per-question summary such a system could expose.
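As a minimal sketch of the summary statistics a system holding the raw data could return per question (the question ratings below are invented for illustration):

```python
import statistics

def summarize_question(ratings: list[int]) -> dict[str, float]:
    """Per-question summary that the single-course Moodle report could not provide."""
    return {
        "n": len(ratings),
        "mean": statistics.mean(ratings),
        "stdev": statistics.stdev(ratings) if len(ratings) > 1 else 0.0,
        "min": min(ratings),
        "max": max(ratings),
    }

# Invented sample: eight learners rating one question on a 1..5 Likert scale.
print(summarize_question([4, 5, 3, 4, 5, 4, 2, 4]))
# -> {'n': 8, 'mean': 3.875, 'stdev': 0.991..., 'min': 2, 'max': 5}
```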
In a course with several class sections or perhaps teaching assistants, additional questions could be developed to link into each section and to pass back the appropriate identifiers to and from the LMS. If the function of the embedded questionnaires is to improve the learning activities, then the most important question is what suggestions the participant offers to improve the learning activity. For research purposes, it will be useful to gather additional information, for example on the role of the facilitator in animating a learning activity. While it would be appropriate to ask the learners whether “the facilitator/instructor/tutor contributed to the success of this activity”, the answer could only be interpreted in light of facilitators’ own reflections about their preparation for the activity, the amount of time devoted to the activity, and other such factors. Similar questions might be asked of the course designers when they review the results of the activity.

It is important to note that it may be very easy or very difficult to pin down why a cohort activity works or does not work. For example, the questions used in Fig. 2 take for granted that many preparatory steps had already gone correctly: learners had the appropriate prerequisites, textbooks had arrived, individuals had read the prescribed materials, there were no untimely interruptions in internet services, and so on. These are extrinsic factors. Intrinsic factors are more within the realm of the course developers and facilitators: was the activity relevant, was the group size appropriate (and what about the group make-up?), was the timing appropriate? Among the outcomes are a feeling of connectedness (a vector of group cohesion) and whether all members of the group contributed equally, which is in part a function of the balance between individual and group accountability; group projects generally do not work if there is no positive interdependence [31]. Finally, the achievement worthiness is important: was the activity worthwhile, and did the activity help with learning? We can well imagine learning activities that are well-intentioned but involve superficial treatment of the content, and thus provoke little or no deep learning and have little long-term effect on understanding or retention.

Dynamic formative evaluation seeks to gather data to ascertain the effectiveness of a learning activity, remediate with the current learners if required, and make adjustments in the activity as required before the next class. The adjustments may be to the content and materials, the directions to the facilitator role, or the directions to learners. However, as noted earlier, a significant value of dynamic formative evaluation may be in generalizing the lessons learned and formalizing the expertise so that it can be shared with other course developers. This loftier purpose requires the design of a database that is semantically enriched, so that patterns of activities and roles can be generically described with ontologies such as the Learning Object Context Ontology (LOCO) and its extensions [30]. The semantic tags will enable pattern analysis across several courses, initially to allow course developers to locate and view how winning activities are embedded in existing courses, but also, in the long run, to identify and extract patterns into a library of successful practices. This then brings us close to the ideals of Koper [28] in documenting successful learning designs that can be reused in a pragmatic manner. A minimal sketch of such semantic tagging follows.
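The following sketch, using the Python rdflib library, shows what tagging an activity evaluation with ontology terms might look like. The namespace URIs and property names here are placeholders; a real implementation would use the published LOCO vocabulary [30] rather than these invented terms.

```python
from rdflib import RDF, Graph, Literal, Namespace, URIRef

# Placeholder namespaces: a real system would use the published LOCO
# term URIs [30]; these stand-ins simply show the shape of the data.
LOCO = Namespace("http://example.org/loco#")
EVAL = Namespace("http://example.org/dynamic-eval#")

g = Graph()
g.bind("loco", LOCO)
g.bind("eval", EVAL)

activity = URIRef("http://example.org/courses/MDE604/activities/group-project")
g.add((activity, RDF.type, LOCO.LearningActivity))
g.add((activity, EVAL.meanRating, Literal(3.875)))
g.add((activity, EVAL.facilitationModel, Literal("cohort-paced, tutor facilitated")))

# Tagged this way, highly rated ("winning") activities can be queried
# across courses, whatever course they happen to sit in.
print(g.serialize(format="turtle"))
```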
Before closing, it is important to note that a key implementation issue will be acceptance of the system by all users. In distance delivery, student response to end-of-course questionnaires administered by administrative and marketing groups can be as low as 10 per cent, while Richards [24] found that embedding the questions as part of the course brought a 100 per cent response rate. For dynamic formative evaluation to be effective, it needs to be an active part of the learning experience: the questions should provide feedback to the learners on how their perceptions and experience compare with those of others, and there should be an active response to problem areas identified. Going further, the dynamic evaluation system could also solicit suggestions to update the learning resources that might be used to help others learn; moving from a prescriptive to a constructive learning environment has been a successful strategy in the corporate learning context of the IntelLEO Project [32]. Similar benefits should be obvious for facilitators and course designers in improving the quality and efficiency of the online learning experience.

Fig. 2. Typical results of a learning activity evaluation in MDE604.

9 Summary

In summary, the purpose of this paper was to provoke discussion about the need to revisit formative evaluation of e-learning activities and course designs. If e-learning and blended models are the new reality of distance education, then formative evaluation is more important than ever. Because of the proliferation of distance education, much of it developed without the assistance of an instructional design team, and the complexity of constructivist learning design in cohort-paced courses, in many cases formative evaluation needs to take place during early course delivery. A dynamic process for formative evaluation of the success of learning activities (whether designed or not) is important in the creation of an informed community of practice. Currently, because of back-channel communications among the learners, the only ones out of the feedback loop are the instructors and course designers. A dynamic learning activity evaluation system will help to close that gap.
