Learning Analytics for Collaborative Writing: A Prototype and Case Study

This paper explores the ways in which participants in writing-intensive environments might use learning analytics to make productive interventions during, rather than after, the collaborative construction of written artifacts. Specifically, our work considered how university students learning in a knowledge work model—one that is collaborative, project-based, and reliant on consistent peer-to-peer interaction and feedback—might leverage learning analytics as formative assessment to foster metacognition and improve final deliverables. We describe Uatu, a system designed to visualize the real-time contribution and edit history of collaboratively written documents. After briefly describing the technical details of this system, we offer initial findings from a fifteen-week qualitative case study of eight computer science students who used Uatu in conjunction with Google Docs while collaborating on a variety of writing and programming tasks. These findings indicate both the challenges and the promise of delivering useful metrics for collaborative writing scenarios in academe and industry.

"1. INTRODUCTION. The successful deployment of learning analytics holds much promise for educators and professionals who wish to make productive interventions into knowledge work as it happens; these interventions may be seen as a kind of formative assessment- regular, real time feedback about a specific deliverable as it is being developed. In recent years, several germinal studies of computer-supported cooperative work have emerged from the overlapping fields of technical and professional communication [see, for example, 1, 2, 3, 4, and 5]. Related studies of networked writing technologies in particular have contributed to the understanding of writing's formidable role in knowledge work, the cross-disciplinary, collaborative, often distributed work ubiquitous in post-industrial developed nations [see 6]. Indeed, researchers have cogently argued that “knowledge work often looks like writing,” leading to a focus on what writing does in collaborative work environments—how writing work both represents and moves knowledge assets [7, 8]. Building upon this body of research, we developed and studied a system that generates visualizations of real time metrics about the contribution and edit histories of collaboratively written documents as a means for fostering formative assessment in both peer-to-peer and instructor-to-student modes. The system we developed, known as Uatu, interoperates with Google Docs to better trace and represent the complexity of collaboratively written documents. Our project was driven by two conceptual research questions: first, do learning analytics related to collaborative writing foster greater metacognition among student participants?; and second, does such analytic data promote both instructor and peer opportunities for real time interventions into ongoing collaborations as formative assessment? In the remainder of this paper, we address these questions by first describing the technical details of our system; we then report the methods, data, and initial findings from a systematic qualitative case study conducted with eight computer science undergraduates who used Uatu over fifteen weeks; and finally, we explore the implications of our findings for the learning analytics community. Our research revealed a strong preference for co-located collaboration among participants, a practice that directly reduced the efficacy of our system since many co-located collaborative writing practices are ephemeral and thus do not produce measurable data. These findings suggest, therefore, that learning analytic systems designed to represent collaborative writing are perhaps better suited to fully online learning environments and to distributed teams in industry or other professional domains. 2. DEVELOPING THE UATU PROTOTYPE. Our work was motivated by conceptual interest concerning the ways in which writing functions as an arbiter of knowledge work in organizations and academe, as well as our recognition of the growing adoption and use of networked writing technologies such as Google Docs. A key impetus for this project, therefore, was our interest in providing real time data about collaborative writing histories to key stakeholders—students and instructors in academe, and knowledge workers and project managers in industry. We hypothesized that making such data readily visible to all participants would lead to more frequent and robust opportunities for metacognition. 
In addition to this back-end architecture, we created a front-end web site for users; once logged in, users can view basic visualizations of the overall collaborative writing activity for a requested document (see Figure 2).

Figure 2. Uatu Visualizations.

These visualizations are generated from the document revision histories stored in the local Watcher database. As such, the values are not truly representative of the entire document; rather, they reflect a very close approximation based on frequent polling (currently at one-minute intervals) of the document hosted on Google's servers. In Figure 2, the top visualization plots document revisions as they occurred over time, denoting who made a particular revision, when the revision was saved (the horizontal axis), and the size of the contribution (the vertical axis). Users may adjust the visualization by manipulating the left-hand side of the graph to highlight specific revisions. The bottom visualization in Figure 2 is a horizontal bar chart displaying each user who has contributed a revision and the number of saved revisions they have made to the ongoing document.
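The Data Table Servlet's job, as described above, is to expose these stored metrics to the Google Visualization API. The sketch below, again illustrative rather than Uatu's actual code, shows how the per-author revision counts behind the bottom bar chart in Figure 2 might be served using the open-source Google Visualization data source library for Java; the servlet class name, the SQL schema, and the connection details are our assumptions.

```java
// Hypothetical sketch of the Data Table Servlet: aggregate saved revisions
// per contributor and return them as a Google Visualization DataTable.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.servlet.http.HttpServletRequest;

import com.google.visualization.datasource.DataSourceServlet;
import com.google.visualization.datasource.base.DataSourceException;
import com.google.visualization.datasource.base.ReasonType;
import com.google.visualization.datasource.datatable.ColumnDescription;
import com.google.visualization.datasource.datatable.DataTable;
import com.google.visualization.datasource.datatable.value.ValueType;
import com.google.visualization.datasource.query.Query;

public class RevisionCountServlet extends DataSourceServlet {

    @Override
    public DataTable generateDataTable(Query query, HttpServletRequest request)
            throws DataSourceException {
        DataTable table = new DataTable();
        table.addColumn(new ColumnDescription("author", ValueType.TEXT, "Author"));
        table.addColumn(new ColumnDescription("saves", ValueType.NUMBER, "Saved revisions"));

        // Count saved revisions per contributor; selecting the requested
        // document from request parameters is omitted for brevity.
        String sql = "SELECT author, COUNT(*) AS saves FROM revisions GROUP BY author";
        try (Connection db = DriverManager.getConnection(
                 "jdbc:mysql://localhost/uatu", "uatu", "password");
             Statement st = db.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                table.addRowFromValues(rs.getString("author"), rs.getInt("saves"));
            }
        } catch (Exception e) {
            throw new DataSourceException(ReasonType.INTERNAL_ERROR, e.getMessage());
        }
        return table;
    }
}
```

A front-end page could then point a google.visualization.BarChart at this servlet's URL to render the bar chart; the time-series view would be served the same way, with author, timestamp, and contribution-size columns.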
Despite several early challenges encountered when interacting with Google's APIs, Uatu is presently a fully functioning prototype that faithfully watches and logs revision data from any document shared with the special user account. Moreover, the front-end web site delivers simple yet effective visualizations of collaborative writing activity generated in Google Docs. In addition to building the Uatu system, however, a major goal of this project was testing and researching the system with small groups working on authentic collaborative writing tasks. To meet this latter objective, we conducted a systematic qualitative case study of computer science undergraduates who used Google Docs and Uatu during a fifteen-week, project-based advanced programming course.

3. METHODS AND DATA.

Work on the technical development of the Uatu prototype began in July 2010; our systematic qualitative case study of Uatu, Google Docs, and learning analytics about collaborative networked writing activity began in January 2011, and data collection concluded in May 2011. We used a systematic qualitative case study methodology conducted with ethnographic methods of field research [10, 11]. Following Dourish, we were interested in determining not only what Uatu can do as a system, but what Uatu does for participants in the course of their everyday work [12]. Consequently, we conducted a series of classroom observations, usability observations followed by stimulated recall interviews, observations of pair/group programming and group presentations, and semi-structured interviews with participants. These methods improved the reliability of our data, since thorough triangulation across data types and instances led to a deeper understanding of the collaborative behaviors we observed.

In total, our fieldwork consisted of: twenty classroom observations; twenty-four semi-structured and stimulated recall interviews spaced evenly over fifteen weeks; fourteen observations of student writing and collaboration behaviors conducted outside of the classroom and accompanied by talk-aloud protocols; over seventy photographs; and the collection of nineteen participant-produced artifacts written in Google Docs, with granular revision history data captured by Uatu. Our study generated rich qualitative data about collaborative writing strategies among novice computer programmers. There were eight participants in total, and we collected in-depth and well-triangulated data from six of them; all were undergraduate students at a mid-sized public research university in the midwestern United States, and most were computer science majors, though two were computer science minors.

The initial findings detailed in the following section were developed inductively through analysis of the granular data listed above. We used a combination of qualitative coding methods, including in vivo, descriptive, and process-oriented approaches, to derive superordinate categories and themes from our data [13]. Because of space limitations, we report these themes in narrative form. Reliability was fostered by our collection and analysis of data across multiple types (semi-structured interviews, talk-aloud protocols, stimulated recall interviews, observational fieldnotes) and instances (repeated observations and interviews over fifteen weeks—from the first week of class until the final exam).

4. FINDINGS.

In this section, we begin by briefly describing the project-based pedagogical model in which our research participants worked over the course of our study. We then detail three interrelated themes that emerged from our analysis—two minor themes that feed into our key conceptual theme. First, we describe our participants' workflow practices and their overwhelming preference for co-located collaboration. Next, we explore a key outcome of this collaborative preference, one that is especially relevant for learning analytics: many of our participants' workflow practices in co-located collaborations are ephemeral and thus do not render metrics that can be easily captured and measured. These two themes lead directly into our third theme: what “counts” as writing work for participants?
We describe how certain forms of writing collaboration are often rendered invisible by common approaches to the assessment of writing assignments. This is the key conceptual finding of our study, and it has significant implications for the use of learning analytics in networked collaborative writing environments.

Though ostensibly a sophomore-level course in advanced programming, in actual practice our participants were immersed in a curriculum that more closely resembled an upper-division software engineering course. Students learned methods of agile software development, Scrum, risk matrices, UML diagramming, and concepts in code reading and documentation, all modeled and carried out in a project-based learning environment. Many students in the course were introduced to version control (using Mercurial) for the first time; they learned concepts in paper prototyping and usability, and they engaged in a variety of collaborative writing tasks that supported their programming practices (including project planning, requirements gathering and analysis, and learning assessments). The course was divided into two major units: a nine-week portion with two smaller group projects and several individual assessments, and a six-week major group project that involved designing, developing, and testing a working prototype application in Java. For this latter project, the three student teams of five were required to maintain and develop a learning standards document, which eventually served as a major component of our study (while also providing much of the data that drove Uatu visualizations).

Our first minor theme concerns the collaborative practices of participants. As they worked on the large six-week group project, our participants displayed a strong preference for co-located collaboration, despite access to distributed, networked programming and writing environments (Mercurial and Google Docs). In this sense, the actual collaborative practice of our participants varied greatly from our expectations. Many students in the course had never used Google Docs before, and we expected that the application would ease collaborative writing tasks for groups, since they could work asynchronously or even in real time from distributed locations, removing the potential barrier of coordinating school and work schedules to arrange face-to-face meetings. However, participants in our study showed an overwhelming preference for both ad hoc and planned collaboration sessions that occurred face-to-face. In fact, two teams scheduled three to four hour-long meetings each week during the large project, collaborating on both programming and writing tasks during these sessions. In short, participants strongly preferred co-located verbal and gestural collaboration to distributed programming and writing collaboration, though they nonetheless used Google Docs to develop their learning standards documents. The primary motivation for this preference appears to center on alleviating programming anxiety among novice software developers. For example, one participant noted that pair or group work helped alleviate his programming anxiety, since he could receive immediate feedback from peers on a given challenge. Another factor is convenience: distributed writing is labor intensive, and participants simply found writing as a group to be more efficient and less individually taxing.
But this preference for face-to-face writing collaboration caused us to focus on exactly how our participants generated prose in such sessions, our second minor theme. When asked who contributed edits and saves to the major written component of the six-week development project, Terry, one of our research participants, responded: “Um, I think just me and Roy. Roy copied down the stuff on the whiteboard.” Roy also logged meeting notes from the oral interactions of team members in face-to-face development sessions. Roy, it turns out, was the designated “typist” for his four-person group, both in the IDE and in Google Docs. When we examined the edit histories of the ongoing learning standards documents in Uatu, we noticed that often only one or two members of a group made edits and saves to a work that was explicitly meant to be written collaboratively. We had naïvely assumed that the combination of a version control system (Mercurial) and a distributed writing application (Google Docs) would lead to more distributed and asynchronous group collaboration and, consequently, to more group members contributing commits and saves. Our assumption, however, was diametrically opposed to observed student practices. Face-to-face collaborations included practices such as verbal interaction, whiteboarding and sketching activities, paper prototyping, and ad hoc ideation (for example, via the Google Docs embedded chat feature). In short, many important participant contributions were ephemeral, and thus invisible to Uatu.

Because Uatu could not capture these more dynamic learning and collaboration activities, we were forced to re-examine our perspective on what “counts” as collaborative writing work during the six-week projects. For example, even though Roy was the designated “typist” for his group, he certainly did not contribute a corresponding proportion of the actual writing of the learning standards document. Typing does not equal writing. Examining the learning analytic data captured by Uatu would suggest that Roy and Terry, to take one example, overwhelmingly “wrote” the final learning standards document. But this would not reflect the actual collaborative construction of that document, which was far more complex and nuanced. Other group members sketched ideas on whiteboards and paper, and they meaningfully contributed to the writing of the document orally in face-to-face sessions. We repeatedly observed dynamic collaboration sessions in which all members contributed in meaningful (but not necessarily equal) ways to the final document. However, because groups preferred face-to-face meetings, the gestural, oral, and non-digital contributions that were integral to the collaborative writing of the learning standards document were rendered invisible in the document's edit history. What “counts” as collaborative writing work in such group sessions, therefore, cannot be accurately measured with a traditional approach to assessment or analytics.

5. IMPLICATIONS AND OPPORTUNITIES.

The limitations of visualizing collaborative writing activity with Uatu are obvious: if group members are engaged in complex writing work without writing, their individual contributions will remain invisible to the system. This is actually an encouraging finding, despite being contrary to our expectations and hypothesis. Our participants collaborated on complex knowledge work projects in creative and productive ways, using the systems they had been given to best effect the inquiry-driven work they had undertaken.
In the process, we recognized that Uatu's utility is limited in collaborative writing situations in which participants are likely to work in shared space and where input tasks are delegated to one or two specific individuals. Another limitation of Uatu concerns shorter documents collaboratively written by a few (fewer than five or six) participants. An individual participant's sense of the text is strong in these scenarios; participants readily apprehend the extent of their fellow team members' contributions at any given stage in the project, so Uatu's visualizations yield a representation of something they already implicitly understand. In such scenarios, the utility of learning analytics for formative assessment seems particularly limited.

While Uatu's metrics were not especially useful for our research site, the use of Google Docs as a real-time collaborative writing application was valued across participants. For example, when participants contributed in ephemeral ways to collaboratively written documents, they could see their respective contributions reflected in real time via the operations of a given group's designated “typist.” Several participants noted that seeing such contributions develop in real time caused them to become more metacognitive about how their own feedback played a part in the evolution of the learning standards document. Perhaps more importantly, our key theme should lead us, as educators, to re-evaluate how we assess collaborative writing and learning scenarios. By broadening our understanding of what “counts” as writing work, we can more readily account for meaningful contributions across learning styles. The major challenge for the learning analytics community, therefore, is capturing data about more ephemeral forms of collaboration.

The real utility of a system like Uatu, it seems, lies in visualizing large, complex documents that are collaboratively written by several distributed participants over an extended period of time, where any individual's sense of the text is likely to be overwhelmed by the number of contributors, revisions, and saves. A project manager tracking the development of a complex policy document, for example, would benefit greatly from periodic Uatu visualizations. Rather than laboriously sifting through pages of electronic text, the project manager could simply produce a daily visualization that details ongoing development. This is certainly no substitute for closer inspection; instead, it can help a project manager determine when and how to investigate the developing text more closely and strategically, a scenario in which learning analytics might foster robust formative assessment.

Another clear application of Uatu is in online education, where visualizations of ongoing writing activity may help instructors provide more productive formative feedback and assessment, helping students learn as they work. Where collaborative writing assignments may be seen as onerous in contemporary online courses, a system such as Uatu could actually facilitate better integration of such assignments. Because fully online students are less likely to collaborate face-to-face, Uatu visualizations might better reflect the collaborative construction of knowledge occurring in and through writing work. Yet even in this scenario, our study suggests that important ephemeral forms of ideation and development would still contribute to final deliverables.
These are forms of thinking and doing that remain difficult to capture with current learning analytic systems.

6. ACKNOWLEDGMENTS.

This project was supported by a grant from the Indiana Space Grant Consortium, a Ball State University Emerging Media Initiative Innovation Grant, and the Ball State University Honors College.
