Learning Analytics as Interpretive Practice: Applying Westerman to Educational Intervention


In his disruptive article, "Quantitative research as an interpretive enterprise: The mostly unacknowledged role of interpretation in research efforts and suggestions for explicitly interpretive quantitative investigations," Westerman [1] invited qualitative researchers in psychology to adopt quantitative methods into interpretive inquiry, given that they are as capable as qualitative measures of producing meaning-laden results. The objective of this article is to identify Westerman's [1] key arguments and apply them to the practice of learning analytics in educational interventions. The primary implication for learning analytics practitioners is the need to interpret quantitative analysis procedures at every phase, from philosophy to conclusions. Furthermore, Westerman's [1] argument indicates that learning analytics practitioners and consumers must critically examine any assumption that quantitative methodologies in learning analytics are inherently objective, or that learning analytics algorithms may replace judgment rather than aid it.

"1 Introduction. In traditional cognitive science inquiry, measurement almost always involves significant levels of abstraction away from the phenomena of interest. Usually, observable behavior is of interest because it is assumed to indicate cognitive phenomena. For example, in learning measurement, the factors of interest are unobservable, so behavior observation is used as a proxy. In online learning environments, however, even greater abstraction is required in order to conduct inquiry. Behavior, in most online learning scenarios, is not directly observable. So, second-order proxies that represent directly observable behavior must also be constructed. Furthermore, these second-order proxies are typically encumbered by a severely impoverished vocabulary—a language of actions consisting almost exclusively of mouse clicks and keystrokes. Hence, online and blended learning environments present situations for inquiry that, in most cases, require even more examination of assumptions and methodology than required in traditional cognitive science inquiry. Despite the potential advantages of scale in learning analytics to investigate learning phenomena, great care must be taken in order to account for the philosophy and human judgment behind the measures and constructs employed in a study in order to interpret results in a reasonable manner. 1.1 Westerman’s Interpretive Inquiry and Learning Analytics. The arguments of Michael Westerman [1], though directed towards critics of traditional psychology who eschewed positivism and quantitative methodology, have particular relevance to learning analytics. Westerman invited qualitative researchers in psychology to adopt quantitative methods into their practice, which is no small invitation. Most qualitative researchers avoid quantitative research methods because of their traditional association with positivist philosophy, which purports that control, prediction, objectivity, and universal models are the end goal of science [1]. Qualitative researchers in social science are usually interested in questions that address the meaning of psychological phenomena more than how to replicate them. Westerman argues that quantitative methods, however, are not tied to positivism, and in fact are fundamentally interpretive and meaning-laden. Consequently, researchers interested in questions that address meaning should adopt quantitative methods into their repertoire of inquiry tools. The implications of quantitative methods lacking default objectivity, requiring interpretation, and addressing questions of meaning are a watershed for the practice of learning analytics. Given the multiple levels of abstraction involved in identifying and interpreting behavior in online settings, we contented that Westerman’s arguments regarding interpretive quantitative inquiry have particular relevance to learning analytics practice. 2 The Mismatch Between Positivism and Scientific Inquiry. Mainstream social and physical science are usually associated with positivism [1]. Positivism, as referred to in this article, is a philosophy of science in the tradition of Aguste Comte that assumes that determinate, value-free, causal accounts of phenomena can be made through objective methodology, hypothesis testing, and operational definitions[1-3]. 
A good example of the type of scientific inquiry that a positivist philosophy is likely to produce comes in the words of the well-known physicist Stephen Hawking, who wrote, "If one takes the positivist position, as I do, one cannot say what time actually is. All one can do is describe what has been found to be a very good mathematical model for time and say what predictions it makes" [4]. In one of his central arguments, Westerman [1] questioned the appropriateness of employing positivism in the inquiry of psychological phenomena, given its assumptions of objective methodology and value-free ontology. In the light of learning analytics, positivist philosophy also poses a formidable contradiction between its assumptions and the science that educational interventionists attempt to conduct.

2.1 Positivism and Operational Definitions

So what is the trouble with a philosophy that provides the rationale for objectivity, causality, and prediction in scientific inquiry? The challenge rests on the central assumption that objectivity exists in the practice of science. In social science, operational definitions play a prominent mediating role, defining how phenomena are observed, measured, and analyzed, which can hardly be called objective. Westerman explained that "Notwithstanding nearly ubiquitous references to 'operationalizing' variables and hypotheses about relations between variables, quantitative research procedures as they are actually employed do not objectively translate theoretical ideas about constructs and processes into meaning-free language about procedures" [1]. The challenge that operational definitions pose in research can also be illustrated by taking a closer look at how they create abstractions—caricatures of actual behaviors or psychological phenomena. Similar to many definitions of the term, the Center for Teaching and Learning at the University of Texas defines an operational definition as "a specific statement about how an event or behavior will be measured to represent the concept under study" [5]. The language "to represent" is key here. Operational definitions do not actually define concepts or observable behavior, but act as abstracted mediators of how behaviors and concepts are measured and interpreted. Westerman goes on to say, "In fact, what instruments of this kind provide by way of so-called 'operational definitions' are natural language explanations of each category and examples. Such definitions are very useful, but they are anything but exhaustive. Indeed, they are useless if not employed by a coder with a wealth of background knowledge about the concepts, interpersonal behavior in our culture, and family life as we are familiar with it" [1].

A good example of how operational definitions pose a challenge in online environments occurred during Fast Company's Influence Project [6] in the summer of 2010. The magazine asked its readers to participate by creating a profile on the project's website. Participants gained "influence" according to how many people clicked on their profiles. As the project came to a close, it became apparent that defining influence by the number of clicks a profile received was not the best measure. Many people tried to game the system, so judgment was required to define what constituted a valid click. The project organizers felt in the end that influence would have been better defined by how many participants were able not only to persuade individuals to click on their profiles, but also to convert people who clicked on profiles into creators of their own profiles. Even with the latter definition of influence, however, many nuanced variations of online influence could not have been discovered if such a reduced meaning of influence were used as the sole definition and data point. From the report, we find that Fast Company used other means besides click counts to distinguish among six types of online influence: large existing networks, static advertisements, commoditized celebrity appeal, overt ideology, grassroots activism, and the ability to convince others to participate, which is a much richer account of profile relationships than either number of clicks or number of converts.
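To make the role of judgment concrete, the following is a minimal sketch, written in Python with entirely invented participants and events; it does not reproduce Fast Company's actual procedures. It shows how the same hypothetical event log crowns different "most influential" participants under the two operational definitions of influence discussed above.

```python
# Toy sketch: two operationalizations of "influence" over the same hypothetical log.
# Participant names and events are invented for illustration only.
from collections import Counter

# Each event: (profile that attracted the visit, visitor id, did the visitor create a profile?)
events = [
    ("ana", "v1", False), ("ana", "v2", False), ("ana", "v3", False),
    ("ben", "v4", True),  ("ben", "v5", True),
]

# Operationalization 1: influence = number of clicks a profile received.
clicks = Counter(profile for profile, _, _ in events)

# Operationalization 2: influence = number of visitors converted into profile creators.
conversions = Counter(profile for profile, _, converted in events if converted)

print(clicks.most_common(1))       # [('ana', 3)] -> "ana" is most influential
print(conversions.most_common(1))  # [('ben', 2)] -> "ben" is most influential
```

Under one definition "ana" leads; under the other, "ben" does. Neither ranking is wrong in a mechanical sense, but choosing between them is an act of judgment about what influence means.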
The point here is not to avoid systematic inquiry in scientific observation and analysis of data, but rather to appreciate that, contrary to positivist assumptions of objectivity, interpretation is required at the most fundamental level of scientific inquiry, given the inseparable part that human judgment plays in defining the constructs that researchers examine. Learning analytics practitioners, in particular, should avoid placing confidence in the idea that observational data collected through web-analytics measurement tools objectively map onto the constructs they are investigating through the lens of operational definitions.

2.1.1 Positivism, Operational Definitions, and Learning Analytics

Even seemingly simple constructs of potential interest to learning analytics researchers, like "time on task," must first be constructed (hence the name) before they can be recognized, recorded, and analyzed. When a student sits silently in a comfortable chair looking intently at the page of a book, this behavior indicates that he might be attending to the book's contents. However, he might also be daydreaming about his girlfriend, or worrying about his father's illness, or thinking about an essentially infinite list of other things. "Looking intently at the page" is only a proxy for reading. In online settings, second-order proxies are further abstracted away from the true subject of a researcher's interest. As an example, take a researcher who is interested in the amount of time an online student spends "on task." Rather than observing a student sitting silently in a comfortable chair looking intently at the page of a book, the researcher may have access to a "page load" event and a "page unload" event in a webserver log. These events outline a rough window of time. But was the browser window containing the text the researcher hoped the student would read even in focus between the two events? Was the student even in front of the computer while the browser window was in focus? Was the student looking at the browser window, or texting, or reading a magazine? Technological tricks may be able to help us answer these questions. And when we overcome these many obstacles, we have only arrived back at the original level of uncertainty present in direct observation. Could reasonable people create meaningfully different operationalizations of the construct "time on task" in online settings? Could different operationalizations of the construct applied to the same data produce different answers to research questions? If the answers to both questions are yes, as we believe they are, the purportedly objective process of conducting learning analytics research is built on a foundation of subjectivity.
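As a rough illustration, consider the following sketch in Python, operating on an invented event log whose field names and timestamps are hypothetical rather than drawn from any particular learning system. Two defensible operationalizations of "time on task" applied to the same events yield markedly different totals.

```python
# Hypothetical log for a single page visit; "t" is seconds since the visit began.
logs = [
    {"event": "page_load",   "t": 0},
    {"event": "blur",        "t": 120},   # browser window loses focus
    {"event": "focus",       "t": 1020},  # window regains focus 15 minutes later
    {"event": "page_unload", "t": 1080},
]

# Operationalization A: time on task = unload time minus load time.
time_on_task_a = logs[-1]["t"] - logs[0]["t"]  # 1080 seconds

# Operationalization B: count only intervals in which the window had focus.
time_on_task_b = 0
focused_since = None
for entry in logs:
    if entry["event"] in ("page_load", "focus"):
        focused_since = entry["t"]
    elif entry["event"] in ("blur", "page_unload") and focused_since is not None:
        time_on_task_b += entry["t"] - focused_since
        focused_since = None

print(time_on_task_a, time_on_task_b)  # 1080 vs. 180 seconds for the same student
```

Both definitions are defensible, yet one reports eighteen minutes "on task" and the other three; the choice between them is interpretive, and neither tells us whether the student was actually reading.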
2.2 Positivism and Methodology

Another way in which positivist philosophy diverges from the practice of science is in the assumption that methods and instruments objectively display what is being measured; in Westerman's words, "structured observations do not provide a way to examine hypothesized associations in a transparent manner" [1]. As Westerman went on to explain, "Interpretation plays a role when it comes to measurement, which lies at the heart of quantitative research. This point is obvious regarding research based on clinical judgments about such global constructs as 'irritable' or 'submissive.' Research of this sort may employ the technical machinery of Likert-type scales or Q-sort procedures, but it clearly is based on rich appreciation of the meanings of human behavior" [1]. Just because a concept is associated with a number does not mean that the association is objective. Judgment, and hence subjectivity, is always involved when assigning a metric to phenomena in the real world. Failing to recognize the nature of human judgments in scientific methodology has significant consequences for the validity of research conclusions. Joseph Rychlak illustrated these consequences well in his treatment of Philipp Frank's Philosophy of Science: The Link Between Science and Philosophy: "The obvious lesson is that science is not only a methodological endeavor. Constant attention must be given to theoretical considerations—or, as they might be called, metaphorical or philosophical considerations…[In] Newtonian science, the uncritical acceptance of empirical data without sophisticated study of assumptions led to a 'theorization' of scientific method—that is, the assumptions of the method were projected onto the world as a necessary characteristic and then 'proved so' by the results of these very same methods [7]… they constantly fall into the errors of…confusing what is their methodological commentary with their theory of explanation" [8]. In other words, not accounting for method effects makes it impossible to evaluate whether one's conclusions are accurate or valid, given an inability to distinguish between the results that represent the psychological phenomena and those that represent error.

2.2.1 Positivism, Methodology, and Learning Analytics

Frank's warning about "the uncritical acceptance of empirical data without sophisticated study of assumptions" is even more important in the context of learning analytics research. Because learning analytics data is so inexpensively and easily captured, large collections of data are becoming available for study. Faced with access to large collections of data and powerful open source analysis software, researchers will be subject to a variety of temptations to poke about in this data in thoroughly unprincipled ways. While these fishing expeditions may uncover seemingly interesting relationships between constructs, without an interpretive framework grounded in specific theoretical commitments, the data tail may come to wag the theory dog.
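The risk can be illustrated with a small simulation: a sketch using purely synthetic, random data, with no real learning records or findings implied. When enough atheoretical behavior counts are screened against an outcome, some of them will correlate "notably" by chance alone.

```python
# Sketch of the "fishing expedition" problem: scanning many atheoretical behavior
# counts against an outcome in pure noise still surfaces correlations that look
# interesting. All data below are randomly generated.
import random

random.seed(1)
n_students, n_behaviors = 100, 200

grades = [random.gauss(0, 1) for _ in range(n_students)]
behaviors = [[random.gauss(0, 1) for _ in range(n_students)]
             for _ in range(n_behaviors)]

def corr(xs, ys):
    # Pearson correlation, computed directly from the definitions.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

strongest = max(abs(corr(b, grades)) for b in behaviors)
print(round(strongest, 2))  # a "notable" correlation found in pure noise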
2.3 Positivism and Ontology

Another problem with positivism is how its ontology eschews meaning. As Stephen Hawking's words above illustrate, positivism provides no framework for examining the meaning of phenomena, which is a necessary consideration when applying research results to the situations of individuals. Hawking's belief that prediction is the chief business of science is a prime example of the type of philosophy that Philipp Frank [9] rebuked when saying that science is more than objective methodology. As the philosopher of science noted, "scientific findings (validated predictions or observations) outstrip the common sense understanding of them, taking us back to that condition earlier in history where we could control and predict without knowing why, what, or how such regularities in events were really brought about. Man predicted his course of travel under the stars, controlled the crops through practical knowhow, and cured himself of certain diseases centuries before there was anything like a scientific account of these beneficial outcomes" [8]. In other words, mathematical models, control, and prediction are not sufficient to answer questions about why something happens or what it means. Furthermore, given the irreducibly interpretive nature of inquiry, not attempting to answer questions of meaning and purpose may easily lead to the wrong conclusion, even if one is able to replicate observed behavior.

2.3.1 Positivism, Ontology, and Learning Analytics

Even if the atheoretical application of learning analytics techniques can uncover stable relationships between an impoverished lexicon of online behaviors and (equally subjective measures of) academic performance, we have only arrived back at "that condition earlier in history where we could control and predict without knowing why." We will never be able to understand why certain behavioral patterns relate to academic success or struggle without engaging in explicitly interpretive work—interpretive work that draws on a wealth of background, theoretical, and empirical knowledge about the learning process. Inasmuch as a primary goal of learning analytics is to support and improve learning in human beings, the findings of learning analytics research must be translated into concrete interventions with human beings before the value of learning analytics is realized. And if the learning analytics path leads inexorably to interventions with human beings, we must consider learning analytics to be an inherently ethical activity. In this context, asking explicitly interpretive questions of "why?" and "how?" with regard to the findings of learning analytics research gains importance beyond the requirements of responsible science and crosses firmly into the realm of ethics, requiring us to work proactively, through understanding issues of why and how, to protect the interests of the people with whom learning analytics may suggest we intervene.

3 Learning Analytics, Hermeneutics, and Interpretive Inquiry

For the reasons outlined above, a positivist view of learning analytics appears to be a combination that is neither tractable, desirable, nor ethical. If positivism's goal of defining universal, perfect models of phenomena is not achievable because of the subjectivity inherent in the use and interpretation of inquiry conventions, constructs, and tools, then what philosophy offers a reasonable way to approach science? Westerman proposed hermeneutics as an ideal philosophy of inquiry. It brings meaning and interpretation to the forefront of its explicit assumptions: "Our accounts must always refer to what people are doing, that is, to meaningful practices, rather than attempt to fully explain the meaning involved in what they are doing in some other terms" [1]. But how does one arrive at meaningful explanations of phenomena?
Westerman identified multiple vehicles for approaching meaning in scientific inquiry, including metaphors and reductionism. As he pointed out, however, "Note that these...positions about meaning share something in common. They represent different ways of maintaining that the nature of objects is such (whether that nature is characterized by abstract meanings or the absence of meaning) that a subject reflecting on those objects from a removed vantage point could arrive at what Wittgenstein [10] called crystalline understanding, that is, a complete, determinate account" [1]. Approaches to meaning that assume an objective, removed observer do not fit within the hermeneutic framework of using "practical activity [as] bedrock" [1]. Hermeneutics differs from crystalline-understanding accounts of meaning by assuming that behavior is concrete and part of practical, meaning-laden activities. Behavior is concrete because the people "behaving" act and live in the world and within a social context. As we compare Westerman's non-examples of meaning with his proposed interpretation of practical activity, the case for hermeneutics in learning analytics will begin to emerge.

3.1 Losing Meaning through Metaphor

One way of ascribing meaning to phenomena is by calling out similarities between new and known phenomena. Westerman identified abstractions and metaphors as a path that many researchers take to achieve this type of meaning, though he felt it was misleading: "According to the tradition—rationalism, in particular—meaning refers to abstract structures that lie behind the diversity of events. Philosophers proceeding along the lines of the tradition locate the capacity to appreciate such meanings in the subject's mind, and psychologists follow suit with ideas about how there are such abstract structures as scripts or rules inside the mind" [1]. For example, the information processing (IP) metaphor in cognitive science stands out as a primary instance of making the metaphor more real than the observed behavior. IP is a metaphor that cognitive science uses to describe mental processes. It was derived from computer science [11] and views the world as information to be inputted into the mind in order to be encoded for long-term memory [12]. Because computers have inputs, outputs, processors, and memory, so must humans. Right? Unfortunately for cognitive science, there is no rationale beyond preference alone for using the IP metaphor to explain the mind. Nevertheless, cognitivism has made IP the vernacular for describing its inquiry, just as the steam engine was the preferred mind metaphor before the advent of computers (a metaphor which we "enlightened, 21st century scholars" now scoff at as unbelievably puerile). The problem with metaphors and abstractions is that they all, by design, illustrate only some properties of the situation or object, while obfuscating others. Consequently, a metaphor cannot be a theory of all things. Whichever learning metaphor is employed, some type of learning will not find a place to be adequately described. In the case of the IP metaphor, it can explain psychological phenomena as long as they superficially appear similar to how a computer processes data. As with all abstractions, IP loses its explanatory power as the psychological phenomena we attempt to explain diverge from the affordances [13] of the metaphor. IP reaches its limits in accounting for play, creativity, exploration, etc.—things a computer cannot do.
Hence, employing abstractions gives meaning, but the meaning portrayed may not be sufficiently representative of the phenomena under study.

3.2 Losing Meaning through Reductionism

Another way of assigning meaning to phenomena is by reducing them to supposedly fundamental components, which is to say defining not what something is, but what it is made of. As previously discussed, linguistic operationalism is a prime example of reductionism. But behavior can be reduced in other ways; as Howard Gardner described, "It seems to some observers that an account of the classical psychological phenomenon of habituation in terms of neurochemical reactions is an important step on the road to the absorption of cognition by the neurosciences. Once the basic mechanisms of learning have been described in this way, no additional level of explanation will be needed; in a way that would please such behaviorally oriented philosophers as Richard Rorty, these reductionists believe there is really nothing more to be said when neurophysiology has had its say" [11]. If the meaning of behavior can be reduced to no meaning by cutting it down to neurophysiology, the same could be said of behavior on the electronic networks through which students in online and blended learning environments interact. The implication would be that behavior is no more than the sum of its frequency counts, which sheds no light on the meaning of psychological or learning phenomena. This is by no means a call to avoid inquiry involving quantitative analysis. As Westerman explained, "The key point here is that even though mathematics enters into the picture via data analysis, our examination of phenomena is not mathematical in nature. Although this statement may seem strange, it is accurate because the mathematical aspects of the research procedures are embedded in the larger context of the meaningful, interpretive procedures" [1]. Aspects of behavior may be measured, counted, and analyzed, but the philosophy behind the inquiry determines whether one interprets an ability as the sum of the observational data or as a meaning-laden practice in the lives of individuals.

3.3 Finding Meaning through Concreteness

Addressing meaning through interpretive inquiry requires looking at practical behaviors. "Practical" points to how the behavior is concretely embedded in a social practice. As Westerman eloquently stated, "We appreciate the meaning of objects of inquiry from the 'inside.' Their significance always refers to the roles they play in the world of practices in which we are already engaged. This locates meaning in the world (not the dead world of brute events, but the living world of practices), not in the mind. As a result, meanings are concrete, not abstract—an impossibility from the point of view of the philosophical tradition, but a central tenet of a perspective that takes practical activity as the starting point" [1]. But how does viewing the significance of behavior as concrete change how inquiry is performed? Westerman noted that "Recognizing that behaviors are parts of practices, however, leads to advocating the use of meaning-laden measurement procedures, because the significance of behaviors depends on the role they play in practices" [1]. So, interpretive inquiry must include a way to address the meaning that results from behavior in context. Westerman gave the example of codifying a heated conversation either through relational codes such as "A yells at B" or by decibel level. He suggested that the "objective measure" can be misleading if, for example, "A says something endearing to B while the two stand on a corner with noisy traffic going by" [1].
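Westerman's example can be rendered as a toy sketch, with invented utterances and an arbitrary loudness threshold, in which a purely acoustic coder and a meaning-laden relational coder classify the same utterance differently.

```python
# Toy illustration of Westerman's decibel-level example. The data, threshold,
# and labels are invented; this is not a real coding scheme.
utterances = [
    {"speaker": "A", "decibels": 85, "content": "endearing remark",
     "setting": "noisy street corner"},
    {"speaker": "A", "decibels": 85, "content": "angry accusation",
     "setting": "quiet kitchen"},
]

def code_by_decibels(u, threshold=80):
    # Purely acoustic operationalization: loud speech is coded as yelling.
    return "A yells at B" if u["decibels"] > threshold else "A speaks to B"

def code_relationally(u):
    # Meaning-laden coding requires judging the content of the utterance in context.
    return ("A says something endearing to B"
            if u["content"] == "endearing remark" else "A yells at B")

for u in utterances:
    print(u["setting"], "->", code_by_decibels(u), "|", code_relationally(u))
```

The acoustic coder labels both utterances identically because they are equally loud; only the relational coder, drawing on background knowledge of what was said and where, distinguishes the endearing remark from the accusation.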
The question of meaning in inquiry brings into focus the relevance of research to how humans carry out their lives. In learning analytics, could meaning be assessed just by correlating grades to behavior counts? Could confirmatory factor analysis reasonably attribute the indicator variables to factors without looking at the meaning of the items used in the data-gathering instrument? The concrete human experience weaves the interpretation of results into the fabric of conclusions, not at the loss of objectivity (inquiry is not objective in the first place), but in the acquisition of meaning, plausibility, and authenticity.

4 Conclusion

Learning analytics is an exciting new field of research with the potential to drastically improve learning. Online learning systems have become the educational equivalents of physics' Large Hadron Collider, generating massive amounts of quantitative data that can be subjected to a wide variety of mathematical analyses. Armed with these huge data sets and powerful computational tools, educational researchers face a temptation to regress toward positivism in their approach to inquiry. In this article we have pointed out problems with the positivist approach in social science generally and in the context of learning analytics specifically. As Westerman presented the foil, "The empiricist wing of the tradition offers…the idea that the apparent meaningfulness of events can be reduced to chains of brute occurrences, behaviors, and sense data. Psychologists turn to this idea when they argue that they can operationalize constructs and hypotheses" [1]. Nevertheless, meanings are concrete, not abstract, because they are located in the act of participating in the practical activity of everyday life. Furthermore, science does not equal methodology. Methods alone cannot answer our questions. Instead, researchers must combine philosophy and science in a practice of meaning-laden, interpretive inquiry in order to provide answers to difficult questions, such as how observed patterns in the population apply to the cases of individuals. Consequently, we have argued that hermeneutics provides a more appropriate philosophical framework in which to conduct learning analytics research. We hope this article will catalyze a critical discussion in the learning analytics field, enabling researchers and practitioners alike to more effectively support learning in all its forms.
