"A large body of research in the learning sciences has focused on students’ commonsense science knowledge—the everyday knowledge of the natural world that is gained outside of formal instruction. Although researchers studying commonsense science have employed a variety of methods, one-on-one clinical interviews have played a unique and central role. The data that result from these interviews take the form of video recordings, which in turn are often compiled into written transcripts, and coded by human analysts. In my team’s work on learning analytics, we draw on this same type of data, but we attempt to automate its analysis. In this paper, I describe the success we have had using extremely simple methods from computational linguistics—methods that are based on rudimentary vector space models and simple clustering algorithms. These automated analyses are employed in an exploratory mode, as a way to discover student conceptions in the data. The aims of this paper are primarily methodological in nature; I will attempt to show that it is possible to use techniques from computational linguistics to analyze data from commonsense science interviews. As a test bed, I draw on transcripts of a corpus of interviews in which 54 middle school students were asked to explain the seasons."
"1. INTRODUCTION. Much of the recent interest in learning analytics has been driven by the great surge in the amount and kinds of data that are available. This paper, in contrast, applies learning analytic techniques to a type of data that has a long history, and that predates recent technological advances. For the last few decades, a large body of research in the learning sciences has focused on students’ commonsense science knowledge—the everyday knowledge of the natural world that is gained outside of formal instruction. Although researchers studying commonsense science knowledge have employed a variety of methods, one-on-one clinical interviews have played a unique and central role. The data that result from these interviews take the form of video recordings, which in turn are often compiled into written transcripts, and coded by human analysts. In my team’s work on learning analytics, we draw on this same type of data, but we attempt to automate its analysis. In this paper, I describe one part of this work. The automated analyses I present here are not intended to code the data using categories developed by human analysts. Rather, these analyses are employed in an exploratory mode, as a way to discover student conceptions in the data. Furthermore, my goal in this paper is not to contribute new results to research on commonsense science. Rather, my aims are primarily methodological in nature; I will attempt to show that it is possible to use relatively simple techniques from computational linguistics to analyze the type of data that is typically employed by researchers in commonsense science. As a test bed, I draw on transcripts of a corpus of interviews in which 54 middle school students were asked to explain the seasons. It should be emphasized that it is not at all obvious that it should be possible to analyze data of this sort using simple computational techniques. Unlike some other applications in learning analytics, the total amount of data I have is relatively small. Furthermore, the speech that occurs in commonsense science interviews can pose particular difficulties for comprehension. Student utterances are often halting and ambiguous. Furthermore, gestures can be very important, and external artifacts such as drawings are frequently referenced. However, our analysis algorithms only have access to written transcripts of the words spoken by participants. Even with all of this complexity, my general approach is to go as far as possible with simple methods, before proceeding to more complex methods. Thus, the analyses I describe here make use of extremely simple methods from computational linguistics— methods that are based on rudimentary vector space models and simple clustering algorithms. 2. LITERATURE REVIEW. 2.1 Commonsense Science. It is now widely accepted that many of the key issues in science instruction revolve around the prior conceptions of students. This focus on commonsense science leads to a perspective in which the central task of science instruction is understood as building on, adapting, and, when necessary, replacing students’ prior knowledge. One outcome of this focus has been the growth of a veritable industry of research on students’ prior conceptions. The bibliography compiled by Pfundt and Duit [1], which lists literature on the science conceptions of teachers and students, provides one measure of the scale of this effort. 
As of early 2009, the bibliography had over 8300 entries, spanning a wide range of scientific disciplines, including, for example, what students believe about the shape of the earth [2], evolution [3], and nutrition [4]. In discussing the literature on commonsense science, it has become commonplace to distinguish two theoretical poles. At one extreme is the theory-theory perspective. According to this perspective, commonsense science knowledge consists of relatively well-elaborated theories [5]. At the other extreme is the knowledge-in-pieces (KiP) perspective. In this perspective, it is assumed that: (a) commonsense science knowledge consists of a moderately large number of elements—a system—of knowledge and (b) the elements of the knowledge do not align in any simple way with formal science domains [6, 7]. I believe that the computational methods described in this paper should be of interest to a broad range of researchers who study commonsense science and who adopt a range of theoretical perspectives. However, the exploration of computational methods presented in this paper was biased by my own theoretical perspective, which lies closer to the KiP pole. As I hope will become evident, my exploration of computational methods has been driven by a desire to get at the more basic knowledge—the pieces—that I believe comprise commonsense science knowledge. And I have attempted to capture the dynamics that unfold as students construct explanations during an interview.
2.2 Vector space models and their applications in education research.
Generally speaking, the goal of this work is to use computational techniques in order to “see” student conceptions in transcripts of commonsense science interviews. There are many techniques from computational linguistics that could be employed in this way. The techniques I will use are based primarily on a type of vector space model [8]. In vector space models, the meaning of a block of text—a word, paragraph, essay, etc.—is associated with a vector, usually in a high dimensional space. Two blocks of text have the same meaning to the extent that their vectors are the same. In this way, a vector space analysis makes it possible to compute the similarity in meaning between any pair of words or blocks of text. In Section 4, I will describe, in some detail, the algorithms employed in the particular analyses used in this work. One particular variant of vector space model, Latent Semantic Analysis (LSA), has had increasing prominence across a range of disciplines and applications [9-11]. LSA incorporates several innovations that distinguish it from the most basic form of vector space analysis; most centrally, it makes use of an auxiliary training corpus that provides information about the wider contexts in which terms appear, and it reduces the dimensionality of the vector space, which has the effect of uncovering latent relations among terms. Vector space methods have seen increasing use in educational research. These applications have been largely dominated by uses of LSA. In fact, outside of information retrieval, some of the earliest and most persistent uses of LSA have been in applications related to education [12]. These applications have been of two broad types. First, LSA has been used as a research tool by educational researchers—that is, as a means of analyzing data, in order to study thinking and learning. Second, LSA has been used as a component of intelligent instructional systems.
The majority of these educational applications, across both types, have been focused on the teaching of reading and writing. For example, LSA-based systems have been employed to automatically score essays written by students [10, 13]. In a number of applications, students are asked to summarize a passage or document that they have just read, and an LSA-based system is used to evaluate these summaries. In one such application, Shapiro and McNamara [14] had students read and summarize portions of psychology textbooks. Using LSA, these summaries were then compared both to the text the students read, and to model essays composed by experts. Similarly, Magliano and colleagues conducted a wide range of studies in which LSA was used to assess the strategies employed by readers and, more broadly, their reading skill [15, 16]. In many of these uses of LSA, the data consisted of written text produced by participants in the research. However, in some instances, LSA has been applied to transcriptions of verbal data. For example, in their study mentioned above, Shapiro and McNamara [14] found that LSA could be applied successfully both to written summaries of the textbook and to transcriptions of verbal summaries given by students. Similarly, Magliano and Millis [15] applied LSA to think-aloud protocols that students produced as they read passages of text. As mentioned above, LSA has been used as a component of intelligent instructional systems. For example, intelligent systems have been constructed that provide feedback to students on summaries that they write of a given text passage [17, 18]. One LSA-based system, AutoTutor, is of particular interest here because it has been applied to teach science-related subject matter [19, 20]. AutoTutor teaches physics by first posing a problem or question. The student responds by typing a response into the system. The system then evaluates that response by using LSA to compare the student’s text to a set of predefined expectations and misconceptions; the expectations are pre-specified components of a correct response and the misconceptions are possible erroneous ideas that might be expressed by the student. Based on this analysis, the system responds by posing further questions to the student, either to help correct the misconceptions, or to draw out more components of a complete answer to the original problem. I want to say a bit about where the work described in the present paper fits within the space of uses of vector space models in education. First, in this work, statistical natural language processing (SNLP) is used as an analytic tool for researchers; I will not be describing an LSA-based system that is used by students. Second, I apply my analyses to verbal data. As mentioned above, many applications of vector space models in education use text that is typed by a student, either in the form of an essay or short responses. Furthermore, prior research that has worked with verbal data has employed data that is very different from that employed in the present work. For example, the work by Shapiro and McNamara [14] and Magliano and colleagues [15, 16], which I mentioned above, employed a more constrained type of think-aloud protocol, focused on passages of text that were just read. In contrast, the verbal data employed in this work consists of relatively free-flowing discussions involving back-and-forth between an interviewer and interviewee. Third, in all these applications, answers given by students, whether in written or verbal form, were evaluated by comparison to a predefined model.
This model might be, for example, some portion of the text just read, or an ideal answer constructed by the researcher. In contrast, as mentioned above, I will describe techniques for automatically inducing a set of conceptions from the data itself. Finally, I want to emphasize one other respect in which this work differs from prior work in education that made use of LSA; namely, I am not using LSA! As noted above, I believe it makes sense to begin with simpler techniques, and then to pursue more sophisticated methods as it seems necessary.
3. THE INTERVIEWS.
3.1 Subject matter and interview design.
The data used in this work was drawn from a larger corpus collected by the NSF-funded Conceptual Dynamics Project (CDP). For the present work, I draw from a set of 54 interviews in which students were asked to explain Earth’s seasons [21]. The seasons have long been a popular subject of study in research on commonsense science, and a significant number of studies have set out to study student and adult understanding in this area [22-26]. Our seasons interview always began with the interviewer asking “Why is it warmer in the summer and colder in the winter?” After the student responded, the interviewer would, if necessary, ask for elaboration or clarification. The interviewer had the freedom, during this part of the interview, to craft questions on-the-spot in order to clarify what the student was saying. Next, the student was asked to draw a picture to illustrate their explanation. Then, once again, the interviewer could ask follow-up questions for clarification. Our interviewers were also prepared with a number of specific follow-up questions to be asked, as appropriate, during this part of the interview. Some of these questions were designed as challenges to specific explanations that students might give.
3.2 Overview of student responses.
In prior work with our seasons data, Conceptual Dynamics researchers have adopted a strongly KiP perspective [21]. We assume that students possess a system consisting of many knowledge elements—the “pieces”—that may potentially be drawn upon as they endeavor to explain the seasons. When a student is asked a question during an interview, some subset of these elements are activated. The student then reasons based on this set of elements, and works to construct an assemblage of ideas in the service of explaining the seasons. We refer to this assemblage of ideas as the dynamic mental construct or DMC, for short. For the purpose of the present work, it is not a bad approximation to think of a DMC as a student’s current working explanation of the seasons. So, throughout this manuscript, I will use the terms “DMC” and “explanation” interchangeably. The explanations of the seasons given by the students we interviewed varied along a number of dimensions. But it is helpful, nonetheless, to begin with a number of reference points, in the form of a few categories of explanations (DMCs). The first category, closer-farther, is illustrated by the diagram in Figure 1a. In closer-farther explanations, the earth is seen as orbiting (or moving in some other manner) in such a way that it is sometimes closer to the sun and sometimes farther. When the earth is closer to the sun, it experiences summer; when it is farther away, it experiences winter. The second category of DMC, side-based, is illustrated in Figure 1b. Side-based explanations are usually focused on the rotational motion of the earth, rather than its orbital motion.
In side-based explanations, the earth rotates so that first one side, then the other, faces the sun. The side facing the sun at a given time experiences summer, while the other side experiences winter.
Figure 1. Closer-farther, side-based, and tilt-based DMCs.
The third and final category of DMC, tilt-based, is depicted in Figure 1c. Tilt-based DMCs depend critically on the fact that the earth’s axis of rotation is tilted relative to a line connecting it to the sun. In a tilt-based explanation, the hemisphere that is tilted toward the sun experiences summer and the hemisphere that is tilted away experiences winter. This category includes the normative scientific explanation, as well as some non-normative explanations. As discussed in Sherin et al. [21], during an interview, students tend to move among DMCs. In some cases, students do begin the interview with what appears to be a fully-formed explanation. In other cases, a student might construct an explanation during the interview, slowly converging on an explanation they find reasonable. Finally, students can be seen to shift from one DMC to another, sometimes in response to a challenge from the interviewer.
3.3 Example interviews.
Now I will briefly discuss a few example interviews. These examples will play a role as important reference points when I discuss the automated analysis. In this first example, a student, Edgar, began by giving an explanation focused on the fact that the Earth rotates, and he stated that light would hit more directly on the side facing the sun. He made the drawing shown in Figure 2, as he commented:
E: Here’s the earth slanted. Here’s the axis. Here’s the North Pole, South Pole, and here’s our country. And the sun’s right here [draws the circle on the left], and the rays hitting like directly right here. So everything’s getting hotter over the summer and once this thing turns, the country will be here and the sun can’t reach as much. It’s not as hot as the winter.
After a brief follow-up question by the interviewer, Edgar seemed to recall that the Earth orbited the sun, in addition to rotating. He then shifted to a closer-farther type explanation:
E: Actually, I don’t think this moves [indicates Earth on drawing] it turns and it moves like that [gestures with a pencil to show an orbiting and spinning Earth] and it turns and that thing like is um further away once it orbit around the s- Earth- I mean the sun.
I: It’s further away?
E: Yeah, and somehow like that going further off and I think sun rays wouldn’t reach as much to the earth.
Thus Edgar’s interview illustrates a case in which a student began with a side-based explanation and transitioned to a closer-farther explanation. It is also worth noting that Edgar’s language was halting, imprecise, and made significant use of gestures and his drawings. These are features that might well pose difficulties for automated analysis.
Figure 2. Edgar’s drawing.
I want to briefly introduce interviews with two other students from the corpus, both of whom gave variants of tilt-based explanations. The first example is from an interview with Caden.
I: So the first question is why is it warmer in the summer and colder in the winter?
C: Because at certain points of the earth’s rotation, orbit around the sun, the axis is pointing at an angle, so that sometimes, most times, sometimes on the northern half of the hemisphere is closer to the sun than the southern hemisphere, which, change changes the temperatures. And then, as, as it’s pointing here, the northern hemisphere it goes away, is further away from the sun and gets colder.
I: Okay, so how does it, sometimes the northern hemisphere is, is toward the sun and sometimes it’s away?
C: Yes because the at—I’m sorry, the earth is tilted on its axis. And it’s always pointed towards one position.
Note that, in Caden’s explanation, the tilt of the earth affects temperature because the hemisphere tilted toward the sun is closer to the sun, and the hemisphere tilted away is farther from the sun. (This is not correct.) In contrast, another student, Zelda, gave a tilt-based explanation, but her explanation made use of the fact that the tilt of the earth causes rays to strike the surface more or less directly, and this is what explains the seasons.
Z: Because, I think because the earth is on a tilt, and then, like that side of the Earth is tilting toward the sun, or it’s facing the sun or something so the sun shines more directly on that area, so it’s warmer.
Thus, Caden and Zelda both gave tilt-based explanations, but they differed in how exactly the tilt of the earth affected the seasons. For Caden, the tilting causes one hemisphere or the other to be closer to the sun. For Zelda, the tilting causes parts of the earth to receive the sun’s rays more or less directly. This illustrates some of the types of features we would like the automated analysis to resolve.
4. VECTOR SPACE ANALYSIS OF THE SEASONS CORPUS.
In order to capture students’ conceptions expressed in the seasons interviews, my team explored the use of techniques from statistical natural language processing. In particular, we explored the use of vector space models, augmented with cluster analysis. These choices make sense for a number of reasons. As mentioned above, one type of vector space model, LSA, has already been employed, with some success, in applications that are in some respects close to my own [10, 14-17, 19, 27]. In addition, initial attempts by Gregory Dam and Stefan Kaufmann to apply LSA to my research team’s data proved promising, and thus justified further exploration [28]. Dam and Kaufmann employed techniques based on one variant of LSA to apply a given coding scheme to an earlier subset of this corpus. The work described in this manuscript extends the work of Dam and Kaufmann in several respects. First, Dam and Kaufmann’s analysis did not discover student conceptions in the data corpus. Instead, it began with the conceptions identified by human analysts and used those conceptions to code transcript data. Second, unlike Dam and Kaufmann, I will be exploring the use of simpler vector space models, rather than LSA. Third, Dam and Kaufmann were primarily concerned with coding at the level of students. Each student was coded by the computer in terms of just one of three possible explanations of the seasons. The success of this analysis was judged by comparison to an analysis of these same transcripts by human coders, restricted to the same set of three explanations. However, this type of analysis represented a drastic simplification over our earlier qualitative analyses of the corpus. As exemplified in the description of Edgar’s interview above, the explanations given by students over the course of an interview were quite clearly dynamic. Thus, assigning a single code to each transcript was often a dramatic simplification. In this new work, all of my analysis is done at a finer time scale; I look to identify student ideas only in small segments of text.
In the rest of this section, I describe an exploratory analysis of our data. Here, I restrict myself to one pathway through the analysis, using one set of parameters and algorithms. In Section 5, I briefly describe the results I obtain when employing different parameters and algorithms.
4.1 The basics: Converting text to vectors.
The central idea underlying any vector space model of text meaning is relatively simple: Every passage of text—whether it is a word, sentence, or essay—is mapped to a single vector. The direction in which this vector points is taken to be a representation of the meaning of the passage. More precisely, the similarity between two passage vectors is quantified as the cosine of the angle between the two vectors (or, equivalently, the dot product of the vectors if we assume the vectors are of unit length).
Table 1. Partial vocabulary and sample counts.
The question, of course, is how we go about converting a passage of text to a vector. In the most rudimentary forms of vector space models, this mapping is accomplished in a rather straightforward manner. First, we look across the entire corpus of text that we wish to include in our analysis, and we compile a vocabulary, that is, a complete list of all of the words that appear somewhere in the corpus. This vocabulary is then pruned using a “stop list” of words. This stop list consists primarily of a set of highly common “non-content” words, such as the, of, and because. For the corpus used in this work, this resulted in a vocabulary consisting of 647 words. (The stop list used contained 782 terms.) If the vocabulary is sorted from the most common to least common words, the top 10 words correspond to the list shown in the left-hand column of Table 1. This vocabulary can be used to compute a vector for a passage from an interview transcript as follows. First, we take the transcript and remove everything except the words spoken by the student. Any portion of the remaining text can now be converted to a vector. To do so, we go through the entire vocabulary, counting how many times each word in the vocabulary appears in the text being analyzed. When this is done, we get a list of 647 numbers. If, for example, we process the portion of Caden’s transcript presented above, we obtain the values listed in the middle column of Table 1 for the 10 most common words in the larger corpus. Finally, in most vector space analyses, the raw counts are modified by a weighting function. In the analyses reported on in this section, I replaced each count with (1 + log(count)). This has the effect of dampening the impact of very frequent words. (Raw counts of 0 were just left as 0.) Appropriately weighted values are shown in the third column of Table 1.
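To make this recipe concrete, the following is a minimal sketch in Python. It illustrates the general procedure rather than reproducing the project’s actual code: the tokenizer is simplified, the STOP_WORDS set is a tiny stand-in for the 782-term stop list, and the use of the natural logarithm in the weighting function is an assumption, since the text does not specify a base.

```python
import math
import re
from collections import Counter

# A tiny stand-in for the actual stop list, which contained 782 terms.
STOP_WORDS = {"the", "of", "and", "because", "a", "to", "is", "it", "so"}

def tokenize(text):
    """Lowercase the text, keep word tokens, and drop stop words."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP_WORDS]

def build_vocabulary(corpus_texts):
    """Compile a sorted list of every content word that appears in the corpus."""
    return sorted({w for text in corpus_texts for w in tokenize(text)})

def passage_vector(text, vocab):
    """Map a passage to a weighted count vector over the vocabulary.
    Each nonzero raw count c is replaced by 1 + log(c); zero counts stay zero."""
    counts = Counter(tokenize(text))
    vec = []
    for w in vocab:
        c = counts[w]
        vec.append(1.0 + math.log(c) if c > 0 else 0.0)
    return vec

def cosine(u, v):
    """Similarity of two passage vectors: the cosine of the angle between them."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u) * sum(b * b for b in v))
    return dot / norm if norm else 0.0
```

Two passages whose vectors point in similar directions are then taken to have similar meanings, with a cosine near 1 indicating nearly identical word use.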
4.2 Using passage vectors to discover meanings in the data corpus.
We now have a means of mapping a passage of text to a vector consisting of 647 numbers. This capability can now be used to discover units of meaning that exist across the 54 interviews that comprise my data corpus. This process involves four steps, which I will now discuss: (1) preparing and segmenting the corpus, (2) mapping segments to vectors, (3) clustering the vectors, and (4) interpreting the results.
4.2.1 Preparing and segmenting transcripts.
First, as discussed above, the transcripts are reduced so that they include only the words spoken by the student during the interview. Next, recall that, in earlier analyses of this corpus conducted by my research team, we found that students could be seen to construct explanations of the seasons out of a large number of knowledge resources, and that their explanations could shift as an interview unfolded. We thus need a way to attach meanings to small parts of an interview transcript. This requires a means of segmenting a transcript into smaller parts. In keeping with my goal of using simple methods, I segmented the transcripts by breaking each transcript into 100-word segments. In order to lessen problems that might be caused by the fact that this introduces arbitrary boundaries, I chose to employ overlapping 100-word segments, with the start of each segment beginning 25 words after the start of the preceding segment. So the first segment of a transcript would include words 1-100, the second words 26-125, the third 51-150, etc. When all of the 54 interview transcripts were segmented in this manner, I ended up with 794 segments of text. These specific choices for segment size and step size are, of course, somewhat arbitrary. In Section 5, I will briefly present results with different values of these parameters.
4.2.2 Mapping segments to vectors.
The next step in the analysis is to map each of these 794 segments to a vector. To accomplish this, I employ precisely the method described above. The result is 794 vectors, each consisting of a list of 647 numbers. However, here I must introduce one complication. There is an inherent problem with applying vector space models to an analysis of this sort of data. Vector space models such as LSA were originally developed as a means to find documents in a large corpus that pertain to a given topic. They were thus not developed for finding fine distinctions in meaning among documents pertaining to very similar topics. However, all of the documents involved in my analysis are about very similar subject matter; they all explain the seasons, and they almost all do so by talking about the position and motion of the earth in relation to the sun. In fact, the clustering analysis (described in the next section) does not produce meaningful results if I use the raw document vectors that are produced by the method described above. (I will say more about this problem in Section 5.) Instead, I need a means of modifying the vectors so that they highlight their more unique features—the features that, on average, tend to differentiate the segment from the other 793 segments of text. For that purpose, I compute what I call deviation vectors. To compute the deviation vectors for two vectors V1 and V2, I first find their average, and then break each vector into two components, one that lies along the average, and another that is perpendicular to the average (refer to Figure 3). The perpendicular components, V1' and V2', are the deviation vectors. If we use these deviation vectors in place of the original vectors, the result is that V1 and V2 have each been replaced by the component that defines its unique piece—a piece that characterizes how it differs from the average. The same procedure can be employed with any number of vectors. For the next steps of the analysis, I replaced the 794 segment vectors in just this way; I found their average, and then replaced each vector with its deviation from this average.
Figure 3. How to compute deviation vectors.
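Both the segmentation and the deviation-vector computation can be sketched briefly as well. The sketch below assumes NumPy, and it makes one choice the text leaves open: windows that would run past the end of a transcript are simply dropped.

```python
import numpy as np

def overlapping_segments(words, size=100, step=25):
    """Break a transcript (a list of the student's words) into overlapping
    100-word segments whose starts are 25 words apart: words 1-100,
    26-125, 51-150, and so on. Windows that would run past the end of
    the transcript are dropped."""
    last_start = max(len(words) - size, 0)
    return [words[s:s + size] for s in range(0, last_start + 1, step)]

def deviation_vectors(vectors):
    """Replace each segment vector with its component perpendicular to the
    average of all the vectors. The remainder is the vector's 'unique
    piece': the part that characterizes how it differs from the average."""
    V = np.asarray(vectors, dtype=float)
    mean = V.mean(axis=0)
    unit = mean / np.linalg.norm(mean)
    parallel = np.outer(V @ unit, unit)  # each row's projection onto the mean
    return V - parallel                  # the perpendicular components
```

For the corpus described here, applying overlapping_segments to all 54 transcripts would yield the 794 segments, and deviation_vectors would be applied once to the resulting 794-by-647 matrix of weighted count vectors.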
4.2.3 Clustering the vectors.
Now each of the 794 segments has been mapped to a vector that we understand as representing the meaning of that segment. The next step is to identify common meanings amongst these segments. To do that, we look for natural clusterings of the 794 vectors. To cluster the transcript vectors, I employed a very general technique called hierarchical agglomerative clustering (HAC). In HAC, we begin by taking all of the items to be clustered, and placing each of these items in its own cluster. Thus, we begin with a number of clusters equal to the total number of items. Then we pick two of those clusters to combine into a single cluster containing two items, thus reducing the total number of clusters by one. The process then iterates; we again pick two clusters to combine, and the total number of clusters is decreased by one. This repeats until all of the items are combined into a single cluster. The result is a list of candidate clusterings of the data, with each candidate corresponding to one of the intermediate steps in this process. A central issue in applying this algorithm is determining which clusters to combine on each iteration. In practice, there are many rules that can be applied. Throughout my discussions here, I will present results that were obtained using a technique called centroid clustering. At each step in the iteration, I first find the centroid of each cluster (the average of all of the vectors currently in the cluster). Then I find the pair of centroids that are closest to each other, and merge the associated clusters. An explanation of centroid clustering, including its application to vector space models, can be found in [29].
4.2.4 Determining the number of clusters.
The result of the clustering analysis can be thought of as a table with 794 rows. At the top is a row in which each segment is in its own cluster. At the bottom is a row in which all of the segments are in a single cluster. Table 2 displays the results for just a part of this large table. The bottom row, for example, shows the results when the segments are grouped into three clusters that contain 271 segments, 279 segments, and 244 segments respectively. As you move up the table, the number of clusters grows, and the size of each cluster shrinks. In each row of Table 2, clusters contain segments that have been grouped together because, from the point of view of our vector space model, they have similar meanings. This means that each row in Table 2 constitutes a candidate coding scheme—it is a scheme for sorting segments into categories. The puzzle, of course, is which row to select.
Table 2. Sizes of clusters for selected clusterings.
Unfortunately, there is no simple answer to this question. In general, there is a tradeoff. When the number of clusters is high, we obtain a better fit to the data. However, we get this better fit at the expense of a more complex model. Because each cluster is described by a list of 647 values, each additional cluster represents a dramatic increase in model complexity. Here, as elsewhere, I make my choice in a heuristic manner. Across multiple analyses, I have found that working with a set of about 7 clusters strikes a workable balance. With 7 clusters, it is possible to resolve interesting features of the data, while producing results (in the form of graphs) that are not overly difficult to interpret.
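In terms of standard tools, the clustering step might be reproduced with SciPy's hierarchical clustering routines, as sketched below. One caveat: SciPy's centroid linkage measures distances between centroids in Euclidean terms, whereas a vector space analysis is usually framed in terms of cosine similarity. For vectors normalized to unit length the two orderings largely agree, so this should be read as an approximation of the procedure described above, not a transcription of it.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_segments(deviation_vecs, n_clusters=7):
    """Run hierarchical agglomerative clustering with centroid linkage,
    then cut the resulting tree to obtain the requested number of clusters."""
    X = np.asarray(deviation_vecs, dtype=float)
    # Centroid linkage expects the raw observation matrix, not a precomputed
    # distance matrix; distances between centroids are Euclidean.
    Z = linkage(X, method="centroid")
    # Cutting the tree at n_clusters selects one row of the candidate table.
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```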
4.2.5 What do the clusters mean?
We now have grouped the 794 segments into 7 clusters, each containing between 44 and 211 segments (refer to Table 2). The next question we must answer is: What do these clusters mean? Each of the 7 clusters can be thought of as defined by its centroid vector—the average of all of the vectors that comprise the cluster. These centroids, in turn, are each described by a list of 647 entries, each of which corresponds to one of the words in the vocabulary. One way to attempt to understand the meaning of the clusters, then, is to look at the words that have the largest value in each centroid vector. When this is done, I obtain the results shown in Figure 4. For each cluster, I list the 10 words that are most strongly associated with that cluster, ignoring words that appeared fewer than 30 times in the overall corpus. In addition, the second column in each table has the value from the centroid vector corresponding to this word. The third column in each table lists the total number of times that the word appears across the entire corpus.
4.2.6 Interpreting the clusters based on the word lists.
In many respects, the lists of words shown in Figure 4 are suggestive. First, several of the clusters seem to align with the three broad classes of seasons explanations listed in Section 3. For example, it seems natural to associate Cluster 1, which starts with the words tilted, towards, and away, with tilt-based explanations.
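A short snippet along the following lines would reproduce this kind of inspection. The corpus_counts argument, a mapping from each vocabulary word to its total frequency in the corpus, is a hypothetical helper introduced here for illustration.

```python
import numpy as np

def top_words_per_cluster(deviation_vecs, labels, vocab, corpus_counts,
                          n_words=10, min_count=30):
    """Print, for each cluster, the words with the largest values in the
    cluster's centroid vector, ignoring words that appear fewer than
    min_count times in the overall corpus."""
    X = np.asarray(deviation_vecs, dtype=float)
    labels = np.asarray(labels)
    for c in sorted(set(labels)):
        centroid = X[labels == c].mean(axis=0)
        ranked = np.argsort(centroid)[::-1]  # indices of largest entries first
        top = [(vocab[i], centroid[i]) for i in ranked
               if corpus_counts[vocab[i]] >= min_count][:n_words]
        print(f"Cluster {c}: " + ", ".join(f"{w} ({v:.2f})" for w, v in top))
```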