Abstract
When learners interact with technologies and the learning context, a large amount of data is created. The collection, analysis, and utilization of these educational data have provided opportunities for learning technology (and CCI) research. In this chapter, we discuss how learning systems produce and utilize educational data. In particular, we discuss contemporary developments in the fields of learning analytics, educational data mining, and learner modeling, and how those advancements have impacted the design and functionalities of learning technologies.
4.1 Educational Data and Learning Analytics
In the second chapter, we introduced the notion of learning analytics, a central concept of which is the “learning trace” or, more generally speaking, the “user trace.” Those traces are left behind when learners interact with technologies and the learning context in general (e.g., other learners, an instructor, or nondigital learning materials), and are represented by different datasets. Learner interaction is often complex (e.g., watching a video or answering a multiple-choice question), and traditional analytics model those interactions as a sequence of logs (e.g., video navigation, response times, and response correctness). These learning traces and their respective representations (visualizations, graphs, or diagrams) are used to improve the system’s functionalities and intelligence (through, for example, recommender systems or visualization of an individual’s progress) and the respective pedagogy, allowing learners and teachers to be aware of the possible misunderstandings and challenges associated with different content areas. Figure 4.1 depicts how typical use of learning systems (by instructors and learners) produces data that are processed by data analysis methods with the ultimate goal of supporting learning and instruction.
The datasets employed in learning technology platforms usually follow standards such as the Sharable Content Object Reference Model, or SCORM (SCORM, 2004), or other standardized data structures and formats. SCORM is the most widely used specification, with the goal of interoperability and smooth data and content exchange between learning technologies. Most LMSs use (or are at the very least compliant with) SCORM, which has its origins in a cooperation between the Institute of Electrical and Electronics Engineers (IEEE) Learning Technology Standards Committee (https://ltsc.ieee.org/), the IMS Global Learning Consortium (www.IMSproject.org), and the Alliance of Remote Instructional Authoring and Distribution Networks for Europe (ARIADNE). Independently of the standards employed, educational data also employ data structures in various formats, such as JavaScript Object Notation (JSON), Extensible Markup Language (XML), or comma-separated values (CSV), which enable both the system and the researcher to carry out visualization and analysis. For example, you can find hereFootnote 1 example events that exemplify how you can utilize learners’ tracking logs. Those examples are from edX (an American MOOC provider created by Harvard and MIT), which has an open-source platform (Open edX) that powers edX courses. Those events (edX.log files) contain no personally identifiable information, but they exemplify how such logs can help us gain insight into learners and teachers. Learning interaction data might sometimes be provided in a relatively “primitive” form (as is the case for edX); however, there are several technological tools, such as HarvardX Tools (developed by Jim Waldo, see here: http://github.com/jimwaldo/HarvardX-Tools), that allow us to package, analyze, and manipulate tracking log data by converting it to standard and manageable formats (e.g., CSV, based on ADL’s xAPI (https://adlnet.gov/projects/xapi/)).
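Tracking logs of the kind described above are typically stored as one JSON object per line. The sketch below shows how such a log might be parsed and converted to CSV; the records are fabricated for illustration, with field names modeled on the edX tracking-log format (e.g., "event_type", "time", "username"), so treat the exact fields as an assumption rather than a specification.

```python
import csv
import io
import json

# Three synthetic events in a JSON-lines shape resembling edX tracking
# logs; the records are fabricated purely for illustration.
RAW_LOG = """\
{"event_type": "play_video", "time": "2022-01-10T09:15:02", "username": "learner_a"}
{"event_type": "problem_check", "time": "2022-01-10T09:17:44", "username": "learner_a"}
{"event_type": "play_video", "time": "2022-01-10T09:20:11", "username": "learner_b"}
"""

def log_to_rows(raw: str) -> list:
    """Parse one JSON object per line and keep a manageable subset of fields."""
    rows = []
    for line in raw.splitlines():
        event = json.loads(line)
        rows.append({
            "time": event["time"],
            "username": event["username"],
            "event_type": event["event_type"],
        })
    return rows

def rows_to_csv(rows: list) -> str:
    """Serialize the parsed events to CSV, one row per event."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["time", "username", "event_type"])
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()

rows = log_to_rows(RAW_LOG)
print(rows_to_csv(rows))
```

Tools such as HarvardX Tools perform far richer packaging and anonymization; the point here is only that "primitive" per-line event logs convert straightforwardly into tabular formats that standard analysis tools can consume.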
Another useful resource is the Pittsburgh Science of Learning Center’s DataShop (https://pslcdatashop.web.cmu.edu/), which hosts datasets from learning systems (most of them from intelligent tutoring systems) and provides them as a service to the learning technology community (e.g., so researchers can store, request, and conduct research on learning interaction data).
Learning analytics then utilizes these learning traces, via the respective datasets, to exploit opportunities for improving learning and instruction; to do so, it employs computational analysis techniques from data science and AI. The majority of learning analytics studies utilize descriptive statistics and basic visualization techniques (Fig. 4.2). For example, frequencies (the number of times a particular score or value is found in the dataset), percentages (a set of scores or values as a percentage of the whole), means (the numerical average of the scores or values for a particular variable), and medians (the numerical midpoint of the scores or values, at the center of their distribution) are often used to support learners and instructors. Other statistical techniques, such as correlational and regression analysis, are also used; however, there are limited studies employing advanced data analysis methods in the context of learning analytics. Figure 4.2 categorizes the results from a literature review conducted by Misiejuk and Wasson (2017), depicting the frequency of the data analysis methods used. Moreover, Fig. 4.3 summarizes the advanced computational analysis techniques that can be used to support learning (Daud et al., 2017). The comparison indicates that learning analytics research and practice makes use of advanced computational analysis techniques, but to a limited extent (reflecting the status up to 2017). Although this might connect with researchers’ preferences, experience, and aspirations, it might also connect with the fact that basic descriptive statistics (e.g., median, mean, frequency) may be easier for end-users (e.g., students, teachers) to make sense of and act upon.
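The descriptive statistics named above can all be computed with a few lines of standard-library Python; the quiz scores below are fabricated purely to illustrate the four measures.

```python
from collections import Counter
from statistics import mean, median

# Hypothetical quiz scores (1-5) for one class; values are invented.
scores = [3, 4, 4, 5, 2, 4, 3, 5, 4, 1]

freq = Counter(scores)  # frequency: how often each score appears
pct = {s: 100 * n / len(scores) for s, n in freq.items()}  # percentages

print("frequencies:", dict(freq))
print("percentage of 4s:", pct[4])   # 4 of 10 scores -> 40.0
print("mean:", mean(scores))         # 35 / 10 -> 3.5
print("median:", median(scores))     # middle of the sorted scores -> 4.0
```

These are exactly the kinds of aggregates that a dashboard might surface to a teacher, which is one reason they dominate learning analytics practice: they are cheap to compute and easy for end-users to interpret.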
4.2 Learner Modeling
In the process of standardizing how this information can be used to support the system (e.g., by providing intelligence) or the users (e.g., students and teachers), learning technology research came up with the term “learner model” (or “student model”) (Bull, 2020). Learner modeling stems from user modeling, which describes the process of building and updating a conceptual understanding of the user (Fischer, 2001); by analogy, learner modeling describes the process of building and updating a conceptual understanding of the user as a learner. To do this, it models attributes such as skills, competences, understanding of concepts, and misconceptions, as well as noncognitive attributes such as motivation, engagement, and effort (Bull, 2020). Learner models use a range of data, including response time, response correctness, number of attempts to solve a problem, time spent interacting with learning resources, navigation to various learning resources, activity on the various communication functionalities (e.g., forums), and other learning trace data (Bull, 2020). Learner models are (usually) automatically and dynamically generated and updated (that is, inferred from the data). However, there are models that use manual input from learners’ records or the teacher. The main objective of the learner model is to enable the learning system’s intelligence and system functionalities (e.g., personalized learning, recommender systems, dashboards, and adaptivity) to support end-users, be they children, students, teachers, or parents. Bull and Kay (2010) provide several examples of learner models designed to support various users, including children, peers, parents, and teachers.
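One widely used way of inferring a learner model from response-correctness data, particularly in the intelligent tutoring systems literature, is Bayesian Knowledge Tracing (BKT). The sketch below is a minimal BKT step with illustrative parameter values; it is offered as an example of the general idea of updating a mastery estimate from learning traces, not as the specific modeling approach of any system discussed in this chapter.

```python
def bkt_update(p_mastery: float, correct: bool,
               p_learn: float = 0.1, p_slip: float = 0.1,
               p_guess: float = 0.2) -> float:
    """One Bayesian Knowledge Tracing step: condition the mastery
    estimate on the observed answer, then apply the learning rate.
    Parameter values here are illustrative, not calibrated."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Whether or not the answer was correct, the learner may also have
    # learned the skill from the attempt itself.
    return posterior + (1 - posterior) * p_learn

# Update an initial estimate with a short sequence of observed answers.
p = 0.3  # prior probability that the skill is already mastered
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

A correct answer raises the mastery estimate and an incorrect one lowers it, with the slip and guess parameters keeping the model from overreacting to a single lucky or careless response.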
Such functionalities allow technologies to support learners’ educational needs, with learner models being a core component in the development of several learning technologies, such as MOOCs (e.g., Cook et al., 2015), LMSs (e.g., Chou et al., 2015), and, in particular, intelligent tutoring systems (ITSs) (Woolf, 2010). In recent years, we have mainly seen open learner models, which externalize their functionalities in a way that allows users to interpret them through, for example, visualizations. In addition, there are several open-source tools that allow us to include such functionalities in our systems, such as the open learner model (OLM) application, developed by Susan Bull in the context of an EU project called Next-Tell (http://next-tell.eu/portfolios/olm/). Learner models visualize individual learners’ current understanding (knowledge mastery) of a topic (see Fig. 4.4 for some examples). These models are populated automatically from a variety of sources, and they visualize the most up-to-date information about learner competency to allow learners and instructors to identify strengths and areas for further attention.
4.3 Educational Dashboards and Visualization
For the end-users of learning technologies (learners, teachers, and administrators), it is extremely useful to have information presented in an understandable way that supports their objectives of learning, teaching, or making administrative decisions about learning and teaching. Visualizations such as charts, graphs, and maps provide accessible ways to see and understand useful trends and patterns in the data. However, a single chart, graph, or map cannot contain all the information and insights needed to support end-users’ informed decision making. The solution is to combine information visualization techniques in a dashboard, so that the different end-users no longer need to “drive blind” (Duval, 2011).
In the literature, dashboards have been defined as “an easy to read, often single-page, real-time user interface, showing a graphical presentation of the current status (snapshot) and historical trends of an organizations key performance indicators (KPIs) to enable instantaneous and informed decisions to be made at a glance” (Brouns et al., 2015). In the context of learning technology, a recent literature review defined a learning dashboard as “a single display that aggregates different indicators about learner(s), learning process(es) and/or learning context(s) into one or multiple visualizations” (Schwendimann et al., 2016). The design and use of learning dashboards have increased tremendously in recent years, but there are still several important decisions that need to be made by the designer of the dashboard. For instance, what is the “right” information to visualize for the different end-user groups? How does this information need to be visualized, in which part of the system (which UI), and at which point of use (which part of the storyline)? Although there is unlikely to be a single answer for every learning dashboard, it is important to consider five main points:
1. the purpose of the dashboard (e.g., awareness, reflection, and/or guidance);
2. the intended end-user (student and/or teacher);
3. the educational data available;
4. the affordances of the technology involved (e.g., LMSs, games, and MOOCs); and
5. the context of use (e.g., the university context).
In recent years, various learning dashboards have been designed to support teaching and learning in different contexts. A pioneering project was the EU project ROLE (Responsive Open Learning Environments). Figure 4.5 provides a range of visualizations employed (a) in the context of the ROLE project (Santos et al., 2011) and (b) in the context of the work of Luick and Monsen (2022), ten years later. In line with the first principle of designing a learning dashboard (that it be purposeful), we can easily identify the goal of these different visualizations. For instance, we can see that most of the students were most active in the period May–July (top-left visualization), that they entered chatrooms much more often than they posted messages (second and third lines in the middle-left visualization), and that the most “productive” day was Monday (bottom-left visualization). In another project, conducted at NTNU ten years later by Luick and Monsen (2022), we see dashboards that allow teachers to follow students’ progress over time (top-right visualization in Fig. 4.5) and across the different topics of a course, providing insights about the learning content (bottom-right visualization in Fig. 4.5). Such information can help students to reflect on the activities they engage with and the behaviors they exhibit. It can also help the teacher to reorganize and/or redesign the learning activities in a way that is more engaging for the students, and to focus on topics that are difficult to master.
Recent developments in the design of learning dashboards have incorporated capabilities such as social comparison (where individuals can see their own KPIs together with the cumulative KPIs of the classroom/group), with the goal of supporting self-regulated learning and student engagement. One example is Mastery Grids, which uses learners’ data to provide interactive and adaptive visualizations that support their engagement, performance, and motivation (Guerra et al., 2016). Mastery Grids combines an open learner model with social comparison, with the ultimate goal of making learners aware of their own strengths and weaknesses and empowering them. It visualizes learners’ knowledge levels (mastery) as colored grids, as shown in Fig. 4.6. The four levels portray the student’s progress (“Me”), a comparison between the user and the other learners in the group (“Me vs group”), group-level progress (“Group”), and the overall progress of the learners in the class. Mastery Grids was developed in the Personalized Adaptive Web Systems Lab of the University of Pittsburgh and is openly available to everyone (see: http://adapt2.sis.pitt.edu/wiki/Mastery_Grids_Interface).
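The kind of “Me vs group” comparison described above amounts to pairing a learner’s per-topic mastery estimate with the group average for the same topic. The sketch below illustrates that computation with fabricated data; it is not Mastery Grids’ implementation, and the names and values are invented.

```python
from statistics import mean

# Hypothetical per-topic mastery estimates (0..1) for a small class;
# neither the data nor the code come from Mastery Grids itself.
mastery = {
    "alice": {"loops": 0.9, "functions": 0.4},
    "bob":   {"loops": 0.5, "functions": 0.7},
    "carol": {"loops": 0.6, "functions": 0.8},
}

def me_vs_group(learner: str) -> dict:
    """For each topic, pair the learner's mastery with the group mean,
    mirroring the 'Me' and 'Group' rows of a comparison grid."""
    return {
        topic: (mastery[learner][topic],
                mean(m[topic] for m in mastery.values()))
        for topic in mastery[learner]
    }

comparison = me_vs_group("alice")
for topic, (me, group) in comparison.items():
    print(f"{topic}: me={me:.2f} group={group:.2f}")
```

In a dashboard, each pair would be rendered as two color intensities in the grid, letting the learner see at a glance that, for example, she is ahead of the group on loops but behind on functions.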
Another example has been implemented in the context of the Programming Tutoring System (ProTuS). It allows students to see how they have performed in the different parts of the course compared to their colleagues in the classroom (see Fig. 4.7). The dashboard uses the results from various quizzes associated with different areas of the course content. The ProTuS interface and its analytics component have been employed in the web technologies course at the Norwegian University of Science and Technology (Vesin et al., 2018).
Research on the design, development, and use of learning dashboards is at the forefront of both learning technology and CCI, with contemporary research suggesting that future generations of learning dashboards need to be actionable, tailored to the needs of end-users, responsible, configurable, interactive, and integrated and embedded in both the learning and visual sciences (Verbert et al., 2020).
References
Brouns, F., et al. (2015). D2.5 learning analytics requirements and metrics report [online]. https://repositorio.unican.es/xmlui/handle/10902/15231.
Bull, S. (2020). There are open learner models about! IEEE Transactions on Learning Technologies, 13(2), 425–448.
Bull, S., & Kay, J. (2010). Open learner models. In Advances in intelligent tutoring systems (pp. 301–322). Springer.
Chou, C. Y., Tseng, S. F., Chih, W. C., Chen, Z. H., Chao, P. Y., Lai, K. R., et al. (2015). Open student models of core competencies at the curriculum level: Using learning analytics for student reflection. IEEE Transactions on Emerging Topics in Computing, 5(1), 32–44.
Cook, R., Kay, J., & Kummerfeld, B. (2015). MOOClm: User modelling for MOOCs. In International conference on user modeling, adaptation, and personalization (pp. 80–91). Springer.
Daud, A., Aljohani, N. R., Abbasi, R. A., Lytras, M. D., Abbas, F., & Alowibdi, J. S. (2017). Predicting student performance using advanced learning analytics. In Proceedings of the 26th international conference on world wide web companion (pp. 415–421).
Duval, E. (2011). Attention please! Learning analytics for visualization and recommendation. In Proceedings of the 1st international conference on learning analytics and knowledge (pp. 9–17).
Fischer, G. (2001). User modeling in human–computer interaction. User Modeling and User-Adapted Interaction, 11(1), 65–86.
Guerra, J., Hosseini, R., Somyurek, S., & Brusilovsky, P. (2016). An intelligent interface for learning content: Combining an open learner model and social comparison to support self-regulated learning and engagement. In Proceedings of the 21st international conference on intelligent user interfaces (pp. 152–163).
Luick, A., & Monsen, F. (2022). Teaching dashboard (Master’s thesis), NTNU.
Misiejuk, K., & Wasson, B. (2017). State of the field report on learning analytics. Retrieved from http://bora.uib.no/handle/1956/17740.
Santos, J. L., Verbert, K., Govaerts, S., & Duval, E. (2011). Visualizing PLE usage. In Proceedings of EFEPLE11 1st workshop on exploring the fitness and evolvability of personal learning environments (Vol. 773, pp. 34–38).
Schwendimann, B. A., Rodriguez-Triana, M. J., Vozniuk, A., Prieto, L. P., Boroujeni, M. S., Holzer, A., et al. (2016). Perceiving learning at a glance: A systematic literature review of learning dashboard research. IEEE Transactions on Learning Technologies, 10(1), 30–41.
SCORM. (2004). 4th Edition Version 1.1 overview. Retrieved December 09, 2021, from https://adlnet.gov/projects/scorm/#scorm-2004-4th-edition
Verbert, K., Ochoa, X., De Croon, R., Dourado, R. A., & De Laet, T. (2020). Learning analytics dashboards: the past, the present and the future. In Proceedings of the tenth international conference on learning analytics & knowledge (pp. 35–40).
Vesin, B., Mangaroska, K., & Giannakos, M. (2018). Learning in smart environments: User-centered design and analytics of an adaptive learning system. Smart Learning Environments, 5(1), 1–21.
Woolf, B. P. (2010). Student modeling. In Advances in intelligent tutoring systems (pp. 267–279). Springer.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
© 2022 The Authors

Giannakos, M. (2022). Educational Data, Learning Analytics and Dashboards. In: Experimental Studies in Learning Technology and Child–Computer Interaction. SpringerBriefs in Educational Communications and Technology. Springer, Cham. https://doi.org/10.1007/978-3-031-14350-2_4