Abstract
User interfaces (UI) are an inherent part of any technology with human end-users. The design of the UI depends heavily on the intended end-user and is therefore extremely important for research in both learning technology (where the learner is the end-user) and CCI (where the child is the end-user). Another important concept of learning technology and CCI research (and also in neighboring fields) is that of “artifact”. Artifacts correspond to novel designs (which may be prototype systems, interfaces, materials, or procedures) that have a certain set of qualities or components (such as functionalities and affordances) and that allow us to experiment (e.g., to isolate and test certain components). This chapter describes how researchers can design educational interfaces, visualizations, and other artifacts to support their experiments and enhance learners’ and children’s experience with technology.
3.1 Design of Educational Interfaces
User interfaces are an inherent part of any technology with human end-users. The role of an interface is to facilitate efficient communication and information exchange between the machine (the technology) and the user (the human). User interfaces (UIs) rely on what we call “interface metaphors,” sets of visuals, actions, and procedures incorporated into the UI that exploit specific knowledge that users already have of other domains, such as their homes and working environments. The use of proper interface metaphors allows users to predict the functionalities of each element of the interface (metaphor), resulting in more intuitive use of the interface and more predictable system behavior. Confusion is avoided, as there is no need for explanations of the various elements of the UI, and users are aware of the impact that their actions will have on the system. A time-tested example is the “desktop” metaphor, which portrays the operating system as similar to objects, tasks, and behaviors found in physical office environments (Neale & Carroll, 1997).
The appropriate selection and application of UI metaphors make systems easy to use, and so we need to understand how metaphors are perceived by our targeted end-users. Good understanding will allow us to incorporate metaphors efficiently into our UIs. Below, we provide some commonly used metaphors that allow UI designers to develop intuitive interfaces. As technology advances and different applications are developed (including new ways of working, living, learning, and communicating), new metaphors need to be established to increase the usability of those applications. The examples show the centrality of metaphor to UI and the importance of drawing on real-world analogies (Table 3.1).
The selection of metaphors and the design of the UI depend heavily on the intended end-user and are therefore extremely important for research in both learning technology (where the learner is the end-user) and CCI (where the child is the end-user). For example, a straightforward note-making metaphor (e.g., for presenting new information) might be good for a technology that targets teachers but less effective for a technology that targets doctors (where a Post-it metaphor might work better). The same applies to all user groups, although learners and children are particularly interesting end-users. Learning is not always an easy process. It is associated with many aspects of interaction and cognition (including difficult mental operations and cognitive friction), and these differ across the different developmental phases of a child. For instance, for very young children, even time-tested metaphors such as “desktop” can fail to convey the intended information. Therefore, it is important to work closely with the end-user to develop an appropriate set of visuals, actions, and procedures that can be incorporated into the UI to achieve the intended objectives (see, for example, Fig. 3.1; Asheim, 2012; Høiseth et al., 2013). Moreover, learning takes place in and across diverse contexts (e.g., online or in classrooms, labs, and maker spaces), and the content area (e.g., math, language, or art) plays an important role in the mental models generated by the user during learning and the ways in which those models need to be taken into consideration to facilitate learning.
The main focus of metaphors is ease-of-use, usability, and utility for representing a system’s functionality (Kuhn & Blumenthal, 1996). However, the capacity of UI metaphors to facilitate learning has been historically recognized and valued by both the learning technology and the HCI communities (e.g., see Carroll & Mack, 1985; Neale & Carroll, 1997). Metaphors facilitate learning by leveraging existing mental models or previously learned information and applying them to new contexts (Bruner, 1960). Learning is accelerated when metaphors are used, because they draw on existing knowledge bases to reason about new problems (Streitz, 1988).
Contemporary research and practice recognize the importance of iteration and end-user participation during UI design (e.g., DiSalvo et al., 2017). Processes from HCI, such as rapid prototyping and low-fidelity paper prototyping (Wilson & Rosenberg, 1988), are commonly used in educational UI design. Those practices are advantageous because of their simplicity, low cost (no need for many working hours or materials/tools), and the ease of obtaining early feedback from the end-user. They also adopt the main steps of established instructional system models, such as ADDIE (analysis, design, development, implementation, and evaluation) (Branch, 2009), which allows the necessary steps to unfold iteratively. The powerful progression from a low-fidelity, pen-and-paper prototype to a working system is shown in Fig. 3.2 through two examples: one on the development of a UI for a multi-touch gamified quiz system that supports learning in museums (Noor, 2016), and one on the development of a UI for a self-assessment technology that supports online learning (Westermoen & Lunde, 2020). As the figure shows, the initial low-fidelity ideation is created using only pen and paper. The sketches are very basic, but they are also useful for determining how the user will interact with the interface; because the sketches in this phase of the design are low-fidelity, it is easy and “cheap” to change them. After the first iteration, some of the features are developed and tested with a few end-users, but even then it remains easy to test the basic functionalities and accommodate the results from the testing (e.g., in terms of metaphors used, information visualized, and actual functionalities). As the fidelity of the interface increases and more interactive functionalities (and the respective wireframes) are incorporated, it becomes more difficult and costly to accommodate structural changes. In the final stages of the process, we have a working system that can be tested through experimentation.
Within the progression from low fidelity to high fidelity and ultimately the complete UI, the designer needs to make progress in the development of the navigation thread. The storyboarding/navigation thread will cover all the possible use cases and scenarios and the interconnections within the wireframes. Figure 3.3 shows the storyboarding of a self-assessment UI (adapted from Westermoen & Lunde, 2020). During the design of the educational UI, the designer needs to keep in mind who the intended end-users are (e.g., children, other learners); what their characteristics are (age, background knowledge); the expected objectives (learning goals, competence development); the different types of constraints (learning constraints, technological constraints, teachers’ competence); the delivery options and expected role of the technology; and its pedagogical underpinning. In addition to answering these very important questions, the UI designer needs to be able to gather information from end-users and test their ideas.
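One lightweight way to keep the navigation thread explicit during storyboarding is to record the wireframes and the transitions between them as a small graph. The sketch below illustrates the idea; the screen names are hypothetical and are not taken from Fig. 3.3.

```python
# Hypothetical navigation thread: each wireframe maps to the screens
# reachable from it (the interconnections within the wireframes).
nav = {
    "login":     ["dashboard"],
    "dashboard": ["quiz", "results", "login"],
    "quiz":      ["results", "dashboard"],
    "results":   ["dashboard"],
}

def reachable(start):
    """All screens a user can reach from `start` (useful for spotting dead ends)."""
    seen, stack = set(), [start]
    while stack:
        screen = stack.pop()
        if screen not in seen:
            seen.add(screen)
            stack.extend(nav.get(screen, []))
    return seen

# Every use case/scenario should be coverable from the entry screen.
print(sorted(reachable("login")))  # all four screens are reachable
```

Checking such a graph against the use cases is cheap in the low-fidelity phase, which is precisely when structural changes to the navigation are still easy to make.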
As a result of the iterative design process and storyboarding, a collection of wireframes representing each possible view a user might encounter is created. The final UIs need to consider the context of use and provide the necessary guidelines for the implementation of the application. Figure 3.4 shows an example set of UIs in the context of mobile learning in higher education (more information about this example can be found in Pappas et al., 2017, and Cetusic, 2017).
3.2 Artifacts and Treatment Design
One of the first notions the researcher needs to understand in learning technology and CCI research (and also in neighboring fields) is the unit of analysis (UoA). The UoA is the object that is the target of the experimentation (and whose data we use as a unit in our analysis). The UoA can be an individual, a small group, an organization (a school), or the users of certain technology. For instance, if we are interested in studying the effect of the use of a dashboard or a representation (an avatar) on students’ learning outcomes or attitudes, then the UoA is the student, since we will use the score of each student. If we want to study the introduction of novel technology to support collaboration in dyads or triads, the UoA is the dyad or triad, since we will use the score of each dyad or triad (e.g., scores from a common assignment). Even objects can serve as a UoA; if we want to determine which of several interfaces is more attractive to students, then the UoA is the interface, since we will use an aggregate score from the students who use each interface. Identifying the UoA can be complex, as it is not always static. It is common in a study with a specific dataset to have different UoAs. For example, an analysis of student scores can be based on the scores of individuals, of classes (if we want to compare the practice of different teachers), or of different groups.
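The point that one dataset can support several UoAs can be made concrete with a small sketch. The records and scores below are hypothetical; the same individual-level data are aggregated up to the student, the dyad, or the class, depending on the chosen UoA.

```python
from statistics import mean

# Hypothetical score records: each row is one student's score, tagged
# with the dyad and class the student belongs to.
records = [
    {"student": "s1", "dyad": "d1", "class": "A", "score": 78},
    {"student": "s2", "dyad": "d1", "class": "A", "score": 82},
    {"student": "s3", "dyad": "d2", "class": "B", "score": 65},
    {"student": "s4", "dyad": "d2", "class": "B", "score": 71},
]

def scores_by(unit):
    """Aggregate individual scores up to the chosen unit of analysis."""
    groups = {}
    for r in records:
        groups.setdefault(r[unit], []).append(r["score"])
    return {k: mean(v) for k, v in groups.items()}

# UoA = student: one score per individual.
print(scores_by("student"))  # {'s1': 78, 's2': 82, 's3': 65, 's4': 71}
# UoA = dyad: one score per pair (e.g., a common assignment).
print(scores_by("dyad"))     # {'d1': 80, 'd2': 68}
# UoA = class: one score per class (e.g., comparing teachers' practice).
print(scores_by("class"))    # {'A': 80, 'B': 68}
```

Note that statistical tests must match the chosen UoA: with the dyad as the UoA there are only two data points here, not four, which directly affects sample size and the assumptions of the analysis.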
Another important concept that is a cornerstone in learning technology and CCI research (and also in neighboring fields) is that of “artifact” (or “artefact” in British English spelling) (Carroll & Rosson, 1992). Artifacts correspond to novel designs (which may be prototype systems, interfaces, materials, or procedures) that have a certain set of qualities or components (such as functionalities and affordances) and that allow us to experiment (e.g., to isolate and test certain components). Such experimentation serves to advance both empirical and theoretical knowledge, but it also supports the practice of a user (such as a learner or a child) and empowers them to achieve their potential. Artifacts allow us to formulate the necessary conditions by isolating certain functionalities and testing our hypotheses through experimentation. Each experimental study has its own value and should contribute to the main body of knowledge by validly testing theories that are contingent on designed artifacts, or by producing findings that may be reused to support the design of future artifacts in the form of lessons learned or design implications (Sutcliffe, 2000).
Contemporary learning technology and CCI research focuses on conducting “artifact-centered evaluations” that use artifacts in the experimental process. The most common approaches cascade the experimentation process within a broader research procedure, with the intention of producing new knowledge and models and informing theories and practices. Such approaches inherit the characteristics of design research and are iterative. For instance, design-based research (DBR) is a common approach in learning technology, whereas the task–artefact cycle is commonly employed in HCI (see Fig. 3.5). Such research approaches are important, as they go beyond responding to a particular hypothesis, instead seeking to advance theoretical knowledge in the field by exploring and confirming various hypotheses and relationships in different contexts (see a representation in Fig. 3.6).
Going back to the important role of artifacts in conducting empirical studies, we now provide some examples of how artifacts allow us to move from observations to designing treatments and testing hypotheses. A very common interface in learning technology research is the dashboard. Dashboards are used in learning management systems (LMSs) such as Canvas, Moodle, and Blackboard Learn, and also in the majority of learning technologies (e.g., digital games, educational apps). Although there are differences in the information included, the visualizations employed, and the moments at which the dashboard appears, most dashboards include information related to learners’ activity, progress, and learning history. The information provided to the learner (and teacher) is intended to activate their awareness, reflection, and judgment (i.e., metacognition), and ultimately to support their potential (by informing them about the amount of time spent on a task, the difficulty of a particular question, and so on). Providing this information in an efficient manner will support learners’ self-regulation and motivation, and teachers’ learning design and decision making, allowing them to make appropriate decisions about allocation of effort, time management, and skills development (Lonn et al., 2015).
Figure 3.7 (top) shows a learning dashboard taken from a previously introduced example (Westermoen & Lunde, 2020); this dashboard was designed and introduced to support students’ self-assessment. The dashboard was introduced to one of two groups of students, and a mixed-methods study was conducted to investigate the role of the dashboard in digital self-assessment activities (Westermoen & Lunde, 2020; Papamitsiou et al., 2021). Figure 3.7 (bottom) shows a teacher dashboard, designed and introduced to support teachers’ decision making (e.g., identifying students’ weaknesses and misconceptions, or students who need additional support). The dashboard was evaluated with experienced teachers to identify its usefulness and ability to support decision making and instruction (Luick & Monsen, 2022).
Another example is provided by artifacts that lie at the intersection of CCI and learning technology. To investigate the effect of avatar self-representation (ASR) (the extent to which the user/child is represented by an avatar) in learning games, we used three games that follow similar game mechanics but have a different approach to user ASR. ASR is classified as low, moderate, or high, according to the degree of visual similarity (i.e., appearance congruity) between the avatar and the user and the precision and breadth of movement (i.e., movement congruity). Figure 3.8 gives a detailed description of the ASR classifications and the respective game interfaces.
The group of children experienced all three ASRs (conditions), and during the treatment (i.e., a within-subjects experiment) we carried out various data collections, with the goal of determining the role of ASR in children’s affect and behavior in motion-based educational games. The results indicated that moving from low ASR (a cursor) to moderate ASR (a puppet) and then to high ASR (an image of the actual user) decreased users’ stress and increased their cognitive load (see Fig. 3.9). You can find the complete study, with all the details and results, in Lee-Cultura et al. (2020).
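The logic of such a within-subjects design can be sketched as follows. The measure name and all values below are hypothetical, not taken from the study; the point is that every child experiences all three conditions, so comparisons can be made within each participant.

```python
from statistics import mean

# Hypothetical within-subjects data: a normalized stress index per child
# per ASR condition (every child experiences all three conditions).
stress = {
    "low":      {"c1": 0.62, "c2": 0.70, "c3": 0.58},
    "moderate": {"c1": 0.55, "c2": 0.61, "c3": 0.50},
    "high":     {"c1": 0.41, "c2": 0.48, "c3": 0.39},
}

# Condition means: the between-condition comparison.
for cond, by_child in stress.items():
    print(cond, round(mean(by_child.values()), 3))

# Paired (within-child) differences, low vs. high ASR: because each child
# serves as their own control, stable individual differences cancel out.
diffs = [stress["low"][c] - stress["high"][c] for c in stress["low"]]
print(round(mean(diffs), 3))  # positive => higher ASR lowered stress
```

Treating the participant as their own control is what makes within-subjects designs statistically efficient, though they require countermeasures (e.g., counterbalancing condition order) against carry-over effects.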
The use of artifacts is powerful, but it also has limitations. For example, the results are associated with the particular artifact under study, and any knowledge obtained is not necessarily reusable or generalizable to other contexts. Nevertheless, artifacts allow us to conduct experiments and test hypotheses efficiently so as to enhance relevant practical and theoretical knowledge. In addition, there are certain time-tested approaches in both learning technology and CCI/HCI, such as DBR (Barab & Squire, 2004) and the task–artefact cycle (Sutcliffe, 2000), that allow us to leverage iterative experimentation to go beyond context-specific hypothesis testing and produce reusable/generalizable knowledge.
References
Asheim, J. (2012). Konsept for forbedret behandling av barn rammet av astma/RS-virus [Concept for improved treatment of children affected by asthma/RS virus] (Master’s thesis, Institutt for produktdesign). http://hdl.handle.net/11250/241157
Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. The Journal of the Learning Sciences, 13(1), 1–14.
Branch, R. M. (2009). Instructional design: The ADDIE approach (Vol. 722). Springer Science & Business Media.
Bruner, J. S. (1960). The process of education. Oxford University Press.
Carroll, J. M., & Mack, R. L. (1985). Metaphor, computing systems, and active learning. International Journal of Man-Machine Studies, 22(1), 39–57.
Carroll, J. M., & Rosson, M. B. (1992). Getting around the task-artifact cycle: How to make claims and design by scenario. ACM Transactions on Information Systems (TOIS), 10(2), 181–212.
Cetusic, L. (2017). Mobile learning ecosystem to enhance students’ learning: Lessons learnt from an empirical study at NTNU (Master’s thesis, NTNU). https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/2442324
DiSalvo, B., Yip, J., Bonsignore, E., & Carl, D. (2017). Participatory design for learning (pp. 3–6). Routledge.
Høiseth, M., Giannakos, M. N., Alsos, O. A., Jaccheri, L., & Asheim, J. (2013). Designing healthcare games and applications for toddlers. In Proceedings of the 12th international conference on interaction design and children (pp. 137–146).
Kuhn, W., & Blumenthal, B. (1996). Spatialization: Spatial metaphors for user interfaces. In Conference companion on human factors in computing systems (pp. 346–347).
Lee-Cultura, S., Sharma, K., Papavlasopoulou, S., Retalis, S., & Giannakos, M. (2020). Using sensing technologies to explain children’s self-representation in motion-based educational games. In Proceedings of the interaction design and children conference (pp. 541–555).
Lonn, S., Aguilar, S. J., & Teasley, S. D. (2015). Investigating student motivation in the context of a learning analytics intervention during a summer bridge program. Computers in Human Behavior, 47, 90–97.
Luick, A., & Monsen, F. (2022). Adaptive teaching technologies to create and monitor learning activities (Master’s thesis, NTNU).
Neale, D. C., & Carroll, J. M. (1997). The role of metaphors in user interface design. In M. G. Helander, T. K. Landauer, & P. V. Prabhu (Eds.), Handbook of human-computer interaction (pp. 441–462). North-Holland.
Noor, J. (2016). Pervasively gamifying the museum experience: An empirical investigation of knowledge gain and engagement (Master’s thesis, NTNU). https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/2442326
Papamitsiou, Z., Lunde, M., Westermoen, J., & Giannakos, M. N. (2021). Supporting learners in a crisis context with smart self-assessment. In D. Burgos, A. Tlili, & A. Tabacco (Eds.), Radical solutions for education in a crisis context (pp. 207–224). Springer.
Pappas, I. O., Cetusic, L., Giannakos, M. N., & Jaccheri, L. (2017). Mobile learning adoption through the lens of complexity theory and fsQCA. In 2017 IEEE global engineering education conference (EDUCON) (pp. 536–541). IEEE.
Sharma, K., Leftheriotis, I., & Giannakos, M. (2020). Utilizing interactive surfaces to enhance learning, collaboration and engagement: Insights from learners’ gaze and speech. Sensors, 20(7), 1964.
Streitz, N. A. (1988). Mental models and metaphors: Implications for the design of adaptive user-system interfaces. In Learning issues for intelligent tutoring systems (pp. 164–186). Springer.
Sutcliffe, A. (2000). On the effective use and reuse of HCI knowledge. ACM Transactions on Computer-Human Interaction (TOCHI), 7(2), 197–221.
Westermoen, J., & Lunde, M. (2020). SmartU: Investigating the effects of visualizations in adaptive self-assessment systems (Master’s thesis, NTNU). https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/2777507
Wilson, J., & Rosenberg, D. (1988). Rapid prototyping for user interface design. In M. G. Helander, T. K. Landauer, & P. V. Prabhu (Eds.), Handbook of human-computer interaction (pp. 859–875). North-Holland.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2022 The Authors
Giannakos, M. (2022). Educational Interface Design and the Role of Artifacts. In: Experimental Studies in Learning Technology and Child–Computer Interaction. SpringerBriefs in Educational Communications and Technology. Springer, Cham. https://doi.org/10.1007/978-3-031-14350-2_3
Print ISBN: 978-3-031-14349-6
Online ISBN: 978-3-031-14350-2