Abstract
In this chapter, we present three topics of great importance to CCI and learning technology researchers. The first concerns the role of “context” in experimental studies, and in CCI and learning technology in general. The second concerns ethical considerations in experimentation in human-factors IT-related research. The third focuses on researchers conducting experimental studies with children and the need to employ different methods, approaches, and techniques. Although these are the three topics I decided to include in this book, I also believe that additional topics could complement this list.
10.1 Context in Experimental Studies
CCI and learning technology research is commonly constructed in the interplay of actors (learners, children, teachers, and parents), activities, and technology. It is informed by theory, conducted following experimental research methods, and reflects on epistemological stances. In most cases, the research is situated in particular contexts, which may be cultural, technological, infrastructural, or organizational. With the ultimate objective of making discoveries and contributing new and valid knowledge, the research can have significant implications for how people live and learn in technology-rich environments. Therefore, the knowledge obtained needs to be relevant and useful (i.e., contextualized) (Davison & Martinsons, 2016), as well as carrying a certain degree of validity (i.e., generalizability) (Cheng et al., 2016).
Validity in research is often referred to in terms of generalizability and universalizability. A definition that is easy to understand and adequate for the learning technology and CCI fields describes validity as the “act of arguing, by induction, that there is a reasonable expectation that a knowledge claim already believed to be true in one or more settings is also true in other clearly defined settings” (Seddon & Scheepers, 2012). In learning technology and CCI, researchers may generalize knowledge in various ways, such as from one learning design to another, from one educational level to another, from one culture to another, from one context to the development of a new theory, and from one context to the extension of an existing theory. The same is true of other human-factors IT-related fields (Lee & Baskerville, 2012). However, an important question that is often posed in those fields is whether validity can reasonably be expected to extend to other contexts, given the well-defined contexts in which most research is conducted (Davison & Martinsons, 2016; Cheng et al., 2016).
The importance of contextualization and generalizability (sometimes referred to as particularism and universalism) has been extensively debated in several research fields (e.g., Deaton, 2010; Lee & Baskerville, 2012; Davison & Martinsons, 2016; Cheng et al., 2016). There has been a similar discussion in the field of learning technologies and HCI, with some studies focusing on achieving generalizability of their results (Sao Pedro et al., 2013) and others on producing contextually rich findings (Ferguson et al., 2014). There is general recognition that knowledge comes in various forms, ranging from highly general knowledge (e.g., universal laws) to highly contextualized insights (Höök et al., 2015; Höök & Löwgren, 2012). In addition, in the field of learning technologies, there are subcommunities (e.g., LAK and EDM) that adopt different stances and observe nuances in those two notions (Siemens & Baker, 2012). Regardless of one’s stance toward those two very important notions, it is generally agreed that context matters in learning technologies and CCI research, and the importance of generalizability should not be downplayed. Researchers need to understand the research context fully, as this, in combination with replication and triangulation, can contribute to the (cautious) construction of intermediate- and higher-level knowledge (Polit & Beck, 2010) of how humans learn, play, communicate, and live in technology-rich environments.
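To make the role of replication concrete: one common way to build intermediate-level knowledge from studies replicated across contexts is inverse-variance (fixed-effect) pooling of their effect sizes. The sketch below is a minimal illustration; the function name, effect sizes, and variances are hypothetical and not drawn from any study cited here.

```python
import math

def pool_effects(effects, variances):
    """Fixed-effect (inverse-variance) pooling of standardized effect
    sizes from replicated studies; returns the pooled estimate and
    its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical Cohen's d values from three replications in different
# contexts, with their sampling variances
d, se = pool_effects([0.42, 0.35, 0.50], [0.02, 0.04, 0.03])
print(f"pooled d = {d:.2f} +/- {1.96 * se:.2f}")  # pooled d = 0.43 +/- 0.19
```

A fixed-effect pooling assumes the replications estimate one common effect; when contexts plausibly moderate the effect, a random-effects model would be the more cautious choice.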
Because of the data-intensive nature of contemporary research and its focus on interventions (e.g., collecting LMS analytics, as opposed to older relatively static approaches such as end-of-treatment surveys or interviews), the notions of contextualization and generalizability are of particular importance. Contemporary data collection has the capacity to bridge those two notions by reinforcing their complementarities, rather than contributing to a debate that treats them as two antagonistic notions. In particular, the capabilities of automated data collections (Sharma & Giannakos, 2020) afford a high degree of context-awareness (e.g., GPS, motion trackers, and accelerometers) and generalizability (e.g., measures with high internal and external validity, such as eye-tracking). Seminal work (Sharma et al., 2020) has provided evidence of the ability to support both context-awareness and generalizability. Therefore, learner and user analytics have the capacity to empower researchers to focus on the degree of contextualization and generalizability that is appropriate for the type of knowledge or theory they want to develop. Nevertheless, it should be emphasized that researchers must give explicit consideration to their research design, the details of the context in which the research will be conducted, and the contexts for which the findings may reasonably be relevant and useful.
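As a concrete illustration of how one automated data stream can serve both notions, the sketch below attaches context labels from a hypothetical activity log to a timestamped sensor stream (e.g., pupil diameter from an eye-tracker), so the same measure can be reported per context (contextualized) and in aggregate (generalizable). All names and values are illustrative assumptions, not part of any specific toolkit.

```python
def label_samples(samples, context_events):
    """Attach the most recent context label to each timestamped sample.

    samples: list of (timestamp, value), sorted by timestamp.
    context_events: list of (timestamp, label), sorted by timestamp,
    with the first event at or before the first sample.
    """
    labelled, i = [], 0
    for t, value in samples:
        # advance to the latest context event that started before t
        while i + 1 < len(context_events) and context_events[i + 1][0] <= t:
            i += 1
        labelled.append((context_events[i][1], value))
    return labelled

def per_context_means(labelled):
    """Aggregate the same measure per context label."""
    by_label = {}
    for label, value in labelled:
        by_label.setdefault(label, []).append(value)
    return {label: sum(vs) / len(vs) for label, vs in by_label.items()}

# Hypothetical pupil-diameter stream (mm) and classroom activity log
samples = [(0.5, 3.1), (1.5, 3.4), (2.5, 4.0), (3.5, 4.2)]
events = [(0.0, "reading"), (2.0, "problem-solving")]
labelled = label_samples(samples, events)
print(per_context_means(labelled))                  # contextualized view
print(sum(v for _, v in labelled) / len(labelled))  # aggregate view
```

The same labelled data set thus supports both a contextualized report (per activity) and an aggregate estimate suitable for comparison across studies.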
10.2 Ethical Considerations
Ethical considerations are always relevant and mandatory for any human-factors IT-related research (as well as any research with human subjects in general; Belmont Report, 1979). The Norwegian National Committees for Research Ethics provide four general principles for conducting research: respect (participants shall be treated with respect), good consequences (researchers shall seek to ensure that their activities produce good consequences and that any adverse consequences are within the limits of acceptability), fairness (research projects shall be designed and implemented fairly), and integrity (researchers shall comply with recognized norms and behave responsibly, openly, and honestly toward their colleagues and the public). For experimental studies, these principles are of paramount importance, since researchers may willfully manipulate the independent variable with the goal of observing a change (Shadish et al., 2002). As per the European Commission’s report on Ethics for Researchers (European Commission, 2013), three main hallmarks of ethical research underpin the notion of “informed consent”: adequate information (being provided with all the necessary information), voluntariness (agreeing voluntarily to take part), and competence (being capable of grasping fully the potential risks of participation).
Ethical and methodological considerations are central when designing an experiment. For example, measures used to increase validity (e.g., deceiving participants by using cover stories to orient them away from understanding the RQs) have been criticized (große Deters et al., 2019), as have approaches that seek to increase ecological validity by waiving informed consent (Grimmelmann, 2015). Nevertheless, there is a consensus that debriefing participants after the experiment is mandatory in all cases (Belmont Report, 1979). Today, the involvement of and approval from an independent ethics committee is mandatory before conducting experimental research. Countries differ in how such committees are formed and involved: some have Institutional Review Boards (IRBs), whereas others have national review boards. In addition, there are now established institutional, national, and international regulations, such as the EU’s General Data Protection Regulation (GDPR; https://gdpr.eu/), which provide guidelines for human-factors research in fields such as learning technology and CCI.
In the context of digital learning and learner-generated data, we have seen a number of endeavors and tools during the last decade. For instance, Slade and Prinsloo (2013) introduced a framework with a focus on ethics in digital learning and learning analytics. Other notable contributions are the JISC code of practice and the DELICATE framework (see Drachsler & Greller, 2016), which are useful tools to support learning technology research and practice. More recently, the International Council for Open and Distance Education (ICDE) produced a set of guidelines for ethically informed practice that is expected to guide research in digital learning and learning analytics across the world (Slade & Tait, 2019). In summary, the main ethical considerations in relation to learner-generated data can be grouped into the following categories.
- Privacy considerations: how personal data are observed and protected from unauthorized use. Practices such as unlinking linked data, anonymization, and codification are often used (when possible).
- Data ownership considerations: information about the ownership, use, and distribution of data. This is another important consideration that protects participants’ rights, for example, by ensuring that data will not be passed on or used for unintended additional purposes.
- Consent considerations: mandatory provision of documentation that clearly describes the processes involved in data collection and analysis. Consent must be received from each individual participant (or, in the case of children, assent from the individual and consent from the legal guardian) before any experimental study.
- Transparency considerations: providing the necessary information and being transparent with respect to which data will be collected, why and how they are going to be analyzed, and under what conditions.
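As a minimal sketch of the privacy practices listed above (anonymization and codification), the following code pseudonymizes participant identifiers with a salted hash before they enter an analysis data set. The function names are our own, and this is an illustration rather than a complete, GDPR-compliant pipeline.

```python
import hashlib
import secrets

def make_pseudonymizer(salt=None):
    """Return (pseudonymize, salt): a function mapping a participant
    identifier to a stable pseudonym, plus the random salt used.
    The salt defeats simple dictionary attacks on common identifiers;
    store it separately from the data, or discard it to make the
    mapping practically irreversible."""
    if salt is None:
        salt = secrets.token_hex(16)
    def pseudonymize(identifier):
        digest = hashlib.sha256((salt + identifier).encode("utf-8"))
        return digest.hexdigest()[:12]  # short, stable code
    return pseudonymize, salt

pseudonymize, salt = make_pseudonymizer()
# Only the pseudonym enters the analysis data set; the raw identifier
# and the salt are kept (or destroyed) separately.
record = {"participant": pseudonymize("participant-042"),
          "condition": "A", "score": 17}
```

Note that pseudonymization alone does not guarantee anonymity: rare combinations of attributes (school, age, condition) can still re-identify a child, so the other considerations above remain necessary.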
Although this point has already been mentioned, it is important to emphasize that some categories of participants require special attention.
- Children. This is the most relevant category for this chapter, since it is central to CCI as well as to learning technology (e.g., in K-12 education). The European Commission’s report on Ethics for Researchers (European Commission, 2013) clarifies that when children are involved in research, care and consideration are pivotal. In addition, it requires a clear justification for involving children in research (“the involvement of children in the research must be absolutely necessary and, if so, all particular ethical sensitivities that relate to research involving children must be identified and taken into account”), and the European Textbook on Ethics in Research provides a detailed section on research involving children (European Commission, 2010, pp. 65–74).
- Vulnerable adults. This category includes, but is not limited to, elderly people, people with learning difficulties, and severely injured patients.
- People from certain cultural or traditional backgrounds. In some communities, notions of individuality, written permission, or written agreement do not exist, and certain groups (such as women) may not be permitted to act autonomously. In such communities, the European Commission (2013) clarifies that “strategies must be developed to address these issues with respect for the specificities of the situation.”
In addition to these categories that clearly require special attention, it is important for the researcher to consider any potential unequal power relationships (Levine et al., 2004). For example, students, teachers, and children might find themselves in a situation where they experience discomfort (e.g., having to act in certain ways in front of their teachers or parents) or even disadvantages (e.g., being socially excluded if they decide not to participate in the study). This puts the voluntariness of their participation in question, and researchers should take all appropriate measures to avoid potential negative effects of participation, emotional stress, or other discomfort (große Deters et al., 2019).
In recent years, we have seen extensive discussion of ethical challenges in the design and use of interactive technologies for children (Hourcade et al., 2017). We have also seen a gradual shift in how publication venues treat ethical considerations in human-factors IT-related research. Most publication venues still do not require an ethics statement from authors, which means that ethical issues experienced during studies may not be properly reported. However, some venues, such as IDC and IJCCI, now require a dedicated section (called, for example, “Selection and participation”) in which the authors describe how the participants were selected, what assent/consent processes were used (i.e., what the participants were told), how the participants were treated, how data sharing was communicated, and any additional ethical considerations. Although the introduction of such a mandatory section (as with any other regulation-driven checkbox exercise) cannot enforce in-depth consideration of the potential ethical challenges that might emerge from experimentation, it definitely helps by providing a baseline and a certain level of awareness in research communities.
Before closing this subsection, we should note that issues such as children’s privacy, AI, social media, and media sharing have not been extensively covered in this book, owing to limitations of space and scope. However, we would like to highlight some of them in this final paragraph. Today’s children are growing up with technologies that use sensor data and data-driven interactions (e.g., multitouch and motion-based technology). Their dispositions toward the use of their personal data (e.g., by voice interfaces and other affordances that rely on biometric recognition) might differ from those of adults. These technological advancements therefore pose fundamental questions as to which technological futures we should be developing and how we face and mediate ethical issues and dilemmas when doing research or designing technology to support children’s learning, play, and living (Antle et al., 2021; Eriksson et al., 2021). Contemporary technologies are often “invisible” (e.g., ubiquitous systems), and their intelligence is fueled by unconsciously produced data and sophisticated AI techniques that evolve continually while in daily use. Future work should consider the ethical issues and dilemmas that emerge from this, and we must proceed with care and responsibility regarding the potential implications of our research designs, methods, practices, and the resulting technologies.
10.3 Working with Children
Researchers conducting experimental studies with children might be required to employ different methods, approaches, and techniques, as observed in much of the CCI research literature and in a recent dedicated chapter (Markopoulos et al., 2021). Nevertheless, we would like to offer a summary of the motivations for and importance of employing child-centered approaches that focus on individual abilities. One example is the use of a traditional verbal questionnaire; such an instrument assumes that respondents are able to think abstractly about their experience. However, children younger than 12 (i.e., those in middle childhood, or at the stage of concrete operations in the Piagetian tradition) have not yet developed these skills; instead, their thinking processes are based on mental representations that relate to concrete events, objects, or experiences. This must be taken into account by adapting the measurement method to the level of cognitive development of the child participant. Following this line of reasoning and related work in child development and psychology (Harter & Pike, 1984), most CCI research instruments (e.g., smileyometers and fun sorters; Read & MacFarlane, 2006; see also Fig. 10.1) rely on visual methods (or on observations and qualitative, checklist-based measurements), which are known to be more effective with children than verbal methods (Döring et al., 2010). Such visual analogs represent specific situations, behaviors, and people to whom the child can easily relate.
Besides the actual instruments used, it is important for CCI and learning technology researchers to consider potential collusion (e.g., when administering questionnaires to a group of children in one place). When it comes to open-ended questions and embodied communication, the researcher will likely be unable to work out what all the words and body signals mean. Moreover, some children will choose to skip some tasks, not follow the depicted usage scenario, or not answer all the questions. This happens often in CCI research, and the researcher needs to be able to orchestrate the experiment in real time while considering potential reasons and interpreting the results accordingly. Potential reasons include children being tired or bored, being unable to read or understand the question, not knowing the answer or how to write it, or any combination of these (Markopoulos et al., 2021). In recent years, we have seen a plethora of tools used to collect children’s opinions and experiences (e.g., the Fun toolkit and laddering). There are also different ways to adapt or modify an instrument from research with adults so that it can support CCI research with children as participants.
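To make the practical points above concrete, the sketch below converts one child’s Smileyometer-style answers into ordinal scores while recording how many items were skipped. The face labels and their numeric mapping are assumptions for illustration, in the spirit of the Fun toolkit rather than a reproduction of it.

```python
# Hypothetical mapping from Smileyometer-style faces to ordinal scores
FACES = {"awful": 1, "not very good": 2, "good": 3,
         "really good": 4, "brilliant": 5}

def score_responses(responses):
    """Convert one child's answers (or None for a skipped item) into
    ordinal scores and summarize them, keeping the skip count so it
    can inform interpretation (tiredness, reading difficulty, etc.)."""
    answered = [FACES[r] for r in responses if r is not None]
    return {
        "mean": sum(answered) / len(answered) if answered else None,
        "n_answered": len(answered),
        "n_skipped": len(responses) - len(answered),
    }

# One child answered three of four items and skipped one
print(score_responses(["brilliant", None, "good", "really good"]))
# -> {'mean': 4.0, 'n_answered': 3, 'n_skipped': 1}
```

Reporting the skip count alongside the mean, rather than silently dropping missing answers, preserves exactly the interpretive signal discussed above.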
References
Antle, A. N., Frauenberger, C., Landoni, M., & Fails, J. A. (2021). Ethics in CCI [special issue]. International Journal of Child-Computer Interaction, 32, 100386.
Belmont Report. (1979). Ethical principles and guidelines for the protection of human subjects of research. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, U.S. Department of Health & Human Services.
Cheng, Z., Dimoka, A., & Pavlou, P. A. (2016). Context may be king, but generalizability is the emperor! Journal of Information Technology, 31(3), 257–264.
Davison, R. M., & Martinsons, M. G. (2016). Context is king! Considering particularism in research design and reporting. Journal of Information Technology, 31(3), 241–249.
Deaton, A. (2010). Instruments, randomization, and learning about development. Journal of Economic Literature, 48, 424–455.
Döring, A. K., Blauensteiner, A., Aryus, K., Drögekamp, L., & Bilsky, W. (2010). Assessing values at an early age: The picture-based value survey for children. Journal of Personality Assessment, 92, 439–448. https://doi.org/10.1080/00223891.2010.497423
Drachsler, H., & Greller, W. (2016). Privacy and analytics: It’s a DELICATE issue. A checklist for trusted learning analytics. In Proceedings of the sixth international conference on learning analytics & knowledge (pp. 89–98).
Eriksson, E., Barendregt, W., & Torgersson, O. (2021). Ethical dilemmas experienced by students in child-computer interaction – A case study. International Journal of Child-Computer Interaction, 100341.
European Commission. (2013). Ethics for researchers: Facilitating research excellence in FP7. Retrieved June 2, 2021, from http://ec.europa.eu/research/participants/data/ref/fp7/89888/ethics-for-researchers_en.pdf.
European Commission. Directorate General for Research. (2010). European textbook of ethics in research. European Commission. Retrieved June 2, 2021, from https://op.europa.eu/en/publication-detail/-/publication/0f37f142-c333-40a8-90a7-bba25c314720/language-en
Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting learning analytics in context: Overcoming the barriers to large-scale adoption. In Proceedings of the 4th international conference on learning analytics and knowledge (pp. 251–253).
Grimmelmann, J. (2015). The law and ethics of experiments on social media users. Colorado Technology Law Journal, 13, 219.
große Deters, F., Tams, S., Johnston, A., & Thatcher, J. (2019). Designing experimental studies. In ICIS 2019. https://aisel.aisnet.org/icis2019/pdws/pdws/8
Harter, S., & Pike, R. (1984). The pictorial scale of perceived competence and social acceptance for young children. Child Development, 55, 1969–1982. https://doi.org/10.2307/1129772
Höök, K., & Löwgren, J. (2012). Strong concepts: Intermediate-level knowledge in interaction design research. ACM Transactions on Computer-Human Interaction (TOCHI), 19(3), 1–18.
Höök, K., Dalsgaard, P., Reeves, S., Bardzell, J., Löwgren, J., Stolterman, E., & Rogers, Y. (2015). Knowledge production in interaction design. In Proceedings of the 33rd annual ACM conference extended abstracts on human factors in computing systems (pp. 2429–2432).
Hourcade, J. P., Zeising, A., Iversen, O. S., Pares, N., Eisenberg, M., Quintana, C., & Skov, M. B. (2017). Child-computer interaction sig: Ethics and values. In Proceedings of the 2017 CHI conference extended abstracts on human factors in computing systems (pp. 1334–1337).
Lee, A. S., & Baskerville, R. L. (2012). Conceptualizing generalizability: New contributions and a reply. MIS Quarterly, 36(3), 749–761.
Levine, C., Faden, R., Grady, C., Hammerschmidt, D., Eckenwiler, L., & Sugarman, J. (2004). The limitations of “vulnerability” as a protection for human research participants. The American Journal of Bioethics, 4(3), 44–49.
Markopoulos, P., Read, J. C., & Giannakos, M. (2021). Design of digital technologies for children. In Handbook of human factors and ergonomics (pp. 1287–1304).
Polit, D. F., & Beck, C. T. (2010). Generalization in quantitative and qualitative research: Myths and strategies. International Journal of Nursing Studies, 47(11), 1451–1458.
Read, J. C., & MacFarlane, S. (2006). Using the fun toolkit and other survey methods to gather opinions in child computer interaction. In Proceedings of the 2006 conference on Interaction design and children (pp. 81–88).
Sao Pedro, M. A., Baker, R. S., & Gobert, J. D. (2013). What different kinds of stratification can reveal about the generalizability of data-mined skill assessment models. In Proceedings of the 3rd international conference on learning analytics and knowledge (pp. 190–194).
Seddon, P., & Scheepers, R. (2012). Towards the improved treatment of generalization of knowledge claims in IS research: Drawing general conclusions from samples. European Journal of Information Systems, 21(1), 6–21.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
Sharma, K., Niforatos, E., Giannakos, M., & Kostakos, V. (2020). Assessing cognitive performance using physiological and facial features: Generalizing across contexts. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4, 1–41.
Sharma, K., & Giannakos, M. (2020). Multimodal data capabilities for learning: What can multimodal data tell us about learning?. British Journal of Educational Technology, 51(5), 1450–1484.
Siemens, G., & Baker, R. S. J. d. (2012). Learning analytics and educational data mining: Towards communication and collaboration. In Proceedings of the 2nd international conference on learning analytics and knowledge (pp. 252–254).
Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510–1529.
Slade, S., & Tait, A. (2019). Global guidelines: Ethics in learning analytics. https://www.icde.org/icde-news/new-report-on-ethics-in-learning-analytics
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2022 The Authors
Giannakos, M. (2022). Issues to Consider as a CCI and Learning Technology Researcher. In: Experimental Studies in Learning Technology and Child–Computer Interaction. SpringerBriefs in Educational Communications and Technology. Springer, Cham. https://doi.org/10.1007/978-3-031-14350-2_10