Abstract
As interdisciplinary research fields, child-computer interaction (CCI) and learning technologies have the advantage of enhancing their methods by borrowing from related fields. They represent a research stream that began by applying theories, methods, and tools from a variety of fields, such as the learning sciences, human-computer interaction, design, and the social sciences. Experiments and appropriate experimental methods have been employed extensively over many years to build scientific knowledge and to understand how to design technology to support human needs (e.g., learning, socializing, and wellbeing). This chapter discusses how experiments and experimental studies can be employed in the context of research on learning technology and/or CCI.
Scientific research follows an iterative process of observation, rationalization, and validation (Bhattacherjee, 2012). As the name suggests, during observation we observe (experience or sense) the phenomenon of interest (e.g., an event, behavior, or interaction) and form an initial research question (RQ). In many cases, the initial question is anecdotal (e.g., you noticed that students who use dashboards complete more assignments, or that those participating in more classroom quizzes earn better mid-term or final grades), but it can also be based on data (e.g., you see that the scores of students who complete tasks in the labs are higher than those of students who complete tasks in the classroom). In the rationalization phase, we try to understand the phenomenon by systematically connecting what we have observed, which may lead to the formation or concretization of a theory or scientific inquiry (e.g., research hypotheses). Finally, the validation phase allows us to test potential research hypotheses and/or theories using an appropriate research design (e.g., data collection and analysis).
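The three phases above can be made concrete with a small, purely illustrative sketch. All of the scores below are invented for the sake of the example (echoing the lab-versus-classroom observation in the text), and only the Python standard library is used:

```python
import math
import statistics

# Observation: task scores recorded in two settings (hypothetical data).
lab_scores = [78, 85, 81, 90, 76, 88, 83, 79]
classroom_scores = [72, 80, 69, 75, 77, 70, 74, 71]

# Rationalization: the difference in means suggests a hypothesis,
# e.g., "completing tasks in the lab is associated with higher scores".
mean_lab = statistics.mean(lab_scores)
mean_class = statistics.mean(classroom_scores)

# Validation (a first step): quantify the difference with Welch's
# t statistic, which does not assume equal variances across groups.
def welch_t(a, b):
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        va / len(a) + vb / len(b)
    )

t = welch_t(lab_scores, classroom_scores)
print(f"lab mean={mean_lab:.1f}, classroom mean={mean_class:.1f}, Welch t={t:.2f}")
```

In a real study, validation would of course go further than a single statistic: it would involve a proper research design, a significance test against the t distribution, and checks on the assumptions behind it.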
The research process should be based on the principles of design research, with very intensive collaboration between practitioners and researchers. The ultimate goal is to build a strong connection between research and practice. Emphasis should be placed on the iterative nature of the research process, which does not just “test” a technology or a process but refines it while also producing new knowledge (e.g., best practices and design principles) that can support future research and development. Closely situating your work in real-world settings and collaborating with stakeholders allows you both to clearly identify the problem you seek to solve and to deploy and evaluate your research in its intended environments. Therefore, the proposed iterative process of observation, rationalization, and validation (Fig. 1.1) should be employed in a way that leverages collaboration among researchers and practitioners in real-world settings and leads to contextually sensitive knowledge, design principles, and theories.
The research that is put into practice varies in type. For instance, the researcher can conduct further observations to rationalize the observations already made (commonly called inductive research) or test the theory or scientific inquiry of interest (commonly called deductive research). The selection of the type of research depends on the researcher’s standpoint on the nature of knowledge (epistemology) and reality (ontology), which is shaped by the disciplinary areas to which the researcher belongs. Given their interdisciplinary nature, the fields of child–computer interaction (CCI) and learning technology follow both the inductive and the deductive research traditions. Although parts of this book can apply to both types of research, its focus is more on deductive research and how it can be operationalized through experimental studies.
Experimental research has been used extensively as one of the primary methodologies for a wide range of disciplines, from chemistry to physics to psychology to human–computer interaction (HCI) to the learning sciences (LS). The inherent connections between CCI and learning technology, on the one hand, and HCI and LS, on the other hand, as well as the strong links of all these disciplines to the behavioral sciences, have resulted in the use of experimental studies as one of the predominant modes of research. Experimental studies are often considered to be the “gold standard” (most rigorous) of research designs (Christensen et al., 2011), and from the early 1900s onward, experimental research methods received strong impetus from behavioral research and psychology. The goal of experimental research is to show how the manipulation of a variable of interest (e.g., the resolution of a video lecture) has a direct causal influence on another variable of interest (e.g., students’ perception of fractions). For instance, we can consider the following research question: “How does the visualization of students’ learning scores via a dashboard affect their future learning performance?”
To conceptualize the RQ, the researcher investigates the effect of the experimental/independent variable on the dependent/outcome variable through an induced “treatment” (a procedure that holds all conditions constant except the independent/experimental variable). Therefore, any potential significant difference identified when comparing the group with the induced experimental treatment (the experimental group) to the group without the treatment (the control group) is assumed to have been caused by the independent variable (see Fig. 1.2 for a graphical representation). Such an experiment ensures high internal validity (the degree to which the design of the experiment controls for extraneous factors). Therefore, in contrast to other types of research, such as descriptive, correlational, survey, and ethnographic studies, experiments create conditions where the outcome can be confidently attributed to the independent variable rather than to other factors. Simply put, an experiment is “a study in which an intervention is deliberately introduced to observe its effects” (Shadish et al., 2002, p. 12).
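The mechanics of this design — random assignment to a treatment and a control condition, followed by a between-group comparison on the dependent variable — can be sketched as follows. The participants, the dashboard manipulation, and the post-test scores are all hypothetical, and the sketch uses only the standard library:

```python
import random
import statistics

random.seed(1)  # fixed seed so the random assignment is reproducible

# 20 hypothetical participants, randomly split into two equal conditions.
participants = [f"p{i:02d}" for i in range(1, 21)]
random.shuffle(participants)
treatment_group = sorted(participants[:10])  # sees the learning dashboard
control_group = sorted(participants[10:])    # identical course, no dashboard

# Hypothetical post-test scores (the dependent variable) measured
# after the intervention, one list per condition.
scores = {
    "treatment": [74, 81, 69, 88, 77, 83, 72, 79, 85, 76],
    "control":   [70, 73, 66, 78, 71, 75, 68, 72, 74, 69],
}

# Because everything except the dashboard was held constant and assignment
# was random, the mean difference estimates the treatment effect.
effect = statistics.mean(scores["treatment"]) - statistics.mean(scores["control"])
print(f"observed mean difference: {effect:.1f} points")
```

Random assignment is what does the work here: it spreads extraneous factors (prior knowledge, motivation, and so on) evenly across the two groups in expectation, which is exactly what licenses the causal reading of the group difference described in the text.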
Experiments are not always easy to define, as they depend on the domain, the RQs, and even the scientist (Cairns et al., 2016). They rely heavily on craft, skill, and experience, and they put tests into practice to trial ideas. In the case of CCI and learning technology, those trials are employed to evaluate existing or new technologies and interfaces, establish guidelines, and understand how learners/children use technology. The main strength of the experimental paradigm derives from its high internal validity, which allows experimentation to be viewed as an “acceptable” research practice (Hannafin, 1986). Experimental research gives less emphasis to external validity, which concerns the degree to which the results of a study can be generalized to other situations, particularly realistic ones, a focus that is at the center of other research designs and approaches that are commonly employed in CCI and learning technologies (e.g., Barab & Squire, 2004).
As interdisciplinary research fields, CCI and learning technologies have the advantage of enhancing their methods by borrowing from related fields. They represent a research stream that began by applying theories, methods, and tools from a variety of fields, such as LS, HCI, design, and the social sciences. It is not difficult to see the nature and benefits of interdisciplinarity in CCI and learning technologies: it results from integrating qualities from different fields (e.g., user/learner-centeredness, internal validity, external validity, and accounting for context) and allows researchers to leverage and combine a wide range of methods, theories, and tools.
The purpose of this book is not to promote or criticize experimental methods, but rather to provide insights for their effective use in CCI and learning technology research. It is important to highlight the importance of “method pluralism” and “letting method be the servant” (Firebaugh, 2018). As in work on experimental methods in human-factor IT-related fields that has criticized “the man of one method or one instrument” (e.g., Hornbæk, 2013; Gergle & Tan, 2014), we want to emphasize the risks of adopting a method-oriented research practice rather than a problem-oriented one. Method-oriented practice is likely to drive researchers to conduct experiments that force-fit the data (Ross & Morrison, 2013) or to dissuade them from conducting experiments when needed, instead relying on methods that center on the experience of the researcher or lead to results that cannot be replicated. As Platt (1964, p. 351) stated, “the method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important.”
Experimental studies allow us to isolate which components (e.g., functionalities or affordances) of the technology, the medium, the end-user (e.g., the learner, teacher, or child) or the environment affect the intended goal (e.g., learning or social interaction) and in what ways. In this book, my approach is to present experimental methods as valuable tools for CCI and learning technology research, through the lens of the data-intensive nature of contemporary research. In addition, I emphasize the role of the researcher in using, adapting, and altering these methods to accommodate the contextual complexity, relevant theories, and the scientific inquiry of focus.
References
Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. The Journal of the Learning Sciences, 13(1), 1–14.
Bhattacherjee, A. (2012). Social science research: Principles, methods, and practices. Global Text Project.
Cairns, P. (2016). Experimental methods in human–computer interaction. In M. Soegaard & R. F. Dam (Eds.), The encyclopedia of human-computer interaction (2nd ed.). Interaction Design Foundation.
Christensen, L. B., Johnson, R. B., & Turner, L. A. (2011). Research methods, design, and analysis (11th ed.). Allyn & Bacon.
Firebaugh, G. (2018). Seven rules for social research. Princeton University Press.
Gergle, D., & Tan, D. S. (2014). Experimental research in HCI. In Ways of knowing in HCI (pp. 191–227). Springer.
Hannafin, M. J. (1986). The status and future of research in instructional design and technology. Journal of Instructional Development, 8, 24–30.
Hornbæk, K. (2013). Some whys and hows of experiments in human–computer interaction. Foundations and Trends in Human-Computer Interaction, 5(4), 299–373.
Platt, J. (1964). Strong inference. Science, 146(3642), 347–353.
Ross, S. M., & Morrison, G. R. (2013). Experimental research methods. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (pp. 1007–1029). Routledge.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
© 2022 The Authors
Giannakos, M. (2022). Introduction. In: Experimental Studies in Learning Technology and Child–Computer Interaction. SpringerBriefs in Educational Communications and Technology. Springer, Cham. https://doi.org/10.1007/978-3-031-14350-2_1
Print ISBN: 978-3-031-14349-6
Online ISBN: 978-3-031-14350-2