Scientific research follows an iterative process of observation, rationalization, and validation (Bhattacherjee, 2012). As the name suggests, during an observation, we observe (experience/sense) the phenomenon of interest (e.g., an event, behavior, or interaction) and form an initial research question (RQ). In many cases, the initial question is anecdotal (e.g., you noticed that students who use dashboards complete more assignments, or that those participating in more classroom quizzes have better mid-term or final grades), but it can also be based on data (e.g., you see that the scores of students who complete tasks in the labs are higher than those of students who complete tasks in the classroom). In the rationalization phase, we try to understand a phenomenon by systematically connecting what we have observed, which might lead to the formation or concretization of a theory or scientific inquiry (e.g., research hypotheses). Finally, the validation phase allows us to test potential research hypotheses and/or theories using an appropriate research design (e.g., data collection and analysis).

The research process should be based on the principles of design research, with intensive collaboration between practitioners and researchers. The ultimate goal is to build a strong connection between research and practice. Emphasis should be placed on the iterative nature of a research process that does not merely “test” a technology or a process but refines it while also producing new knowledge (e.g., best practices, design principles) that can support future research and development. Closely situating your work in real-world settings and collaborating with stakeholders allows you both to clearly identify the problem you seek to solve and to deploy and evaluate your research in its intended environment. Therefore, the proposed iterative process of observation, rationalization, and validation (Fig. 1.1) should be employed in a way that leverages collaboration among researchers and practitioners in real-world settings and leads to contextually sensitive knowledge, design principles, and theories.

Fig. 1.1
A flowchart of the steps carried out in observation, rationalization, and validation for improving the relationship between researchers and practitioners.

The iterative process of observation, rationalization, and validation

The research that is put into practice varies in type. For instance, the researcher can conduct further observations to rationalize the observations already made (what is commonly called inductive research) or test the theory or scientific inquiry of interest (what is commonly called deductive research). The choice between these types of research depends on the researcher’s standpoint on the nature of knowledge (epistemology) and reality (ontology), which is shaped by the disciplinary areas to which the researcher belongs. Given their interdisciplinary nature, the fields of child–computer interaction (CCI) and learning technology follow both the inductive and the deductive research traditions. Although parts of this book can apply to both types of research, its focus is on deductive research and how it can be operationalized through experimental studies.

Experimental research has been used extensively as one of the primary methodologies for a wide range of disciplines, from chemistry to physics to psychology to human–computer interaction (HCI) to the learning sciences (LS). The inherent connections between CCI and learning technology, on the one hand, and HCI and LS, on the other hand, as well as the strong links of all these disciplines to the behavioral sciences, have resulted in the use of experimental studies as one of the predominant modes of research. Experimental studies are often considered to be the “gold standard” (most rigorous) of research designs (Christensen et al., 2011), and from the early 1900s onward, experimental research methods received strong impetus from behavioral research and psychology. The goal of experimental research is to show how the manipulation of a variable of interest (e.g., the resolution of a video lecture) has a direct causal influence on another variable of interest (e.g., students’ perception of fractions). For instance, we can consider the following research question: “How does the visualization of students’ learning scores via a dashboard affect their future learning performance?”

To operationalize the RQ, the researcher investigates the effect of the experimental/independent variable on the dependent/outcome variable through an induced “treatment” (a procedure that holds all conditions constant except the independent/experimental variable). Therefore, any significant difference identified when comparing the group that receives the experimental treatment (the experimental group) to the group without the treatment (the control group) is assumed to have been caused by the independent variable (see Fig. 1.2 for a graphical representation). Such an experiment ensures high internal validity (the degree to which the design of the experiment controls for extraneous factors). Thus, in contrast to other types of research, such as descriptive, correlational, survey, and ethnographic studies, experiments create conditions under which the outcome can be confidently attributed to the independent variable rather than to other factors. Simply put, an experiment is “a study in which an intervention is deliberately introduced to observe its effects” (Shadish et al., 2002, p. 12).
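The logic of this two-group comparison can be sketched in a few lines of Python. The data below are simulated purely for illustration (no real study is implied), and the size of the treatment effect is an assumption built into the simulation; in a real experiment, the scores would come from randomly assigned participants.

```python
import random
import statistics

# Hypothetical illustration: outcome scores for a control group (no dashboard)
# and an experimental group (dashboard), with all other conditions held constant.
# The means (70 vs. 75) are assumptions for the sake of the example.
random.seed(42)
control = [random.gauss(70, 10) for _ in range(30)]       # no treatment
experimental = [random.gauss(75, 10) for _ in range(30)]  # with treatment

mean_c = statistics.mean(control)
mean_e = statistics.mean(experimental)

# Welch's t statistic for the difference in means: the larger |t| is,
# the less plausible it is that the difference arose by chance alone.
var_c = statistics.variance(control)
var_e = statistics.variance(experimental)
t = (mean_e - mean_c) / ((var_c / len(control) + var_e / len(experimental)) ** 0.5)

print(f"control mean: {mean_c:.1f}, experimental mean: {mean_e:.1f}, t = {t:.2f}")
```

In practice, a researcher would compare the resulting statistic against an appropriate distribution (or use a library routine such as a t-test) to judge significance; the point here is only that the design itself, holding everything constant except the treatment, is what licenses the causal reading of the difference.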

Fig. 1.2
A graphical representation of the independent and dependent variables across a control group and experimental groups receiving different treatments.

Typical representation of an experiment

Experiments are not always easy to define, as they depend on the domain, the RQs, and even the scientist (Cairns et al., 2016). They rely heavily on craft, skill, and experience, and they put ideas to the test through practical trials. In the case of CCI and learning technology, those trials are used to evaluate existing or new technologies and interfaces, establish guidelines, and understand how learners/children use technology. The main strength of the experimental paradigm derives from its high internal validity, which allows experimentation to be viewed as an “acceptable” research practice (Hannafin, 1986). Experimental research places less emphasis on external validity, which concerns the degree to which the results of a study can be generalized to other situations, particularly realistic ones, a focus that lies at the center of other research designs and approaches commonly employed in CCI and learning technologies (e.g., Barab & Squire, 2004).

As interdisciplinary research fields, CCI and learning technologies have the advantage of being able to enhance their methods by borrowing from related fields. They represent a research stream that began by applying theories, methods, and tools from a variety of fields, such as LS, HCI, design, and the social sciences. It is not difficult to see the nature and benefits of the interdisciplinarity of CCI and learning technologies, which results from integrating qualities from different fields (e.g., user/learner-centeredness, internal validity, external validity, and accounting for context) and allows researchers to leverage and combine a wide range of methods, theories, and tools.

The purpose of this book is not to promote or criticize experimental methods, but rather to provide insights for their effective use in CCI and learning technology research. It is important to highlight the importance of “method pluralism” and “letting method be the servant” (Firebaugh, 2018). As in work on experimental methods in human-factors IT-related fields that has criticized “the man of one method or one instrument” (e.g., Hornbæk, 2013; Gergle & Tan, 2014), we want to emphasize the risks of adopting a method-oriented research practice rather than a problem-oriented one. Method-oriented practice is likely to drive researchers to conduct experiments that force-fit the data (Ross & Morrison, 2013) or to dissuade them from conducting experiments when needed, instead relying on methods that center on the experience of the researcher or lead to results that cannot be replicated. As Platt (1964, p. 351) stated, “the method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important.”

Experimental studies allow us to isolate which components (e.g., functionalities or affordances) of the technology, the medium, the end-user (e.g., the learner, teacher, or child), or the environment affect the intended goal (e.g., learning or social interaction), and in what ways. In this book, my approach is to present experimental methods as valuable tools for CCI and learning technology research, through the lens of the data-intensive nature of contemporary research. In addition, I emphasize the role of the researcher in using, adapting, and altering these methods to accommodate contextual complexity, relevant theories, and the scientific inquiry of focus.