8.1 Common Criteria

The above materials give you enough information to carry out the whole process of observation, rationalization, and validation, as well as the necessary supporting processes (e.g., artefact design, data analysis, and reporting). At the end of the process, everything is documented in a comprehensive report or a paper, and the respective prototypes, datasets, and practical information are kept on file. An important question, however, remains: In the context of CCI and learning technology research, what are the main reasons for reviewers rejecting a paper or asking for revisions? Drawing on our own experience and on various guides on how to review papers in CCI, learning technology, and neighboring fields (e.g., HCI, CSCW, and RLT), we believe that the following criteria and pitfalls are common to CCI and learning technology venues (although their relative importance and level of explicitness may vary).

The following list gives the most common criteria applied when reviewing papers in CCI and learning technology venues.

  • Relevance. Substantive research and/or design knowledge contributions should be concerned with “the phenomena surrounding the interaction between children and computational and communication technologies” (for CCI) and “advances in learning technologies and their applications” (for learning technology). Simply using children as end-users, or using a technology that might support learning, to study a general or educational phenomenon is generally not enough.

  • Importance/significance. Research should address a significant problem of important and lasting value. This criterion can be met by, for example, motivating RQs or hypotheses in terms of learning or HCI theory, by interpreting results in such terms, or by responding to a challenge that has been discussed and debated in the literature.

  • Grounding in the literature. Grounding in prior research literature is very important. A reference list that seems to omit the most important works or is extremely short could be grounds for desk rejection (i.e., because of the paper’s obvious lack of relevance/thoroughness/importance, the editor decides not to send it for peer review).

  • Scientific rigor. A paper should use methodology that is appropriate for the RQs. A general report on the experience of a practitioner or an instructor in implementing an innovation, or an obscure form of data collection, is generally not enough.

  • Write-up and structure. A paper must be clearly written in appropriate language and be properly structured (e.g., Introduction, Related work, Methods, Findings, Discussion, Conclusion).

  • Research ethics. There should be some discussion of the ethics of working with children as research participants/teachers/partners (whenever this is relevant). For example, has approval from an ethics board (or institutional review board, IRB) been obtained for the research? If not, it is common to reject the paper immediately and not allow resubmission until a statement of approval has been obtained.

These criteria are commonly used by editors, program chairs, and reviewers to evaluate a paper in the area of CCI and learning technology. It is necessary to weigh up the criteria realistically; for example, a paper that is not well-written may nevertheless contain important results. However, it is important for papers to satisfy most of the criteria, with the potential to satisfy all of them after revision.

8.2 Potential Pitfalls

Along with the aforementioned criteria, it is important to be aware of pitfalls when designing, conducting, and reporting CCI and learning technology studies. The following list gives the most common pitfalls in such research.

  • Insufficient theoretical base, literature grounding, or rationale. The basis for a study is the formulation of certain RQs or hypotheses from a relevant theoretical base, previously published studies, and/or a rationale and argumentation from the researcher’s observations. Most studies use a combination of theory, related work, and rationale to ground their hypotheses and provide rock-solid motivation. Example: Observations conducted throughout the semester on students who used adaptive assessment questions (questions assigned to each student based on what they have already mastered and on question difficulty), together with relevant theoretical concepts (e.g., zone of proximal development and flow state), motivate our work on the benefits of adaptive content. Therefore, we hypothesize that students who receive adaptive content will have significantly better LPS than students who receive content procedurally.

  • Low internal validity of conditions and/or subjects. Conditions and/or subjects are not uniformly implemented, such that certain groups have an advantage in a particular condition. Example: The experimental group receives a task/condition that needs less time to be processed (low internal validity of condition), or the experimental group consists of older students who have more developed cognitive skills (e.g., faster reading speeds). Other reasons for low internal validity include a lack of randomization (e.g., allowing students to select the group they join, such that high performers might select the experimental group) and the use of unequal treatments; a random-assignment sketch follows this list.

  • Failure of the developed artefact to support the intended testing. This is a common pitfall for CCI and learning technology studies. Artefacts have a certain set of qualities or components (e.g., functionalities and affordances) that allow us to experiment by isolating and testing certain components. Nevertheless, when artefacts fail to isolate the components we intended, we introduce bias or confounds (mixing the effect of the exposure of primary interest with extraneous risk factors). As a result, we cannot effectively test the components we want to test. Example: The study introduces a visual dashboard that presents different information compared to the nonvisual (control) dashboard. Therefore, the researcher cannot determine whether the observed effects are associated with the different information presented or with the visualization of the dashboard.

  • Measurement bias. Variables and other outcomes are not measured in a proper scientific way (as when, in a quantitative study, no standardized scales are used, or when, in a qualitative study, observations and analysis are carried out by a single author without any reliability checks). Example: In a quantitative study, the measures employed do not correspond to the variable in the research question, or the participants’ responses can be interpreted in different ways. This situation would arise in a qualitative study of teachers’ use of technology where the researcher who conducted the observations of the teachers was also working as a teacher in the same school; a simple inter-rater reliability check (sketched after this list) helps to guard against such bias.

  • Low external validity (low/no generalization). The topic is not important, or the results are weak and do not generalize to other contexts. Low external validity makes it harder to identify implications beyond the studied context, which limits the contribution to the literature. Nevertheless, it is important to emphasize that a study (e.g., a laboratory-based study) may have low external validity but high internal validity, and some types of journals welcome such studies.

  • Trivial outcomes. The outcomes of the study constitute a “self-fulfilling prophecy.” Example: A group of students at a formal operational stage (aged 12 and over) perform mathematical operations faster than a group of students who are at a concrete operational stage (aged 7–11).

  • Problems with data analysis. The analyses necessary to address the RQs are not applied properly or are not well described. Example: A quantitative study uses statistical tests that depend on certain parametric assumptions, but the authors do not check whether those assumptions are met; or although the RQs require statistical analysis of causal effects, the authors have conducted correlational analyses instead. A minimal assumption-check sketch follows this list.

  • Poor writing or inadequate description of methodology. This problem arises when the writing style is unclear, the language quality (syntax) is poor, the paper is badly structured, and/or important methodological details have been omitted. Examples: The method section contains no subheadings and mixes the variables, descriptions of participants, and analysis; the results are presented in a very opinionated manner (mixed with discussion); the discussion section is missing (i.e., there is no interpretation of the results); or obvious limitations of the selected methodology are not discussed.
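
For the randomization point above, random assignment of participants to conditions can be done in a few lines. The following is a minimal sketch; the roster, group names, and seed are hypothetical, and it illustrates the idea rather than prescribing a procedure.

```python
# Hypothetical sketch: randomly assigning a class roster to two conditions
# so that students cannot self-select into the experimental group.
import random

student_ids = [f"S{i:02d}" for i in range(1, 41)]  # invented roster of 40 students
random.seed(42)                                    # fixed seed to make the assignment reproducible
random.shuffle(student_ids)

half = len(student_ids) // 2
groups = {
    "experimental": student_ids[:half],  # e.g., receives the adaptive content
    "control": student_ids[half:],       # e.g., receives the procedurally ordered content
}
print({name: len(ids) for name, ids in groups.items()})
```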
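For the measurement-bias point, a simple inter-rater reliability check on coded observations can be computed as below. This sketch assumes scikit-learn is available, and the raters and codes are invented for illustration.

```python
# Hypothetical sketch: agreement between two independent raters who coded
# the same classroom observations; assumes scikit-learn is installed.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["on_task", "off_task", "on_task", "on_task", "off_task", "on_task"]
rater_2 = ["on_task", "off_task", "on_task", "off_task", "off_task", "on_task"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate strong agreement
```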
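Finally, for the data-analysis point, the parametric assumptions behind an independent t-test can be checked before the test is run. The sketch below uses invented response-time data and falls back to a non-parametric test when normality is doubtful; it is one reasonable workflow, not the only one.

```python
# Hypothetical sketch: checking normality and homogeneity of variance before an
# independent t-test, and falling back to Mann-Whitney U if normality is doubtful.
from scipy import stats

adaptive = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.4, 4.8]     # invented response times (s)
procedural = [5.2, 4.9, 5.6, 5.1, 4.7, 5.4, 5.8, 5.0]   # invented response times (s)

normal_a = stats.shapiro(adaptive).pvalue > 0.05
normal_b = stats.shapiro(procedural).pvalue > 0.05
equal_var = stats.levene(adaptive, procedural).pvalue > 0.05

if normal_a and normal_b:
    result = stats.ttest_ind(adaptive, procedural, equal_var=equal_var)
else:
    result = stats.mannwhitneyu(adaptive, procedural)
print(result)
```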

8.3 Useful Practices

There are several detailed guides to help learning technology and CCI researchers to understand how to carry out their research and provide them with appropriate practices and approaches (e.g., Hudson & Mankoff, 2014; McKenney & Reeves, 2018). In the introduction to this book, we also describe the main steps of the research process. The purpose of this section is slightly different, namely to offer some practical advice to new CCI and learning technology researchers.

When planning your research, it is important to be able to provide a visual summary of your research design and the underlying idea. As the researcher, you should be able to provide a brief but clear motivation for the proposed research and your methodological decisions. Your motivation can be supported by related work and learning/HCI theories. Typical questions to ask yourself at this step include: What is the main motivation and goal of this research? Is the idea materialized with a technological or other innovation? What does the literature say? For instance, your motivation might be to provide timely feedback to your students, and therefore you want to test a new clicker technology that provides immediate feedback, unlike previous technologies that only provide summary feedback at the end of the class hour.

Next, you need to formulate your RQs clearly, so that they are properly scoped and capable of being answered. For instance, what is the role of immediate feedback in students’ learning performance and attitude during lectures? You then need to think of your target population (e.g., university students), the instruments and data collection methods you want to use (e.g., log data or pre-post survey), the analysis methods you expect to use (e.g., independent t-test on students’ response times), and the outcomes you expect to find (e.g., students will respond more slowly but their accuracy and attitude will improve). At the end of this exercise, you will have a summary like Table 8.1 that allows you to reflect on, explain, and discuss your research proposal.
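
For the analysis step named above, a minimal sketch of the independent t-test on response times might look like the following. The groups and numbers are invented for illustration, and in practice the parametric assumptions should be checked first, as discussed in the pitfalls above.

```python
# Hypothetical sketch: comparing response times between a clicker group and a
# control group with an independent t-test; all values are invented.
from scipy import stats

clicker = [6.8, 7.2, 6.5, 7.0, 7.4, 6.9, 7.1, 6.7]      # response times (s) with immediate feedback
no_clicker = [6.1, 6.4, 5.9, 6.3, 6.6, 6.0, 6.2, 5.8]   # response times (s) without it

result = stats.ttest_ind(clicker, no_clicker)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```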

Table 8.1 Overview of data analysis procedures used in learning technology and CCI research

Although this is not a comprehensive technique for representing a detailed research plan, it is a practical way to summarize and communicate your proposal. Similar diagrams have been recommended in support of different goals (e.g., writing proposals for funding MSc/PhD thesis studies) and different stages of research (e.g., brainstorming or data analysis).