Abstract
Conducting experimental studies in learning technology and CCI research entails an iterative process of observation, rationalization, and validation. Although data collection and data analysis procedures vary widely in complexity, their selection is driven by the research objectives, RQs, or hypotheses. Researchers therefore need to select them carefully and ensure that the design decisions about data collection and analysis are adequate for the goals of the study. This chapter describes the data collection and analysis procedures commonly employed in learning technology and CCI research. It is intended as a guide to help CCI and learning technology researchers decide what data to collect and how to analyze them to address the goals of their study.
As mentioned in the introduction, conducting experimental studies in learning technology and CCI research entails an iterative process of observation, rationalization, and validation (see Fig. 6.1). More detailed processes with additional steps, such as conducting a literature review, have been proposed (e.g., Ross & Morrison, 2013). Nevertheless, no matter how detailed a description we have, determining, conducting, and reporting the data analysis is fundamental. Although data analysis procedures vary widely in complexity, selection of the appropriate analysis is usually based on two aspects: the RQs/hypotheses and the type of data involved. To clarify the process, Fig. 6.1 shows the steps typically needed to determine and conduct the data analysis.
6.1 Data Collection
There are different ways in which researchers collect data. Whether the research design is qualitative, quantitative, or mixed, researchers need to collect data that will support the rationalization of the study (e.g., respond to the hypotheses or the RQs). In human-factors IT-related fields in particular, we usually see different quantitative (e.g., log files/analytics, questionnaire data, sensor data) and/or qualitative (e.g., interviews, field notes) data collections taking place. Although it is possible to follow some of the principles described in this section with qualitative data as well, through different forms of data quantification (e.g., annotations, text mining, expert analysis), most of the practices described here concern quantitative data collections. In several cases, data collections are associated with specific measurements (which are in turn associated with the RQs): in some data collections the measurements are predefined (e.g., questionnaire data, some log files), in others the measurements are post-computed (e.g., from sensor data), and in yet others there are no measurements at all (as is common in qualitative research studies). In this section, we present some example data collections that are relevant for learning technology and CCI researchers.
Questionnaire Data (Also Known as Survey Data)
The use of questionnaires (also called surveys) has a long history in both HCI and learning technology research. The goal is to understand users'/learners' attitudes and perceptions toward an artifact and/or a procedure. Questionnaires also allow us to gather information about users' backgrounds (e.g., habits, technology use), demographics, and awareness. Questionnaires have been used for many years across different fields, such as social psychology, behavioral research, and marketing, and can be administered in pen-and-paper form or as part of the system (e.g., integrated questionnaires). Several standardized questionnaires have been developed to gather information about a system's perceived usability (e.g., the System Usability Scale (SUS) (Brooke, 1996) and the Computer System Usability Questionnaire (CSUQ) (Lewis, 1995)), users' perceived effort (e.g., the NASA Task Load Index (NASA-TLX) (Hart & Staveland, 1988)), and users' attitudes and perceptions (e.g., perceived usefulness and perceived ease of use (Davis, 1989)). Questionnaires are a direct means of measuring users' perceived experience, such as satisfaction, enjoyment, and ease of use, and many of them have a high level of standardization in HCI research (e.g., satisfaction is part of ISO 9241). In the same vein, questionnaires are systematically used to assess learning experience; several questionnaire instruments have been developed and widely used in the past (e.g., to evaluate a learning system or different aspects of the learning design) (see Kay & Knaack, 2009; Henrie et al., 2015). Questionnaires are a commonly accepted measure of users' and learners' experience (at least the perceived one), and despite some criticism (e.g., overuse of or overreliance on questionnaires), they will probably continue to be a valid approach for externalizing and quantifying users' perceived experience. Fig. 6.2 shows two standard questionnaires for measuring a system's usability (left) and users' mental effort (right).
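As a concrete illustration of how such a standardized instrument is scored, SUS responses can be turned into a single 0–100 score following Brooke's (1996) scheme: positively worded (odd-numbered) items contribute the response minus one, negatively worded (even-numbered) items contribute five minus the response, and the sum is multiplied by 2.5. The example responses below are made up:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    1-5 Likert responses, following Brooke (1996)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    # Odd-numbered items (index 0, 2, ...) are positively worded:
    # contribution = response - 1. Even-numbered items are negatively
    # worded: contribution = 5 - response.
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# One (hypothetical) respondent's answers to items 1..10:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```

Note that a SUS score is not a percentage; scores above roughly 68 are conventionally read as above-average usability.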
In this book we are not going to have a deep discussion of the role of questionnaires in IT-related research; such a discussion can be found in Müller et al. (2014) and Groves et al. (2011). However, we will briefly discuss how questionnaires can help us collect useful data and what the most common measurements in learning technology and HCI research are. The most common conceptual constructs (measurements) are multi-item (multi-question), meaning that several similar questions are used to construct the measurement of the construct. In most cases, they are measured using Likert scales (five- or seven-point scales are the most common), and the wording of the scales can be configured to match the question.Footnote 1 Although no strict requirements exist, in large-scale studies (usually survey studies) we see expectations of ten respondents per item (question); in experimental designs we see studies with fewer respondents per item. However, researchers need to be mindful of the "ecology" of the measurements (e.g., they should use a manageable number of questions that allows the user to understand, reflect, and respond). Beyond the care researchers need to take during the research design of a study, there are also procedures for assessing the convergent validity of the questionnaire measurements used in a study. For instance, Fornell and Larcker (1981) proposed the following three routines: composite reliability of each measurement, with a Cronbach's α above 0.7 usually expected; item/question reliability of the measure, where a factor loading of 0.7 or above for each question (with no cross-loadings) is usually a good indicator; and the Average Variance Extracted (AVE) of the measure, which is usually expected to equal or exceed 0.50. In the following table (Table 6.1) we provide some examples of measurements commonly used in learning technology and HCI.
These items are properly contextualized and provided as options under a general question such as "Please indicate how much you agree or disagree with the following statements based on your experience with [the artifact]:". In place of [the artifact], researchers can insert the artifact of interest (e.g., the XYZ mobile application, the avatar, the dashboard).
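The Fornell and Larcker (1981) routines mentioned above can be computed directly from the collected responses and the factor loadings. The sketch below shows the first and third routines, Cronbach's α and AVE; the item matrix and loadings are invented purely for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for one construct.
    items: respondents x questions matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-question variance
    total_var = items.sum(axis=1).var(ddof=1)      # variance of sum scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized factor loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return float((loadings ** 2).mean())

# Invented data: 5 respondents x 3 questions for one construct,
# and loadings from a (hypothetical) factor analysis.
responses = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [4, 4, 5]]
loadings = [0.82, 0.76, 0.71]
print(f"alpha = {cronbach_alpha(responses):.2f}")   # want > 0.7
print(f"AVE   = {average_variance_extracted(loadings):.2f}")  # want >= 0.5
```

The item-reliability check (factor loadings of 0.7 or above, without cross-loadings) requires running a factor analysis first, for which dedicated packages are typically used.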
Analytics (Also Known as User Logs)
In the fourth chapter we discussed the user traces that are left behind when users interact with technologies, and the implications those traces have for learning technology and CCI research. Those traces produce a wide range of insights, including users' response time, response correctness, number of attempts to solve a problem, time spent interacting with learning resources, navigation across various learning resources, activity in the various communication functionalities (e.g., forums), and other learning trace data. Besides the ways systems can develop intelligence by leveraging these data, such data can also be used to enrich measurements when conducting experimental studies. As we discussed in the fourth chapter, tracking logs are powerful (you can see examples from edX MOOCs hereFootnote 2) and can help us infer useful measurements; see services that host and provide access to learning interaction data, such as the Pittsburgh Science of Learning Center's DataShop (https://pslcdatashop.web.cmu.edu/). Although a perfect one-to-one relationship between "measurements" and "conceptual constructs" is practically impossible, very close relationships (i.e., analytics that capture the target construct to a great extent) exist and are heavily used in CCI and learning technology research (e.g., learning performance defined as the scores of the user in the assessment tasks). This allows us to capture those useful measurements intuitively (e.g., via the log files). Although such measurements can be post-computed from the tracking logs of the technology and the respective database schema, it is also possible, and significantly more practical, to "architect" analytics when designing and developing the technology.
By architecting the analytics, you can develop relational database schemas that organize the data with respect to your needs and meaningful measurements (e.g., see Pardos et al., 2016). Architecting analytics is also powerful when you have to work with learning ecosystems, where analytics across systems need to be captured and made sense of (Mangaroska et al., 2021). The use of analytics in measurements during experimentation is an interesting and complex topic. The goal of this book is not to go deep into this topic, but to provide some examples of commonly used analytics-based measurements in the context of learning technology and CCI (see Table 6.2). The selection of these measurements needs to take into consideration the context of the study and the technology, and be relevant to the intended RQ.
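To illustrate what post-computing measurements from tracking logs can look like, the sketch below derives the number of attempts, response correctness, and time on task from a raw event list. The event schema used here ('user', 'timestamp', 'type', 'correct') is a hypothetical one invented for the example; real systems (e.g., edX tracking logs or DataShop exports) have their own, richer schemas:

```python
from collections import defaultdict

def measurements_from_log(events):
    """Post-compute per-user measurements from raw event logs.
    Each event is assumed to be a dict with 'user', 'timestamp'
    (seconds), 'type', and, for answer events, 'correct' (bool)."""
    per_user = defaultdict(list)
    for e in sorted(events, key=lambda e: e["timestamp"]):
        per_user[e["user"]].append(e)
    result = {}
    for user, evs in per_user.items():
        answers = [e for e in evs if e["type"] == "answer"]
        correct = sum(e["correct"] for e in answers)
        result[user] = {
            "attempts": len(answers),
            "accuracy": correct / len(answers) if answers else None,
            # Crude proxy: elapsed time between first and last event.
            "time_on_task": evs[-1]["timestamp"] - evs[0]["timestamp"],
        }
    return result

log = [
    {"user": "s1", "timestamp": 0, "type": "open_resource"},
    {"user": "s1", "timestamp": 40, "type": "answer", "correct": True},
    {"user": "s1", "timestamp": 90, "type": "answer", "correct": False},
]
print(measurements_from_log(log)["s1"])
# -> {'attempts': 2, 'accuracy': 0.5, 'time_on_task': 90}
```

In practice, time-on-task computations also need rules for idle gaps and session boundaries, which is precisely why architecting the analytics up front pays off.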
Sensor-Based Analytics (Sensor Data)
Advances in sensors, social signal processing, and computational analyses have demonstrated the potential to help us understand user and learning processes that were either not possible to capture or "too complex" for traditional analytics. For example, psychomotor learning with physical objects requires high-frequency data, and the analyses can now happen within a reasonable time window (Sharma & Giannakos, 2020). Due to the need to combine different kinds of expertise (e.g., learning scientists, data scientists, computer scientists), the collection, analysis, and interpretation of sensor data in CCI and learning contexts have been a challenging endeavor. Nevertheless, over the last years the Multimodal Learning Analytics (MMLA) research community has managed to gather diverse research expertise (e.g., educational, computational, psychological) and has contributed rich measurements with respect to HCI and learning. A perfect one-to-one relationship between sensor-based measures and conceptual constructs does not exist (Giannakos et al., 2022); however, MMLA research is achieving acceptable levels of reliability and validity, allowing us to use measurements that provide useful insights (e.g., from eye activity, facial expressions, or users' motions and gestures). Table 6.3 depicts some examples of commonly used sensor-based measurements in the context of learning technology and CCI. Once again, the selection of those measurements needs to take into consideration the context of the study and the technology, and be relevant to the intended RQ. Moreover, researchers also need to consider the level of intrusiveness (the extent to which a measurement is ecologically valid, e.g., does not interfere with the task or impose obtrusive conditions). In different sub-domains of learning technology and HCI, we see researchers coining measurements that align with the objectives of those sub-domains.
For example, in the context of Computer-Supported Collaborative Learning (CSCL) research, we find researchers using a measurement called Joint Visual Attention (JVA) (i.e., the moments when more than one user looks at the same area) or "with-me-ness" (i.e., the moments the learner is looking at the content delivered by the teacher, e.g., how much the learner follows the teacher). Although those measurements are not as general or widely used as the ones in Table 6.3, they are very important for the challenges of this particular sub-domain (Sharma et al., 2014, 2017).
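As an illustration of how such a sub-domain measurement can be operationalized, the sketch below computes a simple version of JVA from two time-aligned gaze streams: the proportion of samples in which the two users' gaze points fall within a chosen radius of each other. The 100-pixel radius and the (x, y) sample format are assumptions made for this example; published JVA operationalizations also account for time windows and gaze offsets (Sharma et al., 2017):

```python
import math

def joint_visual_attention(gaze_a, gaze_b, radius=100):
    """Fraction of synchronized gaze samples in which two users look
    at (nearly) the same screen location. gaze_a and gaze_b are
    equal-length, time-aligned lists of (x, y) pixel coordinates;
    radius is the distance under which attention counts as 'joint'."""
    if len(gaze_a) != len(gaze_b):
        raise ValueError("gaze streams must be time-aligned")
    joint = sum(math.dist(a, b) <= radius for a, b in zip(gaze_a, gaze_b))
    return joint / len(gaze_a)

# Two invented gaze streams sampled at the same moments:
a = [(310, 220), (640, 480), (100, 900)]
b = [(330, 250), (620, 470), (800, 120)]
print(f"JVA = {joint_visual_attention(a, b):.2f}")  # 2 of 3 samples joint
```

The same skeleton extends to "with-me-ness" by replacing the second gaze stream with the screen region currently referenced by the teacher.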
Pictorial Self-Report Data
Traditional verbal questionnaires assume that respondents are able to fully grasp a question and think abstractly about their experience. However, several populations (e.g., children younger than 12) have not yet developed these skills or are in conditions that do not allow them to respond to those instruments in a valid manner (e.g., a user who has dyslexia or is very tired from the main task); instead, their thinking processes are based on mental representations that relate to concrete events, objects, or experiences. This must be taken into account when adapting the measurement method to meet participants' needs. Following this line of reasoning and related work in child development and psychology (Harter & Pike, 1984), there is a number of instruments that use visual methods (or observations and qualitative, checklist-based measurements), which we know are more effective than verbal methods for these populations (Döring et al., 2010). Such visual analogs represent specific situations, behaviors, and people to whom a user can easily relate.
Such visual analogs are usually employed to collect data during the evaluation of an artifact (as well as during the lifetime of an application). We have seen pictorial questionnaires popping up while we are using an application or at the end of an activity (e.g., after we try a resource that has been recommended to us). Like verbal questionnaires, pictorial questionnaires are used to quantify users' perceived experience, such as satisfaction, enjoyment, ease of use, and the like. Although pictorial questionnaires usually do not follow the multi-item (multi-question) paradigm of the verbal ones (so their validity is not always assessed), it is easier to employ pictorial questionnaires "on the spot" and capture the temporal experience of the users. Moreover, due to their usually short reading time, it is also easy to employ them either at selected critical moments (e.g., when the user has finished a task) or in a random manner during the activity, so that we can get repeated measurements (Fig. 6.3).
Pictorial questionnaires are not meant to substitute verbal questionnaires; these two types of self-reporting instruments have been designed to address different research needs. Verbal questionnaires can use the specificity of verbal communication to extract exact information, and the widely used measurements have been extensively validated and standardized. Pictorial questionnaires are used when "verbal communication becomes a challenge" and have the benefits of not increasing users' cognitive load and overall burden, and of reducing the time to complete. Like verbal questionnaires, pictorial questionnaires should be properly contextualized and sometimes complemented with minimal text such as "What do you think about [the artifact]?". In place of [the artifact], researchers can insert the artifact of interest (e.g., the XYZ mobile application, the avatar, the dashboard). Nevertheless, pictorial questionnaires should be self-standing, even if the user cannot read the provided text; depending on the end-user, researchers sometimes need to use oral communication to explain which aspects the end-user is being asked to rate with the visual analogs. Like verbal questionnaires, pictorial questionnaires can be administered both on pen and paper and in a digital version; however, some of the advantages of digitally administering pictorial questionnaires to assess software (e.g., temporality, overall burden) might be lost or weakened. Table 6.4 depicts some examples of commonly used pictorial questionnaire measurements in the context of learning technology and CCI.
6.2 Data Analysis
To make the process clearer and to provide additional resources, Table 6.5 summarizes the most common data analysis procedures used in learning technology and CCI research. Let us now think of a simple between-subjects design with one control group (e.g., no use of dashboard in the LMS) and one experimental group (e.g., a simple dashboard that provides students’ previous test scores), with students’ weekly test scores as the dependent variable. In this case, a t-test for independent samples is needed (provided that parametric assumptions are met) to test the hypothesis that introducing a simple dashboard affects students’ learning performance. Adding a second experimental group (i.e., a third treatment group) with a dashboard that not only provides but also visualizes students’ scores will require a different analysis. In that case, we will need a one-way analysis of variance (ANOVA) (provided that parametric assumptions are met) to compare the three means; if the results of the ANOVA are significant, we can conduct a follow-up Tukey or REGWQ post-hoc comparison of means to find the pairwise differences. Learning technology and CCI researchers do not have to be data analysts or statisticians, but it is important to provide clear RQs and hypotheses and to follow a few basic rules and guidelines during the data analysis. Clearly formulated RQs will also make it possible to work with data analysts or statisticians if more sophisticated analyses are required that go beyond the scope of this book.
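The dashboard example above can be sketched in code. The sketch below uses simulated (made-up) weekly test scores for the three conditions: an independent-samples t-test for the two-group comparison and a one-way ANOVA for the three-group comparison, with post-hoc Tukey comparisons to follow a significant ANOVA (e.g., via statsmodels' `pairwise_tukeyhsd` or SciPy's `tukey_hsd`):

```python
import numpy as np
from scipy import stats

# Simulated weekly test scores; the group means and spreads are
# illustrative assumptions, not real data.
rng = np.random.default_rng(42)
control = rng.normal(60, 8, 50)  # no dashboard
simple = rng.normal(70, 8, 50)   # dashboard showing previous scores
visual = rng.normal(80, 8, 50)   # dashboard visualizing the scores

# Two groups -> independent-samples t-test
# (provided that parametric assumptions are met).
t_stat, p_ttest = stats.ttest_ind(control, simple)

# Three groups -> one-way ANOVA; if significant, follow up with
# pairwise post-hoc comparisons (e.g., Tukey) to locate the differences.
f_stat, p_anova = stats.f_oneway(control, simple, visual)

print(f"t-test: t = {t_stat:.2f}, p = {p_ttest:.4f}")
print(f"ANOVA:  F = {f_stat:.2f}, p = {p_anova:.4f}")
```

If the parametric assumptions (normality, homogeneity of variance) are violated, the non-parametric counterparts in Table 6.5 (e.g., Mann–Whitney U, Kruskal–Wallis) would be used instead.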
Most of the studies in learning technology and CCI employ null hypothesis significance testing (NHST)Footnote 3 approaches and analyse data using the variance-based methods we present in Table 6.5. Despite the usefulness of variance-based approaches, we have seen an increasing need for new methods, as well as combinations of different methods and approaches, that can reduce biases and help us obtain a more holistic understanding of the phenomenon. Examples of such methods that we see being increasingly used in HCI and learning technology/analytics are Bayesian methods (Robertson & Kaptein, 2016), fuzzy-set qualitative comparative analysis (fsQCA, or simpler versions of it such as QCA) (Pappas et al., 2019; Papamitsiou et al., 2018), process mining (Sharma et al., forthcoming), Hidden Markov Models (Sharma et al., 2020), and different machine learning methods (Kidziński et al., 2016).
As mentioned above, to support learning technology/CCI researchers, we provide a comprehensive how-to guide that allows them to choose between the various analyses by looking at the data types, the function of each analysis, working examples, the main conditions and assumptions, and resources for step-by-step implementation. Novice researchers should be aware that in order to explore causal (cause-and-effect) relationships on the basis of experimental designs that compare outcomes associated with treatments, it is necessary to use tests that compare outcomes across conditions (e.g., t-tests or ANOVAs) rather than correlational tests (e.g., Pearson correlations).
Notes
- 1.
Examples of Likert Scaled Responses Used in Data-Gathering: https://mwcc.edu/wp-content/uploads/2020/09/Likert-Scale-Response-Options_MWCC.pdf
- 2.
- 3.
NHST is a method of statistical inference by which an experimental factor is tested against a null hypothesis.
References
Amos, B., Ludwiczuk, B., & Satyanarayanan, M. (2016). OpenFace: A general-purpose face recognition library with mobile applications. CMU School of Computer Science, 6(2), 20.
Barthakur, A., Kovanovic, V., Joksimovic, S., Siemens, G., Richey, M., & Dawson, S. (2021). Assessing program-level learning strategies in MOOCs. Computers in Human Behavior, 117, 106674.
Basjaruddin, N. C., Syahbarudin, F., & Sutjiredjeki, E. (2021). Measurement device for stress level and vital sign based on sensor fusion. Healthcare Informatics Research, 27(1), 11–18.
Baumgartner, J., Frei, N., Kleinke, M., Sauer, J., & Sonderegger, A. (2019, May). Pictorial system usability scale (P-SUS) developing an instrument for measuring perceived usability. In Proceedings of the 2019 chi conference on human factors in computing systems (pp. 1–11).
Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1), 49–59.
Broekens, J., & Brinkman, W. P. (2013). AffectButton: A method for reliable and valid affective self-report. International Journal of Human-Computer Studies, 71(6), 641–667.
Brooke, J. (1996). SUS-A quick and dirty usability scale. Usability Evaluation in Industry, 189, 194.
Compeau, D. R., & Higgins, C. A. (1995). Computer self-efficacy: Development of a measure and initial test. MIS Quarterly, 189–211.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
Desmet, P. M. A., Vastenburg, M. H., & Romero, N. (2016). Pick-A-Mood manual: Pictorial self-report scale for measuring mood states. Delft University of Technology.
Döring, A. K., Blauensteiner, A., Aryus, K., Drögekamp, L., & Bilsky, W. (2010). Assessing values at an early age: The picture-based value survey for children. Journal of Personality Assessment, 92, 439–448. https://doi.org/10.1080/00223891.2010.497423
Duchowski, A. T., Krejtz, K., Krejtz, I., Biele, C., Niedzielska, A., Kiefer, P., … & Giannopoulos, I. (2018). The index of pupillary activity: Measuring cognitive load vis-à-vis task difficulty with pupil oscillation. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1–13).
Ekman, P., Friesen, W. V., & Hager, J. C. (2002). Facial action coding system: The manual: On CD-ROM. Research Nexus.
Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). Sage.
Field, A., Miles, J., & Field, Z. (2012). Discovering statistics using R. Sage.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
Giannakos, M. N., Chorianopoulos, K., & Chrisochoides, N. (2015). Making sense of video analytics: Lessons learned from clickstream interactions, attitudes, and learning outcome in a video-assisted course. The International Review of Research in Open and Distance Learning, 16(1), 260–283.
Giannakos, M. N., Papavlasopoulou, S., & Sharma, K. (2020). Monitoring children’s learning through wearable eye-tracking: The case of a making-based coding activity. IEEE Pervasive Computing, 19(1), 10–21.
Giannakos, M., Spikol, D., Di Mitri, D., Sharma, K., & Ochoa, X. (2022). Introduction to multimodal learning analytics. In Multimodal learning analytics handbook. Springer.
Girard, S. A. S. (2011). Traffic lights and smiley faces: Do children learn mathematics better with affective open-learner modelling tutors? (Doctoral dissertation, University of Bath).
Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2011). Survey methodology. Wiley.
Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (task load index): Results of empirical and theoretical research. Human Mental Workload, 1, 139–183.
Harter, S., & Pike, R. (1984). The pictorial scale of perceived competence and social acceptance for young children. Child Development, 55, 1969–1982. https://doi.org/10.2307/1129772
Haslwanter, T. (2016). An introduction to statistics with python. With applications in the life sciences. Springer International Publishing.
Heffernan, N. T., & Heffernan, C. L. (2014). The ASSISTments ecosystem: Building a platform that brings scientists and teachers together for minimally invasive research on human learning and teaching. International Journal of Artificial Intelligence in Education, 24(4), 470–497.
Henrie, C. R., Halverson, L. R., & Graham, C. R. (2015). Measuring student engagement in technology-mediated learning: A review. Computers & Education, 90, 36–53.
Herborn, K. A., Graves, J. L., Jerem, P., Evans, N. P., Nager, R., McCafferty, D. J., & McKeegan, D. E. (2015). Skin temperature reveals the intensity of acute stress. Physiology & Behavior, 152, 225–230.
Kay, R. H., & Knaack, L. (2009). Assessing learning, quality and engagement in learning objects: The learning object evaluation scale for students (LOES-S). Educational Technology Research and Development, 57(2), 147–168.
Kidziński, Ł., Giannakos, M., Sampson, D. G., & Dillenbourg, P. (2016). A tutorial on machine learning in educational science. State-of-the-Art and Future Directions of Smart Learning, 453–459.
Klug, B. (2017). An overview of the system usability scale in library website and system usability testing. Weave: Journal of Library User Experience, 1(6).
Kovanović, V., Gašević, D., Joksimović, S., Hatala, M., & Adesope, O. (2015). Analytics of communities of inquiry: Effects of learning technology use on cognitive presence in asynchronous online discussions. The Internet and Higher Education, 27, 74–89.
Lee-Cultura, S., Sharma, K., Papavlasopoulou, S., Retalis, S., & Giannakos, M. (2020). Using sensing technologies to explain children’s self-representation in motion-based educational games. In Proceedings of the interaction design and children conference (pp. 541–555).
Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57–78.
Mangaroska, K., Vesin, B., Kostakos, V., Brusilovsky, P., & Giannakos, M. N. (2021). Architecting analytics across multiple E-learning systems to enhance learning design. IEEE Transactions on Learning Technologies, 14(2), 173–188.
Mangaroska, K., Sharma, K., Gašević, D., & Giannakos, M. (2022). Exploring students’ cognitive and affective states during problem solving through multimodal data: Lessons learned from a programming activity. Journal of Computer Assisted Learning, 38(1), 40–59.
Müller, H., Sedley, A., & Ferrall-Nunge, E. (2014). Survey research in HCI. In Ways of knowing in HCI (pp. 229–266). Springer.
Papamitsiou, Z., Economides, A. A., Pappas, I. O., & Giannakos, M. N. (2018). Explaining learning performance using response-time, self-regulation and satisfaction from content: an fsQCA approach. In Proceedings of the 8th international conference on learning analytics and knowledge (pp. 181–190).
Pappas, I. O., Giannakos, M. N., & Sampson, D. G. (2019). Fuzzy set analysis as a means to understand users of 21st-century learning systems: The case of mobile learning and reflections on learning analytics research. Computers in Human Behavior, 92, 646–659.
Pardos, Z. A., Whyte, A., & Kao, K. (2016). moocRP: Enabling open learning analytics with an open source platform for data distribution, analysis, and visualization. Technology, Knowledge and Learning, 21(1), 75–98.
Read, J. C., & MacFarlane, S. (2006). Using the fun toolkit and other survey methods to gather opinions in child computer interaction. In Proceedings of the 2006 conference on Interaction design and children (pp. 81–88).
Robertson, J., & Kaptein, M. (Eds.). (2016). Modern statistical methods for HCI. Springer.
Roca, J. C., Chiu, C. M., & Martínez, F. J. (2006). Understanding e-learning continuance intention: An extension of the technology acceptance model. International Journal of Human-Computer Studies, 64(8), 683–696.
Ross, S. M., & Morrison, G. R. (2013). Experimental research methods. In Handbook of research on educational communications and technology (pp. 1007–1029). Routledge.
Sharma, K., & Giannakos, M. (2020). Multimodal data capabilities for learning: What can multimodal data tell us about learning? British Journal of Educational Technology, 51(5), 1450–1484.
Sharma, K., Jermann, P., & Dillenbourg, P. (2014). “With-me-ness”: A measure for students’ attention in MOOCs. In International conference of the learning sciences (No. EPFL-CONF-201918).
Sharma, K., Jermann, P., Dillenbourg, P., Prieto, L. P., D’Angelo, S., Gergle, D., et al. (2017). CSCL and eye-tracking: Experiences, opportunities and challenges. International Society of the Learning Sciences.
Sharma, K., Papamitsiou, Z., Olsen, J. K., & Giannakos, M. (2020). Predicting learners’ effortful behaviour in adaptive assessment using multimodal data. In Proceedings of the tenth international conference on learning analytics & knowledge (pp. 480–489).
Sharma, K., Papamitsiou, Z., & Giannakos, M. (forthcoming). When is the best moment to give feedback? A pattern-based approach with multimodal data.
Tisza, G., & Markopoulos, P. (2021). FunQ: Measuring the fun experience of a learning activity with adolescents. Current Psychology, 1–21.
Venkatesh, V., Speier, C., & Morris, M. G. (2002). User acceptance enablers in individual decision making about technology: Toward an integrated model. Decision Sciences, 33(2), 297–316.
Zamecnik, A., Kovanović, V., Joksimović, S., & Liu, L. (2022). Exploring non-traditional learner motivations and characteristics in online learning: A learner profile study. Computers and Education: Artificial Intelligence, 3, 100051.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2022 The Authors
Cite this chapter
Giannakos, M. (2022). Data Collection and Analysis in Learning Technology and CCI Research. In: Experimental Studies in Learning Technology and Child–Computer Interaction. SpringerBriefs in Educational Communications and Technology. Springer, Cham. https://doi.org/10.1007/978-3-031-14350-2_6
Print ISBN: 978-3-031-14349-6
Online ISBN: 978-3-031-14350-2