Abstract
Peer learning is an umbrella term covering diverse strategies supporting students to learn from each other. Studies highlight the power of combining two intertwined models of peer learning, namely peer assessment/feedback and collaborative team-based learning, to prepare graduates for the world of work and encourage acceptable social behaviours. Nevertheless, this approach comes with distinct challenges of marking bias, implementation difficulties, quality, trust and other issues. Studies addressing these challenges in the collaborative teamwork context are sparse and fail to consider the complex and intertwined challenges. Responding to this need, we propose a four-pillar framework comprising veracity, validity, volume and literacy to provide a strong footing on which to base future work in this area. Each of the pillars supports specific but overlapping aspects of peer assessment including assessment design (veracity pillar); implementation considerations (validity pillar); technology factors (volume pillar); and roles and responsibilities (literacy pillar). The framework aims to support educators, policymakers and scholars in mitigating challenges to reimagine and renew peer learning practices to effect positive change.
1 Introduction
Peer learning, in the form of various collaborative learning models, has become a dominant approach in higher education to foster learning, engagement, and development of well-rounded graduates. Peer learning refers to “the acquisition of knowledge and skills through active helping and supporting among status equals or matched companions” (Topping, 2005, p. 631). The popularity of peer learning is evident from the extant literature surrounding the adoption of a repertoire of nuanced strategies including peer mentoring, teaching, coaching, review, assessment and feedback, study-buddy support, team-based learning, collaborative learning, cooperative learning, reciprocal peer learning, amongst others (Boud et al., 2014).
Nevertheless, the challenges surrounding peer learning strategies, particularly those entailing formal assessment, are problematic and complex since assessment is pivotal to the success of higher education systems (Strijbos & Sluijsmans, 2010). Students are highly sensitive to assessment strategies, which affect their emotional well-being (Jones et al., 2021), learning experiences, satisfaction and learning outcomes (Li et al., 2020). Additionally, wide variation in peer learning practices and ambiguities surrounding its effect on learning outcomes add to implementation difficulties (Panadero, 2016).
In this context, peer learning models that combine peer assessment and peer feedback in collaborative teamwork (CTW) contexts embracing formal assessment methods provide a mechanism to fulfill a myriad of social, professional and educational goals (Planas-Lladó et al., 2021). Peer assessment refers to grading of peers while peer feedback entails giving, receiving and using qualitative comments by peers to support learning (Hoo et al., 2021). For the purposes of this chapter, peer assessment subsumes both peer rating and peer feedback. CTW is a structured form of collaborative learning requiring members to work together in small groups to achieve a common goal.
This combination can not only strengthen the holistic development of knowledge, skills and abilities sought by students, employers and accrediting bodies (Planas-Lladó et al., 2021) but may also compensate for inherent limitations of individual strategies (Li et al., 2020). Peer assessment can influence the product quality of CTW tasks by leveraging individual accountability (Jacobs & Renandya, 2019), fostering interdependent behaviour and strengthening learning (Planas-Lladó et al., 2021).
This chapter focuses on the peer assessment of process in producing a tangible artifact in both the formative and summative context. In CTW, this approach has been identified as more appropriate, as students are best positioned to assess their peers’ behaviours and dispositions owing to the proximal working relationship with team members (Sridharan et al., 2019). Nevertheless, this approach faces distinct challenges such as marking bias, implementation difficulties, engagement issues, quality and usability of feedback, trust issues and others (Oakley et al., 2004).
These challenges point to the need for an effective peer learning model to have impactful outcomes. Yet, studies exploring such an arrangement in CTW are sparse. Panadero (2016) stresses the need to consider social and human factors in peer assessment research as it generates psychological and emotional reactions. Scholars have identified gaps between theory and practice, and superficial implementation of CTW (Lawlor et al., 2018). Moreover, existing models predominantly focus on peer assessment in a cognitive context, and therefore their direct and nuanced applicability to CTW is limited (Adachi et al., 2018; Gielen et al., 2011; Topping, 1998). To this end, we propose a framework specifically focussing on CTW, oriented to specific peer assessment challenges and resolutions.
In this chapter, we set the scene by establishing the key impediments of CTW and peer assessment as the potential solutions to the impediments based on existing studies. This is followed by distilling the range of peer assessment challenges articulated in the existing literature to determine key themes. Next, adopting a systematic approach to develop pragmatic solutions to overcome peer assessment challenges, we propose a four-pillar framework. Finally, we draw upon the findings to summarise the implications, practical recommendations and limitations of the framework.
2 Impediments and Solutions for CTW
Recognising the intertwined landscape of CTW and peer assessment, holistic understanding of CTW impediments is fundamental, without which solutions to peer assessment challenges may become ineffective. Several impediments to effectively transforming CTW are evident despite the growing adoption of group work in the higher education curriculum (Rubin & Dierdorff, 2009). Impediments affecting student satisfaction and experience arise from tensions surrounding cognitive, affective and behavioural dimensions (Salas et al., 2015).
2.1 Cognitive, Affective and Behavioural Impediments
Prior literature reveals an array of cognitive impediments in CTW around poor adoption of pedagogical approaches (Hansen, 2006; Marasi, 2019). Asking students to work in groups without adequately building teamwork skills will not guarantee desired outcomes (McKendall, 2000; Opdecam & Everaert, 2018). Oakley et al. (2007, p. 270) contend, “students are not born knowing how to work effectively in teams” and underscore a poor instructional model as a root cause of student dissatisfaction. Likewise, Loughry et al. (2014) attribute poor peer learning experiences to the teacher’s adoption of a ‘sink or swim’ approach and lack of engagement or support, particularly during times of conflict (Moore & Hampton, 2015a). The potentially harmful effects of CTW on learning can surface without instructor guidance, accountability processes and value propositions for students (Oosthuizen et al., 2021).
Impediments stemming from affective dimensions include lack of psychological safety (Salas et al., 2018), unfair grading (Stover & Holland, 2018), and lack of trust and conflict issues (O’Neill & Mclarnon, 2018). Salas et al. (2018) posit ‘the license to speak up’ is a critical factor to deter worries of being judged and ridiculed by team members. Student frustration and negative attitudes towards teamwork surface when all members get the same reward irrespective of their contribution or non-contribution (Mihelič & Culiberg, 2019). Lack of trust and conflict can also lead to knowledge hoarding, non-cooperation and conflict issues (Banihashem et al., 2012; Latifi & Noroozi, 2021; Latifi et al., 2021; Taghizadeh et al., 2022).
Behavioural impediments contributing to student dissatisfaction and negative attitudes towards CTW (El Massah, 2018) include free riding and social loafing (Oakley et al., 2004); lone wolf or silo working tendencies (Opdecam & Everaert, 2018); and dominant or inactive and uncooperative tendencies (Planas-Lladó et al., 2021). It is important to recognise the underlying causes of such behaviours to overcome these impediments. For example, non-contribution could arise from ‘imposter syndrome’ (doubting one’s abilities) (Chapman, 2017), fear of criticism or the fear of becoming a ‘sucker’ (Sridharan et al., 2019). On the other hand, over or under-valuing one’s own contribution can occur owing to the ‘Dunning-Kruger’ effect (cognitive bias in estimation) (Schlösser et al., 2013) or inherent competitive tendency of individuals creating an imbalance in individual contributions.
2.2 Strategies to Overcome CTW Impediments
Scholars have proposed a range of strategies to address CTW impediments. To tackle the cognitive impediments, effective consideration of pedagogical approaches to curriculum design covering training, task design and a facilitating environment is imperative. Key learning and teaching strategies supporting CTW training include highlighting the importance and relevance of CTW, embedding team building activities, and team debriefing exercises (Hansen, 2006; McKendall, 2000). Critical task design strategies require assessment design that demands teamwork (working in collaboration) as opposed to group work (working independently) (Riley & Ward, 2017); application-based tasks; incentives for quality individual contributions; and other context-specific parameters such as cohort type, year level, task complexity and intended learning outcomes (Bravo et al., 2019). The provision of tools to collaborate and communicate can also foster a cohesive teamwork culture (Oosthuizen et al., 2021).
To mitigate the affective impediments, providing a conducive and psychologically safe environment enabling open and honest communication is critical (Salas et al., 2018) to develop trust, resolve conflicts, and enhance performance (Frazier et al., 2017). Defining roles and responsibilities and setting ground rules and expectations can help shape a unified team ethos (Bell et al., 2018). Additionally, dynamic team configuration considering both similar traits (values, attitudes and abilities) and dissimilar (complementary skills) individual characteristics (Oakley et al., 2004) can pave the way for creating a cohesive environment.
To combat the behavioural impediments, peer assessment has the power to prevent unacceptable student behaviours, particularly when direct observation by instructors is not feasible (Sridharan et al., 2019). Peer assessment can enhance learning to address the underlying causes of such behaviours through assessees receiving feedback to take corrective actions, and assessors developing self-awareness, self-regulated learning and evaluative judgement capabilities (Dochy et al., 1999).
Nevertheless, prior research has identified limitations of peer assessment including variability (Willey & Gardner, 2009), student resistance (Topping, 2005), lack of honesty (Panadero et al., 2013), reliability and validity (Falchikov & Goldfinch, 2000), poor understanding and lack of knowledge and skills (Sridharan & Boud, 2019; Winstone et al., 2019) and lack of mutual respect (Zhou et al., 2020). While other studies posit various solutions to these challenges, they rarely attempt to address the broad scope of nuanced challenges relating to peer assessment in the CTW context.
3 Peer Assessment Challenges in CTW Context
Exploring the existing literature and evidence base, several peer assessment challenges have been identified. These are logically classified into four thematic clusters: quality and standards; validity and reliability; scalability and sustainability; and literacy.
3.1 Quality and Standards
Peers’ capabilities, behaviours and attitudes in making accurate, honest judgements of each other and engaging genuinely are critical for guaranteeing the quality and standards of peer assessment; without these, it is wasted effort and resources. However, prior studies indicate a number of challenges impacting the accuracy, honesty, engagement and overall trustworthiness of peer marking (Sridharan et al., 2019). In terms of capability, making evaluative judgements and providing effective and usable feedback to others are complex skills that must be learned (Boud et al., 2018). Behavioural concerns include: incentives to mismark (competition); giving low marks to high-performing students; over-generous marking (particularly of friends); sabotage (overrating self and underrating peers) to create self-advantage; and collusion, with a tendency to mark similarly to others (Sridharan et al., 2019). Moreover, psychological safety factors such as fear of disapproval, social pressure and discomfort in marking peers can negatively impact honest assessment of peers (Vanderhoven et al., 2015). This is even more problematic when the peer assessment process is not anonymous, leading to assessees’ preconceived perceptions of the assessor and unwillingness to openly disclose behavioural issues (Anson & Goodman, 2014). Attitude challenges include non-engagement or untruthful engagement with the peer assessment activity, particularly in the formative context (either non-completion, or random or insincere completion) (Sridharan & Boud, 2019).
3.2 Validity and Reliability
Validity and reliability are central to enhancing peer assessment effectiveness. Validity refers to the use of an accurate, unbiased, relevant and aligned instrument to gain process and stakeholder acceptance (Speyer et al., 2011). Reliability requires consistency in marking (avoidance of arbitrary marking and absence of measurement error) irrespective of who does the peer assessment. Factors affecting reliability include biased marking as a result of friendship, vindictiveness, reciprocity and poor understanding of quality and standards, amongst others (Sridharan et al., 2019). Reliability can be enhanced through the adoption of effective calibration and moderation practices; however, these require effort, time and a positive disposition from stakeholders. Other challenges include thoughtful consideration of peer assessment design decisions surrounding a sufficient number of peer assessors, incentives for taking the task seriously, and anonymity to encourage honesty and ensure students’ trust in the system (Freeman & McKenzie, 2002).
3.3 Scalability and Sustainability
Scalable and sustainable practices, embedding formative and summative assessment with multiple exposures across the curriculum, are vital for impactful outcomes. Stakeholder uptake is a challenge owing to the administrative burden of operationalisation. This can be even more challenging in large classes owing to the time- and effort-intensive nature of traditional paper-based methods (Anson & Goodman, 2014). Technology can overcome these limitations; however, usability challenges surrounding stakeholder dispositions (perceived usefulness) and learning capabilities (perceived ease of use) can affect uptake (Salloum et al., 2019).
3.4 Assessment and Feedback Literacy
The two areas of literacy, namely assessment and feedback literacy, are critical to ensure greater validity, reliability and consistency, and to have a positive impact on learning. Assessment literacy is “the ability to design, select, interpret, and use assessment results appropriately for education decisions” (Quilter & Gallini, 2000, p. 116). Unpacking two types of assessment literacy is critical in the CTW context: collaborative learning assessment (Meijer et al., 2020) and peer assessment. The former refers to the appropriate choice of assessment methods to align with the goals of collaborative learning. Both entail the capacity of students and instructors to understand the purpose and processes of assessment, as well as to accurately determine ‘quality’ in their (and others’) work (Smith et al., 2013). Evidence suggests a lack of clear understanding of the purpose and value of the process by students and instructors (Meijer et al., 2020). Instructor-student partnerships in co-creating assessment rubrics have been found to be effective but remain relatively uncommon in practice (Deeley & Bovill, 2017).
Feedback literacy refers to the abilities and dispositions to seek, generate, understand and utilise feedback towards learning benefit, and develop academic judgement capacities (Molloy et al., 2020). Poor feedback literacy can lead to lack of pedagogical consideration and poor engagement (Koh et al., 2021), emotional distress (Zhou et al., 2020), ineffective past-oriented feedback and poor implementation of feedback practices (Winstone et al., 2019). Koh et al. (2021) found that lack of authentic ownership and engagement of teachers can lead to poor educational outcomes. Likewise, the importance of a clear understanding of pedagogy, technology and content knowledge, and the need for unfolding the teacher’s role are critical to mitigate assessment and feedback literacy limitations (Moore & Hampton, 2015b).
4 Framework Development
Analysis of the literature reveals a dearth of focused frameworks specifically addressing peer assessment challenges in the CTW context. For example, Gielen et al.’s (2011) typology explores the diversity of peer assessment in a broader context by extending Topping’s (1998) typology, classifying 20 variables into five clusters (peer assessment decisions, link between assessment and learning environments, peer interaction, composition, and management of procedure), with only a single reference to peer assessment of behaviour. Adachi et al.’s (2018) framework extends this, incorporating 19 contextual elements covering the broader peer assessment context, with peer assessment of process cited once. Overall, existing frameworks fail to consider the complexities of peer assessment in the CTW context.
To fill this gap, this chapter proposes a framework which is designed to mitigate specific challenges surrounding peer assessment in the CTW context to enable deeper understanding of conditions for success, appropriate decisions by key stakeholders to derive best outcomes, and enhance enabling factors to facilitate successful learning. The framework is designed to aid educators and policymakers in determining how best to implement peer assessment which enhances student learning and outcomes.
The framework responds to the needs of key stakeholders: students by supporting peer learning through addressing accountability, engagement and emotional issues; accreditation bodies in authentic provision of assurance of learning evidence; employers by equipping students with work and life-ready skills, and educators, scholars and policymakers in facilitating effective operationalisation of peer learning strategies.
4.1 Design
Empowering students to understand quality and standards is imperative to transform learning through efficacious peer assessment design strategies including: demystifying assessment criteria (to ensure accuracy); anonymity (to promote honesty); and incentives (to enhance engagement). Demystifying assessment criteria has the potential to ensure students can more accurately judge the work of others and trust their peers to evaluate their work. Students’ understanding of assessment criteria/rubrics is critical given they have the power to reward or penalise their teammates (Sridharan et al., 2019). Learning activities entailing co-creation or discussion of rubrics along with examples may foster a shared understanding of quality and standards (Jopp, 2020). Ashton and Davies (2015) found that training students to assess improves their ability to differentiate quality between novice, intermediate and advanced levels and to provide quality feedback information. Likewise, assessor-training and calibration practices can diminish capability challenges (Li et al., 2020).
Anonymity in peer assessment offers advantages in terms of positive attitudes towards feedback, enhanced student learning, improved quality of feedback, and prevention of undesirable social effects like peer pressure and favouritism (Panadero & Alqassab, 2019; Rotsaert et al., 2018). However, Rotsaert et al. (2018) contend that anonymity can prevent students from engaging in two-way interactive feedback dialogue. On the other hand, anonymity can overcome the psychological safety challenges in truthful peer assessment (Vanderhoven et al., 2015). Moreover, anonymity may help students to focus on the content of the feedback rather than its source, especially when there may be emotional tension arising from receiving and acting on feedback from a peer of equal status (Anson & Goodman, 2014). Indeed, while there are many positive features of non-anonymous feedback in situations without summative assessment, there are circumstances in which anonymity is needed.
Incentives to engage with both formative and summative practices are a critical aspect of successful peer assessment. To enhance student engagement, Gillanders et al. (2020) stress the need for detailed guidance for students, lecturer accessibility and exemplars. Stepanyan et al. (2009) identified four key components of engagement: supportive tutors; anonymity; accessing peer work; and the allocation of marks and in-class activities. Mark allocation can help students determine the value and overall importance of assessment tasks (Sridharan et al., 2019). While there can be no perfect breakdown/weighting, the weighting allocation should: (a) reflect the goals for student learning and outcomes; and (b) seek to motivate students to produce high-quality work.
4.2 Implementation
Prior studies propose several strategies to tackle the validity and reliability concerns of peer assessment, classified into instrument validity, marking method validity and moderation process. Instrument validity refers to the choice of fit-for-purpose items with good measurement properties along with a well-defined rating scale. In this regard, Loughry et al. (2007) proposed an empirically tested and robust instrument comprising 87 items covering five dimensions based on extensive theoretical and empirical research. This has been integrated into the CATME tool, used extensively for practical implementation of self and peer assessment (Loughry et al., 2014). Similarly, Lejk and Wyvill (2001) reported the effectiveness of a holistic and category-based peer assessment instrument covering six dimensions.
Marking method validity refers to the appropriate choice of a marking calculation method that leads to consequential learning. To address integrity challenges, diverse calculation methods have been proposed such as weighted marks (Freeman & McKenzie, 2002), procedures to correct for marker biases (Li, 2001) and relative performance factors (Willey & Gardner, 2009) to deal with variation in marking standards and quality within and between groups.
Two popular choices are peer assessment of process and adjusting the CTW product mark by individual process marks. Peer assessment of process has a number of benefits, including tackling teamwork challenges and providing assurance of learning evidence for accreditation bodies (Loughry et al., 2014). Figure 1.1 provides an overview of diverse calculation options with progressively increasing complexity and validity, adopting both holistic and criterion-based peer rating methods. While holistic marking is easy to implement, evidence suggests a lack of mark differentiation compared to the criterion-referenced approach (Lejk & Wyvill, 2001). Another limitation of holistic marking is its inability to provide information on specific areas for improvement. Criterion-based marking has the potential to reduce marking bias if implemented effectively and to help identify weak areas. Calculations based on individual performance relative to group performance can be more reliable as this addresses issues of variation in marking standards. The relative performance factor (RPF) is calculated as follows:
\(\text{RPF} = \frac{\text{Total ratings for individual team member}}{\text{Average of total ratings for all team members}}\)
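As a minimal illustrative sketch (not taken from the chapter, and with hypothetical function names and rating values), the RPF calculation can be expressed as follows, where each team member's RPF is their total received rating divided by the team-wide average total:

```python
def relative_performance_factor(ratings_received):
    """Compute each member's RPF: the total of ratings received by a member
    divided by the average of those totals across all team members."""
    totals = {member: sum(scores) for member, scores in ratings_received.items()}
    average_total = sum(totals.values()) / len(totals)
    return {member: total / average_total for member, total in totals.items()}

# Hypothetical peer ratings received by each of three team members
ratings = {
    "A": [4, 5, 4],  # total 13
    "B": [3, 3, 3],  # total 9
    "C": [5, 4, 5],  # total 14
}
rpf = relative_performance_factor(ratings)
```

With these hypothetical ratings the average total is 12, so an RPF above 1 indicates an above-average contributor and an RPF below 1 a below-average one.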
Adjusting product marks by process marks enables the allocation of individual marks for a CTW task based on individual contributions. Figure 1.2 provides more nuanced methods for adjusting product by process marks, using calculation methods with varying degrees of penalty for poor behaviours in working as a team. Specifically, the three methods for applying the RPF are: non-linear (square root of the RPF); linear (the simple ratio, i.e. the RPF itself); and curvilinear (the linear formula for RPF scores below 1 and the non-linear formula for RPF above 1). The non-linear model is less punitive than the linear model for under-contributors. The linear model is less punitive for over-contributors. The curvilinear model penalises both under- and over-contributors. It might therefore be appropriate to adopt the non-linear method for first-year students, the linear method for second-year students and the curvilinear method for final-year and postgraduate students.
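The three adjustment methods above can be sketched in a few lines of code. This is an illustrative reading of the descriptions (square root for non-linear, simple ratio for linear, and the piecewise combination for curvilinear); the function name, the cap at a maximum mark of 100, and the example values are assumptions, not part of the chapter:

```python
import math

def adjusted_mark(product_mark, rpf, method="linear"):
    """Adjust a team's product mark by an individual's relative performance factor."""
    if method == "nonlinear":      # square root of RPF: gentler on under-contributors
        factor = math.sqrt(rpf)
    elif method == "linear":       # simple ratio: gentler on over-contributors
        factor = rpf
    elif method == "curvilinear":  # linear below 1, non-linear above 1:
        factor = rpf if rpf < 1 else math.sqrt(rpf)  # penalises both extremes
    else:
        raise ValueError(f"unknown method: {method}")
    return min(100, product_mark * factor)  # assumed cap at the maximum mark

# Hypothetical team product mark of 80 adjusted for an under-contributor (RPF 0.75)
under_nonlinear = adjusted_mark(80, 0.75, "nonlinear")    # ~69.3
under_linear = adjusted_mark(80, 0.75, "linear")          # 60.0
under_curvilinear = adjusted_mark(80, 0.75, "curvilinear")  # 60.0
```

The example shows the progression described in the text: for the same under-contributor, the non-linear method deducts the least, while the linear and curvilinear methods apply the full proportional penalty.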
Moderation process requires the shared understanding of quality and standards to address reliability concerns and instil confidence amongst students in peer rating. Sadler (2010) advocates the development of “appraisal expertise” to ensure students have the capacity to judge their own performance as well as that of their peers. Increased reliability can be realised through repeated exposure and provision of explicit rubric criteria (De Wever et al., 2011).
In this regard, three types of moderation activities are beneficial: pre-moderation (before marking commences), peri-moderation (during marking) and post-moderation (after marking). Pre- and peri-moderation activities require student engagement, while post-moderation requires instructor engagement in adjusting marks based on evidence provided by students. Pre-moderation activities include demystifying quality expectations, peer-rater calibration practices, and peer-rating training (Li et al., 2020). Peri-moderation could take the form of formative assessment, providing exposure to peer marking without penalty as well as developing self-awareness and prompting corrective actions. Post-moderation requires instructors to address marking variation within and between groups using triangulated evidence from the system and students. Automated peer assessment tools such as CATME and FeedbackFruits have the power to provide additional information on students’ marking behaviours such as over-rating, colluding, and under-rating. This, along with instructors’ tacit knowledge and reflection activities, could be used to moderate individual scores.
4.3 Technology
Embracing automation technology can alleviate scalability, sustainability and usability challenges (Anson & Goodman, 2014). Scalability relates to the capacity to implement peer assessment in large classes and multiple units of study. Sustainability refers to maintaining initiatives across the curriculum continuously for long-term success. Usability refers to positive user experience and satisfaction to support sustained technology adoption.
A range of technologies and supporting functionalities need to be considered in choosing a system to mitigate these challenges. These include provision for: team formation, calibration exercises, peer assessment, giving and receiving feedback, feedback on feedback, team and individual reflection, and communities of inquiry activities. For example, SPARKPLUS, CATME, FeedbackFruits, amongst other tools, have been used to support peer assessment and feedback activities (Loughry et al., 2014; Willey & Gardner, 2009). Institutional Learning Management Systems (LMS) tools such as discussion forums and Wikis can support communities of inquiry activities, brainstorming, exchanging ideas and information. Likewise, most LMS provide facilities for basic team formation such as self-selection, random allocation and teacher allocation for group formation.
Most self and peer assessment systems have advantages and disadvantages (see Fig. 1.4). CATME has unique functionality for dynamic team configuration, enabling the mixing of homogeneous and heterogeneous individual characteristics. Similarly, FeedbackFruits’ unique feature is its ability to integrate with institutional LMSs. Both CATME and SPARKPLUS can automatically calculate a relative performance factor. Many of these technologies facilitate the automatic generation of results for individuals to compare their self-score against aggregate peer scores. The ‘team charter’ in CATME can support team meetings, setting out roles, expectations and processes, and laying foundations for teamwork, which have been identified as enhancing teamwork effectiveness (Bell et al., 2018). Additionally, some of these technologies can classify students based on their marking pattern (such as overconfident, underconfident, manipulator, conflict, clique) using a powerful algorithm, which can be useful for instructor post-moderation processes.
These technologies also help develop lifelong skills; namely evaluative judgement (the ability to judge the quality of one’s own and others’ work) (Boud et al., 2018). However, effective use of these to derive benefit relies upon ease of use of the tool, stakeholder engagement, pedagogical underpinning and ownership of implementation. For example, it is crucial to consider the trade-off between usability and functionality of these systems for securing institutional licensing.
4.4 Roles and Responsibilities
The development of the knowledge, skills and abilities of both instructors and students is critical to address peer assessment literacy challenges and to enable them to effectively fulfil their respective functions through partnership and shared roles and responsibilities. In particular, the two areas of literacy, namely assessment and feedback literacy, are critical as the evidence suggests making evaluative judgements and providing effective feedback are complex skills that must be learned (Boud et al., 2018).
Assessment literacy is critical and viewed by some as a sine qua non for instructors, as inadequate knowledge of assessment impacts the overall quality of education (Popham, 2009). According to Pastore and Andrade (2019), assessment literacy helps instructors use critical information about student learning to teach more effectively, enabling them to respond to students’ learning needs. For students, assessment literacy relates to three key factors according to Smith et al. (2013, p. 1): (1) understanding the purpose of assessment and how it connects to their learning overall; (2) awareness of the process of assessment; and (3) the opportunity to practice making judgements about quality and areas for improvement.
To support peer assessment, Meijer et al. (2020) stress the importance of appreciating the rationale and purpose of collaborative learning and assessment between instructor-students and among students to develop assessment literacy. Deeley and Bovill (2017) argue the need for instructor-student partnership and its orientation for learning through engaging students as ‘partners in assessment’. Peer assessment training has been found to increase perceptions of psychological safety which leads to increased confidence and trust in peer assessors (Cheng et al., 2015). Considering students’ roles as assessee and assessor requires both emotional strength and resilience; training, monitoring and providing guidance in peer assessment is imperative (Gielen et al., 2011; Panadero, 2016).
Students need to be trained in assessment, feedback and evaluative judgement skills to improve peer assessment validity and reliability. Developing stakeholders’ skills in feedback provision, focusing on the task/process (not the person), orientation (forward-oriented) and specificity (areas for improvement), is critical for a positive impact on learning and behaviour. The provision of exemplars, calibration and formative assessment tasks, and the co-design of evaluation tools are powerful mechanisms for developing evaluative judgement around what constitutes ‘quality’. Carless and Boud (2018) highlight the teacher’s role in modelling the uptake of feedback by encouraging students to seek, use, generate and act on feedback. Developing skills around peer feedback is critical to ensure effective elicitation, processing and enactment by students (Malecka et al., 2020). Peer assessment skills could be further enhanced through reflecting on feedback and feedback on feedback.
In summation, the roles and responsibilities of both instructors and students broadly relate to: (a) capacity building and engagement with resources to develop peer assessment literacy; (b) engagement in calibration exercises, formative assessment, summative assessment, giving feedback, use of technology; (c) proactively seeking, engaging and acting on feedback; and (d) reflecting and taking actions for continuous improvement and lifelong learning.
Based on the above analysis of the literature, we propose a four-pillar framework that holistically considers the complex and intertwined challenges of peer assessment in the formal CTW assessment context. It is designed to guide educators and scholars in navigating various peer learning challenges and creating a stable, sustainable peer learning ecosystem with impactful outcomes, as shown in Fig. 1.3. We acknowledge, however, the need for adaptation to align with the context and purpose of the peer learning to effect change.
5 Discussion
The framework presents four key pillars (veracity, validity, volume and literacy) based on themes that emerged from a critical review of the literature. It contributes to scholarship by encompassing a broad scope of enabling strategies to mitigate the challenges associated with peer assessment in CTW, which few existing models do. We contend that, when designed and implemented effectively, peer assessment in CTW can become a powerful strategy to instil a range of soft skills including teamwork, leadership, negotiation and conflict resolution, amongst others. The framework has the potential to help key stakeholders develop a deeper understanding of the challenges and opportunities in embracing effective peer assessment practices in CTW. The key implications for pragmatic application of the framework are summarised below.
To mitigate the capability and behavioural challenges, intervention strategies in the veracity pillar include demystifying expectations, anonymity and incentives. However, there is no ‘one size fits all’ strategy for tackling the challenges. For example, a partnership approach to co-creation as a mechanism for developing a shared understanding of quality and standards demands a shift in stakeholders’ perceptions (Bovill et al., 2016). Anonymity can reduce inhibitions around honest marking and anxieties about retaliation from peers; however, it prevents the serious engagement and dialogic conversation that are critical for learning (Rotsaert et al., 2018). Formative assessment is a powerful support for peer learning, yet a lack of incentives can impede engagement; introducing it as a hurdle task may solve this problem. On the other hand, incentives in the form of summative assessment may lead to competition instead of collaboration, an issue that can be addressed by integrating criteria for collaboration and cooperation.
Approaches proposed in the validity pillar include robust implementation decisions about the assessment instrument, marking method and moderation process, with careful consideration of context and constraints. For instance, instructors need to weigh several factors: alignment with learning outcomes, choice of methods conducive to learning, and adoption of appropriate moderation practices. To support consequential learning, a range of solutions is proposed, including a diverse choice of instruments, calculation methods such as weighted marks (Freeman & McKenzie, 2002), procedures to correct for marker biases (Li, 2001), use of a relative performance factor (Willey & Gardner, 2009) and moderation activities (pre, peri and post). To avert students turning against peer assessment before they have had meaningful exposure to it, lenient marking methods for first-year students and a firmer approach for mature students can be considered.
To enable scalability and sustainability, the volume pillar considers a scaffolded approach and multiple exposures to peer assessment. Effective practices can be achieved through technology affordances and instructors’ ownership of successful implementation (Koh et al., 2021). A comparison of the functionalities of three popular technologies, namely SparkPLUS, CATME and FeedbackFruits, is provided to support informed decisions in choosing a tool. Even with technology support, peer assessment can be a time-consuming task for novice instructors (Anson & Goodman, 2014); recognising this in workload models and capacity-building sessions can pave the way for change. Additional program-level policy decisions to scaffold peer assessment across the course will enable authentic transformation of CTW skills and genuine uptake of peer assessment activities.
The literacy pillar underscores the need for key stakeholders to develop a deeper understanding of the formative and summative functions of assessment. This requires both cognisance and application of formative and summative assessment tasks and feedback practices to avert harmful effects on learning (Boud et al., 1999). Strategies to achieve this include assessment bootcamp sessions to explicate the purpose and processes; integrative assessment practices that require acting on feedback before attempting a follow-on task; reflective writing by students on how they used feedback; post-feedback proforma activities on the value and use of feedback; feedback on feedback to encourage deep engagement; developing students’ capacity to give, receive and act on feedback; and mindful growth-mindset feedback practices that avoid invoking self-esteem issues. Developing appropriate institutional policies around reframing effective assessment, feedback and professional development practices can significantly resolve these challenges.
5.1 Usage of the Framework
The framework has implications for a range of stakeholders, including educators, policy makers and scholars. For educators, it distils the extant research on the tensions, offers possible ways to overcome challenges, and provides a purpose-fit approach to effective adoption of peer assessment in the CTW context. A critical factor in the effective use of the framework is building the capacity of both educators and students to understand the complexities and pedagogical underpinnings of peer assessment. Once educators are equipped with the necessary skills, they need to ensure students are also sufficiently trained in the skills required to effect change. Educators need to develop clear procedures and processes for students, and the framework may assist by functioning as an overview and checklist of critical points. Through its comprehensive insights into the complex and multifaceted components, the framework may serve as a useful aid for educators in determining how best to implement peer assessment to enhance student learning and outcomes.
For institutional policy makers, the framework presents a pathway for addressing the tensions and developing policies and institutional support for mainstream adoption of best practices in peer assessment. Policy makers are often best placed to ensure impactful outcomes at an institutional level. The framework offers a comprehensive overview of challenges and resolutions around peer assessment, which may help inform best practices.
For researchers, the framework offers a useful distillation of the extensive extant literature around peer learning and assessment in the CTW context. It may prove useful in informing innovative initiatives and approaches in peer assessment moving forward, as well as serving as a springboard for future research.
5.2 Limitations
The proposed framework is not without limitations. Firstly, it emerged from work in a CTW context, which means it may not apply to all peer assessment contexts. Secondly, while it traverses a spectrum of significant challenges and mitigating factors, the framework may not address them all. Finally, successful implementation requires attention to the context in which peer assessment is being implemented.
5.3 Further Research
Further research has the potential to refine the framework and empirically test the effectiveness of the proposed strategies to support pragmatic application. Implementation and monitoring will help flesh out its parameters and limitations and assist in its finessing. Another avenue is to elaborate on building students’ capacity in peer assessment and the optimal conditions under which they can be supported to develop their feedback and assessment literacy.
6 Conclusion
This chapter offers guidance on the multitude of challenges of peer assessment in the CTW context. It does so by identifying the various tensions within CTW and the challenges under each of the pillars, along with recommendations and fit-for-purpose approaches to tackle these issues and support an effective peer assessment ecosystem. This requires holistically considering its multifaceted aspects through a seamless integration of all four pillars: veracity, validity, volume and literacy. We underscore the aligned roles of students, instructors, technology and institutional support as catalysing agents of change for transformational learning. Additionally, a significant cultural shift in reimagining assessment and feedback practices, renewal of institutional policies and capacity building of key stakeholders will go a long way towards effecting positive change. Considering the complexities and multifaceted requirements of CTW, more research is required on the practical implementation challenges of each of the pillars.
References
Adachi, C., Tai, J., & Dawson, P. (2018). A framework for designing, implementing, communicating and researching peer assessment. Higher Education Research and Development, 37(3), 453–467. https://doi.org/10.1080/07294360.2017.1405913
Anson, R., & Goodman, J. A. (2014). A peer assessment system to improve student team experiences. Journal of Education for Business, 89(1), 27–34. https://doi.org/10.1080/08832323.2012.754735
Ashton, S., & Davies, R. S. (2015). Using scaffolded rubrics to improve peer assessment in a MOOC writing course. Distance Education, 36(3), 312–334. https://doi.org/10.1080/01587919.2015.1081733
Banihashem, S. K., Noroozi, O., van Ginkel, S., Macfadyen, L. P., & Biemans, H. J. (2022). A systematic review of the role of learning analytics in enhancing feedback practices in higher education. Educational Research Review, 100489. https://doi.org/10.1016/j.edurev.2022.100489
Bell, S. T., Brown, S. G., Colaneri, A., & Outland, N. (2018). Team composition and the ABCs of teamwork. American Psychologist, 73(4), 349–362. https://doi.org/10.1037/amp0000305
Boud, D., Ajjawi, R., Dawson, P., & Tai, J. (2018). Developing evaluative judgement in higher education. Routledge.
Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment and Evaluation in Higher Education, 24(4), 413–426. https://doi.org/10.1080/0260293990240405
Boud, D., Cohen, R., & Sampson, J. (2014). Peer learning in higher education: Learning from and with each other. Routledge.
Bovill, C., Cook-Sather, A., Felten, P., Millard, L., & Moore-Cherry, N. (2016). Addressing potential challenges in co-creating learning and teaching: Overcoming resistance, navigating institutional norms and ensuring inclusivity in student–staff partnerships. Higher Education, 71(2), 195–208. https://doi.org/10.1007/s10734-015-9896-4
Bravo, R., Catalán, S., & Pina, J. M. (2019). Analysing teamwork in higher education: An empirical study on the antecedents and consequences of team cohesiveness. Studies in Higher Education, 44(7), 1153–1165. https://doi.org/10.1080/03075079.2017.1420049
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment and Evaluation in Higher Education, 43(8), 1315–1325. https://doi.org/10.1080/02602938.2018.1463354
Chapman, A. (2017). Using the assessment process to overcome Imposter Syndrome in mature students. Journal of Further and Higher Education, 41(2), 112–119. https://doi.org/10.1080/0309877X.2015.1062851
Cheng, K.-H., Liang, J.-C., & Tsai, C.-C. (2015). Examining the role of feedback messages in undergraduate students’ writing performance during an online peer assessment activity. The Internet and Higher Education, 25, 78–84. https://doi.org/10.1016/j.iheduc.2015.02.001
De Wever, B., Van Keer, H., Schellens, T., & Valcke, M. (2011). Assessing collaboration in a wiki: The reliability of university students’ peer assessment. The Internet and Higher Education, 14(4), 201–206. https://doi.org/10.1016/j.iheduc.2011.07.003
Deeley, S. J., & Bovill, C. (2017). Staff student partnership in assessment: Enhancing assessment literacy through democratic practices. Assessment and Evaluation in Higher Education, 42(3), 463–477. https://doi.org/10.1080/02602938.2015.1126551
Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer and co-assessment in higher education: A review. Studies in Higher Education, 24(3), 331–350. https://doi.org/10.1080/03075079912331379935
El Massah, S. S. (2018). Addressing free riders in collaborative group work: The use of mobile application in higher education. International Journal of Educational Management. https://doi.org/10.1108/IJEM-01-2017-0012
Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287–322. https://doi.org/10.3102/00346543070003287
Frazier, M. L., Fainshmidt, S., Klinger, R. L., Pezeshkan, A., & Vracheva, V. (2017). Psychological safety: A meta-analytic review and extension. Personnel Psychology, 70(1), 113–165. https://doi.org/10.1111/peps.12183
Freeman, M., & McKenzie, J. (2002). SPARK, a confidential web–based template for self and peer assessment of student teamwork: Benefits of evaluating across different subjects. British Journal of Educational Technology, 33(5), 551–569. https://doi.org/10.1111/1467-8535.00291
Gielen, S., Dochy, F., & Onghena, P. (2011). An inventory of peer assessment diversity. Assessment and Evaluation in Higher Education, 36(2), 137–155. https://doi.org/10.1080/02602930903221444
Gillanders, R., Karazi, S., & O’Riordan, F. (2020). Loss aversion as a motivator for engagement with peer assessment. Innovations in Education and Teaching International, 57(4), 424–433. https://doi.org/10.1080/14703297.2020.1726203
Hansen, R. S. (2006). Benefits and problems with student teams: Suggestions for improving team projects. Journal of Education for Business, 82(1), 11–19. https://doi.org/10.3200/JOEB.82.1.11-19
Hoo, H.-T., Deneen, C., & Boud, D. (2021). Developing student feedback literacy through self and peer assessment interventions. Assessment and Evaluation in Higher Education, 1–14. https://doi.org/10.1080/02602938.2021.1925871
Jacobs, G. M., & Renandya, W. A. (2019). Student centered cooperative learning: Linking concepts in education to promote student learning. Springer.
Jones, E., Priestley, M., Brewster, L., Wilbraham, S. J., Hughes, G., & Spanner, L. (2021). Student wellbeing and assessment in higher education: The balancing act. Assessment and Evaluation in Higher Education, 46(3), 438–450. https://doi.org/10.1080/02602938.2020.1782344
Jopp, R. (2020). A case study of a technology enhanced learning initiative that supports authentic assessment. Teaching in Higher Education, 25(8), 942–958. https://doi.org/10.1080/13562517.2019.1613637
Koh, E. R., Tan, J. P.-L., Hong, H., Suresh, D., & Tee, Y.-H. (2021). Infusing the teamwork innovation My Groupwork Buddy in schools: Enablers and impediments. In Scaling up ICT-based innovations in schools (pp. 151–171). Springer.
Latifi, S., & Noroozi, O. (2021). Supporting argumentative essay writing through an online supported peer-review script. Innovations in Education and Teaching International, 58(5), 501–511. https://doi.org/10.1080/14703297.2021.1961097
Latifi, S., Noroozi, O., & Talaee, E. (2021). Peer feedback or peer feedforward? Enhancing students’ argumentative peer learning processes and outcomes. British Journal of Educational Technology, 52(2), 768–784. https://doi.org/10.1111/bjet.13054
Lawlor, J., Conneely, C., Oldham, E., Marshall, K., & Tangney, B. (2018). Bridge21: Teamwork, technology and learning. A pragmatic model for effective twenty-first-century team-based learning. Technology, Pedagogy and Education, 27(2), 211–232. https://doi.org/10.1080/1475939X.2017.1405066
Lejk, M., & Wyvill, M. (2001). Peer assessment of contributions to a group project: A comparison of holistic and category-based approaches. Assessment and Evaluation in Higher Education, 26(1), 61–72. https://doi.org/10.1080/02602930020022291
Li, H., Xiong, Y., Hunter, C. V., Guo, X., & Tywoniw, R. (2020). Does peer assessment promote student learning? A meta-analysis. Assessment and Evaluation in Higher Education, 45(2), 193–211. https://doi.org/10.1080/02602938.2019.1620679
Li, L. K. (2001). Some refinements on peer assessment of group projects. Assessment and Evaluation in Higher Education, 26(1), 5–18. https://doi.org/10.1080/0260293002002255
Loughry, M. L., Ohland, M. W., & DeWayne Moore, D. (2007). Development of a theory-based assessment of team member effectiveness. Educational and Psychological Measurement, 67(3), 505–524. https://doi.org/10.1177/0013164406292085
Loughry, M. L., Ohland, M. W., & Woehr, D. J. (2014). Assessing teamwork skills for assurance of learning using CATME team tools. Journal of Marketing Education, 36(1), 5–19. https://doi.org/10.1177/0273475313499023
Malecka, B., Boud, D., & Carless, D. (2020). Eliciting, processing and enacting feedback: Mechanisms for embedding student feedback literacy within the curriculum. Teaching in Higher Education, 1–15. https://doi.org/10.1080/13562517.2020.1754784
Marasi, S. (2019). Team-building: Developing teamwork skills in college students using experiential activities in a classroom setting. Organization Management Journal, 16(4), 324–337. https://doi.org/10.1080/15416518.2019.1662761
McKendall, M. (2000). Teaching groups to become teams. Journal of Education for Business, 75(5), 277–282. https://doi.org/10.1080/08832320009599028
Meijer, H., Hoekstra, R., Brouwer, J., & Strijbos, J.-W. (2020). Unfolding collaborative learning assessment literacy: A reflection on current assessment methods in higher education. Assessment and Evaluation in Higher Education, 45(8), 1222–1240. https://doi.org/10.1080/02602938.2020.1729696
Mihelič, K. K., & Culiberg, B. (2019). Reaping the fruits of another’s labor: The role of moral meaningfulness, mindfulness, and motivation in social loafing. Journal of Business Ethics, 160(3), 713–727. https://doi.org/10.1007/s10551-018-3933-z
Molloy, E., Boud, D., & Henderson, M. (2020). Developing a learning-centred framework for feedback literacy. Assessment and Evaluation in Higher Education, 45(4), 527–540. https://doi.org/10.1080/02602938.2019.1667955
Moore, P., & Hampton, G. (2015a). ‘It’s a bit of a generalisation, but…’: Participant perspectives on intercultural group assessment in higher education. Assessment and Evaluation in Higher Education, 40(3), 390–406. https://doi.org/10.1080/02602938.2014.919437
O’Neill, T. A., & Mclarnon, M. J. (2018). Optimizing team conflict dynamics for high performance teamwork. Human Resource Management Review, 28(4), 378–394. https://doi.org/10.1016/j.hrmr.2017.06.002
Oakley, B., Felder, R. M., Brent, R., & Elhajj, I. (2004). Turning student groups into effective teams. Journal of Student Centered Learning, 2(1), 9–34.
Oakley, B. A., Hanna, D. M., Kuzmyn, Z., & Felder, R. M. (2007). Best practices involving teamwork in the classroom: Results from a survey of 6435 engineering student respondents. IEEE Transactions on Education, 50(3), 266–272. https://doi.org/10.1109/TE.2007.901982
Oosthuizen, H., De Lange, P., Wilmshurst, T., & Beatson, N. (2021). Teamwork in the accounting curriculum: Stakeholder expectations, accounting students’ value proposition, and instructors’ guidance. Accounting Education, 30(2), 131–158. https://doi.org/10.1080/09639284.2020.1858321
Opdecam, E., & Everaert, P. (2018). Seven disagreements about cooperative learning. Accounting Education, 27(3), 223–233. https://doi.org/10.1080/09639284.2018.1477056
Panadero, E. (2016). Is it safe? Social, interpersonal, and human effects of peer assessment. In Handbook of human and social conditions in assessment (pp. 247–266).
Panadero, E., & Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment and Evaluation in Higher Education, 44(8), 1253–1278. https://doi.org/10.1080/02602938.2019.1600186
Panadero, E., Romero, M., & Strijbos, J.-W. (2013). The impact of a rubric and friendship on peer assessment: Effects on construct validity, performance, and perceptions of fairness and comfort. Studies in Educational Evaluation, 39(4), 195–203. https://doi.org/10.1016/j.stueduc.2013.10.005
Pastore, S., & Andrade, H. L. (2019). Teacher assessment literacy: A three-dimensional model. Teaching and Teacher Education, 84, 128–138. https://doi.org/10.1016/j.tate.2019.05.003
Planas-Lladó, A., Feliu, L., Arbat, G., Pujol, J., Suñol, J. J., Castro, F., & Martí, C. (2021). An analysis of teamwork based on self and peer evaluation in higher education. Assessment and Evaluation in Higher Education, 46(2), 191–207. https://doi.org/10.1080/02602938.2020.1763254
Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory Into Practice, 48(1), 4–11. https://doi.org/10.1080/00405840802577536
Quilter, S. M., & Gallini, J. K. (2000). Teachers’ assessment literacy and attitudes. The Teacher Educator, 36(2), 115–131. https://doi.org/10.1080/08878730009555257
Riley, J., & Ward, K. (2017). Active learning, cooperative active learning, and passive learning methods in an accounting information systems course. Issues in Accounting Education, 32(2), 1–16. https://doi.org/10.2308/iace-51366
Rotsaert, T., Panadero, E., & Schellens, T. (2018). Anonymity as an instructional scaffold in peer assessment: Its effects on peer feedback quality and evolution in students’ perceptions about peer assessment skills. The European Journal of Psychology of Education, 33, 75–99. https://doi.org/10.1007/s10212-017-0339-8
Rubin, R. S., & Dierdorff, E. C. (2009). How relevant is the MBA? Assessing the alignment of required curricula and required managerial competencies. Academy of Management Learning and Education, 8(2), 208–224. https://doi.org/10.5465/amle.2009.41788843
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. In Approaches to assessment that enhance learning in higher education (pp. 55–70). Routledge.
Salas, E., Reyes, D. L., & McDaniel, S. H. (2018). The science of teamwork: Progress, reflections, and the road ahead. American Psychologist, 73(4), 593. https://doi.org/10.1037/amp0000334
Salas, E., Shuffler, M. L., Thayer, A. L., Bedwell, W. L., & Lazzara, E. H. (2015). Understanding and improving teamwork in organizations: A scientifically based practical guide. Human Resource Management, 54(4), 599–622. https://doi.org/10.1002/hrm.21628
Salloum, S. A., Alhamad, A. Q. M., Al-Emran, M., Monem, A. A., & Shaalan, K. (2019). Exploring students’ acceptance of e-learning through the development of a comprehensive technology acceptance model. IEEE Access, 7, 128445–128462. https://doi.org/10.1109/ACCESS.2019.2939467
Schlösser, T., Dunning, D., Johnson, K. L., & Kruger, J. (2013). How unaware are the unskilled? Empirical tests of the “signal extraction” counter explanation for the Dunning-Kruger effect in self-evaluation of performance. Journal of Economic Psychology, 39, 85–100. https://doi.org/10.1016/j.joep.2013.07.004
Smith, C. D., Worsfold, K., Davies, L., Fisher, R., & McPhail, R. (2013). Assessment literacy and student learning: The case for explicitly developing students’ assessment literacy. Assessment and Evaluation in Higher Education, 38(1), 44–60. https://doi.org/10.1080/02602938.2011.598636
Speyer, R., Pilz, W., Van Der Kruis, J., & Brunings, J. W. (2011). Reliability and validity of student peer assessment in medical education: A systematic review. Medical Teacher, 33(11), e572–e585.
Sridharan, B., & Boud, D. (2019). The effects of peer judgements on teamwork and self-assessment ability in collaborative group work. Assessment and Evaluation in Higher Education, 44(6), 894–909. https://doi.org/10.1080/02602938.2018.1545898
Sridharan, B., Tai, J., & Boud, D. (2019). Does the use of summative peer assessment in collaborative group work inhibit good judgement? Higher Education, 77(5), 853–870. https://doi.org/10.1007/s10734-018-0305-7
Stepanyan, K., Mather, R., Jones, H., & Lusuardi, C. (2009). Student engagement with peer assessment: A review of pedagogical design and technologies. In International Conference on Web-Based Learning, Berlin, Heidelberg.
Stover, S., & Holland, C. (2018). Student resistance to collaborative learning. International Journal for the Scholarship of Teaching and Learning, 12(2), 8.
Strijbos, J.-W., & Sluijsmans, D. (2010). Unravelling peer assessment: Methodological, functional, and conceptual developments. Learning and Instruction, 20(4), 265–269. https://doi.org/10.1016/j.learninstruc.2009.08.002
Taghizadeh, K. N., Noroozi, O., Banihashem, S. K., Karami, M. & Biemans, H. J. A. (2022). Online peer feedback patterns of success and failure in argumentative essay writing. Interactive Learning Environments, 1–10. https://doi.org/10.1080/10494820.2022.2093914
Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249–276. https://doi.org/10.3102/00346543068003249
Topping, K. J. (2005). Trends in peer learning. Educational Psychology, 25(6), 631–645. https://doi.org/10.1080/01443410500345172
Vanderhoven, E., Raes, A., Montrieux, H., Rotsaert, T., & Schellens, T. (2015). What if pupils can assess their peers anonymously? A quasi-experimental study. Computers and Education, 81, 123–132. https://doi.org/10.1016/j.compedu.2014.10.001
Willey, K., & Gardner, A. (2009). Improving self-and peer assessment processes with technology. Campus-Wide Information Systems, 26(5), 379–399.
Winstone, N. E., Mathlin, G., & Nash, R. A. (2019). Building feedback literacy: Students’ perceptions of the developing engagement with feedback toolkit. Frontiers in Education, 4, 39. https://doi.org/10.3389/feduc.2019.00039
Zhou, J., Zheng, Y., & Tai, J. H. M. (2020). Grudges and gratitude: The social-affective impacts of peer assessment. Assessment and Evaluation in Higher Education, 45(3), 345–358. https://doi.org/10.1080/02602938.2019.1643449
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2023 The Author(s)
Sridharan, B., McKay, J., Boud, D. (2023). The Four Pillars of Peer Assessment for Collaborative Teamwork in Higher Education. In: Noroozi, O., De Wever, B. (eds) The Power of Peer Learning. Social Interaction in Learning and Development. Springer, Cham. https://doi.org/10.1007/978-3-031-29411-2_1
DOI: https://doi.org/10.1007/978-3-031-29411-2_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-29410-5
Online ISBN: 978-3-031-29411-2