Mirjam Neelen & Paul A. Kirschner
Peer assessment (PA) isn’t new. It’s been around in many different forms for a long time. PA can be broadly defined as “an educational arrangement where learners judge a peer’s performance quantitatively and/or qualitatively and which stimulates learners to reflect, discuss and collaborate” (Strijbos & Sluijsmans, 2010). While there’s a lot of convincing research showing that PA can support learning, PA practices vary so widely that it’s not really clear how PA contributes to learning. To be able to say with (more) confidence that PA can be implemented effectively, we need to explore the how in more depth and see what we can do to ensure that PA is valuable for learning.
Not surprisingly, Topping’s (2010) comparative study found that formative assessment, which aims to monitor and improve learning by providing continuous feedback, is more likely to be effective than summative assessment, which aims to evaluate learning by comparing it to a formal standard. In other words, assessment for learning works better than assessment of learning. It’s not difficult to conclude that, for whoever is considering implementing PA, a formative assessment model is the way to go.
There are also many benefits to formative PA, one of which is that it’s not a one-way street. Since both the assessor and the assessee need to actively process the to-be-learned materials, they can both profit from the formative PA process. Potentially, this can deepen both of their understandings of what is to be learned. Also, assessing a peer’s work offers the assessor the opportunity to retrieve and articulate her/his own domain-specific knowledge (Sandvoll, 2014). And last but not least, PA encourages participants to take responsibility for their own learning as well as for supporting their peers’ accomplishments. Real collaboration. Hosanna! Hallelujah! Right? Well, maybe not, because there are also a number of important limitations with respect to PA.
Achieving and maintaining the benefits of PA is the flip side of the coin and that’s exactly where it gets tricky. Just to give an idea of the fiddly bits: How do you know whether learners have the appropriate PA skills? How reliable are they as assessors or feedback providers? How will their peers perceive their feedback or ratings? Will they trust it? Or will they be concerned about bias? To make a long story short: there are many snags. It’s kind of like the Internet. Just having it is not enough. You have to be equipped to use it properly.
Peer assessment stumbling blocks
Validity and reliability are two key stumbling blocks in PA. With respect to validity, for example, the question is whether the assessor has sufficient domain-specific skills and knowledge to assess her/his peer’s work. Is (s)he able to evaluate the peer’s work and provide valuable feedback that can truly support learning?
Reliability refers to how consistently different assessors assess their peers (i.e., do all assessors assess the same deliverable in the same way?) as well as how consistently the same assessor assesses her/his peers on different occasions.
Rubrics are often used in the PA process. A rubric is a tool to guide assessors in the process of assessing their peers. It usually provides the user with a set of criteria, quality definitions, and some type of rating strategy (Reddy & Andrade, 2010).
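To make those three ingredients concrete, here is a minimal sketch (our illustration, not an example taken from Reddy & Andrade) of how a rubric’s criteria, quality definitions, and rating strategy might be captured as data; the criterion names and wording are hypothetical.

```python
# Hypothetical rubric for a written report: each criterion maps the points of
# a 1-4 rating scale (the rating strategy) to a quality definition.
rubric = {
    "Argumentation": {
        1: "Claims are made without supporting evidence.",
        2: "Some claims are supported, but the reasoning is hard to follow.",
        3: "Most claims are supported and the reasoning is mostly clear.",
        4: "All claims are well supported by a clear line of reasoning.",
    },
    "Use of sources": {
        1: "No relevant sources are used.",
        2: "Sources are mentioned but rarely connected to the argument.",
        3: "Relevant sources are used and mostly well integrated.",
        4: "Relevant sources are consistently and critically integrated.",
    },
}

# A peer assessment then records, per criterion, a score on the scale plus a comment.
assessment = {
    "Argumentation": {"score": 3, "comment": "Clear overall; paragraph 2 needs evidence."},
    "Use of sources": {"score": 2, "comment": "Link the two reports to your own claims."},
}
```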
One of the main challenges when it comes to rubrics is that the language used in the rubric can be ambiguous or its rating scales can be inappropriate. If this is the case, different raters will interpret the rubric differently, or even the same assessor may interpret it differently at a different moment or with a different assessee, and thus the rating won’t be reliable. A second snag is whether assessors have the skills to evaluate successfully and, possibly more importantly, to give quality feedback. Giving good feedback, that is to say feedback that is both helpful and accepted by the receiver, is difficult. Nicol and Macfarlane-Dick (2006), for example, present principles for good feedback practice, among which are:
- help clarify what good performance is,
- deliver high quality information to the other about her/his learning,
- encourage and engage in dialogue,
- encourage positive motivational beliefs and self-esteem,
- provide opportunities to close the gap between current and desired performance.
Clearly, this is not something everyone can do spontaneously!
In addition to the validity and reliability challenges, there is a third potential stumbling block: the learners’ attitudes towards PA, which play a critical role in its potential for success. For example, the potential value of PA can be distorted if learners haven’t experienced assessment in general, and PA in particular, as an integral part of the learning experience (McGarr & Clifford, 2013). Learners may also feel uncomfortable with assessing their peers or with being assessed by their peers (for example, with giving or receiving feedback). Finally, learners often consider the feedback and assessment of instructors and tutors to be of higher quality and value than feedback and assessment by their peers (Guasch, Espasa, Alvarez, & Kirschner, 2013).
Move those blocks out of the way! There is stuff you can do
So, how can you overcome these hurdles and, for instance, increase the validity and reliability of PA? For starters, a rubric can become more reliable and valid through a process of (user) development and testing. This process starts with rubric design, which needs to be an iterative process in which assessors and assessees all take part. The design process starts, for example, with questions such as: What’s the assignment goal? What tasks need to be selected? What are the criteria for the assignment task itself? Then, a first version of the rubric can be (user-)tested for both reliability and validity. Testing can be carried out through statistical methods, but also through focus groups or interviews with assessors and assessees. Questions such as “Is the language ambiguous?”, “Are the answers and criteria in the rubric clear?”, or “Are the rating scales appropriate?” can be tackled here.
Based on the test results, a second version of the rubric can be designed and the test process repeated until the rubric is considered by all to have the required quality. It would take us too far afield to explain the various statistical methods that support a more accurate estimate of a true score, but the point is that by combining statistical tests and user tests, a high-quality rubric can be achieved and maintained over time.
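Just to illustrate what such a statistical check could look like (our sketch, not a method prescribed in the studies cited here), inter-rater reliability on rubric scores can be estimated with a chance-corrected agreement statistic such as Cohen’s kappa. A minimal Python example with hypothetical ratings:

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two assessors' ratings of the same work."""
    n = len(ratings_a)
    # Observed agreement: proportion of deliverables both assessors scored identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, based on each assessor's score distribution.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(ratings_a) | set(ratings_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical scores by two peer assessors on the same ten deliverables (1-4 rubric scale).
assessor_1 = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
assessor_2 = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]
print(f"Cohen's kappa: {cohen_kappa(assessor_1, assessor_2):.2f}")  # ~0.71
```

Values near 1 suggest that assessors apply the rubric consistently, while low values point to ambiguous language or an unsuitable rating scale. The same kind of check can be run per criterion, or for one assessor rating comparable work on different occasions, matching the two senses of reliability described above.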
Another finding is that training and experience positively influence the validity and reliability of peer assessment (Van Zundert et al., 2010). Training refers both to helping the learner gain an understanding of the PA process itself and to building knowledge of and experience with actual PA practice (i.e., the PA procedure). Another point is that, with complex tasks, learners perform better when they have acquired domain-specific skills before PA skills. In other words, no surprise here, they need a certain level of subject matter expertise in order to be able to assess their peers. One option, as discussed above, is to engage learners in setting up the assessment criteria to help them buy into and understand the content of, and the rationale behind, those criteria. Learners also simply need to practise the PA task itself, for example by assessing sample work. The instructor can then discuss learners’ practice assessments as well as their remarks and explain why comments are more or less appropriate. This way, learners can increase their ‘feedback-giving’ skills.
With regard to learners’ attitudes, there are possibilities for enacting change. First, if learners have not experienced PA as an integral part of the learning experience, the fact that formative PA is continuous might help them to “cross the line”. After all, practice makes perfect, provided that the PA experience is a positive one. Buying into the assessment criteria while helping to co-create the assessment rubric can positively influence learners’ attitudes as well. Providing training and scaffolding the feedback process is another way to help learners feel more confident and comfortable in giving and receiving feedback. After all, knowing how to give feedback makes it easier to give it, and possibly also easier to accept it. When learners have positive PA experiences, their attitudes might change, to the extent that they truly start to value PA and no longer consider feedback by an instructor to be superior to that of their peers.
Thus, when considering peer assessment, you need to proceed with caution, but if it’s well thought through, the gain is definitely worth the pain.
References
Guasch, T., Espasa, A., Alvarez, I. M., & Kirschner, P. A. (2013). Effects of feedback on collaborative writing in an online learning environment: Type of feedback and the feedback-giver. Distance Education, 34, 324–338. doi: 10.1080/01587919.2013.835772
McGarr, O., & Clifford, A. M. (2013). ‘Just enough to make you take it seriously’: Exploring students’ attitudes towards peer assessment. Higher Education, 65, 677–693.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31, 199–218.
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35, 435–448.
Sandvoll, R. (2014). Feeling good, but missing the mark. What happened to the assessment in peer assessment? Journal of University Teaching & Learning Practice, 11, 1–12.
Strijbos, J. W., & Sluijsmans, D. M. A. (2010). Unravelling peer assessment: Methodological, functional, and conceptual developments. Learning and Instruction, 20, 265–269.
Topping, K. J. (2010). Methodological quandaries in studying process and outcomes in peer assessment. Learning and Instruction, 20, 339–343.
Yang, Y.-F., & Tsai, C.-C. (2010). Conceptions of and approaches to learning through online peer assessment. Learning and Instruction, 20, 72–83.
Van Zundert, M., Sluijsmans, D. M. A., & Van Merriënboer, J. J. G. (2010). Effective peer assessment processes: Research findings and future directions. Learning and Instruction, 20, 270–279.