The Results are in...
Wednesday 8 August 2018
Results, data and what to do with it all.
I was talking recently to a Head of English who was scratching his head having received a huge spreadsheet of data on this year’s IB results from his school, including graphs showing results from the last 5 years, predicted versus awarded grades, component breakdowns, grades by class and by teacher, gender comparisons, and much, much more. The stated and well-intentioned aim of all this data is to help him and his department analyse these results and see what they can change and implement in terms of their future teaching of the IB courses.
There is no doubt that some of the information in the data spreadsheet is useful and allows the department to see patterns more quickly than in the past. However, inevitably this data analysis is also linked to a discussion about the department’s goals in terms of improving students’ learning over the coming academic year, and this is where the Head of Department - along with many other HODs and teachers - started to scratch his head and feel a little lost.
Data and data analysis are buzzwords in schools these days but in my experience very few teachers feel confident or comfortable when it comes to looking at and discussing data. Technology has allowed us to become very proficient at producing data but very few people seem to know what we should actually do with all this information once we have it. In the complex context of education, it invariably raises more questions than it answers. For a start, how much do we value the limited nature of examination results data in the broader context of educating young people and aiming to instill in them a lifelong love of learning and - in our context - of reading and literature? Secondly, how much do we trust the source of the data in terms of the exam board, the examiners and the examination processes? If we have doubts about these - and no matter how rigorous an exam board is, there is always going to be subjectivity, especially in a subject like English - how far can we go in terms of using results to reflect on our own practice?
And that’s before we even start to consider our own students and how many other factors are at play in the way that an individual, or a group of students, will perform in a one-off examination. We look across their component breakdowns and puzzle over inconsistencies, the unexpected drops or the exceeding of expectations for certain individuals in certain parts of the course, and we search for patterns and explanations, but often we do not find any explanation beyond the fact that our students are fallible human beings being assessed by another group of fallible human beings. Arguably, it was easier in the old days, when each component was marked by the same examiner and so patterns of marking within a component could be seen and addressed more readily. But now all papers are distributed across the globe via the internet to many different examiners in many different places from many different educational backgrounds. Even if we are satisfied with all of the above, we might still ask how much merit there is in using the examination results of one group of students as the basis for planning the learning experience of a different group of students.
I am sure the Head of Department I know is typical of many in May session schools at this time of the year, being asked to analyse examination results as a basis for improving learning within his department. This particular Head of Department is fortunate in that he works in a school where it is understood and explicitly stated that the examination results are just one source of data and information amongst many, and the limitations of such data are appreciated. Despite the overwhelming amount of data he has been given from his administration, this Head of Department knows he is trusted to take what he can from this information, while also basing his plans about learning on a more complex, more human and more collaborative appreciation of what it means to educate young people about language and literature. However, I have heard of other schools where examination data is dealt with in a far less nuanced and human way, and even the most benign administration can fall into the trap of conflating examination performance with learning, and therefore the analysis of examination results can be seen as - or implied to be - the most effective means of improving learning.
For those in May schools, results are now in and the process of analysis, review and reflection will be well and truly underway. There will be success stories, surprises and disappointments, as well as many students who performed exactly as expected. As teachers, we inevitably have an emotional response to these results and feel inclined to take some credit when students do well, but also feel some culpability when they underperform. There is a tendency for some of us to pore and agonise over these results, especially in these first few weeks after they come out when we may have students contacting us with effusive words of thanks, or with disappointed enquiries about re-marks or re-sits. This is where a supportive discussion about examination results with colleagues is important, ideally as a department. This is necessary, not so much in terms of analysing data, but rather in order to remind us of the human and somewhat arbitrary nature of such results. A departmental discussion can be reassuring if we are feeling disappointed and at fault for some of our students' outcomes; equally, it can help bring us back down to earth if we are delighted with our results and starting to secretly suspect we might be the greatest English Literature teacher ever to walk the earth.
These conversations should be about more than a set of data on a spreadsheet but such data is a useful starting point. We must be careful that cynicism about data and data analysis does not lead us to throw the baby out with the bath water, for there are undoubtedly worthwhile discussions to be had around the information we get with any set of examination results. The final results for individual students are of course just one part of this and patterns across a cohort are always worth looking for and considering in terms of moving forwards. Furthermore, some of the most interesting information comes a little later, in the form of IA feedback reports and the examiners’ reports which contain specific advice about teaching and preparing students for different parts of the course.
Below I have included a few thoughts about each part of the Literature course in terms of what can be done in the post-results period of analysis and planning for the year ahead. I am sure many of you will already be doing all of this but I hope it will be of some use to those of you who are newer to the course and/or feeling a little uncertain about what to do once the results are in.
Internal Assessment: the IOC and IOP
This is all about the moderation of your marks based on the sample sent off to the IB. If you were asked to provide a further sample, you will already know that the moderator found the marking inconsistent in the sample provided; otherwise you can assume they found it consistent enough to make a moderation decision. This decision will be revealed to you in the component breakdown report, where you will see the original scores you gave (the raw marks) compared to the moderated marks; they will either have gone down (bad news), stayed the same (good news) or gone up (even better, but very rare news). There will be further information about this decision in the IA feedback report which is made available on IBIS soon after the exams; ask your IBDP Coordinator for this if they have not passed it to you. These reports are written by the individual moderator for your school’s IA and can vary in detail, ranging from the frustratingly brief and uninformative to the very detailed and very useful. As they are written by an individual moderator, you need to treat them accordingly, and the best advice is only to react to issues that are new and seem reasonable, or those which confirm your own concerns. It is also worth looking back at the sample that was sent and possibly listening to some or all of it again in the light of how you were moderated. The following questions might be worth considering:
- Do we need to make any changes in our teaching of the IOC texts?
- Do we need to make changes in the way we teach commentary skills?
- Do we need to make changes in the way we teach towards the discussion for HL?
- Do we need to spend longer on certain texts, skills or understandings?
- Does the structure of our course and/or timing of when we run the IOC/IOP allow students to perform to the best of their ability?
- Do we need to improve our processes and administration of the IOC?
- Do we need to make changes to the way we internally moderate the IOC?
- Can we use some of our IOCs (or parts of them) as samples for future students?
The Written Assignment
The component report will allow you to see the grade each student received for their Written Assignment but you will only see the criteria breakdown if you call for the papers back. With a number of different examiners marking your papers it can often be hard to find any patterns in these results but it is worth considering the following questions:
- Are there patterns in terms of text choices where students did well/did not do well?
- Are there patterns in the types of questions students chose where they did well/did not do well?
- Did the successful essays all adopt a similar approach to literary analysis? Did they use quotations and explicit literary terminology effectively? Did they structure and signpost well?
- Did the successful essays all adopt a similar approach to the Reflective Statement?
- Can we use the successful essays, or parts of them, as samples for future students?
Paper 1
With Paper 1 it is worth recalling the papers if you can afford to do so; this allows you to see which text each student chose as well as how they performed in the exam and how they were graded for each criterion. This might then allow you to see patterns in terms of the students who chose prose and poetry, or it could reveal that there are things to consider in the teaching towards one or both of these genres. Furthermore, it will allow you to look at students’ commentary and writing skills and aspects that you might want to focus on in these areas when it comes to the teaching of future cohorts. Getting the papers back also means you have a whole set of sample responses to use with future students.
Paper 2
As with Paper 1 and for the same reasons, if possible ask for the return of these papers. Once again it gives you a great resource in terms of sample answers, while also allowing you to see how your students performed on individual questions and - if different classes did different genres - whether there are any conclusions to be drawn about how you teach and how your students approach the different literary genres.
When all is said and done, when the results have been probed and picked over and pickled, the most important questions we can ask ourselves at this time are those alluded to earlier. Did our students finish their IB experience with a deep appreciation of language, literature and learning, or at least a greater appreciation than they began with? Have they improved their reading, writing, speaking and listening skills? Do they enjoy reading and will they continue to read for the rest of their lives? Have they learnt important things about what it means to be a human being through their study of literature? Have they experienced times, places, perspectives and events that they could only experience through literature? Has their study enhanced their sense of empathy? Has a book made them sad or happy or angry - has reading allowed them to feel what it is to be human? Have they felt a deep connection with an author, a character or an idea they have encountered?
If the answer to these questions is 'yes', then we should feel very happy with the great success of our 'results.'