05 May 2019

To grade or not to grade, that is the question...

Getting Garrulous about Grading 

A snapshot of one section of one of my grade sheets... 

If you're ever stuck for a conversation starter in a room of educators, you can be sure mentioning the 'g word' will get people going. Grading seems to be a perennial and particularly contentious issue, and one that is generally (in the circles I move in, anyway) criticised vigorously.

I've been teaching for nearly 25 years at the time of this post, and during that time my feelings on the subject have vacillated wildly. But—and maybe this is just the wisdom of age—while appreciating the various sides of the argument, I can't help but return to the fact that to me, instinctively, intuitively, it just makes sense. As with most issues of this nature, though, it's how you do it that counts.

Many teachers who tell me they don't grade are just playing semantics: if they are any good (and most are), they are making judgements about the efficacy of their students' work all the time. I've seen all sorts of attempts to avoid it, but other than prose comments (which take ages, and which students generally ignore), they are all still essentially forms of scoring. Whether you use a rubric, a continuum, thumbs up/down, traffic lighting, or even 'impression marking', unless you're going to rely on screeds of text you'll end up with 'grading' in all but name. Unless you just do ... nothing? Does anyone think that is better?

All grading is, is quantifying the qualitative: giving an assessment judgement a number or letter so that it is easier to use as a rich source of data. That's it.
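To make that concrete, here is a minimal, hypothetical sketch of what 'giving a judgement a number' looks like, and why the resulting data is so easy to work with. The band names and four-point scale are purely illustrative, not any particular school's scheme.

```python
# A hypothetical sketch of "quantifying the qualitative": mapping rubric-style
# judgements to numbers so they can be aggregated and tracked over time.
# Band names and the 1-4 scale are illustrative only.

RUBRIC_BANDS = {
    "beginning": 1,
    "developing": 2,
    "proficient": 3,
    "exemplary": 4,
}

def grade(judgement: str) -> int:
    """Convert a qualitative judgement into a numeric grade."""
    return RUBRIC_BANDS[judgement.lower()]

# A term's worth of judgements for one student becomes usable data.
judgements = ["developing", "proficient", "proficient", "exemplary"]
scores = [grade(j) for j in judgements]
print(scores)                     # [2, 3, 3, 4]
print(sum(scores) / len(scores))  # 3.0 -- a simple average, and a visible trend
```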


Hypocrisy

My main objection to the use of grading for years has been its inauthenticity, its artificiality. After all (like examinations, for which I still maintain intense disdain), how many of us get 'graded' in the real world? For most of us, the last day of exams is the last day we'll be graded on anything. Once you stop being a student and enter the world of paid employment, the grading ceases.

Unless, of course, you re-enter the world of education as a student, in which case it can feel like a rude reawakening—at least that's how it felt for me. When I did my Master's degree a few years ago, I really resented the grading component (despite getting very good grades); for me it changed the whole dynamic and undermined the focus. But that was because of the way it was graded... it felt like too much, too late, and at least once, fundamentally unfair. Since returning to teaching (having been in a primarily coaching role for 8 years), though, I've returned to grading with gusto.

So to its supposed irrelevance. Actually, if you think about it, grading is in many ways effectively a proxy for $, and for adults $ is commonly an indicator of success. In many careers appraisal is common, and the results often have $ implications. Even if you're not in a career where your 'performance' is assessed and rewarded financially, there are many where your 'performance' is your work to date, your reputation, and your performance at interview; the reward is the 'assessment' of your success: perhaps employment, and/or a promotion. Students whose work is graded get a taste of reality, of accountability, a 'real world' experience: in the world outside school your work will be judged, and depending on how successful it is deemed to be, you will be rewarded. And if you're not (you don't get that job or promotion), you use that as feedback, you reflect, you make changes, and you move on. Life is a series of assessment experiences, the results of which shape the people we become. They may not always be represented (directly) with a number, but they are frequent, they are generally (relatively) low stakes, and preparing students for a world where this is the case is essential.

There are some authoritative voices out there who defend the use of grading, and—of course—a considerable amount of research: 

Students Should Be Tested More, Not Less (Atlantic) 


"Complaints that excessive testing detracts from learning tend to be aimed at summative testing. As summative tests do not teach, and classroom hours spent engaged in summative assessments detract from hours a teacher has to educate her students, those complaints are probably well-founded.
“Formative assessments,” on the other hand, are designed to discover what students do and do not know in order to shape teaching during and after the test. Formative assessments are not meant to simply measure knowledge, but to expose gaps in knowledge at the time of the assessment so teachers may adjust future instruction accordingly. At the same time, students are alerted to these gaps, which allows them to shape their own efforts to learn the information they missed.

"Formative testing at its best is low-stakes and high-frequency."

"When teachers expose students to frequent low-stakes tests in order to reveal gaps and foster active, continuous engagement in the material, students are given more ownership and power over their education."

Making the Grade: What Benefits Students? (Educational Leadership)

"Grading enables teachers to communicate the achievements of students to parents and others, provide incentives to learn, and provide information that students can use for self-evaluation."

There is a great deal more, but the problem is that very few of the methodologies used in these studies actually involve frequent, low-stakes (FLS) grading; they assume a model where either high-stakes examinations or one-off summative testing is the focus. That is not what I'm talking about. The reasons why could be the subject of many ranty posts, but I'll leave those to others more qualified and more authoritative than me; I want to focus on frequent, low-stakes grading.

Studies on the Relationship Between Frequent Low-Stakes Testing and Class Performance

"A meta-analytic synthesis of data from 52 independent samples from real classes (N = 7864) suggests a moderate association of d = .42 between the use of quizzes and academic performance. Effects are even stronger in psychology classes (d = .47) and when quiz performance contributed to class grades (d = .51). We also find that performance on quizzes is strongly correlated with academic performance (k = 19, N = 3814, r = .57) such that quiz performance is relatively strongly predictive of later exam performance. We also found that the use of quizzes is associated with a large increase in the odds of passing a class." (Abstract) [paywall]

Sotola, L.K., Crede, M. Regarding Class Quizzes: a Meta-analytic Synthesis of Studies on the Relationship Between Frequent Low-Stakes Testing and Class Performance. Educ Psychol Rev (2020). https://doi.org/10.1007/s10648-020-09563-9
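For readers who (like me) need a reminder of what those effect sizes mean: Cohen's d is simply the difference between two group means divided by their pooled standard deviation. Here is a quick sketch using made-up exam scores purely to show the arithmetic; the numbers are not data from the study.

```python
# Illustrating what an effect size like the d = .42 reported above means.
# Cohen's d = (mean difference) / (pooled standard deviation).
# The scores below are invented purely to demonstrate the calculation.

import statistics

quizzed     = [55, 88, 62, 79, 70, 84, 66, 72]  # hypothetical class with frequent quizzes
not_quizzed = [50, 82, 60, 75, 64, 80, 58, 67]  # hypothetical class without

def cohens_d(a, b):
    """Standardised mean difference using the pooled (sample) standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

print(round(cohens_d(quizzed, not_quizzed), 2))
# 0.45 for these made-up scores -- in the same 'moderate' range as the d = .42 above
```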

FLS (Frequent Low-Stakes) Grading

I haven't always known that what I do has an acronym, but I've been instinctively using this approach for as long as I've been teaching. It just makes sense to me: frequent, low-stakes grades that students know they can improve if they make the effort are very useful, and much more efficient than the prose-laden alternative that dominates in schools and systems that purport to be 'grade-less'.

FLS Transformed by Tech

I guess one thing that really excites me about this approach is the way tech can transform it. In the 'old days' the data that builds over time would only really be available to the teacher. I would share it with students and their parents, but it was tricky, as I needed to preserve the privacy of the rest of the students in the class. This invariably meant either making/printing individual copies, or fiddling about with folding over sections of sheets to hide information; even then, a hard copy is quickly out of date if you're using FLS. The advent of online platforms has radically changed this. For example, with the platform we use in Middle and High School (Teamie), students have access to their Markbook, so they can see the data and act on it accordingly. This runs the gamut from students who use it to moderate their efforts so they can work towards a pass (and maximise their time for other pursuits/subjects), to those who use it to constantly refine and revise specific tasks until they are as good as they can make them. It puts the student in the driving seat, and gives me rich data to use as a basis for my teaching, and for my conversation with each of my students about their next steps.
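This isn't Teamie's data model; it's just a hypothetical sketch of the kind of record that FLS grading builds up, and of why it works: every task stays revisable, only the best effort counts, and the running picture is visible to the student as well as to the teacher. The class names and 10-point scale are my own invention.

```python
# Hypothetical sketch of an FLS-style markbook: frequent, low-stakes tasks
# that students can resubmit, with a running average both parties can see.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    best_score: float          # best attempt so far, out of an assumed 10
    attempts: int = 1

@dataclass
class Markbook:
    student: str
    tasks: list[Task] = field(default_factory=list)

    def resubmit(self, name: str, new_score: float) -> None:
        """A student revises a task; only their best effort counts."""
        for t in self.tasks:
            if t.name == name:
                t.attempts += 1
                t.best_score = max(t.best_score, new_score)
                return
        self.tasks.append(Task(name, new_score))

    def running_average(self) -> float:
        return sum(t.best_score for t in self.tasks) / len(self.tasks)

mb = Markbook("Student A", [Task("Quiz 1", 6), Task("Draft essay", 5)])
mb.resubmit("Draft essay", 8)          # acted on feedback and revised
print(round(mb.running_average(), 1))  # 7.0 -- low stakes, frequent, improvable
```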

Fortunately for me, it wasn't hard to find some clever people who have already summarised this approach and its benefits:

Benefits of low-stakes assignments


  • Gives students a realistic idea of their performance early in the term, enabling them to seek appropriate resources as needed
  • Opens up lines of communication between students and their instructors, and may increase students' willingness to ask for help
  • Allows instructors to direct students to resources if they need further assistance or support
  • Gives students an opportunity to be active participants in the evaluation of their own learning
  • Increases the likelihood that students will attend class and be active and engaged

"These exercises are low stakes, they can improve learning outcomes without increasing student anxiety. "

"Frequent, low-stake assessments as opposed to infrequent, high-stakes assessments actually decrease student anxiety overall because no single test is a make it or break it event."

"Feedback should be given often so that students can benefit from having multiple opportunities for improvement. Though given less weight, low-stakes assignments may be similar in type and kind to high-stakes assignments: they tend to reflect the kind of work students are going to be expected to do for a final exam, paper, or other summative project. All in all, early feedback is one of the most important contributions faculty can make towards helping students succeed in their classes and make critical progress…"

(Sarah Jones, Michigan State University)

02 February 2019

An Undistinguished Educator: Why I'm not an ADE

Why, after over twenty years working to integrate digital technology in K-12 classrooms, I'm still not an Apple Distinguished Educator or a Google Certified Educator. I'm happy to remain an undistinguished educator.




So having an Apple logo appended to my signature makes me 'distinguished'? Passing a multiple-choice test means Google will 'certify' my teaching efficacy?

These 'certifications' have more to do with huge corporations nudging educators into a form of brand loyalty, and in turn using those educators as 'influencers', than they do with genuine continuing professional development.

Now, it could be said that it's okay for me to take this position as I am fortunate to be working in an amazing school; if I were seeking employment I might have to swallow my pride and get me 'some o dat certification', just to satisfy the naive expectations of administrators and schools who should know better. And that may be true. However, I'd also have to seriously question whether a school that values that kind of certification is the kind of environment I would like to work in.

Many moons ago, at the beginning of this branding exercise, I kept an open mind and attended sessions at tech conferences dedicated to both of these qualifications, and I was absolutely appalled at the focus they outlined: clearly designed by both companies to foster an exclusive focus on their tools, to the exclusion of any others, no matter what they might say. Because creating a video and writing a letter is the gold standard in determining educator efficacy?

That sounds to me more like the kind of process that would be dreamt up by a corporate marketing team than by anyone serious about improving education.


When I attended the Google Certified Educator session it was even worse. The admission criteria included taking and passing a multiple-choice test, one where the questions didn't even match the current iteration of the Google Apps suite being used! Candidates were informed that they would need to answer the multiple-choice questions (yet another ludicrous way to determine teaching talent) in a way that aligned with the way the tools used to work... and even then, what precedent does this set? Based on the sample questions we were shown, the sign of a skilled educator is that they have memorised the locations of commands in the menus of the tools they use? That's not how I operate. If you were to ask me where to find a certain command in Google Docs, I couldn't tell you from memory; I haven't memorised them, but I know where to look when I need them. Google's criteria for appraising the efficacy of educators are fundamentally flawed: built not on skilful pedagogy but on a naive, surface-level assumption that memorising the location of functions in software is of paramount importance.

I can tell you that when the institution where I work seeks to recruit coaches and teachers, whether or not candidates hold one of these superficial qualifications is not a consideration. I've encountered many educators who are clearly poorly skilled and have a very dubious understanding of how tech can be integrated effectively (nouns over verbs, tech viewed more like toys than tools), and their naive display of their certification just served to further undermine their credibility.

The aspect I find most difficult to accept is the way these titles facilitate a kind of exclusivity, a 'club'. How do people expect to engage in effective professional development if their first criterion is exclusion? Even more ridiculous: you could, like me, be a teacher with many, many years of experience working with tech integration, and/or have a Master's degree in this area, but that will still not grant admission to this inner sanctum. When a community values a superficial label over a rigorous professional qualification that takes years to acquire, you know there must be something wrong.

So, if you were considering pursuing one of these certifications, my advice is to forget it. Use the time to focus on designing better lessons for your students, and if you're seeking a qualification, pursue something like a degree or a Master's degree instead.