Grading and Responding to Student Work

Grading is among the most meaningful tasks we undertake as teachers, and it’s one that—even when things are going smoothly—can require what feels like outsized amounts of time and energy. To be sure, we use a lot of that time and energy simply carrying out the intrinsically difficult job of grading, but a lot of time, and probably even more energy, can get taken up weighing the factors that—when things are going less smoothly—can make the job feel discouragingly fraught. To name a few:

  • How do we assess process versus product? (How do we recognize “A-level” effort—or even improvement—that results in work that still doesn’t end up at the “A-level”?)
  • How do we weigh equity against equality? (How do we avoid merely sorting our students based on the high school they went to without having to tailor our grading approach to each individual student—based on information we don’t have in the first place?)
  • How do we specify our learning goals without boxing ourselves into a corner or stifling creativity? (How do we make sure that the constraints of an assignment, e.g., page length, are enabling, rather than limiting, and how do we allow ourselves to reward approaches to an assignment that we hadn’t predicted? What do we do when faced with one paper that stays unremarkably on track and arrives safely at its modest destination, and another that goes absolutely fabulously off the rails?)
  • How do we adapt—and according to what rationale—our approach to grading over the course of a semester? (How do we support struggling students and challenge thriving students early in the term without “losing” them or misleading them?)
  • How do we recognize numerous distinct tiers of student success in an environment of grade compression?

These are questions that arise nearly every time we sit down to grade a pile of papers or exams, and in fact they’re exactly the sorts of questions we should be asking ourselves: they reflect the complexity we would expect from so many overlapping and intersecting feedback loops taking shape over a period of weeks or months. With that in mind, our goal shouldn’t be to avoid the questions or iron out their fraughtness or hope for one-size-fits-all solutions to grading. In the end, those fixes just create bigger problems that lead us away from the meaningfulness of grading. What we can aim for, however, are general principles that will make grading as useful, fair, and efficient as possible.

Some General Principles of Responding to Student Work

Here are the most important principles.

  1. Know Your Goals and Name Them. Grading allows students to know where they stand in relation to learning goals, whether they’re the goals of a given assignment, a sequence of assignments, a semester-long course, or a broader course of study. Therefore, the first step in effective grading happens before students start writing a paper or sit down with a problem set: you need to decide what your learning goals are, name them, and identify what criteria will allow you to measure student progress toward those goals.
  2. Be Transparent with Students. As necessary as learning goals and concrete criteria are, it isn't enough just to have them: they need to be shared with your students. Getting students on the same page with you about why they are doing an assignment and how it will be assessed is a crucial part of making graded feedback purposeful, and the more transparent the goals and criteria are, the better. Starting with the course description and syllabus, and extending through assignment prompts to capstone projects and final exams, transparency allows the grading process to be more of a dialogue than a judgment from on high.
  3. Strive for Consistency. At some point—after you've framed an assignment for yourself and your students, and after they've uploaded their assignment or put down their pencils—the time for grading will arrive. In an ideal world free of miscommunication or disciplinary foibles, the next step would simply be to block off chunks of time and apply the rubric you'd shared with your students beforehand. The world being what it is, however, challenges often arise. For instructors grading essay assignments, a common challenge is helping students see that your qualitative assessment is consistent, i.e., that it isn't just a matter of your taste, preference, or mood. Anyone who's taught in the humanities or social sciences is likely to have stories about students who felt their grade on an essay was just a reflection of the instructor's subjective or impressionistic response, or just a measure of how closely the essay's thesis came to the instructor's own position on some matter of academic dispute. And to be fair, in the absence of clear learning goals and a consistently applied rubric, it's hard to dispel that skepticism. The question is what it means to apply a rubric consistently, and the short answer is this:
    • Read through the lens of the criteria you’ve established in your prompt and rubric (thesis, identifying positions within a debate, use of secondary sources)
    • Show your priorities by focusing on those criteria in your marginal feedback (don’t get bogged down with comments on style or structure if those aren’t tied to your learning goals)
    • Organize your feedback letter in terms of your rubric's criteria, so that the letter itself becomes an evidence-based argument that supports your claim about how successfully the student's written product did or didn't demonstrate mastery of the skills described in the assignment's learning goals
  4. Offer Context. For instructors grading assignments that typically receive number grades and have answers on a clearer right/wrong spectrum (vocab quizzes, math problem sets, short-answer ID tests), the objectivity or consistency of the instructor is less likely to come into question. In these cases, the challenge is instead moving past the idea that having an objective rubric alone makes the grading process meaningful or fair. A grade on a page can't speak for itself; for it to be meaningful, it needs to correspond to students' learning contexts, e.g., how the grade reflects their engagement with class time and course materials, their preparation for the test, or their use of office hours. And for a grade to be genuinely fair, it needs to correspond to pathways to future success. One effective tool for lending context to quantitative feedback is the exam wrapper, a guided set of questions that students complete after receiving their graded work, asking them to identify their own areas of understanding and confusion, to reflect on how they prepared for the exam, and, often, to give the instructor feedback on how effectively the exam tested mastery of the goals it set out to measure.

For more information...

Advice on providing written feedback (Harvard College Writing Program's Instructor Toolkit).

Advice on exam wrappers (Carnegie Mellon University's Eberly Center for Teaching Excellence and Educational Innovation).

Angelo, T. A., & Cross, K. P. (1993). Classroom Assessment Techniques: A Handbook for Faculty, 2nd Edition. San Francisco, CA: Jossey-Bass.

Brookfield, S. D. (2017). Becoming a Critically Reflective Teacher. Chicago, IL: John Wiley & Sons.

Walvoord, B. E. (2010). Assessment Clear and Simple: A Practical Guide for Institutions, Departments and General Education. San Francisco, CA: Jossey-Bass.

Rubrics

Most of us would agree that ideal grading conditions require a rubric. Not only do rubrics give the instructor standing to evaluate student performance; they also give students a target at which they can aim with greater confidence. They lay out the criteria according to which evaluations will take place, they define what it means to meet those criteria—barely, partially, fully—and they establish the relative importance of each criterion within the overall evaluation.

In practice, however, most of us would agree that developing rubrics is very hard. When they're overly specific, they risk giving students the mistaken impression that you want them to approach an assignment with a paint-by-numbers mindset. When they're overly general, they risk giving students a mistakenly broad sense of what might count as success—or worse, the impression that the vagueness of the rubric is designed to allow the instructor to grade things however he or she wants.

Even when this balance is struck, however, rubrics are only at their best if students can engage with a given assignment in terms of the rubric, rather than receiving graded feedback and finding out after the fact what they could have and should have done. With that in mind, it's always best practice to distribute rubrics ahead of time, ideally in the syllabus or on the assignment prompt; to discuss the rubric with students; and then to frame your feedback consistently in terms of the rubric's criteria, whether in margin comments, end comments, workshops, or one-on-one conferences.

Responding to Creative Assignments

Essays, problem sets, exams, and other traditional assessments all have conventional criteria for evaluation. But multimedia and other forms of creative or "non-traditional" assignments can often seem like uncharted territory. Should students be graded on technical proficiency? What does a visual argument look like? How do you compare the effort and thought that some students put into a video with the effort and thought other students put into papers?

We often recommend that instructors tackle these sorts of uncertainties by asking students to submit an artist's statement alongside their creative project—that is, a brief (1–2 page) analysis of and reflection on their process and product. With an artist's statement in hand, you can worry less about whether a student's technical proficiency (or lack thereof) is obscuring the degree to which they mastered the content or skills that really mattered to you when you set the assignment.

Whether or not you request an artist's statement, we find it helpful to think carefully through the following considerations as you evaluate students' creative work:

  1. Like most other work students do in courses at Harvard, multimedia work should be evaluated as academic work. If you wouldn't give a B+ to a student paper written in stream-of-consciousness style, you already know what to do with a video that is disorganized, has no argument, or otherwise violates academic conventions you've taught.
  2. Convention is key. Did you assign a video essay? A creative contemplation? A parody? If your assignment asked students to work within certain generic boundaries, you should evaluate whether they satisfactorily applied the conventions of the genre. If you didn't ask students to work within a specific genre, didn't train them to recognize its conventions, or didn't make your expectations clear in the assignment prompt, then you'll have to find other criteria by which to evaluate the work.
  3. Consider whether students used multimedia resources purposefully. In other words, did they include sounds, images, text, and other media in ways that suggest critical reflection? Or was their use of certain elements superfluous, random, or intended only for laughs?
  4. What meaning is made through the student work? How did the tool(s) used enhance meaning? (Conversely, how did the tools used distract from making meaning?) Just as in papers, student work that makes new meaning should be rewarded, and the chosen multimedia format should play a role in making that meaning.
  5. You may choose to require (minimal) command of production techniques in advance. If you don't, and if you haven't taught production techniques or offered students resources, then it may be unfair to grade students on production quality. (Pro tip: offer students preemptive training in the media they'll work in so that production quality is more uniform across the class.)
  6. Do students show awareness or acknowledgement of their audience? We expect students to write with a certain audience in mind; multimedia projects are no different.
  7. Students can and often should make arguments in multimedia work. Screencasts and iMovies (which often take the form of recorded, narrated PowerPoint presentations), podcasts, and live-action videos all offer students opportunities to state a thesis, organize and analyze evidence, and structure analysis in the form of an argument.
  8. Tone/Style: does the student's voice come through in the work? Do students demonstrate consistency and intentionality in their multimedia communication? We don't always make this a criterion in grading papers or problem sets; similarly, it may not be an important part of evaluating multimedia work.
  9. Multimedia projects present numerous opportunities for citing sources (captions, images of resources, hyperlinks), and as long as you make your expectations clear in advance, students can and should be evaluated on their ability to incorporate primary and secondary sources into their multimedia work.

For more information...

Coming soon!