Teachers make rubrics fairly often. A rubric is a tool for attaching a numerical score to something of subjective quality, like an essay. Rubrics are important because they keep you from passing students just because you like them, or failing students just because you were hungry and tired when you read their essays.
Good rubrics are a series of objective questions. “Does the essay have a works cited page? Is the page correctly formatted? Does the essay have a coherent thesis? Does it cite evidence?” Rubrics have their failings, of course. In particular, they’re weak on the high end of the spectrum; they tend to put everything into a few buckets: bad / ok / good is about the most detailed sorting you can expect from a rubric. Rubrics are also terrible at distinguishing between two pieces of really excellent writing. In short, they allow a teacher to answer the question, “is this essay good enough to meet the requirements for this class?” quickly and fairly, but they don’t tell you whether you like Hemingway or Faulkner more.
How could rubrics be applied to software development? I can think of two interesting use cases: code reviews and MVPs. During a code review, we might want to answer a simple question: is this check-in good enough to let into the codebase? And when examining an MVP, we might answer an equally simple question: is this product good enough to share?
Rubric for a Code Review
For rubrics to be easy to apply, they should be a series of yes/no or bad/ok/good questions that can be answered pretty quickly. Here are some examples of what I think might make good code review rubric items:
- How are the names of things? Bad / ok / good
- Are there any really obvious inefficiencies? e.g., does the method do the same computational work twice, or does it use obviously wrong data structures?
- Does it follow the style guide? That is, does it have the same casing, naming conventions, indenting structure, etc., as the rest of the code?
- Is there any reasonable execution path that could throw a boneheaded exception? (e.g., in C#, can you get a NullReferenceException or an IndexOutOfRangeException?)
- Does it do what it was supposed to do?
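The checklist above is easy to mechanize. Here is a minimal sketch in Python of one way to score it; the point values, the rephrased questions, and the acceptance threshold are all hypothetical choices a team would make for itself, not anything prescribed by the rubric idea:

```python
# A rubric is just a short list of objective questions. Score each
# answer on a small scale (0 = bad/no, 1 = ok, 2 = good/yes), total
# the points, and accept the check-in if it clears a team-chosen bar.
BAD, OK, GOOD = 0, 1, 2

# Hypothetical answers for one check-in, using the questions above.
# Negative questions ("any obvious inefficiencies?") are rephrased
# positively so that a higher score is always better.
review = [
    ("How are the names of things?",        GOOD),
    ("Avoids obvious inefficiencies?",      GOOD),
    ("Follows the style guide?",            GOOD),
    ("No reachable boneheaded exceptions?", BAD),   # a null path slipped in
    ("Does what it was supposed to do?",    GOOD),
]

def score(review):
    """Total the points the reviewer awarded."""
    return sum(points for _, points in review)

def accept(review, threshold=8):
    """Accept the check-in only if it clears the bar (threshold is a
    team decision; 8 of 10 here is just an example)."""
    return score(review) >= threshold

print(score(review), accept(review))  # 8 True
```

Because every item is a small, concrete question, two reviewers will usually land within a point or two of each other, which is the whole appeal.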
Rubric for a Minimum Viable Product (MVP)
Basically, we’re driving at the question: Is this a thing people can use and would want? Did we build enough software that you can use this without knowing how to write code?
- Does it run without erroring out the vast majority of the time?
- Is it reasonably fast?
- Can people who haven’t used it before quickly figure out how to do obvious tasks?
The rest of the rubric for a minimum viable product is going to vary quite a bit between the different projects that you might work on. It might be a list of cool features that are or are not present. As long as it’s a short list of easy-to-answer, objective questions, it can probably serve as a rubric.
Rubrics are really good at making a complex, subjective assessment come down to a handful of numbers and yes/no questions that most people can agree on. If you get thirty people in a room and thirty essays about the Stamp Act, and just ask everyone “which essays are good,” you will get a bunch of arguing. If you ask them, “which essays have a works cited page,” you will get a reasonable answer in a reasonable amount of time. Rubrics limit the subjectivity and scope of a decision.
Here are a few expected benefits of using a rubric:
- The review meeting has a definite end state: when all the rubric questions are answered, you know whether to accept the code, or whether the project is done.
- People’s feelings get less hurt. Telling someone their code is terrible can be very discouraging. Telling them their code doesn’t use camelCase, or might throw an exception in a particular situation, is demonstrably true. It’s a good way of making sure the comments are about the code and not about the developer.
- It makes things fair between reviews. Without rubrics, personal bias, conscious or not, can affect people’s performance reviews, and that’s not fair. A rubric-based score depends much less on personal factors.
Rubrics are really a way of splitting a subjective decision into two objective steps: Which criteria constitute success? and Which criteria does this object satisfy? I have often seen meetings drag on because people circle back: they start to answer a question, decide they don’t like the implication of an earlier decision, and begin second-guessing something everyone agreed to ten minutes before. The key to using a rubric is that it can’t be revised while it’s being applied. Write a rubric, then grade the paper (review the code, assess the project, whatever). Then, after the assessment is over, look at the rubric. Did it penalize things that were good, or permit things that were bad? Revise it for next time. But revising a rubric while you’re trying to apply it, or going in without one at all, is a good recipe for a very long meeting.
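The two-step discipline above can be made explicit in code. In this sketch (a toy illustration; the function names are mine, and the questions are the essay questions from earlier), the rubric is frozen as a tuple while it is being applied, and revision happens only afterward, by building a new rubric for next time:

```python
# Step 1, done before the meeting: decide which criteria constitute
# success. A tuple is immutable, which makes the "no revisions while
# the rubric is being applied" rule explicit in the code.
rubric = (
    "Does the essay have a works cited page?",
    "Is the page correctly formatted?",
    "Does the essay have a coherent thesis?",
    "Does it cite evidence?",
)

def assess(rubric, judge):
    """Step 2: walk the frozen rubric once, recording an answer per
    question -- no circling back, no editing the criteria mid-meeting."""
    return {question: judge(question) for question in rubric}

# Hypothetical judge: in a real review, a human answers each question.
results = assess(rubric, judge=lambda question: True)

# Only after the assessment is over do you revise -- by building a
# *new* rubric for next time, never by mutating the one in use.
next_rubric = rubric + ("Is the essay free of plagiarism?",)
```

Keeping the "decide" and "apply" steps in separate artifacts is what stops the meeting from looping back on itself.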
I have seen code reviews and performance reviews and whatnot that use some objective criteria, but I think formalizing those criteria as a rubric is a good idea. Many qualitative assessments can be accomplished by first creating a set of objective questions whose answers indicate quality, and then applying them. Simply knowing that this is the process you’re trying to follow makes it easier to follow.
Have you seen rubrics or rubric-like documents at different kinds of reviews? Are there any glaring omissions in my list? Please leave a comment below.
Till next week, happy learning,