Where Report Card Ranking Methods Fail

FrogReaver
"A lot of popular subclass rankings look analytical on the surface, but the method behind them is fundamentally flawed."
-Some Guy


What are Report Card Rankings?
Essentially, a class/subclass/build is scored separately in various 'subjects' from a set list, much like a report card. The final score is then typically an average of those subject grades, possibly with a slight nudge up or down for intangibles the grades don't capture (a minimal sketch of the computation follows the example below).

Example:
  • Damage
  • Survivability
  • Control
  • Support
  • Utility
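
For concreteness, here's a minimal sketch of that computation in Python. The subclass names and grades are invented purely for illustration, not real assessments:

# A minimal sketch of a report card ranking (all names and grades
# invented): grade each subclass 1-5 per category, then average.
CATEGORIES = ["Damage", "Survivability", "Control", "Support", "Utility"]

grades = {
    "Evoker":   {"Damage": 5, "Survivability": 2, "Control": 3, "Support": 2, "Utility": 4},
    "Champion": {"Damage": 5, "Survivability": 5, "Control": 1, "Support": 1, "Utility": 1},
}

def report_card_score(subclass):
    # Equal-weight average of the category grades.
    return sum(grades[subclass][c] for c in CATEGORIES) / len(CATEGORIES)

for name in grades:
    print(name, report_card_score(name))  # Evoker 3.2, Champion 2.6
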
Seems analytical to me. What's the problem?
Report card ranking methods assume each category represents an independent, equally important dimension of performance - which is where the system breaks down. The fundamental problem is that these categories are not independent. When categories overlap, scoring them separately double counts the same resources and inflates subclasses that can't actually express all those strengths at once. The action economy, resource economy, and concentration economy each cut across multiple categories. It's essentially the "Wizards can do everything, but not all at once and not all day" issue repeated across every category.
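
To make the double counting concrete, here's a toy sketch (invented grades): if a caster's Control grade and Support grade each come from a concentration spell, the average credits both at once, even though only one can be running in any given fight.

# Toy illustration of double counting (invented grades). Suppose a
# caster earns a 5 in Control from one concentration spell and a 5 in
# Support from another. A report card credits both simultaneously:
control, support = 5, 5
report_card_credit = control + support    # 10: both counted at once

# Concentration gates them: only one can be active in a given fight,
# so the strength actually expressed is the better of the two.
expressed_credit = max(control, support)  # 5: what the table sees

print(report_card_credit, expressed_credit)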

There are also some ancillary issues.
  • How should each category be weighted? Most real-world implementations weight everything equally because it has a semblance of fairness, but equal weighting is most likely the least correct choice (see the sketch after this list)
  • Which categories should be used in the first place? Different category lists can yield different final rankings
  • D&D typically rewards specialization, while averaged-out report card rankings incentivize jack-of-all-trades characters
  • These rankings generally overvalue abilities fueled by flexible resources (this ties into the fundamental problem above)
  • Niche powers that rarely matter get scored as if they matter often
  • They often ignore encounter frequency and applicability (though sometimes this is factored into the category grades)
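
Here's the sketch promised in the weighting bullet above, reusing the invented grades from earlier: two defensible weighting schemes produce opposite rankings for the same two subclasses.

# Two defensible weighting schemes flip the ranking over the same
# invented grades from the earlier sketch.
grades = {
    "Evoker":   {"Damage": 5, "Survivability": 2, "Control": 3, "Support": 2, "Utility": 4},
    "Champion": {"Damage": 5, "Survivability": 5, "Control": 1, "Support": 1, "Utility": 1},
}

weightings = {
    "equal":        {"Damage": 1, "Survivability": 1, "Control": 1, "Support": 1, "Utility": 1},
    "combat-heavy": {"Damage": 3, "Survivability": 3, "Control": 1, "Support": 1, "Utility": 1},
}

def weighted_score(subclass, weights):
    # Weighted average of the category grades.
    return sum(grades[subclass][c] * w for c, w in weights.items()) / sum(weights.values())

for name, weights in weightings.items():
    ranking = sorted(grades, key=lambda s: weighted_score(s, weights), reverse=True)
    print(name, ranking)  # equal -> Evoker first; combat-heavy -> Champion first
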
Because of these issues, report card rankings often misrepresent real performance at the table. Any fair evaluation system needs to account for action economy overlap, resource gating, and encounter applicability - not treat them as separate subjects.
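
As one rough illustration of the encounter-applicability point (all grades and frequencies below are invented): discount each category grade by how often that category actually comes up at the table, rather than scoring every category as if it were always relevant.

# Rough sketch of applicability discounting (all numbers invented):
# weight each grade by the fraction of encounters where the category
# matters. A "toolbox" build whose best grades sit in rarely-relevant
# categories drops once the discount is applied.
toolbox = {"Damage": 2, "Survivability": 2, "Control": 2, "Support": 4, "Utility": 5}
applicability = {"Damage": 0.9, "Survivability": 0.7, "Control": 0.6,
                 "Support": 0.5, "Utility": 0.2}

raw_average = sum(toolbox.values()) / len(toolbox)
adjusted = (sum(toolbox[c] * applicability[c] for c in toolbox)
            / sum(applicability.values()))

print(round(raw_average, 2), round(adjusted, 2))  # 3.0 vs ~2.55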

But FrogReaver, Report Card Rankings are fun.
Yes they are, and by all means keep having fun with them. I enjoy reading through such lists as much as the next guy. I'm just here to point out that there are fundamental issues with taking them as gospel.

What's an objectively better ranking method?
Well, I'm not sure, but I am open to suggestions. I do know that if we want rankings that reflect real play, we need systems built around actual decision making and action economy constraints, not just school-style subject lists.

*Note: this is centered on 5.5e D&D; however, I believe it's broadly applicable to many other versions and many similar games.
 