As someone interested in game mechanics and theory, it's nice to have data to support or debunk assertions about what is or is not bad/mediocre/good/busted in games; it helps me set expectations as a DM and know what I should strive for, and where I should draw the line as a player.
And make no mistake, establishing a real baseline is a good thing, on both sides of the "screen". I remember sitting down in a Pathfinder 1e game with a new DM, and I played a Rogue. In the first combat, he immediately started calling foul, saying my damage was way too high.
Now at the time, I laughed (and I really shouldn't have, mea culpa), because Pathfinder 1e Rogues had, at least in online discourse, a pretty shabby reputation as damage dealers. But again, expectations.
The DMG doesn't have a chart telling you what to do, if anything, "if character X deals Y damage in a round of combat". You just have to play it by ear. The CR system is useless to a lot of people because it's a vague number based on assumptions that may or may not be true. 2014 5e has long had DMs saying that they need to run Deadly+ encounters for their groups to face any serious challenge, who then immediately discard the testimony of other DMs who claim to see TPKs all the time. Different groups simply play differently, whether because of player skill, choices made, party composition, or the like.
For example, in 2014, the "-5 to hit/+10 damage" Feats were seen as busted by many DMs, but many players equally found them necessary to maintain good damage against enemies as they rose in level. This reminded me exactly of Power Attack discussions in 3.x and Pathfinder 1e, where people would present charts saying "if AC is x, then use y Power Attack for best results." But you don't always know an opponent's AC. Maybe your DM tells you, maybe they don't. Maybe you have savvy players who pay attention and can suss out the proper AC. Maybe you don't.
It's not unbelievable for a player to decide to gamble on big damage and, due to unlucky rolls and a high AC, simply whiff on their turn for no effect, then decide their Feat is "useless". It's just as likely you find yourself in a party where you can gain easy advantage and have things like Bless, Inspiration dice, Battlemaster maneuvers, and what have you to mitigate the penalties, resulting in very skewed data.
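To make that concrete, here's a minimal sketch in Python of how both the AC you face and advantage swing the "-5/+10" math. Every number in it (the +7 attack bonus, the 2d6+4 weapon averaging 11 damage) is my own illustrative assumption, not something from a rulebook, and crits are ignored for simplicity.

```python
# Illustrative sketch: expected damage per attack, with and without a
# "-5 to hit / +10 damage" Feat, and with or without advantage.
# Attack bonus (+7) and base damage (2d6+4, avg 11) are assumed numbers.

def hit_chance(attack_bonus, ac, advantage=False):
    """Chance to hit on a d20; a natural 1 always misses, a natural 20 always hits."""
    p = min(max((21 - (ac - attack_bonus)) / 20, 0.05), 0.95)
    return 1 - (1 - p) ** 2 if advantage else p

def expected_damage(attack_bonus, ac, avg_damage, advantage=False):
    return hit_chance(attack_bonus, ac, advantage) * avg_damage

for ac in (13, 16, 19):
    normal = expected_damage(7, ac, 11)
    feat = expected_damage(7 - 5, ac, 11 + 10)
    feat_adv = expected_damage(7 - 5, ac, 11 + 10, advantage=True)
    print(ac, round(normal, 2), round(feat, 2), round(feat_adv, 2))
```

Against low AC the Feat wins cleanly; against high AC it loses, unless advantage props the hit chance back up. That gap is exactly why tables that can generate advantage reliably report such different experiences with these Feats.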
Effectively speaking, baseline data is useful, but deciding on a baseline is the problem. Your data is only good for a hypothetical group facing enemies you chose, in a white room.
And as some in this thread have pointed out, there are other factors in a real game that determine effectiveness. The other two pillars are part of a game, for one, and it doesn't matter that you can pull 100 DPR under optimal conditions if you can't get to the battle, or your group routinely bypasses combats.
But more importantly, it doesn't take much to throw any numbers off. Difficult terrain, hazards, traps, enemies using control/debuff effects, and suddenly everything is out of alignment.
While I appreciate people who are willing to compute and calculate data, a few things are necessary for that data to mean anything.
Most importantly, please share your methodology. What circumstances are being assumed? How do you presume characters will be built? What are they doing on their turn?
For example, when calculating damage per round or per combat: if you assume a Barbarian who is always raging, can always reach enemies, always uses a greataxe, always uses Reckless Attack, and always improves their Strength at every opportunity, and whose starting scores come from point buy or rolled dice, and you list the relevant species and Feats, then please share all of that, so we know what to expect if any of those aren't true. That way, someone who sees that your assumptions are in no way reflective of their group can discount them.
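As an example of writing those assumptions down explicitly, here's a hedged DPR sketch. Every value is mine and purely illustrative (level 5, Strength 18, raging, greataxe, Reckless Attack on every swing, two attacks per turn), and crits are deliberately simplified away:

```python
# Illustrative Barbarian DPR under stated assumptions:
# level 5, Strength 18 (+4), +3 proficiency, rage (+2 damage),
# greataxe (1d12, avg 6.5), Reckless Attack (advantage), 2 attacks/turn.
# Crits are ignored to keep the sketch short.

ATTACK_BONUS = 7        # +4 Str, +3 proficiency
RAGE_BONUS = 2
GREATAXE_AVG = 6.5      # average of 1d12
ATTACKS = 2

def hit_chance(ac, advantage=False):
    p = min(max((21 - (ac - ATTACK_BONUS)) / 20, 0.05), 0.95)
    return 1 - (1 - p) ** 2 if advantage else p

def dpr(ac):
    per_hit = GREATAXE_AVG + 4 + RAGE_BONUS  # weapon + Str mod + rage
    return ATTACKS * hit_chance(ac, advantage=True) * per_hit

print(round(dpr(15), 2))
```

Change any one assumption (no rage, no advantage, a different weapon) and the number moves, which is why the assumptions are the data.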
For example, many Rogues use two weapon fighting to maximize their chances to deliver a Sneak Attack (and in 5.5, probably with one weapon having Vex). However, it's quite possible to Sneak Attack at range, which is typically safer. If your methodology assumes dual wielding, a DM with a ranged Rogue knows that your data isn't very useful to them.
Alternatively, a third DM, whose play group likes Rogues with Warlock dips to use Devil's Sight and attack from areas heavily obscured by a lack of lighting, is going to have very different results at their table.
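Those three table styles can be compared with a small sketch. The 0.6 hit chance is an assumed number; the point is only how the chance of landing at least one hit (and thus a Sneak Attack) shifts with the attack pattern:

```python
# Illustrative chance of landing at least one Sneak Attack per turn
# under three table styles. Hit chance p = 0.6 is an assumption.

def twf(p):
    # Two attacks (main hand + bonus-action offhand), need at least one hit.
    return 1 - (1 - p) ** 2

def ranged(p):
    # One ranged attack, no advantage.
    return p

def devils_sight(p):
    # One attack with advantage (unseen attacker in darkness); advantage
    # also satisfies Sneak Attack's trigger without needing an ally.
    return 1 - (1 - p) ** 2

p = 0.6
print(round(twf(p), 2), round(ranged(p), 2), round(devils_sight(p), 2))
```

Here the dual wielder and the unseen attacker land a Sneak Attack equally often, but the resources behind those numbers (a bonus action, the lighting, a Warlock dip) are entirely different, which is the point about methodology.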
Similarly, when calculating defenses, what AC do you presume? Do you presume certain armors? Shields? Reaction defenses? Damage resistances?
In the case of the Rogue, they can halve the damage of just about any attack with their reaction, but that only works once per round. So how many attacks are we assuming are aimed at the Rogue each turn?
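A quick sketch of why that count matters. The hit chance and damage are assumed values; the model halves the first hit each round to represent the single available reaction:

```python
# Illustrative expected damage taken per round by a Rogue whose reaction
# halves the damage of one hit per round. Hit chance (0.6) and average
# damage per hit (10) are assumed numbers.

def expected_incoming(attacks, hit_chance, avg_damage):
    """Expected damage per round, with the first hit halved by the reaction."""
    # Chance that at least one attack hits (that hit gets halved).
    p_any_hit = 1 - (1 - hit_chance) ** attacks
    raw = attacks * hit_chance * avg_damage
    return raw - p_any_hit * avg_damage / 2

for n in (1, 2, 4):
    print(n, round(expected_incoming(n, 0.6, 10), 2))
```

Against a single attacker the reaction cuts expected damage dramatically; against four, it's a rounding error. Any defensive calculation that doesn't state how many attacks target the Rogue is hiding that swing.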
Are Monks constantly using their bonus action Dodge, or are they using their Flurry?
Now, I admit and understand that no one can account for all of these factors, and there will always be some variable you cannot control for in a game as complex as a TTRPG, one designed to let you build wildly different characters. But if we can at least see what sort of game you're assuming is being played, then we can actually begin to see whether we can derive value from your work, and your valuable time.