I'm making very sure not to bring anything IRL into here. This is just roleplaying out possible fictional perspectives to assign to "cosmic" forces.
That's fine. My only point in bringing it up was that it isn't some weird bizarro thing for a cosmological force for Good to have lines it absolutely will not cross, even if crossing them would provably lead to the world being better, because there are some things that are simply Not Acceptable, no matter how much good might come of them down the line.
And this works at all levels. It might be the case that if I were to murder a specific set of individuals today, right now, people who have committed no wrongs worthy of commentary, then in a thousand years we would live in an absolute perfect utopia, completely free of all suffering and without any coercion or exploitation.
I still would adamantly refuse to murder those people. "Utopia justifies the means" is an extremely, overwhelmingly *dangerous* argument to make. As soon as you start justifying heinous acts because *eventually* they'll pay off, you have just invited every possible question of "well, what if you do just a *little bit more* evil now, to get a better world *sooner*, or to make that better world *even better*, or to share it with *more* people, or..." You no longer have the ability to just reject those questions as flatly unacceptable behavior; you have to give a reasonable answer as to why *this* evil act, at *this* time, is justified, while *that* evil act at *that* time is unjustified.
Both, really. Most cosmological forces are going to be primarily composed of local (restricted to one nation, planet, plane, etc.) agents and factions. But the big picture drivers are, by necessity, going to look at the big picture. They're concerned with endpoints, where the functions peter out into asymptotes or continue onwards towards infinity.
Okay, but if you're using "local" to mean two different things (mathematical optimization and regional variation), you're going to make swiss cheese of what I said--which is why I balked. I was exclusively using it in the mathematical optimization sense. If one is currently at a (mathematical) local maximum of the perfection-of-the-world function, then by definition you must make the world worse before you can make it better. There are plenty of takes on Good--both cosmological and personal--that refuse to be party to making the world worse. Especially if making the world worse actually results in going negative, making the world actually *evil*, before you can make it more good than it was before.
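The local-maximum point can be made concrete with a toy sketch. This is purely illustrative--the "world-goodness" function and its values are invented for the example--but it shows how an optimizer that refuses to ever make things worse gets stuck at a local peak, because reaching the global peak would require passing through a negative (actually *evil*) valley:

```python
# Toy illustration: a greedy "never make the world worse" rule
# halts at a local maximum of a hypothetical world-goodness function.

def goodness(x: int) -> int:
    """Hypothetical 'perfection of the world' score at state x."""
    # Local maximum at x = 2 (score 5), global maximum at x = 6
    # (score 9), separated by a valley at x = 4 that goes negative.
    scores = {0: 0, 1: 3, 2: 5, 3: 2, 4: -1, 5: 4, 6: 9}
    return scores[x]

def greedy_climb(x: int) -> int:
    """Step to a neighboring state only if it strictly improves goodness."""
    while True:
        neighbors = [n for n in (x - 1, x + 1) if 0 <= n <= 6]
        best = max(neighbors, key=goodness)
        if goodness(best) <= goodness(x):
            return x  # refuses to make the world worse, so it stops here
        x = best

print(greedy_climb(0))            # halts at the local maximum, x = 2
print(goodness(greedy_climb(0)))  # score 5, never the global peak's 9
```

Any path from the local peak at 2 to the global peak at 6 must pass through the valley at 4, where the score is negative--exactly the "make the world actually evil first" scenario a worse-averse Good refuses to accept.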
Again: "utopia justifies the means" is an *incredibly* dangerous position. It invites many of the worst impulses a sapient being can have, all while sincerely believing that following those impulses is *good* for the ~~victims~~ beneficiaries of that "compassion."
It's essential to mine.
I think it's quite possible, certainly within fantasy fiction, to have a force that is recognizably "Good" that doesn't feature the elevation of agency as its primary metric of what "Good" is.
I disagree, about as strongly as it is possible to disagree, with your dismissal of moral agency as the critical differentiator (but more on this in a moment). In the absence of agency, choice is irrelevant. Hence, to choose to do good in the absence of agency means nothing. A robot (for example) catching a person before they fall off a building has saved a life, but it has done so purely because it is following the programming inserted into it. We do not say that that robot is *morally upstanding* because it did the one and only thing its programming permits. Likewise, while we might praise a dog that helps rescue people who are stuck in the snow, its extremely minimal individual agency limits its ability to actually be good or evil. It's not just the absence of agency in general, it's the absence of *sufficient* agency.
However, rereading what you've said here, it looks like you're stating that "elevation of agency" *defines* Good. That is not the case. That would be like saying that being liquid *defines*, say, Coke. Being liquid is certainly a necessary condition for a substance to be Coca-Cola, but it is definitely not a sufficient condition. Likewise, it is necessary for anything worthy of the label of "Good" to prioritize agency, because in the absence of agency, a person is identical to the robot example I gave above: an automaton carrying out programming without moral merit. What actually *defines* Good is which actions the entity/force/etc. actually encourages (or discourages).
Heck, the whole point of "muscular neutrality" could be the protection of agency and free will, even when that allowance causes widespread suffering, conflict, and destruction.
And I would argue that any setting which has done that is a setting where "Good" has been watered down into either merely "Lawful" or into some insipid caricature, usually by making its members incapable of moral choice (they're preprogrammed robots) or too stupid to understand that what they think is beneficial is actually very, very detrimental. The difference between the two--merely Lawful vs. insipid caricature--is often whether the so-called "Good" beings/entities/forces/etc. are *aware* that their actions will cause the harm that the "muscular" Neutrals wish to avoid. If they know and understand it and pursue their goals anyway, they were never Good in the first place; they were just Lawful in a funny hat. If they don't know and cannot be made to know, then either they refuse to learn, and are thus idiots, or they are genuinely *incapable* of learning, and are thus automata. The automaton isn't stupid, but it lacks agency. The idiot has agency, but is too stupid to actually use it.
Essentially, in order to have the "muscular" Neutrals be truly, genuinely reasonable, they have to actually be *right* about the "balance" they protect. If their balance is illusory or ineffable, something they pursue as an article of faith because it is functionally beyond proof, then the "muscular" Neutral lacks any actual moral argument; they do crazy things for crazy reasons. But as soon as you admit that the "muscular" Neutral is actually *right* about existence, Good (and many forms of Evil) must become either too rigid or too stupid to understand that their actions will cause harm to the very beings they wish to aid and protect.