With respect, we do - LLMs don't "know" anything in the sense you or I do.
An LLM is pretty much predictive text writ large. Your autocorrect doesn't understand ethics or role-playing, and neither does an LLM. It just produces whatever output would, based on prior examples, best match or fulfil the prompt.
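To make the "predictive text writ large" point concrete, here's a deliberately tiny sketch of the same core idea: count which token tends to follow which, then always emit the statistically likely continuation. (Real LLMs use learned neural networks over huge contexts, not bigram counts, but the objective is still next-token prediction. The corpus and function names here are made up for illustration.)

```python
from collections import Counter, defaultdict

# Toy "training data" -- just a handful of tokens.
corpus = "the cat sat on the mat the cat sat on the rug the cat chased a dog".split()

# Count bigram frequencies: next_word_counts["the"] -> Counter({"cat": 3, ...})
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the training data.

    No understanding of cats or mats is involved -- only frequency counts.
    """
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", because "cat" follows "the" most often above
```

Nothing in that function "knows" what a cat is; it only reflects regularities in its training data. An LLM's mechanism is vastly more sophisticated, but the argument above is that it's different in degree, not in kind.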