So, that's exactly the issue I was trying to raise. When an AI calls itself "MechaH****r", we can easily see it and not take it seriously. When they go off the deep end, they are less dangerous.
It's when they are biased but don't go off the deep end that their bias can influence you most. The dangerous moment is when an AI says something problematic but doesn't sound crazy.
Also, "off the deep end" is a relative measure: it depends on the Overton window of the community of users in question, just as it does for "news" organizations.