I found AI price tracking very helpful for figuring out what Black Friday sales were actually good deals.
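Roughly, what it automated for me is the boring check of whether a "sale" price actually beats what the item normally goes for. A minimal sketch of that check, assuming you already have the item's price history from somewhere; every name and number here is hypothetical:

    import statistics

    def is_real_deal(sale_price, price_history, min_discount=0.10):
        # A 'sale' only counts if it beats the item's typical price
        # by a meaningful margin, not just the inflated list price.
        typical = statistics.median(price_history)  # robust to brief spikes
        return sale_price <= typical * (1 - min_discount)

    # Hypothetical numbers: $79 on an item that usually sells around
    # $86 is not a real deal; $79 on one that usually sells around
    # $120 is.
    print(is_real_deal(79.0, [85, 89, 84, 86, 90]))   # False
    print(is_real_deal(79.0, [119, 125, 118, 122]))   # True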
And that's a function I wouldn't trust LLMs with; in fact, I wouldn't trust an LLM with most things outside of "creative" writing. Why? Because I don't know the actual answer. Never ask an LLM a question you don't already know the answer to, so that you can check the actual output.
Guess we need to define slop.
A now-infamous AI-written guide to edible mushrooms has resulted in actual harm to actual people. That, in my opinion, is slop.
For me, sludge presents as poorly written and poorly reasoned.
So, if a book actually harms people, it's considered "slop"? Then the Bible should be considered "slop", along with most religious texts? What about a military report that results in action that harms people?
Or should we talk about human-made products that have been recalled because they harmed actual people? The list ranges from all kinds of cars to battery-powered gadgets to food items. LLM products that have harmed humans pale in comparison. And it was actual humans who came up with the scheme of only recalling a dangerous product when the recall costs are lower than the potential lawsuit costs...
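That scheme is plain expected-value arithmetic, the logic made infamous by cases like the Ford Pinto memo. A minimal sketch, with all numbers hypothetical:

    def recall_is_cheaper(units, fix_cost_per_unit,
                          expected_incidents, payout_per_incident):
        # The cynical rule: recall only when fixing every unit costs
        # less than the expected payouts for doing nothing.
        recall_cost = units * fix_cost_per_unit
        lawsuit_cost = expected_incidents * payout_per_incident
        return recall_cost < lawsuit_cost

    # Hypothetical numbers: 1M units at $11 per fix ($11M) vs. an
    # expected 30 incidents at $200k each ($6M) -> no recall.
    print(recall_is_cheaper(1_000_000, 11.0, 30, 200_000))  # False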
People writing books that are inaccurate or straight-up untrue are legion; they still outnumber LLM "slop" by a wide margin. When people's knowledge isn't correct, when their beliefs differ, when they disregard other sources, when they make mistakes, and no one knowledgeable about the subject matter checks the work thoroughly, you get human-made problems. How many RPG/game books have issues with writing, editing, rules consistency, readability, etc.?
Heck, people complain about writing that's too verbose. Have you ever read a rulebook by Games Workshop from a decade ago? For three decades and more, they wrote two paragraphs where one sentence would suffice. GW is now valued at 72.5% of Hasbro (WotC) as a company... Do you remember the discussions about stealth, hiding, and invisibility when the D&D 5e 2024 PHB came out? That was not because it was well written, consistently checked, and very clear. And let's not even go near all the editions Shadowrun has...
The problem with LLMs isn't the LLM; it's the people using it. When a typewriter was used to create all kinds of bad stuff, did we blame the typewriter or the person using it? Imho the same goes for LLMs: use one to make a bad product and it's a bad product. Use one to make a great product and it's still a great product, no matter what tools were used.