... but a lot of the hate comes across to me as insincere.
Yeah, one of the best ways to dismiss the concerns of others is to decide they are insincere.
... but a lot of the hate comes across to me as insincere.
Yeah, that makes sense. Maybe if we had more positive news stories about how it has helped astrophysics/astronomy by analysing the vast amounts of data to find more phenomena in space, or about how it is used to help in medicine, we'd have people with a more balanced view of it.
Yeah, one of the best ways to dismiss the concerns of others is to decide they are insincere.
If that's how it comes across then that's how it comes across. Like when someone uses the term AI slop, it comes across to me as them parroting the anti-AI buzzwords because they feel like they shouldn't like AI. If they instead say something like "I have concerns over AI, and here's why..." I'm much more likely to listen; but yeah, the AI slop crowd will continue to come across to me as insincere.
It’s a debatable use. There have been some very high-profile errors/problems in documents released by a certain massive government recently which were almost certainly caused by running the docs through an LLM to accomplish certain editing tasks. Human editors/proofreaders can make plenty of errors too, of course, but they have a context awareness and an ability to draw inferences across large documents that even the best current models don’t (and some of the latest releases have regressed in certain areas of large-context handling).
As someone in statistics/machine learning, one of the things that annoys me about this is how "AI" has become synonymous with "a massive data center using an LLM trained on everything online without regard to copyright". I agree that there is a big difference between that and a custom deep neural net designed for astronomy or tumor scanning, or an artist who trained something on their own works. I kind of wonder how that affects the poll results (and what they reflect about consumers).
Given that this is about AI content in RPG products, I would expect most of that to come from LLMs rather than something trained on tumor scanning, and the results reflect that.
If that's how it comes across then that's how it comes across. Like when someone uses the term AI slop, it comes across to me as them parroting the anti-AI buzzwords because they feel like they shouldn't like AI. If they instead say something like "I have concerns over AI, and here's why..." I'm much more likely to listen; but yeah, the AI slop crowd will continue to come across to me as insincere.
It's shorthand, rather than having to type all that out over and over again. Forcing people to re-explain their premise every time they speak is a great way to silence people. I'm sure you know what the general concerns over AI are; you don't need them reiterated in every post.