AI/LLMs AI art bans are going to ruin small 3rd party creators

Let's compare like with like, though. If I'm not using ChatGPT, then I'm Googling in order to visit random websites for my information, which isn't particularly great for finding accurate, specific information about a topic; generally it directs you to Reddit or the like anyway. Until proven otherwise, the information from ChatGPT is probably just as reliable, if not more so, than a Google search plus a random site or Reddit subthread, and 10x to 100x faster than navigating any site on my own to find the information I actually want.
If I want accurate information on something I care about, I'm absolutely checking multiple sites and doing my best to verify their authenticity. That type of effort has always been necessary when conducting research.

If I don't care about accuracy, I'm not likely to care enough to search in the first place, I'll just invent my own answer.

I would posit that learning to look for signs of accuracy and reliability in a source is an extremely valuable skill, and relying on an LLM instead of developing that skill is probably not a good long term strategy.
 



If I want accurate information on something I care about, I'm absolutely checking multiple sites and doing my best to verify their authenticity. That type of effort has always been necessary when conducting research.

If I don't care about accuracy, I'm not likely to care enough to search in the first place, I'll just invent my own answer.

I would posit that learning to look for signs of accuracy and reliability in a source is an extremely valuable skill, and relying on an LLM instead of developing that skill is probably not a good long term strategy.
Right, because there are no strategies for evaluating the accuracy of information an LLM gives... ;)
 

Right, because there are no strategies for evaluating the accuracy of information an LLM gives... ;)
Those processes generally rely on an existing understanding of the material and/or an understanding of traditional research methods. [Edit: The ship may end up being righted, or the problem may not be as large as it seems to me, but at the moment there do seem to be increasing numbers of people relying on LLMs as a first and last port of call, in lieu of developing those skills.]

And if we're talking about Google AI summaries, the best method is generally going to involve ... checking the actual links.
 


I usually ask it for sources with links.
Certainly that's an option.

I notice you can spot a lot that's just a bit off by applying a bit of critical thinking to whatever 'facts/analysis' it is telling you. But again, it really depends on what you are going to use the information for. Not all information in all contexts needs to be as accurate as possible, which to my computer-science brain indicates the immediate tradeoff, accuracy vs. time, though that tradeoff has a decent chance of fully evaporating over time.
 

Certainly that's an option.

I notice you can spot a lot that's just a bit off by applying a bit of critical thinking to whatever 'facts/analysis' it is telling you. But again, it really depends on what you are going to use the information for. Not all information in all contexts needs to be as accurate as possible, which to my computer-science brain indicates the immediate tradeoff, accuracy vs. time, though that tradeoff has a decent chance of fully evaporating over time.

I've found it really useful for home surgery. I took out my appendix following instructions from ChatGPT.

(I tried Claude first, but it said something about mass surveillance and wouldn't let me...)
 
