ChatGPT lies then gaslights reporter with fake transcript

How does that thread prove that the 'novels' on Amazon are not slop?

It doesn't, but it's an illustration of a use case where AI output is often called slop when it is, in fact, "consumer-level".

"Consumer-level" is your words, not mine. The slop I am trying to tell you about is not "consumer-level", as you'd realise if you looked at it.

I haven't bought the whole Amazon inventory, that's true. I have read fan-fiction that was AI-generated (or, as the author put it, AI-enhanced) and it was bad, really. And I have spent some time on fanfiction websites, with a lot of content being romance fanfiction between Hermione and Draco (at the time when HP was at its height). It was bad, really. Yet, those websites thrived, so they must be attractive to some readers. Probably not discerning readers, I am totally ready to accept that.
 


Does it get them right? As far as I have seen, it can't do math, not the kind of math I can do. Which, to be fair, most everybody can't; that is why engineers get hired.

It often gets the mathematical statistics and calculus right... and often needs to be prompted about what it messed up (it really doesn't like some regions of multiple integration...).

It will sometimes do a really nice data analysis where it shunts the stuff off to python and does something reasonable... and often makes up fake data and/or does kind of strange things.

(The range of scores it gets when I try it on the homework assignments I give is all over the place. Probably more good scores than bad, but nothing I would trust in general.)
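For what it's worth, the kind of check I end up doing is a quick numerical sanity test on whatever integral the model claims to have evaluated. A minimal sketch, assuming scipy is available; the integrand, the region, and the "claimed answer" below are made-up illustrations, not anything from an actual assignment:

```python
# Minimal sketch: numerically cross-checking a claimed double-integral value.
# The integrand, region, and claimed answer are made-up examples.
from scipy import integrate

def integrand(y, x):
    # f(x, y) = x * y over the region 0 <= x <= 1, 0 <= y <= x
    return x * y

# dblquad integrates the inner variable (y) first, then the outer (x):
# int_0^1 int_0^x x*y dy dx = 1/8
numeric, abs_err = integrate.dblquad(integrand, 0, 1, lambda x: 0.0, lambda x: x)

claimed = 1 / 8  # whatever value the model reported

if abs(numeric - claimed) > max(10 * abs_err, 1e-9):
    print(f"Mismatch: numeric {numeric:.6f} vs claimed {claimed:.6f}")
else:
    print(f"Consistent: {numeric:.6f} (estimated error {abs_err:.1e})")
```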
 

So, do we blame the tool for existing or do we blame the person who used the tool? Or are both at fault?
Excellent question. Who do we 'blame'? Well, the tool, as folks endlessly point out, has no consciousness.

But people --

1. Train the tools on stolen material.
2. Choose to use them despite their environmental effects.
3. Rely on them despite their unreliability.

So yes, you blame the person if they used a tool in an unethical way, or if they used unethical processes to create the tool.

Now, if you can take the ethics out of it, and train the tools ethically, and use them ethically, then sure, no blame. But that ain't what's happening right now.
 



2. Choose to use them despite their environmental effects.

A lot of universities in the US (at least until recently) prized various green certifications and have also started buying chat-LLM access for all of their faculty and students. I hope the people who give out the former start paying attention to use of the latter.
 

It often gets the mathematical statistics and calculus right... and often needs to be prompted about what it messed up (it really doesn't like some regions of multiple integration...).

It will sometimes do a really nice data analysis where it shunts the stuff off to python and does something reasonable... and often makes up fake data and/or does kind of strange things.

(The range of scores it gets when I try it on the homework assignments I give is all over the place. Probably more good scores than bad, but nothing I would trust in general.)
I find it just retrieves aggregate data, which is not very valuable, because that then has to be checked.

Recently someone sent me a picture of a sheared connection and asked if it was OK, because the AI said it was. I replied: absolutely not, all failed connections have to be repaired, without question. Enabling someone's laziness? I don't know, though that exchange cost $300 at a minimum. Someone is going to bite it on this stuff, and for sure it isn't coming out of my fees.
 

It really doesn't.


Where she shows it cited a non-existent paper of hers.
I don't see anything about the "deep research" tool in that post or the linked article. Am I missing something? Have you used it?

I'm an associate editor for a journal and recently had to send a note to the editor when something submitted had several non-existent papers in the reference section...

I think news stories of lawyers using it and putting fake cases in their court filings have been mentioned in some other threads on here.
Likewise, how does this relate to my claim "deep research leads to fewer (!= zero) hallucinations and more easily verified references"?
 

I don't see anything about the "deep research" tool in that post or the linked article. Am I missing something? Have you used it?
As far as I have seen, that is the result of AI searches. You would not want someone like me, who you depend on for your safety, using AI. Have some huge metal beam fall on you, or plunge to your death in an elevator. Though, granted, these things will happen in the future; I know there has been a big discussion of what happens when the older guys (me) retire. Entropy eventually gets everything, though.
 

As far as I have seen, that is the result of AI searches. You would not want someone like me, who you depend on for your safety, using AI. Have some huge metal beam fall on you, or plunge to your death in an elevator. Though, granted, these things will happen in the future; I know there has been a big discussion of what happens when the older guys (me) retire. Entropy eventually gets everything, though.
The point of my reference to deep research is that it isn't the result of all searches. LLMs are most effective when they compile and summarize references and include links to those references.

Most of the criticisms of LLMs in this thread seem to be based on outdated or improper use of LLMs. Now, fair enough, there are issues with improper use and some people will use them in foolish ways. But I'm wondering if the criticism holds up to intelligent use as well.
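To make "easily verified references" concrete: when the compiled references include DOIs, you can spot-check them against the public Crossref API before trusting the list. A rough sketch, assuming the requests library; the DOIs below are placeholders for illustration, not real citations from any post here:

```python
# Minimal sketch: spot-checking DOIs from an LLM-compiled reference list
# against the public Crossref API. The DOIs below are placeholders.
import requests

dois_to_check = [
    "10.1000/example.doi.1",
    "10.1000/example.doi.2",
]

for doi in dois_to_check:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 200:
        # Crossref wraps the record in a "message" object; title is a list.
        title = (resp.json()["message"].get("title") or ["<no title>"])[0]
        print(f"FOUND   {doi}: {title}")
    else:
        print(f"MISSING {doi}: HTTP {resp.status_code} (possibly hallucinated)")
```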
 
