ChatGPT lies then gaslights reporter with fake transcript

The point of my reference to deep research is that it isn't what every search produces. LLMs are most effective when they compile and summarize references and include links to those references.

Most of the criticisms of LLMs in this thread seem to be based on an outdated or improper use of LLMs. Now, fair enough: there are issues with improper use, and some people will use them in foolish ways. But I'm wondering whether the criticism holds up against intelligent use as well.

One area for improvement would certainly be for websites that provide access to an LLM to help their users use it intelligently. They warn against trusting the results, which is a good basic step, but teaching users how to use the tools would also serve their customer base (for example, how to use the prompt box as something other than a "type your random question here" window).
 


The point of my reference to deep research is that it isn't what every search produces. LLMs are most effective when they compile and summarize references and include links to those references.

Most of the criticisms of LLMs in this thread seem to be based on an outdated or improper use of LLMs. Now, fair enough: there are issues with improper use, and some people will use them in foolish ways. But I'm wondering whether the criticism holds up against intelligent use as well.
All of that is a question of trust. I can still do all this stuff by hand with paper and pencil. If you trust that other stuff (I don't), good luck.

Improper vs. proper use of something is, like in the film Idiocracy, moot unless you can tell the difference, and that dovetails back into the trust issue. If you have to have someone who knows anyway, and the AI is spurious, get rid of it. Yes, peddlers are always trying to sell something to make a dollar; common sense says this is just another of those things.
 

If you have to have someone who knows anyway, and the AI is spurious, get rid of it.
The ideal use case is one where the user can verify the output, but producing it would take longer than verifying it. I posted an example earlier of Terence Tao using an LLM to speed up a search for numerical examples. This is also the case with "deep research": I could find the sources manually in a library, but it would take me much longer. I could also use another aid like Google Scholar, but that isn't quite as good.

All of this still requires that the user read the citations or check the math. Counterexamples where the user didn't do so show that improper use is an issue, but they don't demonstrate that LLMs in general lack value.
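
To make the "cheap to verify, expensive to produce" pattern concrete, here is a minimal Python sketch. The `suggested` list is a hypothetical stand-in for candidates an untrusted source (such as an LLM) might propose when asked to hunt for numerical examples; the deterministic check, not the source, carries all of the trust.

```python
# Minimal sketch of the "verify, don't trust" workflow: an untrusted
# source proposes candidates, and a deterministic check accepts or
# rejects each one. Nothing is taken on faith.

def is_prime(n: int) -> bool:
    """Deterministic trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Hypothetical suggestions from an untrusted source, e.g. an LLM asked
# for primes p where 2**p - 1 might be composite.
suggested = [2, 3, 5, 7, 11, 13]

for p in suggested:
    if not is_prime(p):
        continue  # reject malformed suggestions outright
    mersenne = 2**p - 1
    if not is_prime(mersenne):
        print(f"Verified counterexample: 2**{p} - 1 = {mersenne} is composite")
```

Run as written, it flags p = 11, since 2**11 - 1 = 2047 = 23 × 89. Even if every suggestion were garbage, the verifier would simply reject them all; the only cost of a bad source is wasted time, never a wrong answer.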
 

All of this still requires that the user read the citations or check the math. Counterexamples where the user didn't do so show that improper use is an issue, but they don't demonstrate that LLMs in general lack value.

Like a car stalling when pulling away: while it's valid to point out that it happens, there are ways to avoid it by applying the correct technique when driving. It doesn't necessarily mean that cars lack value. They would lack value only if you spent so much time stalling that it would be more efficient to walk to your destination. Not necessarily an impossible outcome, but not one that can be assessed by looking at a few examples of cars stalling.
 

He honestly thinks he did nothing wrong and still is blaming ChatGPT.
I think this is the important aspect of the problem with "AI."

That is not actually directly attributable to ChatGPT, but rather to the attitude companies and engineers encourage people to have about using their 'data,' about the benefit it will produce, and what the regulatory environment says is permissible about attributing reported "speech."

Some people think the attitudes listeners should take regarding the truth of what they hear, and the trust they place in it, should be kept separate from one another. Perhaps, but I think it's obvious that while evaluating the truth-value and trust-value of "speech" can be separated in theory, in practice not so much.

Most defenders of the current state of affairs in AI are upset when people say these systems are untrustworthy, and so it is probably better if the systems' statements fall on deaf ears. I guess it's sad for them that what the systems say isn't worth listening to because it cannot be trusted, but I still don't have any reason to care about or pay attention to what they "say."
 

The ideal use case is one where the user can verify the output, but producing it would take longer than verifying it. I posted an example earlier of Terence Tao using an LLM to speed up a search for numerical examples. This is also the case with "deep research": I could find the sources manually in a library, but it would take me much longer. I could also use another aid like Google Scholar, but that isn't quite as good.

All of this still requires that the user read the citations or check the math. Counterexamples where the user didn't do so show that improper use is an issue, but they don't demonstrate that LLMs in general lack value.
Relationships depend on trust; once the trust is broken, the relationship goes. If things had been done differently with the rollout of AI, maybe things would be different now. Instead the trust was broken, and now the only smart move is to push back. Otherwise what, double the workload? No.
 

Most defenders of the current state of affairs in AI are upset when people say these systems are untrustworthy, and so it is probably better if the systems' statements fall on deaf ears. I guess it's sad for them that what the systems say isn't worth listening to because it cannot be trusted, but I still don't have any reason to care about or pay attention to what they "say."
"LLMs often return untrustworthy results" is a pretty unobjectionable statement. The jump to "Therefore LLMs have no use cases/LLMs only produce slop" is what I see as getting pushback.
 

I've noticed that the marketing department where I work has stopped all mention of "AI" or "Artificial Intelligence" at the start of this quarter. This is very different from last quarter, when they were mentioning both in practically every paragraph of ad copy they could find. Suddenly, at the start of this quarter, it's all crickets. Instead they say something generic like "software tools" or "algorithms," if they say anything at all.

Not sure what changed, but it sure changed in a hurry.
 


