ChatGPT lies then gaslights reporter with fake transcript

Because you keep defending AI? The "why" is something only you can answer.
I've defended my use of it, not AI in general. I started by trying to explain how I use it and how incredibly useful it is to people like me, but that didn't go anywhere. All the little attacks and digs implying that I'm a terrible person for using such a useless tool put me on the defensive. Now I don't care anymore. I'm going to keep using it for what I use it for, and come up with cool new ways to use it as I think of them and the technology improves, all the way up until the day it may or may not lead to our destruction.

And if it does, that would have nothing to do with how I used it a couple of hours ago to confirm that this device would work with 802.1x RADIUS authentication, using the quick prompt, "Will the UKG InTouch device that supports WiFi work with RADIUS authentication?"
 


I'll try again. Consider the truth of statements like "LLMs have no safeties to prevent, or even tag, possible slop, and so they aren't worth my time and effort in production environments." When that insight (which must be true, because it is a statement of judgement) is combined with listening to the hype their boosters have produced, and the irresponsible way that LLMs are being used and monetised, people might also say "To save my time and attention for important matters, I'll presume that those who use LLMs are either incompetent or untrustworthy, and feel comfortable saying so to people who use them in practical life."

Who are we to dissuade them? There's nothing untrue in those sentences.
I don't know..."not worth my time" is subjective, but there is an underlying objective metric, how long it takes to do something, that they could be wrong about. Someone could say "driving to NY rather than walking is not worth my time, because I don't mind a two-week walk", and technically they wouldn't be wrong. But it's also the case that that individual has very different time preferences than the average person.

As for supposed use cases: LLMs don't fold proteins, do math, analyse medical images, etc. What are their use cases? Is there any other use for them other than making speech which wasn't thought?
LLMs don't fold proteins, but various iterations of AlphaFold have used the transformer architecture. I posted a math example earlier in this thread, which was aided by LLMs. LLMs have also performed impressively in mathematical competitions. There is also translation.
 

I'm going to keep using it for what I use it for and come up with cool new ways to use it as I think of them and the technology improves, all the way up until the day it may/may not lead to our destruction.
I don’t think that anybody was in any doubt of that. You’ve made yourself very clear.
 


The best I have heard that AI does is to create summaries, such as with notes from a meeting.
Not that good. For certain purposes I needed to use a long text that was impractically long, but I also needed to avoid synthetic text. I used ChatGPT to select key sentences. It wasn't very good at it; the selection was fairly random. I ended up removing some of its selections and reinstating a lot manually. In the end, just out of curiosity to see if I could get an even shorter text, I gave the summarized text back to ChatGPT and asked it to summarize it further. It removed the setup of an important twist but kept the punchline. It also changed a lot of the text and made up anecdotes the author never wrote. The tone shifted from semi-comedic to tragicomedy, with a lot of wallowing and angst that wasn't in either the original or the reduced text.

The thing is very bad at following instructions.
 

For example, I can look at an AI art piece and tell very quickly whether it's good, decent, or garbage. But the time to create that piece could be hours or days. That is a use case where AI does well.
Commercial art isn't random. It is a process where every step is carefully crafted before proceeding to the more time-consuming parts of the process. The end result is never a surprise, and corrections happen mostly at the early stages, when they are easier to make. The only bad surprises happen with bad, or very overwhelmed, art directors.

If artists are losing commissions right now, it isn't because AI is producing slop the customers don't want; it's because it is producing a satisfying result. Certainly not perfect, but enough to fill the need.
Quite the contrary: with the advent of AI, low-quality and stingy customers mostly went away. Artists didn't have to keep screening for cheapskates and problem customers, because those customers started self-selecting out.
 

'Looks like it was made by AI' is still a derogatory way to describe art or writing for a reason.
I remember seeing a comment online that kids in class were calling AI art 'Boomer Art'. Which isn't as accurate as calling it slop, but it's still funny to me.
 

Not that good. For certain purposes I needed to use a long text that was impractically long, but I also needed to avoid synthetic text. I used ChatGPT to select key sentences. It wasn't very good at it; the selection was fairly random. I ended up removing some of its selections and reinstating a lot manually. In the end, just out of curiosity to see if I could get an even shorter text, I gave the summarized text back to ChatGPT and asked it to summarize it further. It removed the setup of an important twist but kept the punchline. It also changed a lot of the text and made up anecdotes the author never wrote. The tone shifted from semi-comedic to tragicomedy, with a lot of wallowing and angst that wasn't in either the original or the reduced text.

The thing is very bad at following instructions.
I'm curious to see what instructions you gave it.
 

I'm curious to see what instructions you gave it.
I don't really remember all of them. I do, however, remember something else that is more recent.

I asked it: Add x to this list.
It adds it, with mistakes.
Me: y is missing, add it back.
It adds it, except in one case. It also pollutes the list by marking every place it made a correction.
Me: it is missing in this one case.
It pastes an explanation of why it isn't wrong, despite being wrong.
At this point I decided I could add it manually.
Me: clean up the list.
It repeats the text verbatim...
 

I don't really remember all of them. I do, however, remember something else that is more recent.

I asked it: Add x to this list.
It adds it, with mistakes.
Me: y is missing, add it back.
It adds it, except in one case. It also pollutes the list by marking every place it made a correction.
Me: it is missing in this one case.
It pastes an explanation of why it isn't wrong, despite being wrong.
At this point I decided I could add it manually.
Me: clean up the list.
It repeats the text verbatim...
Interesting. I used it to clean up some setting info I had typed up, and without my asking, it pointed out a contradiction I had typed and asked me which way it should proceed.
 

