ChatGPT lies then gaslights reporter with fake transcript

AI Tech companies return value to their investors, not the economy as a whole, and they aren't even profitable at that, yet.
The fact that so many AI services, like Cursor, Replit, Claude Code, etc, are raising prices and implementing increasingly strict and opaque rate limits seems like reason to suspect that AI companies are having trouble even charting a path to possible profitability.
 


Making your product a net loss-leader works when you're something like Amazon or Uber, where the whole model is to monopolize, then exploit. But when you're actively trying to create a market for your product that might not actually exist at a large enough scale to be profitable at all, it's a lot trickier.
 

Directly below the text box for ChatGPT where it says "Ask anything," it states, "ChatGPT can make mistakes. Check important info."

That's there for every query.

Yes, and they put cancer warnings on cigarettes in 1965. It took about twenty years for the percentage of people smoking to drop 10%.

A line of text in the small print does not overcome the rest of the presentation.

There are easy things you can do to enhance ChatGPT's truthfulness and veracity.

Why? The technology is supposed to serve us, not the other way around.

I was taught that Utopia is where the machines do all the heavy and dangerous work, leaving humans free to engage in art and intellectual pursuits. You are arguing that we should allow the machines to pick up the art and intellectual pursuits, and that we should change our behavior to enable it.

That sounds pretty backwards, to me.
 

I mean, sure, people still dunk on Liefeld, so yes, I guess people will continue to comment on individual artists vs. the monolith that is AI-generated images.

I'm not sure how it's relevant to you, the AI doesn't have feelings, but sure.
I wasn't thinking about feelings. How often do you dunk on Liefeld? Every time someone brings up one of his comics? Or is it more rare?

Every time an image is generated and shared, it's either a joke about it finally not having a messed-up hand or "it can't draw (insert whatever) correctly yet"

Maybe I'm just old man yelling at cloud about continuous dunking
 


I wasn't thinking about feelings. How often do you dunk on Liefeld? Every time someone brings up one of his comics? Or is it more rare?

Every time an image is generated and shared, it's either a joke about it finally not having a messed-up hand or "it can't draw (insert whatever) correctly yet"

Maybe I'm just old man yelling at cloud about continuous dunking

But why do you even care? If someone was a Liefeld diehard, then that's just your typical pro/anti debate, which somehow powers the internet. Thankfully we will eventually just have bots endlessly fighting each other and we can move on to our vineyards.

Why are you on Team AI when it comes to dunking on AI slop? Yes, AI-generated images still have flaws, because it's not learning, it's not art, and it's not even thinking. It just goes "well, I see B follows A" and sometimes it gets it wrong.
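For what it's worth, here's a toy sketch of that "B follows A" idea: a simple bigram counter in Python. This is purely illustrative, nothing like how a real image or text model is actually built, and the corpus, function names, and variables are made up for the example.

```python
# Toy illustration only: a bigram "predictor" that just counts how often
# word B follows word A in its training text and picks the most frequent
# continuation. No understanding involved, which is the point being made.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows: dict, word: str) -> str:
    """Return the most common follower of `word`, or a placeholder."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else "<unknown>"

corpus = "the hand has five fingers and the hand holds the pen"
model = train_bigrams(corpus)
print(predict_next(model, "the"))   # prints "hand" -- most frequent, not "understood"
print(predict_next(model, "five"))  # prints "fingers"
```

When the most frequent continuation happens to be the right one, the output looks competent; when it isn't, you get the messed-up hand.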
 

I wasn't thinking about feelings. How often do you dunk on Liefeld? Every time someone brings up one of his comics? Or is it more rare?

Every time an image is generated and shared, it's either a joke about it finally not having a messed-up hand or "it can't draw (insert whatever) correctly yet"

Maybe I'm just old man yelling at cloud about continuous dunking
Very true. I just asked DALL-E to create a photorealistic image of a hand. It got it right (except for the left/right part)! We should throw a parade.

I said:

Create a photorealistic image of a closeup of a human hand showing the appropriate number of fingers.

DALL·E said:

Sure, I can generate that. Could you please clarify:

  • How many fingers should be shown? (e.g. 5 fingers, 6 fingers, etc.)
  • Should it be a left or right hand?
  • Any specific skin tone, gender, or age for the hand?
  • Any specific background or lighting (e.g. natural light, studio, outdoor)?
Once I have that, I’ll generate a photorealistic image for you.

I said:

It should have 5 fingers, right hand, natural light, person's age around 35.

[Attached image: the generated photorealistic hand]
 

You are arguing that we should allow the machines to pick up the art and intellectual pursuits, and that we should change our behavior to enable it.

That sounds pretty backwards, to me.
I didn't do that. There's a difference between saying that AI will put a lot of human creatives out of work, which I strongly believe, and saying "we should allow the machines to pick up the art and intellectual pursuits."

I'm not stating whether I think it's "good" or "right." I'm stating that these things will happen nevertheless.

I mentioned trying to help my kids choose their career paths. Like most kids, my daughter loves to draw and loves art. Thirty years ago, a parent might have felt comfortable encouraging their children to consider art as a career if they showed a strong interest in it, or at least to leave the option wide open.

Nowadays, though? With AI models producing what they can in 5 seconds? That parenting decision is not so easy anymore.
 

The fact that so many AI services, like Cursor, Replit, Claude Code, etc, are raising prices and implementing increasingly strict and opaque rate limits seems like reason to suspect that AI companies are having trouble even charting a path to possible profitability.
No doubt the industry will go through the same type of consolidation every other burgeoning industry goes through, with most of the players disappearing and a small few rising above the rest. There also used to be large booksellers in every moderately large city in America (Borders, Barnes & Noble, Crown Books, Walden Books, etc.). Now there aren't because Amazon took over.

Speaking of Amazon.com, they operated at a loss for 8 years before turning in their first annual profit. It can take a while for these things to shake out.

Again, not suggesting that "AI is awesome." I'm just pointing out the obvious. Someone else had said "sea change" and I agree. We're on the cusp of a massive restructuring of the global economy.
 

I also think, though, that there is an opportunity for human-created content to stand out from the slop. People will seek that out, and seek out the creators/brands that they trust. That 'real content' will have value just for being 'real'. But we (creators) have to make sure we are better than the slop.
The pessimist in me worries that maybe the audience can't recognize that value, because AI slop can still manipulate us very effectively with emotional language.
I guess the question is: at what point is the AI slop so sloppy that, even though it hits all the right keywords to get us emotionally engaged, we stop because it does more harm than good?
Part of it probably depends on how quick the feedback cycle is. If you get a bad AI post with false or misleading information about something going on in a foreign country, you might never realize it was wrong, or it might take years (maybe after you've already supported your country's military actions). With a post on how to fix a flickering monitor, you'd see immediately which post gave you useful or useless (or harmful) advice.
A challenge, however, will be the constant influx of new AI slop pulling attention away. Content creators will need the fortitude to build a reputation, and that might prove difficult if their content is hard or expensive to make, because they're constantly competing with some new AI website producing content for the same audience.
 
