ChatGPT lies then gaslights reporter with fake transcript




Claiming it is only a waste of time in the office disqualifies any claim that it might have an economic impact, so I was pointing out that discussing its economic consequences requires dropping the idea that it is totally useless.
Or doubling down on the idea that a kind of mass psychosis is going on, and that the many, many people deciding to use more AI are all fooling themselves, perceiving it as useful when it isn't.

Related Q: AI skeptics, how many of you have tried "deep research"? It does much better at returning a variety of cited sources.
 

No, it is growing because corporate suits are being told that, and sold a product, not because the value is actually realized. Hype and marketing.

Sales of enterprise AI generally focus on two use-cases:

1) replacing low-level employees with AI - like, say, removing humans from call centers, so that customers never speak to humans on the phone.

2) supposedly increasing productivity through employees using AI in their work.

#2 is generally demonstrated by showing how quickly an individual can complete an atomic task with AI, then assuming that speedup multiplies across many tasks and thus increases productivity.

It is not demonstrated by comparing entire teams on large efforts, so downstream effects of AI use are hidden.

It is all well and good if a coding task is completed quickly, but if the AI-using team winds up spending more time fixing bugs the AI introduces that humans don't, or the resulting code is not performant or is fragile, overall productivity doesn't go up!

Your lawyer spends less time creating a brief, but then either spends more time reviewing and editing out hallucinated case references, or gets their license yanked - no increased productivity!
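The arithmetic behind this objection can be sketched in a few lines. All the numbers below are invented purely for illustration; the point is only that a per-task drafting speedup can be wiped out once rework is counted:

```python
def net_hours(tasks, draft_hours, rework_rate, rework_hours):
    """Total hours for `tasks` items: drafting time plus expected rework.

    rework_rate is the fraction of tasks that come back needing fixes,
    rework_hours the average time each fix takes.
    """
    return tasks * (draft_hours + rework_rate * rework_hours)

# Hypothetical baseline: 2h to draft each task, 10% need 1h of rework.
baseline = net_hours(tasks=100, draft_hours=2.0, rework_rate=0.10, rework_hours=1.0)

# Hypothetical AI-assisted case: drafting drops to 1.2h per task, but half
# the tasks need 2.5h of rework (hallucinated citations, fragile code, ...).
with_ai = net_hours(tasks=100, draft_hours=1.2, rework_rate=0.50, rework_hours=2.5)

# Per-task drafting got 40% faster, yet total hours went UP.
print(round(baseline, 1), round(with_ai, 1))  # -> 210.0 245.0
```

Measuring only the drafting step (the "atomic task" demo) reports the 40% gain and hides the rework column entirely, which is exactly the objection about comparing individuals on small tasks instead of teams on whole efforts.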

OK, so the net effect of AI is that companies are spending a few thousand bucks on an AI solution in the hope that increased productivity will let them get the same work done with fewer people, leading to increased profit, and they are fooled by the AI-selling geniuses because there is, in fact, no increased productivity. So they have replaced a tool that "makes things slowly" with a tool that "makes things quickly but needs babysitting", and there is no increase in productivity. So there is no way to fire anyone, and no economic effect except the cost of paying for an AI solution that will either be cancelled after some time or get adopted out of habit, resulting in slightly fewer benefits for shareholders. If I follow you, the net effect of AI on business will be akin to buying pet rocks for workers.

Why not: it's entirely possible that all the economic forecasts by the Fed and ECB (not to mention private think tanks, and the anecdotal reports of people actually satisfied with their use of the technology) are wrong. After all, it's possible they are all morons whose assessments are junk, or all corrupt and paid by AI companies, and there is no productivity increase to be had. But then there is no job to remove, since there is no productivity gain to be had.
 


If I understand @Umbran right, the argument isn't that people are morons. It's that OpenAI (etc.) have invested a lot in making a product that is enjoyable and pleasing to use, and this sort of hijacks critical-thinking skills even in very smart people. For example, there are studies showing that people perceive themselves to be much faster coders when LLM-assisted, but objectively do not work any faster than colleagues who were not assisted.

(I've criticized that study elsewhere in the thread. I just want to do his argument justice.)

Edit: link to study.
 

How did it come about that I'm being made to feel like I have to defend AI? Where did I say that I think AI won't result in widespread job losses? I've said repeatedly in this thread that it will. I've even said I believe I have a MORE negative outlook on society as a result of AI than most of the readers of this thread. I even used the word "apocalyptic."

That's where our views differ. I think it might result in widespread job losses, except that I have faith that we'll either find new occupations to replace the lost jobs, as we've been doing since the dawn of technological progress -- millions of jobs are both destroyed and created each year, for a net-zero effect on unemployment -- or, if this should happen to be the final technological step that removes the need to work, that we'll politically find a solution to share the wealth. The transition might not be painless, depending on our political choices (the late-19th-century workers' revolts ended in bloodbaths), but I don't share the negativity long term. Of course, since it comes down to our collective choices, it's political and outside the scope of this board.
 

If I understand @Umbran right, the argument isn't that people are morons. It's that OpenAI (etc) have invested a lot in making a product that is enjoyable and pleasing to use and this kind of hijacks critical thinking skills even in very smart people.

Honestly, I'd call experts in assessing productivity gains morons if they are fooled by this and don't actually measure productivity gains (with a set amount of tasks to be done in a set time, and measurement). Especially at ECB pay grades (I don't know about the Fed, but I'm sure they are not poor and undereducated either). That the workers themselves feel good about using AI is certainly part of what makes it desirable: who wouldn't prefer an intern that tells you how great you are over an intern that sighs whenever you ask for something? It's possible the analysis is faulty in extrapolating the productivity gains of one sector to all sectors -- it is very difficult to make broad assessments of whole economic sectors -- but what we're alleging here is a basic mistake in productivity measurement.

As a side note, something that yields no net increase in productivity but makes work more enjoyable for workers is a positive in my opinion, though I can see how one would call that a waste of money.
 

This is the general pattern seen - AI tools increase the need for editing and correction enough to eliminate their supposed increase in productivity.
This. I'm a Product Owner for a large financial organization. There is a big push to use AI -- specifically, in my world, to create user stories much faster. (By the by, my company just laid off a third of our technology group.)

AI doesn't do that for us. I have to go and review every user story anyway and correct every single one. AI simply can't predict or know how the users navigate the system well enough to ensure adequate coding or testing steps. The net result is that those of us still here are working twice as hard as before: we still have to do the core work, and, due to the layoffs, we have to pick up the slack as well.

Not only is it not a wash, AI is expensive -- not just in dollar terms, but from an environmental and energy-consumption perspective. It reminds me of all the previous "next big things", like Six Sigma, or Salesforce CRM, or some other pet project a senior executive was sold on, only to see it fade away after a couple of years. I would say that, except AI is objectively more harmful than any of those other things.

Recent anecdotal experience. We were looking at a new home. Built in 1927, so having updated appliances was important (replacing HVAC isn't cheap). This was the listing:

[Screenshot of the listing, which advertises new HVAC, plumbing, and electrical.]


Note the comment about new HVAC, plumbing, and electrical. So we put in an offer. The inspector came and said that not only was the HVAC not new, it wasn't even working. When our real estate agent confronted theirs, their agent said, "I used ChatGPT to write that listing, it's not my fault. It's ChatGPT's." He honestly thinks he did nothing wrong and is still blaming ChatGPT.
 

It will take a few years to shake out, I think. The current AI investment is probably a bubble, so there will be some drawdown. Once that happens, do companies keep using it, or move on? Do those that use it do better? That will be the test.

Man, I see these very obviously generated listings everywhere. Not for housing, but event descriptions, game advertisements, and so on. It's really bad. LLMs are good at many things, but not writing text that is meant to be read.
 
