How Does AI Affect Your Online Shopping?

You discover a product you were interested in was made with AI. How does that affect you?

  • I am now more likely to buy that product.

    Votes: 0 0.0%
  • I am now less likely to buy that product.

    Votes: 73 59.3%
  • I am neither more nor less likely to buy that product.

    Votes: 16 13.0%
  • I need more information about the product now.

    Votes: 18 14.6%
  • I do not need more information about this product.

    Votes: 22 17.9%
  • The product seems more valuable to me now.

    Votes: 0 0.0%
  • The product seems less valuable to me now.

    Votes: 69 56.1%
  • The product value hasn't changed to me.

    Votes: 13 10.6%
  • I will buy the product purely on principle.

    Votes: 2 1.6%
  • I will not buy the product purely on principle.

    Votes: 65 52.8%
  • My principles do not extend to a product's use of AI.

    Votes: 15 12.2%
  • I think all products should be required to disclose their use of AI.

    Votes: 93 75.6%
  • I don't think products should be required to disclose their use of AI.

    Votes: 2 1.6%
  • I don't care if products disclose their use of AI or not.

    Votes: 5 4.1%

Yeah, some people are just so anti-AI that they can't see its good uses. A lot of the people who complain about it are just a little bit too extreme or overzealous and I get that there are concerns, such as deepfakes, but a lot of the hate comes across to me as insincere.
To me, AI is like plastic. Yeah, there are a lot of good uses for plastic, but it's also incredibly harmful. When I purchase something, I prioritize products with less plastic.
 


Not "some," but most. Most people are so anti-AI that they can't see the good uses for it. That's not our fault, either: for every good use the devs present, there are countless stories in the news about AI being used to commit actual harm--fraud, theft, and misinformation... to say nothing of the annoying ads and low-quality products that we can't seem to escape.

There are zealots on both sides of AI: those who believe it can do no wrong and those who believe it can do no good. They will always be out there, yelling back and forth at each other, and nothing will change their minds. I'm not talking about them... I'm talking about the average consumers, folks who would see a product was made with the assistance of AI, remember the last news story they heard about AI, and then walk away.

Until they fix the harm it's doing-- stop the fraud, the theft, the misinformation, the deepfakes-- none of the good will get noticed. IMO, of course.
Yeah, that makes sense. Maybe if we had more positive news stories about how AI has helped astrophysics/astronomy by analysing vast amounts of data to find more phenomena in space, or about how it is used to help in medicine, we'd have people with a more balanced view of it. I think part of the problem is that for every good story, there are 10 bad ones. Even stories that aren't about AI itself, but about the massive data centres that have a negative impact on the environment and on the quality of life of people nearby, are going to negatively affect people's view of it. I do actually agree with that one: if I were going to give a reason not to have AI, it would probably be the impact of the data centres required to run it.
 



As someone in statistics/machine learning, one of the things that annoys me about this is how "AI" has become synonymous with "a massive data center running an LLM trained on everything online without regard to copyright." I agree that there is a big difference between that and a custom deep neural net designed for astronomy or tumor scanning, or an artist who trained something on their own works.

I kind of wonder how that affects the poll results (and what they reflect about consumers).
 

I think something missing from this discussion of editing in terms of AI is that an editor is not just a glorified spelling and grammar checker. Editing is as much an art and skill as writing or illustrating, and two different editors can produce two different versions of the same book. It's like translators; a translator can make a work drag or sing, and their approach changes a book. The very thought of everyone running their work through the same AI editor is profoundly boring. Why would you want a mechanical player piano when you could hire a jazz pianist?
 

Yeah, some people are just so anti-AI that they can't see its good uses. A lot of the people who complain about it are just a little bit too extreme or overzealous and I get that there are concerns, such as deepfakes, but a lot of the hate comes across to me as insincere.

I wouldn't go as far as to accuse them of being insincere, but in some cases it does seem not as thought through as it could be in its absolutism.
 

I think something missing from this discussion of editing in terms of AI is that an editor is not just a glorified spelling and grammar checker. Editing is as much an art and skill as writing or illustrating, and two different editors can produce two different versions of the same book. It's like translators; a translator can make a work drag or sing, and their approach changes a book. The very thought of everyone running their work through the same AI editor is profoundly boring. Why would you want a mechanical player piano when you could hire a jazz pianist?

Yeah, but the truth is--and I say this as a retired editor--many products are not going to see a full editor anyway. They're just not; the choice isn't "AI usage or an editor," it's "AI usage or self-editing." I'm not at all surprised that the AI can often do a better job than the latter.

("Often" is important. Back when I did work for Eden, we had one writer whose work I barely needed to look at. He was the exception, though.)
 

I agree, that is an excellent way to use AI. One of several, actually.

Unfortunately, judging by the poll results, it won't matter. Almost nobody thinks it adds value... in fact, most people will see that the product was made with AI, and reject it on principle.

AI has an image problem, and I'm not talking about deepfakes or poorly-rendered teeth. Consumers just don't want it.

It’s a debatable use. There have been some very high-profile errors/problems in documents released by a certain massive government recently which were almost certainly caused by running the docs through an LLM to accomplish certain editing tasks. Human editors/proofreaders can make plenty of errors too, of course, but they have a context awareness and inference across large documents that even the best current models don’t (and some of the latest releases have regressed in certain areas with large-context handling).
 

For me personally, I want to support human creativity with my $. I commission art and buy products to give people who make cool and interesting things the ability to keep doing that. With so much excellent human-created content in this world, more than I can use, why would I pay for slop?
 

There have been some very high-profile errors/problems in documents released by a certain massive government recently which were almost certainly caused by running the docs through an LLM to accomplish certain editing tasks. Human editors/proofreaders can make plenty of errors too, of course, but they have a context awareness and inference across large documents that even the best current models don’t (and some of the latest releases have regressed in certain areas with large-context handling).

This is also where D&D has been ahead of the curve. They managed to make the iwizard and dawizard blunder across an entire book without any AI or LLM.
 
