How Does AI Affect Your Online Shopping?

You discover a product you were interested in was made with AI. How does that affect you?

  • I am now more likely to buy that product.

    Votes: 0 0.0%
  • I am now less likely to buy that product.

    Votes: 86 56.6%
  • I am neither more nor less likely to buy that product.

    Votes: 20 13.2%
  • I need more information about the product now.

    Votes: 24 15.8%
  • I do not need more information about this product.

    Votes: 23 15.1%
  • The product seems more valuable to me now.

    Votes: 0 0.0%
  • The product seems less valuable to me now.

    Votes: 85 55.9%
  • The product value hasn't changed to me.

    Votes: 13 8.6%
  • I will buy the product purely on principle.

    Votes: 2 1.3%
  • I will not buy the product purely on principle.

    Votes: 83 54.6%
  • My principles do not extend to a product's use of AI.

    Votes: 17 11.2%
  • I think all products should be required to disclose their use of AI.

    Votes: 112 73.7%
  • I don't think products should be required to disclose their use of AI.

    Votes: 3 2.0%
  • I don't care if products disclose their use of AI or not.

    Votes: 5 3.3%

Yeah, to be more precise in my response to @Belen--I've worked in cultures where the main author writes the paper in Word (etc.) and then sends it to everyone. And I've worked in ones where everyone collaborates at once in some kind of online environment. For me, the offline version worked better. Part of that is just more clarity and precision of thought. But some of it is because working online is a pain--you need an internet connection, and editing, especially figures, is slower. Even moving an image around on the page is a headache with Word Online. And if you want to write on a plane, or in a bar with no wifi (both things I do), then the connectivity is a problem.
We are addressing these issues in the tech. You'll be able to use an offline copy and sync it when you reconnect.

Figure placement is pure gold in the tech. You insert a figure anywhere and tag it. It is all bundled together, and the system lets you know if you missed any pieces or journal requirements. You can then view a PDF rendered from the final journal XML and see what it would look like in published form.

Later, if you are using the production part, you can drag figures and place them anywhere in the doc. It is all XML in the background, which avoids the issues in Word.
 


Well then, I'm excited to try it!
 

LLMs do have some strong use cases when used to detect anomalies at scale, especially in things too complicated or abstract for humans to read, like obfuscated code. An LLM could be used to detect a weird, anachronistic writing style in an otherwise consistent work.
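To make the idea concrete, here's a minimal, non-LLM sketch of that "outlier against the document's own baseline" structure: it fingerprints each paragraph with character-trigram frequencies and flags any paragraph whose style diverges from the rest. A real LLM-based detector would use much richer features, and the function names and threshold here are purely illustrative.

```python
from collections import Counter
from math import sqrt


def trigram_profile(text: str) -> Counter:
    """Frequency profile of character trigrams (a crude style fingerprint)."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two trigram frequency profiles."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def flag_style_outliers(paragraphs: list[str], threshold: float = 0.5) -> list[int]:
    """Return indices of paragraphs whose style diverges from the rest of the work."""
    outliers = []
    for i, para in enumerate(paragraphs):
        # Baseline: everything except the paragraph under test.
        rest = trigram_profile(" ".join(p for j, p in enumerate(paragraphs) if j != i))
        if cosine_similarity(trigram_profile(para), rest) < threshold:
            outliers.append(i)
    return outliers
```

The same skeleton works for the obfuscated-code case: fingerprint each function against the rest of the codebase and flag the ones that don't match.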
 

Having read that AI statement, are you now more or less likely to purchase it? Does it seem more or less valuable? Do you care how the product was generated? Has your interest in the product been piqued, or has it faded? Do you buy or reject it purely on principle? Check any that apply.

*This happened to me, on Steam yesterday. And between that, and the thread about AI products and Amazon, I was inspired to put this poll together.

Less. If we allow AI to create products for us, then why are we here? How do we avoid a standardized version of every product? What about the contributors who added to the AI's database? When something is free, it rarely carries value. In the context of AI, I can generate 100 products just as easily as 1.
 

Small indie publishers still managed to write and publish their books long before the advent of AI. So if the cries of "what about the little guy?" sound a bit like a manufactured problem, that's because they are. I think generative AI is destroying these indie publishers, not helping them, and it was never an accident.

First, genAI prices out human creators, then inserts itself as the only viable alternative. It looks like these two things are already happening, judging by the people in this thread who say they have no choice but to use genAI to produce their work.

Now, as this poll shows, consumers clearly don't want AI-generated products: most people reject them purely on principle, and absolutely nobody says they add value. So smaller indie publishers are left with an impossible choice: either they use genAI to produce something that won't sell, or they hire writers/editors/artists and pay them out of pocket (and hope to break even).

It gets worse: there are still no laws or regulations currently in place to protect creatives or their work. Should someone ever create something original without the use of genAI, and it manages to sell enough to turn a profit, bad actors will just buy it and feed it to the algorithm to make AI "better." Those good sales will not be good for very long.

Small indie publishers keep touting genAI as a useful tool. I guess a wrecking ball is a type of tool...
 

It will remain in a quantum state. I am not willing to state a firm position on your skill without information, nor am I willing to spend the resources of time, privacy, and pedantic debating of the quality of your work to obtain that information.

The point to be made is that your past employment is not an argument that I find compelling given my current employment. As I said, I am the wrong audience for it.

You're not required to; however, if you didn't want me to make assumptions about your take on it, "fallacy" is not exactly a connotationally neutral word to use. As such, I have to conclude you think one of the two cases I mentioned applied; if you didn't want that, you made no effort to avoid it.
 

Artists don't spring forth as you say, but they do know how to make something look interesting, creative, and original. GenAI, not so much.

And there's the problem: I'm not convinced at least two of those three claims are true. Many artists' works don't look particularly original or interesting to me, and some of what I've seen from generative AI does. So either it's a simple case of differing taste, or knowing something is AI-generated biases people toward seeing the work differently than they would if they didn't know (and given the number of people who will claim human work comes from AI, I have no faith that most people can intrinsically see the difference).
 

I wonder if they should declare the use of AI when the programmers use it. Because I can guarantee EVERY modern videogame will have source code that was written with the help of generative AI.

I suspect most people care far less about this issue with coding than with what's considered the artistic part of products, except those who are simply hostile to AI in principle.

In general it's definitely lessened my desire to buy, but it's not all gloom and doom. Expedition 33 used AI too, and I think no one would describe that game as "AI slop".

Note there's some question as to whether that means what you think it means here. The reporting on that particular case has been rather hit or miss.

Edit: To be clear, see @GrimCo posting on the subject above.

I found the question of value change interesting. If I can't perceive the use of AI, the value shouldn't change for me.

Except, of course, for the people who are hostile to it in principle and not just in practice.
 

I see the same disconnect in the exchange between @Incenjucar and @Thomas Shey. I'm sure Incenjucar is seeing a lot of bad examples of AI use, where it distorts the facts or changes the meaning. But that doesn't contradict Thomas Shey seeing examples where it doesn't.

Especially since I rather specifically wasn't talking about it in the context of unregulated nonfiction use, but as an interim-pass tool in fiction and the like. The answer to that was something of a non sequitur.
 

Both AI & humans can do certain tasks well and the same tasks poorly. But from what I’ve seen, their main error types differ.

Humans tend to simply miss/ignore things, whereas it’s a known issue that AIs make stuff up. The former means errors don’t get corrected. The latter introduces new errors.

Part of the problem is that the people making public-access AIs have taught them to, effectively, try to make their end users happy. Not only are they not good at sorting the wheat from the chaff (because they don't have any good criteria for doing so across massive amounts of online information), they've effectively been motivated to, well, lie if it makes the end user happier.
 
