How Does AI Affect Your Online Shopping?

You discover a product you were interested in was made with AI. How does that affect you?

  • I am now more likely to buy that product.

    Votes: 0 0.0%
  • I am now less likely to buy that product.

    Votes: 86 56.6%
  • I am neither more nor less likely to buy that product.

    Votes: 20 13.2%
  • I need more information about the product now.

    Votes: 24 15.8%
  • I do not need more information about this product.

    Votes: 23 15.1%
  • The product seems more valuable to me now.

    Votes: 0 0.0%
  • The product seems less valuable to me now.

    Votes: 85 55.9%
  • The product value hasn't changed to me.

    Votes: 13 8.6%
  • I will buy the product purely on principle.

    Votes: 2 1.3%
  • I will not buy the product purely on principle.

    Votes: 83 54.6%
  • My principles do not extend to a product's use of AI.

    Votes: 17 11.2%
  • I think all products should be required to disclose their use of AI.

    Votes: 112 73.7%
  • I don't think products should be required to disclose their use of AI.

    Votes: 3 2.0%
  • I don't care if products disclose their use of AI or not.

    Votes: 5 3.3%

This is an interesting concept. I'm very sympathetic to the goal but at the same time the idea of using an online editor worries me. I'd probably want to write it myself and then upload, which I guess would look suspicious.

It doesn't worry me, so much as seem impractical. Writing a proper academic paper isn't the work of an afternoon.

The best solution ime is to know the authors and their work and then know if they are trustworthy.

And new authors? How do they get accepted?
 


Some quick searching indicates to me that ChatGPT currently hallucinates at a rate of 33% to 79% depending on the type of test.

In cases where you are depending on accurate presentation of information, hallucination is a failure.

Can you claim it does something well, when it fails 33% of the time? I mean, unless you are a baseball player?
I think it depends on the activity.

I only use it to do the following:
  • Edit emails
  • Edit marketing language
  • Edit business docs
  • Create images for my private D&D game
I would never use it for paid content. It works well enough for those purposes with heavy editing from me.
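One way to see why the failure rates quoted above matter for accuracy-critical work: if each response hallucinates independently at even the low-end 33% rate, the odds of a multi-step task staying clean fall off quickly. A rough sketch (the independence assumption and the step counts are illustrative, not measured):

```python
# Back-of-the-envelope: how a per-response hallucination rate compounds
# across a multi-step task. The 33% figure is the low end of the range
# quoted above; treating steps as independent is a simplifying assumption.
def error_free_probability(per_step_failure_rate: float, steps: int) -> float:
    """Probability that every step completes without a hallucination,
    assuming each step fails independently at the given rate."""
    return (1 - per_step_failure_rate) ** steps

for steps in (1, 3, 5):
    p = error_free_probability(0.33, steps)
    print(f"{steps} step(s): {p:.0%} chance of no hallucination")
```

Even at the optimistic end of the quoted range, a five-step chain comes out error-free only roughly one time in seven.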
 

And new authors? How do they get accepted?
I think that's one thing that established authors can at least see a sprig of hope about. It doesn't help new authors, but those who have established names will still be able to make a living as boutique/artisan level creators. The same, I think with news outlets and the like--because you won't be able to believe anything you see, you will come to rely on some trusted sources who have established their reliability over decades.

At least that's my hope. It may be wishful thinking!

Like you say, still doesn't help new authors, though. I guess that's where the publisher role transitions to--a brand which you know you trust to publish human created content, which will be (hopefully) valued higher than the mass market slop which will dominate every creative industry. Publishers like that can help to lift up new authors. Maybe.
 

It doesn't worry me, so much as seem impractical. Writing a proper academic paper isn't the work of an afternoon.
There is no time limit once you start the submission. It remains open until the author is ready to submit.

I imagine that we will add controls for abandoned articles, for example if there is no activity for 90 days.
 

I think these are some of the best examples offered. From my perspective, it feels like they don't land because the more pro-AI position is "AI is capable of doing some things well". The counterargument reads to me (have I got it wrong?) as "AI is not capable of doing things well, see these n examples".

And it doesn't convince me because we have all seen many many examples of AI doing things poorly. But those type of examples cannot establish that AI is not capable of doing things well. Just that it often or occasionally does things poorly.
Both AI & humans can do certain tasks well and the same tasks poorly. But from what I’ve seen, their main error types differ.

Humans tend to simply miss/ignore things, whereas it’s a known issue that AIs make stuff up. The former means errors don’t get corrected. The latter introduces new errors.

Put differently, when a human makes a mistake, the end product does not improve. When an AI hallucinates, the end product degrades. That’s an important distinction.

Add into the equation the resources required for AIs to function, and a lot of them start looking like a bad idea.
 

Add into the equation the resources required for AIs to function, and a lot of them start looking like a bad idea.

The efficiency question is important. When you start talking about using generative AI out of its lane, the performance drops considerably...

 

There is of course the hypothetical scenario of better-designed AI that actually processes the information itself rather than just tokens, meaning it has to associate concepts with more than just the frequency of association with tokens and build complex relationships between them, but aside from finding people both able and willing to build that, I would expect it to be even more expensive than the current token mashers. And for TTRPGs you then have to cram them full of fiction that clashes with all the real data.
 


There is of course the hypothetical scenario of better-designed AI that actually processes the information itself rather than just tokens, meaning it has to be to associate concepts with more than just the frequency of association with tokens and build complex relationships between them, but aside from finding people both able and willing to build that, I would expect it to be even more expensive than the current token mashers. And for TTRPGs you then have to cram them full of fiction that clashes with all the real data.
Let's be honest. The current gen AI is being forced on the public. The companies are using it as a cover to reorganize. They are not getting rid of that many people. They are just sending the jobs elsewhere in order to increase profits. Meanwhile, the AI companies are in debt with a bubble that will burst and wipe out a few trillion in wealth. They are just trading cash between themselves hoping no one will notice. My guess is that they will force bailouts.

Let's look at the current DRAM crisis. The companies are sitting on the hardware to deny their competitors access. OpenAI bought 40% of the entire DRAM supply for data centers they cannot build due to lack of sufficient electricity.
  • Nvidia just announced that they will bring back the RTX 3060. The ....3060....
  • No GPU refresh for the 50 series and the current 5090 will be 5k
  • Microslop is going to force agentic Windows OS.....that does not work
  • Samsung just announced that they do not have RAM for TVs and home appliances
  • Apple is setting up semi-permanent quarters overseas to try to secure LTAs for RAM
  • ASUS announced massive price increases
  • Micron will end its consumer business
  • New console generation has been delayed for 2-3 years
  • $200 DDR5 is now going for $900-$1500
  • FYI: The RAM manufacturers are not going to increase or build supply
  • There is no actual supply shortage; they are sitting on warehouses of supply waiting for things to be built
This is all for tech that cannot achieve a consistent success rate, and whose required infrastructure does not exist.

I love technology and always have but what is happening now is insane.

Sorry if off-topic, but this all drives my avoidance of any AI generated content. I will not even buy books any longer unless they are from a reputable publisher or from friends of mine who self-publish on Amazon.
 

It doesn't worry me, so much as seem impractical. Writing a proper academic paper isn't the work of an afternoon.
Yeah, to be more precise in my response to @Belen--I've worked in cultures where the main author writes the paper in Word (etc.) and then sends it to everyone. And I've worked in ones where everyone collaborates at once in some kind of online environment. For me, the offline version worked better. Part of that is just more clarity and precision of thought. But some of it is because working online is a pain--you need an internet connection, and editing, especially figures, is slower. Even moving an image around on the page is a headache with Word Online. And if you want to write on a plane, or in a bar with no wifi (both things I do), then the connectivity is a problem.

And new authors? How do they get accepted?
Yeah that's a major issue. Relying on established reputation is going to favor people in more established institutions with all the issues for inclusivity and accessibility that implies.

Some quick searching indicates to me that ChatGPT currently hallucinates at a rate of 33% to 79% depending on the type of test.

In cases where you are depending on accurate presentation of information, hallucination is a failure.

Can you claim it does something well, when it fails 33% of the time? I mean, unless you are a baseball player?
I don't think that's a very nuanced approach to the topic. Like the chess example--we judge chess playing computers based on how good they are at chess. But LLMs are treated as this kind of everything engine, and judged by how well they perform at any task. That doesn't seem fair to me.

I get that, to some extent, that criticism is a response to hype and AI marketing cycles which claim LLMs are general intelligence or imply they are good at everything. And on those grounds, I'm sympathetic to that critique.

But I think overly focusing on that criticism can cause us to miss the very specific and structured ways where LLMs are useful. Maybe not useful enough to justify the expense and the environmental costs--I'm also sympathetic to points you've made in that regard. But, in the specific cases where they are beneficial--translation, programming, search, editing, brainstorming--I think it's premature to write off everything as slop.
 
