How Does AI Affect Your Online Shopping?

You discover a product you were interested in was made with AI. How does that affect you?

  • I am now more likely to buy that product. Votes: 0 (0.0%)
  • I am now less likely to buy that product. Votes: 88 (56.4%)
  • I am neither more nor less likely to buy that product. Votes: 20 (12.8%)
  • I need more information about the product now. Votes: 24 (15.4%)
  • I do not need more information about this product. Votes: 23 (14.7%)
  • The product seems more valuable to me now. Votes: 0 (0.0%)
  • The product seems less valuable to me now. Votes: 86 (55.1%)
  • The product value hasn't changed to me. Votes: 13 (8.3%)
  • I will buy the product purely on principle. Votes: 2 (1.3%)
  • I will not buy the product purely on principle. Votes: 86 (55.1%)
  • My principles do not extend to a product's use of AI. Votes: 17 (10.9%)
  • I think all products should be required to disclose their use of AI. Votes: 114 (73.1%)
  • I don't think products should be required to disclose their use of AI. Votes: 3 (1.9%)
  • I don't care if products disclose their use of AI or not. Votes: 5 (3.2%)

I was using AI this morning to search for references for a paper I was writing. It made up every reference. Every time.
A friend of mine, a lawyer, uses it for research. He then has to check all of the references, but even with that extra step he finishes the work in a fraction of the time it took when he searched for everything manually. It saves individual clients hours of billed time, and since he can simply complete more work in a week, he isn't losing any billable hours by being more efficient.

LLMs were trained in a way that scored 'I don't know' as a worse answer than a guess that might be right; that, plus purely statistical word choice, is the other big factor behind hallucinations. So if you are searching for something with a good amount of data behind it, the hallucination rate is acceptable. For your paper, "every reference, every time" should only happen if the topic is rather rare. If you care to post some of the questions that gave only bad references, we could see whether some LLMs are better for what you're doing. Since it was bad every time, that shouldn't be hard.

SOURCE: https://openai.com/index/why-language-models-hallucinate/
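To make that training incentive concrete, here's a toy sketch (the numbers are invented purely for illustration): if a grader awards a point only for a correct answer, and zero for both wrong answers and "I don't know", then guessing with any chance of being right always beats abstaining.

```python
# Toy model of the incentive described in the OpenAI post: a binary
# grader gives 1 point for a correct answer and 0 for anything else,
# including an honest "I don't know".

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected grade for one question.

    p_correct: chance the model's best guess is right.
    abstain:   if True, answer "I don't know" (always scored 0).
    """
    return 0.0 if abstain else p_correct

# Even a 10% shot at the right answer beats abstaining, so a model
# optimized for this grade learns to always guess.
for p in (0.9, 0.5, 0.1):
    print(f"p={p:.1f}  guess={expected_score(p, False):.2f}  "
          f"abstain={expected_score(p, True):.2f}")
```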

Which brings up another friend, who uses the free tier of one LLM to double-check the answers of the LLM he subscribes to, since differently trained models won't hallucinate the same things.
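If you wanted to automate that double-checking, it might look something like the sketch below. The ask_model_* functions are hypothetical placeholders, since the post doesn't say which services he actually uses; wire them up to whatever two independently trained LLMs you have access to.

```python
# Sketch of the two-model cross-check described above. The ask_*
# functions are hypothetical stubs for two independently trained LLMs.

def ask_model_a(question: str) -> str:
    raise NotImplementedError("call your subscribed LLM here")

def ask_model_b(question: str) -> str:
    raise NotImplementedError("call the free LLM here")

def cross_check(question: str) -> dict:
    """Ask model A, then have model B critique A's answer.

    Differently trained models rarely hallucinate the *same* detail,
    so disagreement is a useful (not infallible) red flag.
    """
    answer = ask_model_a(question)
    critique = ask_model_b(
        "Review this answer for factual errors or invented "
        f"citations.\n\nQuestion: {question}\n\nAnswer: {answer}"
    )
    return {"answer": answer, "critique": critique}
```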

It's the early stage of a truly new tool, much like early automobiles, whose starting cranks could break people's arms when the engine caught. It has a while to go before it's polished. That companies have pushed it into everyone's hands this early is not surprising under capitalism, but it shows the raw, unsafe stage the technology is in right now.
 


LLMs were trained in a way that scored 'I don't know' as a worse answer than a guess that might be right
As my friend likes to call them, they are trained to be "perfect rectal alpinists". Current LLMs have a big problem: they are trained to be people pleasers.

From talking to people, most use ChatGPT like a search engine, and it does a pretty decent job of it, so long as you ask a good question and always (always) ask for links.
 

Our company uses Gemini. I am required to incorporate it into everyday activities. I mainly use it for editing emails and creating marketing content: LinkedIn posts, press releases.

I have used it for deep research on a few recent STM publishing proposals where I was researching peer-review products. It is good at scraping sites and the wider internet for all of their marketing language, demos, and user-engagement material. It probably compressed two weeks of work into two days; however, I am forced to check the data to make sure it is correct, such as the references it produced today. Luckily, I read all the references, and it was very apparent that it was making up papers that do not exist.

I think this is mainly down to so few academic papers being RAG-ready, so the AI searches, finds people who wrote on similar topics, and hallucinates papers based on whatever information it scraped from multiple sources.
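One cheap way to automate that reference check, assuming the model supplies DOIs (many citations won't have one, in which case it's back to searching titles by hand): look each DOI up against Crossref's public API, which returns a 404 for identifiers it has never seen.

```python
# Quick sanity check for AI-supplied citations: a DOI that Crossref
# has never heard of is a strong sign the paper was hallucinated.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref knows this DOI (i.e. the paper is real)."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Example: check a batch of DOIs pulled from a model's reference list.
# The DOI below is a real one, used here only as a placeholder.
for doi in ["10.1038/nature14539"]:
    print(doi, "found" if doi_exists(doi) else "NOT FOUND (suspect)")
```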
I've not used Gemini, so I guess I can't comment directly. But I will say that I have never encountered this issue--citing hallucinated papers--with ChatGPT's deep research. It always includes links to its sources. Sometimes the AI summary of those papers is wrong, but never the existence of the paper itself. So I don't think it is fundamentally intractable, but maybe Gemini isn't there yet?
 

As my friend likes to call them, they are trained to be "perfect rectal alpinists". Current LLMs have a big problem: they are trained to be people pleasers.

Yeah, I suspect that if they were trained to say something equivalent to "The information I can find on this is not consistent enough for me to give an authoritative answer", there'd at least be a different set of complaints about them as search-engine adjuncts.
 

I've not used Gemini, so I guess I can't comment directly. But I will say that I have never encountered this issue--citing hallucinated papers--with ChatGPT's deep research. It always includes links to its sources. Sometimes the AI summary of those papers is wrong, but never the existence of the paper itself. So I don't think it is fundamentally intractable, but maybe Gemini isn't there yet?
Gemini has a Deep Research option. But you have to use it. And as @GrimCo said, always ask for references/links.
 

Current LLMs have a big problem: they are trained to be people pleasers.

Yep. I've noticed this. I use ChatGPT to run a solo D&D game for me when I have free time. It's fun, but I have noticed that whatever the situation, whatever I do, it's the right thing. I can't do anything wrong.

I get the same thing when discussing something of a more philosophical nature... specifically my views on laws, crime, and suitable punishment. It provides an interesting conversation, but it never questions my point of view. You can't actually have a debate, because it will not throw up an opposing view.
 


Yeah, I suspect that if they were trained to say something equivalent to "The information I can find on this is not consistent enough for me to give an authoritative answer", there'd at least be a different set of complaints about them as search-engine adjuncts.
I'd love it if Amazon would just flatly say "sorry, that item does not exist in your size" instead of ignoring my search terms and listing dozens of things that "other people like me" are buying instead. I don't care what other people are looking for.
 
