How Does AI Affect Your Online Shopping?

You discover a product you were interested in was made with AI. How does that affect you?

  • I am now more likely to buy that product.

    Votes: 0 0.0%
  • I am now less likely to buy that product.

    Votes: 83 56.5%
  • I am neither more nor less likely to buy that product.

    Votes: 18 12.2%
  • I need more information about the product now.

    Votes: 22 15.0%
  • I do not need more information about this product.

    Votes: 23 15.6%
  • The product seems more valuable to me now.

    Votes: 0 0.0%
  • The product seems less valuable to me now.

    Votes: 82 55.8%
  • The product value hasn't changed to me.

    Votes: 13 8.8%
  • I will buy the product purely on principle.

    Votes: 2 1.4%
  • I will not buy the product purely on principle.

    Votes: 83 56.5%
  • My principles do not extend to a product's use of AI.

    Votes: 15 10.2%
  • I think all products should be required to disclose their use of AI.

    Votes: 109 74.1%
  • I don't think products should be required to disclose their use of AI.

    Votes: 3 2.0%
  • I don't care if products disclose their use of AI or not.

    Votes: 5 3.4%



When it comes to editors, at least in my part of the globe, they charge 0.025-0.05 euro per word. If they operate under an LLC that is in the VAT system, you get 25% VAT on top of that. So a 15,000-word document can run from 375-750 €, depending on how good an editor you want; if the editor's company is in the VAT system, add 25% on top of those prices. Hourly rates vary significantly depending on the editor's experience and the complexity of the text, but proofreading is 15-25 €/h and content/copy editing is 20-40 €/h. The most common way of charging is per standard page (1,800 characters including spaces), with proofreading at 4-8 € per standard page and copy/content editing at 8-15 €.

So, for some niche product that will sell a couple dozen to maybe a couple hundred copies, probably at a low price (and the platforms take their cut), hiring a professional editor is just not financially feasible. You can easily end up losing money. Sure, if you have friends who are decent enough at it and are willing to do it for free (or a round of drinks), that's also one way to do it without using AI.
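To make those rates concrete, here is a quick back-of-the-envelope sketch using the figures quoted above. The function name and its defaults are illustrative only, and the rates are the poster's example numbers, not authoritative prices:

```python
def editing_cost(words, rate_per_word, vat=0.25, vat_applies=True):
    """Cost in euros for a per-word editing rate, with optional 25% VAT."""
    base = words * rate_per_word
    return base * (1 + vat) if vat_applies else base

# A 15,000-word document at the quoted 0.025-0.05 euro/word range:
low = editing_cost(15_000, 0.025, vat_applies=False)   # 375.0
high = editing_cost(15_000, 0.05, vat_applies=False)   # 750.0

# Same range if the editor's company is in the VAT system (25% on top):
low_vat = editing_cost(15_000, 0.025)    # 468.75
high_vat = editing_cost(15_000, 0.05)    # 937.5
```

At those totals, a product selling a few dozen copies at a low price would indeed struggle to recoup professional editing costs.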
 

I think you might have been reading 'page after page after page of discussion genAI has had on these boards' then wrapping that discussion around your own personal biases.

Oh, good grief. You make it sound like anyone who disagrees with you can't actually have an informed opinion.

If someone contradicts the opinions that one has formed, a person tends to react in the way that you have.

Ho, there, partner. That applies to you as much as anyone else.

It's understandable, but to me, it's also intellectually vacant.

Intellectually vacant?

Sir, I'm not the one trying to dismiss a whole class of posters' opinions because they choose language I don't agree with.

Also consider that you are speaking to other human beings. Be kind to others. I mean... come on.

Perhaps you hadn't noticed, but you are the one currently on the high horse, and being kind of insulting to those around you.

So having said that... how about you put these two thoughts together and come up with a better answer?

We do not dance for you, sir.
 

It's a really poor and ill-formed shorthand. It implies a qualitative assessment even when that is not a valid criticism.

It would seem to me that, in the context of gaming products, the quality of a work is valid criticism of the work. Indeed, it is perhaps the single largest valid criticism of the work.

It's like 'Defund the police' ...

"It is like this political topic that has little similarity to the topic under discussion, and I know I should not reference because it will tend to make people angrier, but I will reference it anyway..."

Don't use politically charged analogies, please and thanks.
 


Maybe if we had more positive news stories about ... how it is used to help in medicine we'd have people with a more balanced view of it.
If genAI was mostly being used to assist highly specialized practitioners in highly specialized fields, nobody would give two hoots. My own graduate research into what would eventually be called "generative AI" was of that form, but for particle accelerators.
Those are not the uses that concern people. Those are not the uses that the tech community trying to sell it are targeting!
I'm not sure that it makes sense to characterize medicine as a "highly specialized field". It's often quoted as being 15% or more of the GDP of the United States, and, furthermore, it absolutely is being targeted by the tech companies.

In an earlier post you talked about the difference between using GenAI for creative tasks and non-creative tasks. I agree with you that the creative uses look like they are self-limiting. We are not seeing much in the way of improvement recently; the input data they use is close to exhausted, and new data is heavily polluted. But where I disagree is the claim that the GenAI industry cannot survive without being used for creative purposes. Certainly GenAI is being hyped, and 2025 as the "year when agents will take over menial human tasks" never materialized, and looks unlikely to in 2026. Or, possibly, ever.

But like any hyped technology, what I expect to happen is that the boring, mundane solutions that are value-for-money will be refined and grow, and the hyped areas will wither away.

For everyone's consideration, here are a number of scenarios in healthcare where GenAI has a strong ROI (and would even at 10x the current cost) and, I would argue, which are also uses that make the world a better place:

Imaging Incidental Findings
If you get an X-ray for a mammogram, or to see whether you broke a rib falling off a ladder, the person viewing that set of images is not looking for other features. They might not spot an issue with your lungs because they aren't looking for that -- it's probably not their job or specialty. A GenAI solution can alert an imaging technician to things they might not have seen. Obviously good for the patient, it also saves money for the healthcare provider. For context, the US does over a quarter of a billion X-rays per year (not including a billion-plus dental ones) and 100 million-plus CT scans.

New Patient Summarizations
You are a doctor in a local clinic well away from a hospital, and a patient comes in with a serious condition that needs specialist attention, so you send them immediately to a big hospital to be seen ASAP. That hospital needs to review the patient's medical records rapidly. If we are lucky, the records are electronic. If we are unlucky, the doctor faxes images of the patient records to the hospital. Typically a nurse will have to review this and summarize it -- potentially overnight for a surgery the next day. This means a nurse will have to read something that, about a third of the time, is longer than Moby Dick, in the middle of the night, and make sure they don't miss anything important. GenAI summarization is really good at this task, improving accuracy and helping the nurses get the job done faster.

Automatic Transcription
You may have experienced this yourself -- your doctor may have asked your permission to use AI to record a conversation with them, which is then summarized and can be used as the basis for the doctor's notes. This is an area I've personally done validation studies on (https://ai.nejm.org/doi/abs/10.1056/AIoa2500945), and it not only saves doctors' time (especially work done outside working hours), but the resulting summaries are really quite good -- we used a couple of ways of measuring quality and compared them to non-AI workflows.

-------------------

I have been pretty unimpressed with GenAI creativity. Studies also show that although it can be more creative than a single randomly chosen professional, it is more creative in the same way for everyone, so it's less creative than a group of people. But the boring "non-creative" uses, I think, are quite strong. Not enough to justify the current hype levels of company valuation, but I do think they are sufficiently sustainable to ensure that GenAI will continue to be a seriously large industry.
 

There are lots of people in this thread who believe the value of a product is lessened by use of generative AI. But what if the product was a system to hunt down Nazis, and a website to out them? Is the value of that service diminished by the use of AI? Would you still use the website, or boycott it due to ethical concerns? Would you want the creator to continue their work?

To be fair, if I came up with this scenario as a hypothetical question it would feel like ragebait, and possibly a disingenuous argument. But this is not a hypothetical. An investigator used AI chatbots to help crack the website WhiteDate.


They now have a website that you can use to find info from the crack (with names and other identifying information removed): https://okstupid.lol/ That website was "made without any pride and with chatGPT".

Generative AI was used extensively for this project - it was critical to the process. And the website hosting it is a product of ChatGPT. This AI was not ethically trained. Would you consider using this website to check whether someone used this dating service? Would you feel bad about using it? Is its value inherently lowered because of the use of AI?

Yes, this is a classic "what ends justify the means" scenario. But I swear I did not make it up. This is a real thing. My flabbers are just as gasted as yours that I am considering this scenario.
 

There are lots of people in this thread who believe the value of a product is lessened by use of generative AI. But what if the product was a system to hunt down Nazis, and a website to out them?
I mean, sure, we can invent fanciful things and hypotheticals which can alter the situation.

But what we’re talking about, specifically, is generative LLMs used to make TTRPG products.
 

I'm not sure that it makes sense to characterize medicine as a "highly specialized field".

It... doesn't?

By all means, then - try to practice it without the years of specialized education and training required. What could possibly go wrong?

Wait... People die and you go to jail? You get sued into oblivion by former patients you harmed? Gee, I guess it was a specialized field after all!



New Patient Summarizations
You are a doctor in a local clinic well away from a hospital, and a patient comes in with a serious condition that needs specialist attention, so you send them immediately to a big hospital to be seen ASAP. That hospital needs to review the patient's medical records rapidly. If we are lucky, the records are electronic. If we are unlucky, the doctor faxes images of the patient records to the hospital. Typically a nurse will have to review this and summarize it -- potentially overnight for a surgery the next day. This means a nurse will have to read something that, about a third of the time, is longer than Moby Dick, in the middle of the night, and make sure they don't miss anything important. GenAI summarization is really good at this task, improving accuracy and helping the nurses get the job done faster.

There are some technical and legal issues here - and while there's an adage that all problems can be solved with sufficient code, since the purveyors of AI have been resisting regulation, the trust we can give them with things like HIPAA concerns is questionable.

After all, we are talking about systems that are typically trained on data that was taken without permission. Not exactly a strong ethical ground upon which to base a trust relationship with supposedly private health data.

But, the biggest issue that pops to mind is one that has plagued genAI - the fact that it does not present facts, or answers. It presents things that look like facts and answers, and that need significant oversight and editing to get right. If the AI hallucinates something into, or out of that summary, someone's health and life can be at stake.

Automatic Transcription

Generative AI doing transcription of phone messages can't seem to get "euthanasia" right most of the time. See previous issues of trust and hallucination risk, which apply here as well.

The study you cited included measures of quality, which is good. However, I would be wary of the issues that genAI systems typically run into when they attempt to handle technical information: when you focus on task completion, they look like they produce savings. When you look at the overall workflow, with the AI task integrated into the rest of the work done, those savings are overcome by post-AI corrections.
 

But what we’re talking about, specifically, is generative LLMs used to make TTRPG products.

If you don't want to personally consider it, that's fine. But IMNSHO we're quite a few pages beyond pretending the conversation is so limited. Not only does the OP question the meaning of value, but the dialogue since then has expanded to advertising, movies, the economy in general, and even pornography.

Do you really think posts like this, or this, are limited specifically to generative LLMs used to make TTRPG products? Lots of people are happy to announce they are drawing lines in the sand; I think it's fair to discuss where those lines are actually drawn.
 
