Creative works have one chance to get the right result. Scientific uses of GenAI generally work with a lot of expected failures but they can find the goodies in the pile of nonsense, and they do not need coherence between findings. The situations are very different. You don't use a gun as a light switch.
I am well aware of the approach. It gives you a small chance of better wording, a high chance of worse or incorrect wording, and remains disconnected from the work because it's not able to consider the whole. Its use is likely to muddle the writing and confuse the writer, and is thus most likely to result in a poor product. This is not an effective replacement for skill.

But they don't.
Some of the best uses of Generative AI I've seen have been iterative; a human user has input prompts, looked at the result and either tried again or adjusted the prompts before trying again.
I sometimes get the feeling that some perceptions of AI use in creative enterprises are based primarily or entirely on the laziest uses of AI in those endeavors, treating them as though they represent all possible uses.
Actually, given the bizarre hallucinations I have seen from LLMs, that is both impractical and unethical.

There are lots of people in this thread who believe the value of a product is lessened by use of generative AI. But what if the product was a system to hunt down Nazis, and a website to out them? Is the value of that service diminished by the use of AI? Would you still use the website, or boycott it due to ethical concerns? Would you want the creator to continue their work?
To be fair, if I came up with this scenario as a hypothetical question it would feel like ragebait, and possibly a disingenuous argument. But this is not a hypothetical. An investigator used AI chatbots to help crack the website WhiteDate.
They now have a website that you can use to find info from the crack (with names and other identifying information removed): https://okstupid.lol/ That website was "made without any pride and with chatGPT".
Generative AI was used extensively for this project - it was critical to the process. And the website hosting it is a product of ChatGPT. This AI was not ethically trained. Would you consider using this website to check if someone used this dating service? Would you feel bad about using it? Is its value inherently lowered because of the use of AI?
Yes, this is a classic "what ends justify the means" scenario. But I swear I did not make it up. This is a real thing. My flabbers are just as gasted as yours that I am considering this scenario.
More to the topic at hand, the average consumer isn't looking at a list of scientific uses of GenAI and deciding whether or not to purchase them. The average consumer is looking at products--specifically in this thread, gaming products--that have been made using GenAI, and deciding whether or not to buy that product.
I always see people praising AI summaries and then I point out all the factual errors.

Given I've seen it do a perfectly good job of tightening up and producing better word choice, I have to conclude you haven't seen enough of it.
I'll second the point that it's useful for summarizing and tightening word choice, ime. Also, a little fine tuning goes a long way. The default output is typically not good. But if I feed it, like, the introduction to a scientific paper I wrote which I'm happy with, and say "I want things to sound like this", then it's much better.
How so? I've seen my own writing being accused of being AI when it wasn't (not to mention artists being constantly accused of it). I don't think these things are as obvious as some people like to claim, and having been on the receiving end of false accusations of such more than once, I am leery of anybody who claims they can tell.