ChatGPT lies then gaslights reporter with fake transcript
Jfdlsjfd said:

The problem with the video is that, like a lot of news reporting, it's pretty lacking in data. We get a single anecdote (possibly fabricated to convey the point) showing ChatGPT producing a wildly hallucinated result about previously entered data, which I readily accept, since something similar happened to me. Except that I wasn't surprised: I dismissed the hallucination and reworded my request until it was executed correctly. Until now I thought that's what any regular person with no particular skill would do, but apparently it's because I have Charles Xavier-level fluency with AI. Why not, after all. So let's assume we have a report on a true, single incident.

It is reported, demonstrating what? That it *can* happen. Which is correct. It can happen. But what conclusion can we draw about whether the software is good or bad? The journalist claims to have been using it every day for a long time before this happened, and presumably to his entire satisfaction. So it is obviously not bad all the time.

Now, imagine another news report. Instead of ChatGPT, the newsman tells his co-host about a new intern on the staff, Chad Jaypity. He usually does his summaries quite well and everyone likes him, but yesterday he shirked the work and denied it, then denied he was ever asked to do it and gaslit the newsroom. The newsman goes on to tell how the intern doubled down when caught not having done the job.

What could this piece teach us about the ability of humans to be good or bad at a job? Nothing. We can learn that AI and interns sometimes do faulty work, but we don't have enough data to answer the general question. Is it worthwhile to be warned that humans and AI can output false results? Sure! Books too. Lots of things. But we can't assess their performance from a single result, and that's not what the video does. The video explicitly says the newsman was satisfied with the tool for a long time before the incident, so what is the conclusion? Obviously it's not "stop using ChatGPT for work"; it's "learn to spot hallucinations the same way you deal daily with an incompetent, slothful subordinate". We don't stop employing people on the grounds that "they are bad at their job"; we make the most of the people we work with despite their flaws.

Same with the tool. Is it flawless? Certainly not. Can you gain productivity with it? Certainly. Both examples are in the video. Is the productivity gain worth the productivity lost to checking every result that matters and dealing with the hallucinations that do happen? That is the key question, and it depends on the line of work, the exact tool used, and the training given to the person operating it. Any honest answer about whether the tool is useful would have to address those questions, and the video leaves them entirely untouched. But such a video would certainly be less buzzworthy.

LLMs don't search, but professional AI solutions aren't just LLMs. I am part of a team assessing a legal AI tool sold by Dalloz, a reputable legal publisher. It is an LLM interface coupled with their database: it either searches the database or is trained on very specific content, and it is supposed to adversarially check its answers against that database.

I don't know yet how much time it will save over regular use of the database: possibly none, possibly some but not enough to justify the price. But there is also the possibility that the right AI solution in a professional environment isn't a 20 USD/month ChatGPT toy used on its own. Or maybe the very expensive tool built on an LLM isn't worth it either, and you'd do better running DeepSeek for free on your own computer and taking the time to deal with the inaccuracies yourself.
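The setup described above (an LLM front end over a curated database, with an adversarial checking pass) is essentially retrieval-augmented generation plus a verification step. Below is a minimal sketch of that pattern in Python. Everything here is a hypothetical illustration, not the actual Dalloz product: `call_llm` is a placeholder for whatever model client the vendor uses, and `DATABASE` is a toy in-memory stand-in for their search index.

```python
# Sketch of "LLM + database + adversarial check". All names are
# hypothetical illustrations, not any vendor's real API.

from dataclasses import dataclass

@dataclass
class Passage:
    source_id: str
    text: str

# Toy stand-in for a publisher's curated database.
DATABASE = [
    Passage("code-civil-1240", "Any act whatever of man which causes damage "
            "to another obliges him by whose fault it occurred to repair it."),
    Passage("code-civil-1103", "Contracts lawfully formed have the force of "
            "law for those who have made them."),
]

def retrieve(query: str, k: int = 2) -> list[Passage]:
    """Naive keyword retrieval; a real system would use a search index."""
    scored = [(sum(w in p.text.lower() for w in query.lower().split()), p)
              for p in DATABASE]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:k] if score > 0]

def call_llm(prompt: str) -> str:
    """Placeholder for the model call; wire up a real client here."""
    raise NotImplementedError("swap in your LLM provider of choice")

def answer_with_verification(question: str) -> str:
    passages = retrieve(question)
    context = "\n".join(f"[{p.source_id}] {p.text}" for p in passages)

    # First pass: draft an answer constrained to the retrieved sources.
    draft = call_llm(
        f"Answer using ONLY these sources, citing their ids:\n{context}\n\n"
        f"Question: {question}")

    # Second, adversarial pass: a checker prompt tries to find claims in
    # the draft that the retrieved sources do not actually support.
    verdict = call_llm(
        f"Sources:\n{context}\n\nDraft answer:\n{draft}\n\n"
        "List any claim in the draft not supported by the sources, "
        "or reply SUPPORTED.")

    if verdict.strip() != "SUPPORTED":
        return f"[needs human review] {draft}\nChecker notes: {verdict}"
    return draft
```

The design point is the second pass: rather than trusting the draft, a separate prompt is asked to attack it against the same sources, and anything it cannot support gets routed to a human. That check costs an extra model call per answer, which is exactly the productivity trade-off the post raises.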
[QUOTE="Jfdlsjfd, post: 9768486, member: 42856"] The problem with the video is that it's... like a lot of newspaper report, pretty lacking in data. We get a single anecdote (possibly fabricated to convey the point) showing that ChatGPT outputted a wildly hallucinated result about previously entered data, which I am quite accepting since something close happened to me when using it. Except that I wasn't surprised, so I dismissed the hallucination and reprompted by request until it was correctly executed -- so far, I thought it was what a regular person with no particular skill would do, but apparently it's because I have Charles-Xavier level of fluency with AI. Why not, after all. So, let's assume we have a report on a true, single, incident. It is reported, demonstrating what? That it [I]can[/I] happen. Which is correct. It can happen. But what can we draw, as conclusion, on the ability of the software to be good or bad? The journalist claims to have been using every day for a long time before it happens, and probably to his entire satisfaction. So, it is obviously not bad all the time. Now, let's imagine another news report. Instead of ChatGTP, he newsman explains to his co-host his dealings with a new intern in the staff, Chad Jaypity. He usually doing his summary quite well and everyone like him, but yesterday he was fluking work and denied it, then denied he was asked to do something and gaslighted the newspaper. And the newsman goes on to tell how he doubled down when caught not having done the job. What could this piece teach us about the ability of humans to be god or bad at a job? Nothing. We can learn that there are occurrences of faulty job by AI or interns, but we don't have enough data to determine the general answer. Is it worthwhile to be warned that humans and AI can output false result? Sure! And books too. And lot of thing. But we can't assess their performance, and that's not what the video is about, from a single result. The video explicitely explains that the news man was satisfied with his use of the tool for a long time before an incident happens, so what is the conclusion? Obviously, it's not "stop using ChatGPT for his work" it's "learn to identify the hallucinations the same way you deal daily with incompetent, slothy subordinate: we don't stop employing people saying "they are bad at their job", we're making the most with the people we work with despite their flaw. Same with the tool. Is it flawless? Certainly not. Can you gain productivity with it? Certainly. Both examples are in the video. Is the productivity gain worth the productivity loss incurred by checking the result for anything important and dealing with the hallucinations that may happen? This is the key question, which depends on the line of work, the exact tool used, the training provided to the operator of the solution. Those are key questions, totally unadressed in the video, to give an honest answer about whether the tool is useful or not. But such a video would certainly be less buzzworthy. LLMs don't search, but professional AI solutions aren't just LLMs. I am part of the team working to assess a legal AI tool sold by Dalloz, a reputable law resource editor, and it is a LLM interface coupled with their database, and they either search it or are trained on very specific content, and it is supposed to be adversarially checking answers againt the database. 
I don't know yet how much time it will save over regular use of the database, possibly none, possibly some but not enough to be worth the price, but there is also the possibly that the AI solution in a professional environment isn't to just use a 20 USD/month chatgpt toy alone. Or maybe it's not worth using a very expensive tool built upon an LLM and run deepseek for free on your own computer and take the time to deal with the unaccuracies yourself. [/QUOTE]