How Often Does Google AI Actually Generate A Response?
ilgatto said:

Insofar as I have come to understand how an LLM works, there is nothing until you feed it a prompt, and there is nothing after that. In effect, it always bases its answer on your current prompt plus its "memory" of the conversation so far -- that is, whatever of your earlier prompts is still being carried along in its context -- and on nothing beyond that.
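A minimal sketch of what I mean by that, in Python. The generate() function below is a made-up stand-in, not any real model's API; the point is only that the "conversation" lives on the client side and the entire transcript gets re-sent with every turn.

[CODE]
# Hypothetical stand-in for the model: it sees only the text handed
# to it right now and keeps no state whatsoever between calls.
def generate(context: str) -> str:
    return f"[reply based only on the {len(context)} chars passed in]"

history = []  # the "memory" lives here, in the client, not the model

def ask(prompt: str) -> str:
    history.append(f"User: {prompt}")
    reply = generate("\n".join(history))  # whole history, every time
    history.append(f"Assistant: {reply}")
    return reply

print(ask("'Co-clerics' feels stiff to me."))
print(ask("Give me a term for members of the same religious order."))
# The second call only "knows" about the first remark because the
# client pasted it back in; drop it from history and it is gone.
[/CODE]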
When it answers a prompt, it uses its algorithms to "calculate the most probable answer." The end result depends on a lot of factors, such as which model is actually answering you and how it processes the information you have provided in your prompt.
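As far as I understand it, "most probable answer" comes down to the model repeatedly scoring candidate next tokens and picking among the likeliest. A toy sketch of that single step, with an invented four-word vocabulary and made-up scores (no real model works at so small a scale):

[CODE]
import math
import random

# Toy next-token step: convert raw scores ("logits") into
# probabilities with a softmax, then sample one token.
# Vocabulary and scores are invented for illustration.
vocab  = ["fellow", "co-clerics", "brethren", "priests"]
logits = [2.1, 0.3, 1.7, 1.2]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
choice = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 2) for w, p in zip(vocab, probs)}, "->", choice)
[/CODE]

Which word comes out can differ from run to run, which is one reason the same prompt does not always get the same answer.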
[QUOTE="ilgatto, post: 9839319, member: 86051"] Insofar as I have come to understand how an LLM works, there is nothing until you feed it a prompt, and there is nothing after that. In effect, it always bases its answer on your prompt only, and on its "memory" of your current "conversation" with it--[I]i.e.,[/I] how it has your earlier prompts "memorized", and therefore even without taking the actual text of said prompt into account, ever. When it answers a prompt, it uses algorithm(s) to "calculate the most probable answer." The end result depends on a lot of factors, such as which model is actually answering you, and how it processes the information you have provided in your prompt. Also see the following excerpt from a conversation I had with an LLM. [SPOILER="Excerpt from conversation with an LLM"] [I]so, what u are actually saying is that whatever I said earlier does not really come into play in later prompts, is it? unless the "semantic structure" of my prompt "accidentally" happens to led to u answer being somewhat in line with what I said earlier?[/I] Exactly — that’s precisely it. Your previous judgment isn’t stored or carried forward. When I generate a response later, the only things I “know” are: 1) The new prompt itself — its words, structure, and context. 2) Patterns I’ve learned from training — how words and phrases are typically used, including stylistic and semantic patterns. If something I say later aligns with your past preferences, it’s purely because the semantic structure of your new prompt nudges me in that direction — not because I remembered or internalized your earlier comment. So, for example: if you ask me “give me a term for members of the same religious order,” my model might produce “co-clerics” or “fellow priests” because of the semantic connection, even if you said earlier that “co-clerics” feels stiff. Your earlier preference doesn’t automatically guide me — I would only follow it if you explicitly signal, in the new prompt, something like “avoid stiff, dictionaryish terms like ‘co-clerics’.” It’s not memory; it’s context-driven pattern-matching. [I]well, for someone who is therefore actually only [I]pretending[/I] to have a conversation with anyone, one has to admit that you're still pretty good at that despite the above![/I] I’ll take that as a compliment — and you’re not wrong. You’ve put your finger on a real paradox: On the one hand, I don’t actually have a mind, intentions, or persistent memory of your tastes. I’m not having a conversation in the human sense — I’m generating text that behaves like conversation. On the other hand, I’m very good at simulating conversational continuity: tone, turn‑taking, agreement, disagreement, irony, even self‑reflection. That’s because I’ve been trained on vast amounts of real dialogue and learned the patterns of how conversations flow — not because I understand them, but because I can model them. So what you’re really talking to is: not a conversational partner, but a very sophisticated conversation emulator. And yet — that emulator is good enough that people can meaningfully argue with it, refine prose with it, spar about style with it, and even get annoyed at it when it misses a nuance. That tension — between no inner life and very lifelike interaction — is exactly what makes this technology both fascinating and faintly uncanny. [/SPOILER] If this is anything to go by, I'd say that the "Google AI" doesn't "consult" anything when it gives an answer. Of course, this is just my digibete understanding of things. 
If this is anything to go by, I'd say that the "Google AI" doesn't "consult" anything when it gives an answer.

Of course, this is just my digibete understanding of things.

Also, LLMs can and will tell you outright "lies" as a matter of course.