SkyNet really is ... here?
<blockquote data-quote="Cergorach" data-source="post: 9756700" data-attributes="member: 725"><p>Honestly, these are all links to YouTube videos, Twitter accounts, and organisations that have an axe to grind with AI, with the possible exception of cloudsecurityalliance.org... Where are the links to academic research papers? I can give you links about 'flat earth', but I hope we can all agree those are nonsense. So why are these Twitter posts and sensationalist YouTube clips any different from flat-earthers? GIVE US SCIENCE!</p><p></p><p>And while the guy in the first video in this thread is not exactly wrong... he distorts the truth so much that he is lying to you by omission and cherry-picking results without context.</p><p></p><p>That said, AI/LLMs aren't bad when they're not in control of anything important. But they ARE when they control important systems. Just as you don't give an eighth-grader the nuclear football, you don't give a generic AI control over your email, security, or production systems, no matter what the tech bros say. As an IT person, I'm vetted to different levels of depth depending on what I work on. I get vetted pretty thoroughly when I work on IT security at a bank, even more so in certain areas of government, and it would go to insane levels if I were ever to work at sensitive levels in intelligence agencies or the military.</p><p></p><p>People need to realize that there are TONS of factions within IT. Security would prefer to cast your computer in concrete and drop it into the Mariana Trench, and you just work in the mailroom. Support just wants to make their users happy, so anything that interferes with that must go! Developers... oh, developers... they just want to make stuff! Security is nonsense, especially when it interferes with them making stuff. Sysadmins just want to make the tech go VROOM!!! Management just wants to do the cool stuff they heard about, etc. One side of IT often lacks deep, or even basic, knowledge of the other disciplines within IT. Is it any wonder that when an (AI/LLM) developer makes something cool, they go to management, which wants to deploy it instantly, no matter what security (hopefully) screams? Why? Because <em>maybe</em> 30 years ago you could know pretty much everything in IT if you were smart enough and had enough experience; today that is flat-out impossible! It also doesn't help that most salespeople at software companies flat-out lie to decision makers about the capabilities of their product, and that includes the big tech companies like Microsoft. Sometimes this is due to a lack of knowledge; 'other' times it's because they want to make their sales quota or sales bonus for that week/month/year.</p><p></p><p>There have been different kinds of AI for a while now, but the LLM stuff in particular is concerning when used for things like automated security that has <em>way</em> too much access, while the human security staff often don't bother to check the results. Now, I'm all for automated systems that quarantine files, devices, users, networks, etc. when certain criteria are met; that's how you limit the damage of a successful attack. But for the love of *** let actual humans with enough knowledge/experience check whether it was done in error or not.</p><p></p><p>When we live in an age where a customer-support AI/LLM can't even recognize its own company's products, what exactly do you expect AI/LLMs to do for you on a regular basis? Don't get me wrong, it's cool stuff that can do cool things in the right circumstances, if used properly. And what counts as 'properly' still depends on people, flawed people, who make wrong decisions; they didn't need AI/LLMs for that, they were making those long before LLMs ever existed...</p><p></p><p>I find it scary that a shop owner can sell guns/ammo to a person without checking their mental health. Heck, there are now pretty stringent safeties in place for selling/buying nitrate, as farmers still need that stuff. The same with airplanes: 9/11 wasn't the first time kamikaze attacks were used against targets (that was 60 years earlier), and people still use airplanes to travel. Security is just a lot tighter than before, though El Al flights had way more security in place even before 2001. People evaluate threats differently, and even if there's a threat, if they still want the thing, they'll find a way to mitigate the threat. The same goes for AI: we don't need scary stories from some media influencer, we need actual huge product gaffes. Like the CrowdStrike shenanigans that put half the world on pause... Oh wait... We're now 14 months down the line and the stock price is 12%+ higher than it was before Blue Friday... <img src="https://cdn.jsdelivr.net/joypixels/assets/8.0/png/unicode/64/1f609.png" class="smilie smilie--emoji" loading="lazy" width="64" height="64" alt=";)" title="Wink ;)" data-smilie="2" data-shortname=";)" /></p></blockquote><p></p>
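The human-in-the-loop pattern the post argues for (automation may quarantine instantly to limit damage, but only a knowledgeable person confirms or reverses the action) could be sketched roughly like this. All class and method names here are illustrative assumptions for the sake of the sketch, not any real security product's API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QuarantineAction:
    """A containment action taken automatically, awaiting human review."""
    target: str                      # file, device, user, or network segment
    reason: str                      # which detection rule fired
    confirmed: Optional[bool] = None # None = not yet reviewed by a human

@dataclass
class QuarantineQueue:
    """Automation may quarantine on its own; only a human may release."""
    pending: List[QuarantineAction] = field(default_factory=list)

    def auto_quarantine(self, target: str, reason: str) -> QuarantineAction:
        # The automated system acts immediately to limit damage...
        action = QuarantineAction(target, reason)
        self.pending.append(action)
        return action

    def human_review(self, action: QuarantineAction, was_correct: bool) -> str:
        # ...but a human with enough knowledge decides whether it was an error.
        action.confirmed = was_correct
        self.pending.remove(action)
        return "kept in quarantine" if was_correct else "released (false positive)"

queue = QuarantineQueue()
hit = queue.auto_quarantine("laptop-042", "anomalous outbound traffic")
print(queue.human_review(hit, was_correct=False))  # analyst overrules the automation
```

The point of the split is exactly the one the post makes: the fast, over-permissioned path can only add restrictions, never lift them, so an erroneous automated decision is bounded in what it can break.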