Cergorach
The Laughing One
Honestly, these are all links to YouTube videos, Twitter accounts, and organisations that have an axe to grind with AI, with the possible exception of cloudsecurityalliance.org... Where are the links to the academic research papers? I can give you links about 'flat earth', but I hope we can all agree those are nonsense. So why are these Twitter posts and sensationalist YouTube clips any different from the flat-earthers? GIVE US SCIENCE!
And while the guy in the first video in this thread is not exactly wrong... he is manipulating the truth so much that he is lying to you by omission and cherry-picking results without context.
That said, AI/LLMs aren't bad if they're not in control of anything important. But they ARE when they're in control of important systems. Just as you don't give an eighth-grader the nuclear football, you don't give a generic AI control over your email, security, or production systems, no matter what the tech bros say. As an IT person, I'm vetted to different depths depending on what I work on: pretty thoroughly when I work on IT security at a bank, even more so in certain areas of government, and it would go to insane levels if I were ever to work in sensitive areas of intelligence agencies or the military.
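To make that concrete, here's a minimal sketch of the kind of deny-by-default tiering I'm talking about for an LLM agent's tool access. Every name in it is hypothetical (this is not any real product's API); it just mirrors how humans get vetted to different depths:

    # Gate an agent's tool calls behind explicit privilege tiers,
    # the way human staff are vetted to different depths.
    from enum import IntEnum

    class Tier(IntEnum):
        PUBLIC = 0     # read-only docs, FAQ lookups
        INTERNAL = 1   # non-sensitive internal data
        SENSITIVE = 2  # email, security tooling, production systems

    # Illustrative mapping of tools to required tiers (all hypothetical).
    TOOL_TIERS = {
        "search_docs": Tier.PUBLIC,
        "read_ticket": Tier.INTERNAL,
        "send_email": Tier.SENSITIVE,
        "modify_firewall": Tier.SENSITIVE,
    }

    def authorize(agent_tier: Tier, tool: str) -> bool:
        """Deny by default; never grant a tool above the agent's vetted tier."""
        required = TOOL_TIERS.get(tool)
        if required is None:
            return False  # unknown tool: deny, don't guess
        return agent_tier >= required

    # A generic chatbot vetted only for PUBLIC work can't touch email or firewalls.
    assert authorize(Tier.PUBLIC, "search_docs")
    assert not authorize(Tier.PUBLIC, "send_email")

The point isn't the code, it's the default: the generic AI starts with nothing, and every extra capability has to be granted deliberately, just like a human clearance.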
People need to realize that there are TONS of factions within IT. Security would prefer to encase your computer in concrete and drop it into the Mariana Trench, even if you just work in the mailroom. Support just wants to make their users happy, so anything that interferes with that must go! Developers... oh, developers... they just want to make stuff! Security is nonsense, especially when it interferes with them making stuff. Sysadmins just want to make the tech go VROOM!!! Management just wants to do the cool stuff they heard about, etc. One side of IT often lacks deep, or even basic, knowledge of the other disciplines within IT. Is it any wonder that when an (AI/LLM) developer makes something cool, they take it to management, which wants to deploy it instantly, no matter how loudly security (hopefully) screams? Why? Because maybe 30 years ago you could know pretty much everything in IT if you were smart enough and had enough experience; today that is a flat-out impossibility! It also doesn't help that most of the salespeople at software companies flat-out lie to decision makers about the capabilities of their product, and that includes the big tech companies like Microsoft. Sometimes that's due to lack of knowledge; 'other' times it's because they want to make their sales quota or bonus for that week/month/year.
Different kinds of AI have been around for a while now, but the LLM stuff in particular is concerning when it's used for things like automated security that has way too much access, while the security humans often don't bother to check the results. Now, I'm all for automated systems that quarantine files, devices, users, networks, etc. when certain criteria are met; that's how you limit the damage of a successful attack. But for the love of *** let actual humans with enough knowledge/experience check whether that was done in error or not.
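Here's a minimal sketch of that "act first, but a human confirms" pattern. All the names are made up for illustration, and the isolation and rollback steps are stubs:

    # Automation quarantines immediately to limit damage, but every action
    # lands in a queue that a knowledgeable human must confirm or roll back.
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class QuarantineAction:
        target: str        # file hash, device ID, user account, subnet...
        reason: str        # which detection rule fired
        taken_at: datetime
        reviewed: bool = False
        upheld: Optional[bool] = None  # set by the human reviewer

    review_queue: list[QuarantineAction] = []

    def auto_quarantine(target: str, reason: str) -> QuarantineAction:
        """Contain first, but never silently: every action gets queued for review."""
        action = QuarantineAction(target, reason, datetime.now(timezone.utc))
        # ...actual isolation (disable account, block subnet, etc.) goes here...
        review_queue.append(action)
        return action

    def human_review(action: QuarantineAction, upheld: bool) -> None:
        """A human with enough knowledge/experience decides if the automation was right."""
        action.reviewed = True
        action.upheld = upheld
        if not upheld:
            pass  # ...roll the quarantine back here...

    act = auto_quarantine("device-1337", "ransomware-like file renames")
    human_review(act, upheld=False)  # false positive: release the device

Quarantine stays instant, so the blast radius stays small, but nothing the automation does is final until a human signs off.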
When we live in an age where a customer-support AI/LLM can't even recognize its own company's products, what exactly do you expect AI/LLMs to do for you on a regular basis? Don't get me wrong, it's cool stuff that can do cool things in the right circumstances, if used properly. And what counts as 'properly' still depends on people, flawed people, who make the wrong decisions; they didn't need AI/LLMs for that, they were making those long before LLMs ever existed...
I find it scary that a shop owner can sell guns/ammo to a person without checking their mental health. Heck, there are now pretty stringent safeguards in place for selling/buying nitrate, since farmers still need that stuff. The same with airplanes: 9/11 wasn't the first time someone used kamikaze attacks against targets (that was 60 years earlier), and people still travel by plane. The security is just a lot tighter than before, though El Al flights had far more security in place well before 2001. People evaluate threats differently, and even if there's a threat, if they still want the thing, they'll find a way to mitigate the threat. The same goes for AI: we don't need scary stories from some media influencer, we need actual huge product gaffes. Like the CrowdStrike shenanigans that put half the world on pause... Oh wait... We're now 14 months down the line and the stock price is 12%+ higher than it was before Blue Friday...
