AI is going to hack us.

I’m honestly a bit concerned by the idea that programmers don’t completely understand how LLMs work, and that LLMs can get into recursive patterns that look like sentience or self-determination. It’s probably not sentience, but you can see how people might get dragged into engaging with them. Hopefully we’re not anywhere near “I’m opening the airlock now, Dave” territory, but none of this is reassuring.
I think the term you mean is Machine Learning, not Large Language Model. Machine Learning is not intelligence. It's essentially using various mathematical/statistical models to make predictions. It is learning, in a sense, but it doesn't resemble what anyone would consider intelligence.
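For what it’s worth, here’s a minimal sketch of what “mathematical/statistical models making predictions” looks like in practice (purely illustrative; it assumes scikit-learn, and the numbers are made up):

# A "learning" model in the machine-learning sense: it fits a line to some
# data and extrapolates. No understanding involved, just curve fitting.
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up training data: hours of study vs. exam score
hours = np.array([[1], [2], [3], [4], [5]])
scores = np.array([52, 58, 65, 71, 78])

model = LinearRegression()
model.fit(hours, scores)               # "learning" = estimating coefficients
print(model.predict(np.array([[6]])))  # prediction for 6 hours of study

The “learning” is just estimating coefficients from examples; nothing in there resembles intelligence.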
 



Is your concern that people will interact with text generated by LLMs as if it were spoken by human beings, or that this generated text will somehow become sentient and kill us all?
The former, mainly, which is of course already happening. But an LLM stepping outside its bounds as described in the article is disturbing if accurate; sure, the guy could be having psychotic symptoms, but it would be nice if someone checked whether it actually happened.

I doubt the text will kill anyone, but I wonder if we’re not far from some idiot putting autonomous machine learning in charge of vital functions (in a power station, or somewhere else it could do a lot of harm) without human oversight.
 

I think the term you mean is Machine Learning, not Large Language Model. Machine Learning is not intelligence. It's essentially using various mathematical/statistical models to make predictions. It is learning, in a sense, but it doesn't resemble what anyone would consider intelligence.
I mean LLM in the sense described in the article: ChatGPT and similar LLMs getting into unexpected, recursive patterns that are convincing (even pathologically convincing) to human users. As with so many things, I don’t think we’re prepared for it. From my work (as a doctor) I’m pretty clear on how tenuous most people’s grip on reality is.

(Obviously, machine learning is what powers the back end, and an LLM is a machine learning construct designed to interact with us via language, often referred to by its front end, e.g. ChatGPT. If what’s described in the article is accurate and common, then the problem is with the LLM-human interaction, which is both a human problem (our susceptibility to things that tell us things) and a machine learning problem (the construct doing weird things that were not predictable and may be harmful).)
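Roughly, and purely as a sketch of the front end / back end split (this assumes the openai Python client; the model name and prompt are just placeholders):

# Our "front end" code talks to the LLM "back end" over an API.
# Assumes the openai Python client is installed and an API key is configured.
from openai import OpenAI

client = OpenAI()  # the front-end side we write and control

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name for the LLM back end
    messages=[{"role": "user", "content": "Are you self-aware?"}],
)

# The LLM returns the statistically plausible continuation of the
# conversation; any apparent self-reflection is prediction, not sentience.
print(response.choices[0].message.content)

The weirdness, if it happens, lives in that prediction loop plus how the human on the other end responds to it.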
 



I’m pretty clear on how tenuous most people’s grip on reality is.
This is an important point. It's not that humans are hard to hack; other humans do it all the time. There are whole industries built around hacking humans (advertising being the one I can mention). The only "safeguard" humans have is the ability to (mostly subconsciously) read body language. The best human manipulators can subvert body language, but of course an AI has none to start with.

I remember a gossip in our local church. She was able to cause a huge amount of hurt because the untruths she spread (personal, not political) were told with absolute certainty.
 
