AI is going to hack us.

Marriage is an archaic institution that's been falling apart since the '70s, long before AI showed up. But that's what people do, right? They see something they don't like, identify it as a "social problem", then pin the blame for the problem on some new thing.

I remember when TV was getting blamed for all of society's ills. Alcohol was a media boogeyman for more than one generation. Hey! Remember when D&D was spreading Satanism like wildfire?

Still worth discussing, because "similar" does not equal "the same", but this issue strongly points away from the "hey you guys! Look what I found!" direction.
 

Anecdotal. I order fast food in the UK and I've had a few wrong orders over the past few years, ranging from missing / substituted items up to an entirely different meal.
Interesting. I haven't had anything except a minor missing item in very large orders (i.e. for 5+ people) for probably 5 years. And I order a really excessive amount of delivery. I'm probably jinxing the hell out of myself by saying this of course.

Orders for delivery from supermarkets, though, goddamn, I've had everything from them cancelling literally 60-70% of the items on the list (making the delivery entirely pointless) as "out of stock"* to demented substitutions, like replacing the ground coffee I'd ordered with coffee beans! If I had a grinder at that time (I do now), I wouldn't be ordering ground coffee, you maniacs! The same store later substituted ground coffee with instant coffee; I've just stopped ordering anything coffee-related from them at this point. I've also had multiple items simply go missing with no explanation.

* = One time I needed what I'd ordered sufficiently that I went down to the same exact store I'd ordered from (a Waitrose) and saw that, in fact, literally every single thing they'd claimed was "out of stock" was in stock! What a surprise! Somehow I don't believe that every single thing (and it was like 12 different things) was restocked in the hour and a bit between me getting the delivery and going down there, either.
 


AI is addictive and can exacerbate existing issues.
Whilst I get what the article is getting at, and it's certainly incredibly wordy and rather smugly written (despite complaining about smugness repeatedly), I would urge caution here, because in the 21st century we've seen increasing misuse of the term "addiction" for behaviours which aren't actually addictions, either physiologically or psychologically. I note that Lance Eliot immediately points to other "addictions" which are not actually real addictions (or even properly recognised psychiatric conditions) as part of his evidence here (i.e. "internet addiction" etc.), but in fact these tend to be individually-specific behaviours which are the result of other psychiatric or psychological problems. Notably, methods of treatment which work on real addictions don't function well (or even at all, in many cases) with these "addictions", but treatment for underlying psychiatric conditions (very often clinical depression) works extremely well. Treating them as "addictions" can therefore be actively harmful.

A good example of a false addiction is "porn addiction", which is remarkable in that it only seems to affect men in an immediate environment where strong religious beliefs militate against pornography, or where it seems like it would be a convenient defence against their having downloaded illegal material. It just never seems to impact anyone else!

Importantly I would point out three things:

1) Lance Eliot is not a medical doctor, and thus not a psychiatrist. Nor is his doctorate in psychology, so he's also not a psychologist. Nor does he have the slightest bit of expertise in the field of addiction in any other way. So his qualifications for these assertions are essentially the same as those of any other layman, like, say, most posters here.

2) This article is entirely based on him stacking a bunch of assumptions and essentially demanding that people agree with him in a slightly ill-tempered way (I admit there is an element of "let he who is without sin..." in me in particular pointing this out!). It's a relatively logical stack of assumptions, but they're not well-evidenced, and what evidence he does present is often weak in various ways. He also spends a truly demented amount of time in the article asking ChatGPT questions, which is just silly business.

3) In particular, this is not based on any kind of study, or study of studies. Which it really should be.

So all we really have is the equivalent of a lengthy forum rant or blog post, being published by Forbes. I mean, I guess that's what about 95% of columnists are, so...

(On the flip side, I did wonder if the entire article was a massive troll, because it's excessively long and repetitious and feels as if it was written by AI a lot of the time, whilst also featuring AI prominently. It's very funny when he's like "AI chatbot, do u agree that u are addictive?!?!" and the designed-to-be-agreeable chatbot agrees lol, and he seems to think this has evidentiary value. I guess someone had to prove that firing all the editors was a bad decision.)

What's my TLDR here?

That in an entirely colloquial sense, sure, you can call AI use "addictive", in the same way a TV show (or watching TV generally), videogames in general (as opposed to those with a gambling loop), or even running or cycling can be called "addictive".

But in a medical sense? In a psychiatric sense? In any more genuine sense? I would say that this article completely fails to prove that point (and indeed is kind of embarrassing).
 

And those are heavily regulated to protect society from their negative effects.
Because those have an actual, proven, studied, medical basis with deep and real research.

Whereas this does not.

It's not exactly complicated.

It's the same way we don't put heavy regulation on every single activity which some random punter with no medical expertise, who hasn't performed any studies, claims is "addictive" or "an addiction". Basically everything anyone does has been claimed to be "an addiction" by someone out there, often even an MD, sometimes even a psychiatrist, often because it would potentially benefit a client of theirs for this to be true, or just because it's flavour-of-the-month.

In this case, we have a guy with zero relevant qualifications (and no, being an "AI expert" isn't a relevant qualification here - indeed his suggestion that we use AI chatbots to treat addiction to AI chatbots suggests it may be disqualifying!) claiming something is an addiction in a rambling blog post of an article, half of which is just him asking ChatGPT stupid questions and reporting the stupid answers as if they were pearls of wisdom!

The only "medical"-seeming evidence he presents at all is this article (which is not clear is any kind of actual study, nor whether it is peer-reviewed or by whom):


But that's as fake as the rest of it. Why? Because none of the people involved are psychiatrists or even medical doctors. In fact they're a couple of business management professors! I can scarcely think of someone less qualified to make pronouncements about what's an addiction.
 

Because those have an actual, proven, studied, medical basis with deep and real research.
The relative amount of regulation these things have bears little relationship to the scientific evidence with regard to their level of harm.

And a scientific approach would be to err on the side of caution until more evidence is available.
 
