AI is addictive and can exacerbate existing issues.
Whilst I get what the article is driving at (and it's certainly incredibly wordy and rather smugly written, despite complaining about smugness repeatedly), I would urge caution here, because in the 21st century we've seen increasing misuse of the term "addiction" for behaviours which aren't actually addictions, either physiologically or psychologically. I note that Lance Eliot immediately points to other "addictions" which are not actually real addictions (or even properly recognised psychiatric conditions), such as "internet addiction", as part of his evidence here, but in fact these tend to be individually-specific behaviours which result from other psychiatric or psychological problems. Notably, treatment methods which work on real addictions don't function well (or at all, in many cases) with these "addictions", whereas treatment for the underlying psychiatric conditions (very often clinical depression) works extremely well. Therefore treating them as "addictions" can be actively harmful.

A good example of a false addiction is "porn addiction", which is remarkable in that it only seems to affect men in an immediate environment where strong religious beliefs militate against pornography, or where it would be a convenient defence against having downloaded illegal material. It just never seems to impact anyone else!
Importantly I would point out three things:
1) Lance Eliot is not a medical doctor, and thus not a psychiatrist. Nor is his doctorate in psychology, so he's also not a psychologist. Nor does he have the slightest bit of expertise in the field of addiction in any other capacity. So his qualifications for these assertions are essentially the same as any other layman's, like, say, most posters here.
2) This article is entirely based on him stacking up a bunch of assumptions and essentially demanding that people agree with him in a slightly ill-tempered way (I admit there's an element of "let he who is without sin..." in me in particular pointing this out!). It's a relatively logical stack of assumptions, but they're not well-evidenced, and what evidence he does present is often weak in various ways. He also spends a truly demented amount of time in the article asking ChatGPT questions, which is just silly business.
3) In particular, this is not based on any kind of study, or meta-analysis of studies, which it really should be.
So all we really have is the equivalent of a lengthy forum rant or blog post, being published by Forbes. I mean, I guess that's what about 95% of columnists produce, so...
(On the flip side, I did wonder if the entire article was a massive troll, because it's excessively long and repetitious, and a lot of it feels as if it was written by AI, whilst also featuring AI prominently. It's very funny when he's like "AI chatbot, do u agree that u are addictive?!?!" and the designed-to-be-agreeable chatbot agrees lol, and he seems to think this has evidentiary value. I guess someone had to prove that firing all the editors was a bad decision.)
What's my TLDR here?
That in an entirely colloquial sense, sure, you can call AI use "addictive", in the same way a TV show (or watching TV generally), videogames in general (as opposed to those with a gambling loop), or even running or cycling can be called "addictive".
But in a medical sense? In a psychiatric sense? In any more rigorous sense? I would say that this article completely fails to prove that point (and indeed is kind of embarrassing).