ChatGPT lies then gaslights reporter with fake transcript

Time will tell. We'll see where things are in a year. By then the picture should be clearer.

That's still a short time. Home access to the Internet and telematic services started in the 80s. People in the 80s probably wouldn't have bet the Internet would become their regular way of interacting with most public services and companies back then. Maybe with AI we're not in the 80s but in the 90s, but one year is still very short to assess a transformation. In 30 years, though, we might very well look back and say it was just a fad like beanie babies (analogy provided by ChatGPT), or something like the dotcom bubble: a few companies in the spotlight inflating and bursting while the technology ends up widespread and adopted everywhere (a discussion we'll probably be having through our personal muses from Eclipse Phase).
 


That's still a short time. Home access to the Internet and telematic services started in the 80s. People in the 80s probably wouldn't have bet the Internet would become their regular way of interacting with most public services and companies back then. Maybe with AI we're not in the 80s but in the 90s, but one year is still very short to assess a transformation.
Fair enough. How about 3 years? I want to put it in Google Calendar.
 

Fair enough. How about 3 years? I want to put it in Google Calendar.

Sorry, I was writing my answer while you were typing yours. I think if we're speaking of TwoSix's sea change, we should be speaking in decades. Five years just to assess acceptance, especially when it comes to public policy interests.

Some countries might be very interested in adapting LLMs to predict whether a person is harbouring un(country)an thoughts, in order to fight off un(country)an ideology and identify the enemy within. At that point I'm pretty sure funding will become available, even if private investors become reluctant to sell it to the government.
 


That's still a short time. Home access to the Internet and telematic services started in the 80s. People in the 80s probably wouldn't have bet the Internet would become their regular way of interacting with most public services and companies back then. Maybe with AI we're not in the 80s but in the 90s, but one year is still very short to assess a transformation.
AI is very, very old, older than personal computers. LLMs and diffusion models are but one branch of the technology. One that has shown some very useful applications, mostly in actual research environments, but that has also shown its limits.
 


AI is very, very old, older than personal computers. LLMs and diffusion models are but one branch of the technology. One that has shown some very useful applications, mostly in actual research environments, but that has also shown its limits.

I was basing my calendar on "home access for individual people" for the Internet, and not going back to the 60s because that was the equivalent of "early AI research". Same with AI applications: we're at the starting point where they're available to the general public if they express an interest.
 

AI is very, very old, older than personal computers. LLMs and diffusion models are but one branch of the technology. One that has shown some very useful applications, mostly in actual research environments, but that has also shown its limits.
Yeah, I still remember how trendy machine learning was in the tech world a decade-ish ago, when a bunch of people were swearing that ML was the basis of truly intelligent machines and every tech company was racing to jam ML features into their product so they could claim to be using "advanced AI and ML." After a few years, nobody in my workplace would even entertain a product pitch involving ML unless the sales staff could explain what the ML was actually necessary for. It speedran all the way to becoming a meaningless buzzword. That was called "AI" at the time, as was speech recognition tech. And before that, it was reverse image search, and before that it was Chess engines, and so forth. I've lost track of how many times we've been through the cycle of AI boom, bust, and "AI winter" until a new technology/technique is pioneered and the boom starts all over again. Like you said, even older than personal computers. IIRC Alan Turing was already thinking about AI before transistors existed.
 


[Reaction GIF: "Excuse Me" (One Chicago)]
 

I was basing my calendar on "home access for individual people" for the Internet, and not going back to the 60s because that was the equivalent of "early AI research". Same with AI applications: we're at the starting point where they're available to the general public if they express an interest.
Provided this development leads somewhere other than a technological dead end, right now we are at the "time-sharing" stage of personal computing: still decades away from the developments that will allow mass adoption, and at least one or two bubbles away from the crashes that will decimate the companies peddling it. At least the internet companies that survived the dotcom bust got to build assets they could leverage afterwards; the current AI companies will only have obsolete rigs to show for it once this goes away.
 

