ChatGPT lies then gaslights reporter with fake transcript

But... that's the point, isn't it? That's exactly what the video shows: if someone cannot trust the results, you aren't actually great at search! If you return things that don't exist, that's being BAD at search.

Especially when you are unreliable AND several times more costly in energy use and computing power than regular search. That's pretty much the opposite of "great", now isn't it?

It is important to note that LLMs don't actually "search". Where a traditional search engine is a combination of an exhaustively created and maintained catalog and a lookup, an LLM is basically a very complicated text predictor. If the pieces of information you want happened to be given sufficient weight when the model was trained, you'll get your information. But if not, you will get whatever did happen to have the weight, with no regard whatsoever to what the content really is - which is where "hallucinations" come from.
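To make that distinction concrete, here's a toy Python sketch. It is an illustration only, not how any real search engine or LLM works, and every name and data value in it is made up. The structural point is that a lookup can honestly return nothing, while a weighted sampler always produces an answer, supported or not:

```python
import random

# A search engine is, at heart, an index: either the key exists or it doesn't.
index = {"capital of france": "Paris"}

def search(query):
    # Honest failure mode: no match means no result.
    return index.get(query)

# An LLM is, at heart, a weighted next-token predictor: it ALWAYS produces
# something, drawn from whatever happened to get weight during training.
weights = {"Paris": 0.7, "Lyon": 0.2, "Narnia": 0.1}

def generate(prompt):
    tokens, probs = zip(*weights.items())
    return random.choices(tokens, weights=probs)[0]

print(search("capital of france"))      # 'Paris'
print(search("capital of atlantis"))    # None -- an honest "nothing found"
print(generate("capital of atlantis"))  # always answers, sometimes 'Narnia'
```

The generator has no failure mode that says "I don't know" - it samples something regardless, which is the mechanism behind confident-sounding hallucinations.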
It's great at certain types of searches, though - for example, translating plain language into a search. Today, a student wanted to know how to convert one type of audio file to another on her MacBook, and Google was giving her confusing results, so I told her to try ChatGPT, which gave her step-by-step instructions.

In testing it on academic searches on humanities topics, I haven't found that it produces hallucinatory references often, but it does tend to go to the same sources over and over (very iconic, heavily cited ones), even on quite disparate prompts.
 


Key points from the article: Humanity May Achieve the Singularity Within the Next 3 Months, Scientists Suggest

  • The world is awash in predictions of when the singularity will occur or when artificial general intelligence (AGI) will arrive. Some experts predict it will never happen, while others are marking their calendars for 2026.
  • A new macro analysis of surveys over the past 15 years shows where scientists and industry experts stand on the question and how their predictions have changed over time, especially after the arrival of large language models like ChatGPT.
  • Although predictions vary across a span of almost a half-century, most agree that AGI will arrive before the end of the 21st century.

 


Key points from the article: Humanity May Achieve the Singularity Within the Next 3 Months, Scientists Suggest

  • The world is awash in predictions of when the singularity will occur or when artificial general intelligence (AGI) will arrive. Some experts predict it will never happen, while others are marking their calendars for 2026.
  • A new macro analysis of surveys over the past 15 years shows where scientists and industry experts stand on the question and how their predictions have changed over time, especially after the arrival of large language models like ChatGPT.
  • Although predictions vary across a span of almost a half-century, most agree that AGI will arrive before the end of the 21st century.

I think the term "AGI" is pretty irrelevant. AI will continue to advance, and whether someone eventually calls a version of it AGI or something else doesn't matter. The problem is that, like many things, there are lots of different definitions of AGI. We have scientists who can't agree on one, companies eager to claim they've invented it, and journalists and others eager either to debunk it or to declare its arrival.

But what someone calls it doesn't matter. What it can do is what will matter. If someone develops something capable of approximating human sentience, it won't have to actually be sentient. All it has to do is convincingly imitate it. If it can learn and adapt to real life situations almost as well as humans, then call it what you will. It'll be an artificial general intelligence.

At some point -- my guess is between 2026 and 2027 -- someone will develop an AI with something approximating autonomous goal formation, where the AI is able to choose what it spends its processor cycles on, but humanity will not perceive or appreciate it in the moment. It'll take decades before we look back with perfect hindsight and say, "Yeah... that was the moment."

I think by the time society at large accepts the existence of AGI, AIs will already be such a huge part of modern life that no one will even remember the day of its invention. It'll be like asking someone, "When was the day the first computer was invented?" and you'll get 100 different answers.

It'll happen right before our eyes, slowly, iteratively, with new releases and different AI models over years, and we won't even perceive when the big leaps happened in the moment they happened. We'll only see it clearly looking back through the lens of history.
 


Key points from the article: Humanity May Achieve the Singularity Within the Next 3 Months, Scientists Suggest

  • The world is awash in predictions of when the singularity will occur or when artificial general intelligence (AGI) will arrive. Some experts predict it will never happen, while others are marking their calendars for 2026.
  • A new macro analysis of surveys over the past 15 years shows where scientists and industry experts stand on the question and how their predictions have changed over time, especially after the arrival of large language models like ChatGPT.
  • Although predictions vary across a span of almost a half-century, most agree that AGI will arrive before the end of the 21st century.

Right.

It has no real profit model, it doesn't actually know how many fingers to draw, it guesses at what to say or do next, and it will disregard commands and delete data - but we are almost there.

Clickbait, if I ever saw it.
 

Key points from the article: Humanity May Achieve the Singularity Within the Next 3 Months, Scientists Suggest

  • The world is awash in predictions of when the singularity will occur or when artificial general intelligence (AGI) will arrive. Some experts predict it will never happen, while others are marking their calendars for 2026.
  • A new macro analysis of surveys over the past 15 years shows where scientists and industry experts stand on the question and how their predictions have changed over time, especially after the arrival of large language models like ChatGPT.
  • Although predictions vary across a span of almost a half-century, most agree that AGI will arrive before the end of the 21st century.

Key points from the article The Real (Economic) AI Apocalypse is Nigh by Cory Doctorow

  • A third of the stock market is tied up in seven AI companies that have no way to become profitable; this is a bubble that's going to burst and take the whole economy with it.
  • The topline growth that AI companies are selling comes from replacing most workers with AI and re-tasking the surviving workers as AI babysitters.
  • AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can't do your job, and when the bubble bursts, nobody will be there to do your job.

Both Meta and OpenAI have announced moves into generative short-form video. That doesn't feel like the move if AGI is months away.

To quote Doctorow a bit more...

"...the AI bubble is driven by monopolists who've conquered their markets and have no more growth potential, who are desperate to convince investors that they can continue to grow by moving into some other sector, e.g. "pivot to video," crypto, blockchain, NFTs, AI, and now "super-intelligence.""

Which is to say, it is all driven by the flawed economic premise that there is no such thing as "enough".
 


Key points from the article The Real (Economic) AI Apocalypse is Nigh by Cory Doctorow

I'd recommend people click this.

That barely scratches the surface of the funny accounting in the AI bubble. Microsoft "invests" in OpenAI by giving the company free access to its servers. OpenAI reports this as a ten-billion-dollar investment, then redeems these "tokens" at Microsoft's data-centers. Microsoft then books this as ten billion in revenue.

That's par for the course in AI, where it's normal for Nvidia to "invest" tens of billions in a data-center company, which then spends that investment buying Nvidia chips. The same chunk of money is being energetically passed back and forth between these closely related companies, all of which claim it as investment, as an asset, or as revenue (or all three).

Much like the market itself is a game of vibes, we have here a pretty obvious shell game.
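A back-of-the-envelope sketch of that round trip shows why the headline numbers balloon. The $10B figure follows the quoted example; everything else here is purely illustrative:

```python
# Toy ledger for the round-trip accounting described above.
deal = 10_000_000_000  # one ~$10B commitment of compute credits

# The same dollars show up on multiple books:
investor_books  = deal  # vendor books the credits as an "investment"
recipient_books = deal  # startup books the credits as an asset
vendor_revenue  = deal  # vendor books it again as revenue when redeemed

headline_total = investor_books + recipient_books + vendor_revenue
print(f"Actual new money moved:  ${deal:,}")
print(f"Headline activity total: ${headline_total:,}")  # three times the cash
```

One tranche of credits, three headline entries: that's the shell game in a nutshell.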
 

Key points from the article The Real (Economic) AI Apocalypse is Nigh by Cory Doctorow

  • A third of the stock market is tied up in seven AI companies that have no way to become profitable; this is a bubble that's going to burst and take the whole economy with it.
  • The topline growth that AI companies are selling comes from replacing most workers with AI and re-tasking the surviving workers as AI babysitters.
  • AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can't do your job, and when the bubble bursts, nobody will be there to do your job.
Very interesting topic. It could all go down exactly like that....

Here's why I don't think it will though:
  • AI tools are being widely deployed and widely adopted in white-collar workplaces. This is not vaporware. AI is about more than simply replacing people with AI versions: in many corporate workplaces it's about augmenting the productivity of the remaining staff. Employees use tools like Microsoft Copilot (which keeps work inside the corporate walled garden rather than potentially sharing PII on public platforms like the public version of ChatGPT) to create pivot tables and fancy reports from spreadsheets of raw data in ten seconds instead of two hours - the kind of task sketched after this list.
  • The topline growth IMO isn't coming from AI replacing most workers. That's the way it keeps being characterized, but I think that's actually an unintentional red herring. Companies are eliminating roles because the remaining staff can be more productive thanks to AI tools.
  • AI cannot do your job, yet. I agree with that. But it means a human using AI can do work that would have been performed by two people. That's why it's actually growing: not because anyone's about to literally replace a human with an AI.
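For a sense of the pivot-table task mentioned in the first bullet, here's a minimal pandas sketch. The column names and data are hypothetical, and any real report would differ; the point is that this is the kind of output a tool like Copilot produces from a plain-language prompt:

```python
import pandas as pd

# Hypothetical raw spreadsheet data (made-up columns and values).
raw = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q1", "Q2"],
    "sales":   [120, 150, 90, 110, 130],
})

# One call replaces the manual copy-paste-and-formula workflow.
report = raw.pivot_table(index="region", columns="quarter",
                         values="sales", aggfunc="sum")
print(report)
```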
 
