There are a lot of doomsayers out there; fearmongering sells. Let's just say that I'm skeptical that AI is going to be more of a threat to humanity than other humans already are.
lol... chat bots have been passing the Turing Test for decades. We still don't have anything like actual AI. We're not even heading in that direction in any real or meaningful sense. The success of the crappy trend-copying algorithms we have now and pretend are making art (success at making tech execs money, not success at doing anything useful or worthwhile) is already stifling real research into actual AI. Anyone who's interested in the concept of AI for any purpose other than stealing art should really be the ones shouting the loudest about the not-actual-AI that's getting all the press these days.
I expect computers to pass the Turing Test in the year 2025.
I have had a feeling about this for almost twenty years now. I was influenced by Kurzweil's book, The Age of Spiritual Machines, which I read in the early 00s.
To be fair, my prediction is plus-or-minus a year or two. Kurzweil himself wobbled about this date plus-or-minus five years.
But I suspect the Turing Test will be passed next year, toward the end of the year.
An AI that passes the Turing Test can genuinely function as a DM.
Sadly, people have always been good at causing harm to others, oftentimes inadvertently. I just don't see AI being any more dangerous than a number of other technologies or social trends.

What can do more physical damage, a machine, or a human?
What can process data faster, in larger amounts, a computer program, or a human?
The question isn't whether humans have the capacity to do harm. We clearly do, in infinite ways, and some even think their harm is kindness!
No, the issue is that no tool in a connected world will have the potential to do harm faster than an eventual AI.
I mean, fair enough. Think bigger, I suppose.
I certainly don't see significant broad-based harm to humanity from a toy company's utilization of it.
Geothermal energy is nowhere near as easy as you think. The idea is to dig down close enough to the heat of the lava, not to situate in a gaseous vent.
Corrosion and gases are still an issue.
The surprise is, nations don't seem to be trying to get deep geothermal energy. We are literally floating on a pool of virtually infinite energy.
You know, the optimist in me really wants to believe that you are right here, and that AI could one day do the math that proves (as I have always believed) that short-term gain always leads to long-term loss, and that everyone loses under our current greed-first mentality (including those at the top; they just gain more in the short term, which is what they care about now).

I agree we will see abusive uses of AI in the short term.
But in the long term, AI can create models to predict the likely outcomes of short-sighted business tactics. We can sense intuitively that runaway greed is a bad idea, and we can see real examples where a mainly middle-class society enjoys more happiness. But an AI model would be able to spell out, in detailed consequences, why runaway greed is stupid.
Like I said, I am an optimist about AI. But also a realist. We need to encourage the compassionate uses, while minimizing the noncompassionate uses.
3) Having them act on that information

By feeding it new biased information until it produces a result that they want to hear?