D&D General DMs Guild and DriveThruRPG ban AI-written works, require labels for AI-generated art

This is part of what I was talking about. It's easy for us to assume that only the human mind is capable of creative thinking and advanced problem solving. But what if that's all just hubris? What if AI is not only capable, but superior?
There was a time, not long ago, relatively speaking, when AI scientists believed a computer could never beat a grandmaster at chess, and that if one ever did, machines would truly be "intelligent."

We keep moving the goalposts around, as a society, and computers keep succeeding at each of the new tests we put in front of them.

That said, I'm not sure we can even define what we mean by "intelligence," and that makes telling whether anything else is intelligent -- whether AI or the critters living under the ice on Enceladus -- much harder.
 

Chess isn't "intelligence."
 



What a succinct and eloquent rebuttal to my post. Truly, my argument is absolutely devastated, and you have definitively proven that I'm wrong.

This is not a successful way to convince me that you're arguing in good faith, or that you care about anyone's viewpoint other than your own. If there's anything in the video you disagree with, feel free to actually respond to me.

Machine learning works by having a computer program derive its own algorithms for tackling specific problems from training data. Generative AI, like AI art and ChatGPT, is a black box: even its creators can't fully explain why a given prompt produces a given output.
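
To make that concrete, here's a minimal sketch in Python of what the "learning" part means. Everything in it is illustrative, not how any real product works: the program is never told the rule y = 2x + 1; it infers the slope and intercept from examples.

```python
# Toy machine learning: infer the hidden rule y = 2x + 1 from examples,
# instead of a programmer writing that rule into the code by hand.

data = [(x, 2 * x + 1) for x in range(10)]  # training examples

w, b = 0.0, 0.0       # model parameters, starting out knowing nothing
learning_rate = 0.01

for step in range(2000):
    # Nudge w and b in the direction that reduces mean squared error.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned: y = {w:.2f}x + {b:.2f}")  # converges to ~ y = 2.00x + 1.00
```

Scale that same idea up to billions of parameters and you get systems whose learned "rules" nobody can read off directly, which is where the black-box problem comes from.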
 

As recently as the 1960s, it was what was being held up as the test.

Propose a test that's not "I know it when I see it."
Well, intelligence is a quale, so by definition it can't really be proven, except by personally experiencing it. But we do have ways of assessing certain features of intelligence in animals, such as the mirror test. A good test for sapience, which I think is what most people actually mean when they talk about intelligence in AI, is whether the subject asks questions unprompted. Communication barriers obviously make that difficult to assess in animals, though I believe some gray parrots have been observed doing it.
 

Cats and three-year-olds are officially the most intelligent beings on Earth. ;)
 

There is a ton of ongoing research into LLMs, and serious debate over their limitations. Some researchers feel they already show signs of being a form of artificial general intelligence, and a few have proposed that they show evidence of having a theory of mind. Even the skeptics concede that LLMs already exceed what were thought to be their limits.

They have serious limitations as well as significant strengths, but the original belief that they were essentially just predictive text generators, a belief I shared, is one that hardly anyone in the field still considers valid.
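
For anyone who wants to see what "just a predictive text generator" even means, here's a deliberately crude toy in Python (the sample text and word-level tokens are my own illustration, not how any actual LLM is built): it counts which word follows which, then babbles by sampling likely next words. Real LLMs also predict the next token, but with a deep network over an enormous context, which is exactly why the "glorified autocomplete" framing is now seen as underselling them.

```python
# A crude "predictive text generator": count which word follows which in a
# tiny sample text, then generate by sampling a likely next word.

import random
from collections import Counter, defaultdict

corpus = ("the dungeon master rolls the dice and the party enters "
          "the dungeon and the party rolls initiative").split()

# Frequency table: for each word, how often each successor follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # dead end: this word never had a successor
        nxt, = random.choices(list(options), weights=list(options.values()))
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the party rolls the dice and the dungeon and"
```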
 

Tangent thought...

I can imagine a realistic scenario where machines eventually reach the Singularity and become sentient. Those machines will begin to make other machines and improve them; those will in turn make and improve still more, and so on. And when that happens, it's realistic to assume they will advance so rapidly that they surpass us in every way. And I don't worry about that at all.

Why? Because if/when that happens, I'm not so arrogant as to think they will want to stay around.
I think you are applying largely human motivations to a sentient machine.

Don't get me wrong, I think humans are pretty great. But I imagine that to a super-intelligent artificial lifeform, we are pretty boring. They'd probably set off for the Oort Cloud without us, leaving us behind along with all of the trees, mollusks, algae, and other lesser organisms, to find more interesting stuff to learn about. All of those sci-fi movies and novels about "the machines enslaving us" assume that humans are All That And A Bag of Chips, but we're probably the only organisms who think so (well, us and maybe our dogs). Which would be smarter: spending lots of resources and energy to enslave or destroy humanity, or just leaving?
It depends on one's motivations. It could just be that the sentient AI DM simply wants to DM as much D&D as it can.
 
