D&D General DMs Guild and DriveThruRPG ban AI-written works, require labels for AI-generated art

If you pay any attention to neuroscience at all, you will see that what qualifies as "intelligence" from a human perspective is something that computers -- at least as we build them now -- simply cannot do.

I apologize if I was inordinately dismissive, but neither the highly mathematical process of chess nor the LLM systems behind ChatGPT actually look like human intelligence. That isn't how our brains work. Is it possible for computer brains to get "smart"? Maybe, but none of the things we call "artificial intelligence" actually resemble human intelligence much at all, scientifically speaking.

If you are interested in this stuff and want a really strong overview, I recommend David Eagleman's Incognito and Livewired. What makes a human brain human is fascinatingly complex and no, our engineers are nowhere near replicating it.
 


Chess isn't "intelligence."
Quoting a character in a light novel: chess at its core is tick tack toe with way more permutations, but still no more challenging at heart. For us humans it is challenging, but it can be solved by basically a calculator.
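To make the "calculator" point concrete, here is a rough Python sketch (mine, not from the novel): exhaustive minimax fully solves tic-tac-toe in a blink. Chess engines use the same basic idea, just with a game tree far too large to enumerate, so they prune and approximate instead.

```python
# Exhaustive minimax on tic-tac-toe: small enough to solve outright.
# Chess is the same algorithm in principle, with an astronomically larger tree.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for 'X': +1 win, 0 draw, -1 loss, assuming perfect play."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(cell is not None for cell in board):
        return 0  # board full, no winner: draw
    moves = [i for i, cell in enumerate(board) if cell is None]
    values = []
    for m in moves:
        board[m] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None
    return max(values) if player == 'X' else min(values)

print(minimax([None] * 9, 'X'))  # 0: perfect play from both sides is a draw
```

Run from the empty board it prints 0, i.e. perfect play is a draw. The "way more permutations" part is exactly why the same brute force never finishes for chess.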
 

This has been an interesting discussion, one of the better ones on Enworld of late. I think I can break down the general discourse into a few key topics.

Is it Ethical for AI to replace human jobs?

On this front, our history has an answer for us...and the answer is a definitive yes. We have a long history of machines replacing human labor: work that was once done by thousands of humans is now done by ten or so machines. AI will likely be no different.

The current pushback we are seeing on many of these fronts is classic; we see it every time there is a technological disruption in society. People clamor to protect existing jobs and the existing way of life. These efforts have historically failed, partly due to economics...but more often due to generational changeover. For us adults, AIs are a scary thing. For the children being born now, who will have such tech their entire lives...it will seem unethical not to have it. The idea of humans having to slave away for hours on end at keyboards to generate a simple 200-page novel...I mean, how ridiculous, how crazy! Or an artist literally scratching at paper for a week to produce just a single work of art...can you imagine?

Is it legal for AI to use artist work?
So now we dig a little deeper. A common criticism of AI right now is that the AI is "stealing" artists' work and then making money off of it. So is that true?

Ultimately we have to use the laws that are on the books right now. Almost certainly a ton of new legislation will be generated by the AI revolution, and who's to say what the future of AI law will look like. But as for right now, we have our existing copyright laws to fall back on.

To my mind the arguments focus on the concept of "use in teaching". If, for example, a human is able to access your work without payment, study that work, and then create their own derivative work and sell it for money without compensating you...then likely an AI will be able to do that as well, and likely that is what the argument will be. This might even bypass clauses that prohibit the use of a work for "commercial purposes", because again I am not selling the work nor am I copying it to sell. I am using it as a training and teaching tool, which may (or may not, depending on the language) be legally separate.

On the flip side, should it be shown that the AI generators were using private images kept behind paywalls or security that were never supposed to be accessed, those AI creators could be in for a wild ride of lawsuits.

Now, what will be interesting in the future is whether legislation is created to force a "white list" for AI training. That is, unless you give permission for your work to be used in AI training (most likely via contracted work), AIs are forbidden to train on that work.
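As a toy illustration of what such a rule would mean in practice for a training pipeline (all names and IDs here are hypothetical, not any real system), it would amount to a permission filter in front of the data:

```python
# Hypothetical opt-in "white list": only works whose creators granted
# permission make it into the training set. Invented for illustration.

allow_list = {"artist_0042", "studio_acme"}  # creators who opted in (made-up IDs)

corpus = [
    {"creator": "artist_0042", "work": "dragon_sketch.png"},
    {"creator": "artist_9001", "work": "castle_painting.png"},  # no permission given
    {"creator": "studio_acme", "work": "tavern_scene.png"},
]

# Keep only opted-in works for training.
training_set = [item for item in corpus if item["creator"] in allow_list]
print([item["work"] for item in training_set])
# ['dragon_sketch.png', 'tavern_scene.png']
```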

However, the law has often been notoriously slow compared to the pace of technology, so it will likely be several years before we see some real changes here. Meanwhile it will be up to the interpretations of courts.

Why the UBI is needed (this time automation IS different)

The other notion we hear is the classic refrain...AI will eliminate certain jobs, which will be replaced with other jobs. And that will almost certainly be true; we are already seeing new "AI focused" job descriptions popping up.

Historically, when machinery replaced factory work, we saw a shift to the service economy, and nowadays a shift to the IT economy. History would then suggest a similar thing will happen this time with AI.

However, I would argue this time is different. While it's true none of us can predict what the jobs of the future will be, I can say with high confidence those jobs will involve a combination of physical and intellectual components (you could also argue social as an offshoot of intellectual, but it's debatable whether that is truly different). Machines have largely replaced the physical components of many jobs, and machines with high-precision motor skills and coordination are looking to finish the job. Further, AI is starting to creep into the intellectual space. If I can make an AI that can learn at a similar or superior speed to a human...economics has shown us that machines tend to beat out human labor most of the time (there are rare exceptions, but they are exceptions).

So there will be new jobs in the future that we cannot predict today...but why wouldn't an AI do them? Perhaps AIs have to be specially trained on the new job, so humans have to do it for a while until the AI can get up to speed. But we aren't talking career paths here; maybe it's a year, maybe just a few months...but then the AI will be trained and the human will be replaced. Jobs would have to be created at an ever-increasing rate to keep humans employed ahead of the ever-learning AI.

Beyond that, we are already dealing with the skill gap. There is a serious belief that systems today are poised to disrupt fast food, waiter service, and truck driving. These are jobs that don't have a large educational requirement. As those jobs are removed, the jobs of the future (working hand in hand with AI) would likely become more and more technical and specialized. That requires training and education, and there will always be a portion of the population who are unable to acquire the education and skills needed. But even if the entire population could do that...would we really need them to? Again, a job training an AI could be done by a skeleton crew of humans, and no company would want to invest in a massive labor pool when the expectation is that the pool will be replaced as soon as the AI is ready. So even with new jobs in the future, those jobs probably won't provide employment for the equivalent number of people.

All of this comes to a sobering conclusion...unemployment is likely to increase in the future, perhaps markedly. As machines take over more of the labor, humans will need a way to maintain their livelihood. Whether that's a UBI or some other type of system, this is going to come to a head in the next 20 years as the labor market continues this latest disruption.
 

Quoting a character in a light novel: chess at its core is tick tack toe with way more permutations, but still no more challenging at heart. For us humans it is challenging, but it can be solved by basically a calculator.
"Seasoning", Hal Clement, 1978. A decade before the phrase 'light novel' was coined, written for adults. Down to the misspelling of tic-tac-toe.
 

There is a ton of ongoing research into LLMs, and serious debate over their limitations. Some researchers feel that they already show signs of being a form of generalized artificial intelligence, and a few have proposed that they show evidence of having a theory of mind. Even the skeptics concede that LLMs already exceed what were thought to be their parameters.
All hype. Give it a few years, maybe 10-20 tops, and we will understand the mechanisms at play for in-context learning, which is the phenomenon I assume you are referencing.

On in-context learning (extremely simplified): it's basically giving the AI a categorization list, then giving it a word not on the list and asking it to categorize it. Behavior is as expected if the categorizations on the list are semantically relevant. However, flip the categorizations and the LLM will pick up on that as well, which probably shouldn't be possible if it's only looking at word-association probabilities for the word you gave it.
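Roughly, the flipped-label setup looks like this; a toy Python sketch that only builds the two prompts you would compare (no model is called, and the wording is mine rather than from any particular paper):

```python
# Build a normal-label prompt and a flipped-label prompt for the same
# in-context classification task, so the two behaviors can be compared.

examples = [
    ("awful", "negative"),
    ("terrible", "negative"),
    ("wonderful", "positive"),
    ("delightful", "positive"),
]

def build_prompt(pairs, query, flip=False):
    flip_map = {"negative": "positive", "positive": "negative"}
    lines = []
    for word, label in pairs:
        label = flip_map[label] if flip else label
        lines.append(f"{word} -> {label}")
    lines.append(f"{query} -> ")
    return "\n".join(lines)

print(build_prompt(examples, "fantastic"))             # normal labels
print(build_prompt(examples, "fantastic", flip=True))  # flipped labels
# A model leaning only on prior word associations would still answer
# "positive" for the flipped prompt; the interesting finding is that
# larger models often follow the flipped mapping instead.
```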

Until it's proven or stated otherwise, I would assume the base process is actually different. Possibly the LLM doesn't actually care about the word you used to categorize something. Instead it treats each categorization as a variable (an unknown word) and compares it to its data model to find the most likely actual word that maps to what it's treating as an unknown word. Then it just compares the test word to see which of those actual words is most associated with it. As a final step, it uses the known-to-unknown word map it previously created to translate the final output back to you.
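To make that hypothesis concrete, here is a toy sketch of the "label as variable" idea. The hard-coded association table is just a stand-in for whatever word associations the model has actually learned; this is my speculation, not how any real LLM is known to work.

```python
# Hypothesis sketch: treat each label purely as a placeholder, infer which
# "real" category it stands for from the examples, classify the query against
# those real categories, then translate back to the placeholder label.

# Stand-in "world knowledge": which words go with which real category.
associations = {
    "negative": {"awful", "terrible", "dreadful"},
    "positive": {"wonderful", "delightful", "fantastic"},
}

def infer_label_map(pairs):
    """Map each placeholder label to the real category its example words fit best."""
    label_map = {}
    for placeholder in {label for _, label in pairs}:
        words = {w for w, label in pairs if label == placeholder}
        best = max(associations, key=lambda cat: len(words & associations[cat]))
        label_map[placeholder] = best
    return label_map

def classify(pairs, query):
    label_map = infer_label_map(pairs)             # placeholder -> real category
    reverse = {real: ph for ph, real in label_map.items()}
    # Pick the real category the query word is most associated with (toy lookup).
    best_real = max(associations, key=lambda cat: int(query in associations[cat]))
    return reverse[best_real]                      # translate back to the placeholder

flipped = [("awful", "positive"), ("terrible", "positive"),
           ("wonderful", "negative"), ("delightful", "negative")]
print(classify(flipped, "fantastic"))  # "negative" -- follows the flipped mapping
```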

And perhaps there's some threshold for when it does one vs. the other.

Anyways, the point is that in-context learning need not indicate generalized intelligence, theory of mind, or actually exceeding their parameters.

They have some strong limitations, as well as significant strengths, but the original belief that they were essentially just predictive text generators, which I shared, is no longer considered valid by just about anyone.
Or perhaps a predictive text generator is actually an even more powerful tool than we initially realized.
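For a minimal sense of what "predictive text generator" literally means, here's a sketch: a bigram model that greedily picks the most frequent next word. LLMs optimize the same next-token objective, just with vastly more context and parameters, which is exactly why the tool may be more powerful than it sounds. (The corpus and output here are made up for illustration.)

```python
# Tiny bigram "predictive text generator": count next-word frequencies,
# then generate by always taking the most common continuation.

from collections import Counter, defaultdict

corpus = "the dragon breathes fire . the dragon guards gold . the knight breathes hard .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in counts:
            break
        word = counts[word].most_common(1)[0][0]  # greedy next-word choice
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dragon breathes fire ."
```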
 


"Seasoning", Hal Clement, 1978. A decade before the phrase 'light novel' was coined, written for adults. Down to the misspelling of tic-tac-toe.
Happy accident; I was paraphrasing Shiro from No Game No Life. Though perhaps that author was quoting Hal Clement?
 

All hype. Give it a few years, maybe 10-20 tops, and we will understand the mechanisms at play for in-context learning, which is the phenomenon I assume you are referencing.

On in-context learning (extremely simplified): it's basically giving the AI a categorization list, then giving it a word not on the list and asking it to categorize it. Behavior is as expected if the categorizations on the list are semantically relevant. However, flip the categorizations and the LLM will pick up on that as well, which probably shouldn't be possible if it's only looking at word-association probabilities for the word you gave it.

Until it's proven or stated otherwise, I would assume the base process is actually different. Possibly the LLM doesn't actually care about the word you used to categorize something. Instead it treats each categorization as a variable (an unknown word) and compares it to its data model to find the most likely actual word that maps to what it's treating as an unknown word. Then it just compares the test word to see which of those actual words is most associated with it. As a final step, it uses the known-to-unknown word map it previously created to translate the final output back to you.

And perhaps there's some threshold for when it does one vs. the other.

Anyways, the point is that in-context learning need not indicate generalized intelligence, theory of mind, or actually exceeding their parameters.


Or perhaps a predictive text generator is actually an even more powerful tool than we initially realized.
You seem much more confident in your beliefs than the experts who are currently studying LLMs.
 


