D&D General DMs Guild and DriveThruRPG ban AI-written works, require labels for AI-generated art



Or perhaps a predictive text generator is actually an even more powerful tool than we initially realized.
I mean, consider some of the crazy things that bees or ants can do: all of the things they can recognize, the communication, pattern recognition, navigation, etc. And their brains are TINY in comparison.

One sobering answer may be that intelligence isn't as fancy and special as we like to think it is.
 

I'm not sure if you would disagree with any point in the video, but it is an explanation of the process of creating/training LLMs.
Okay. I watched the video.

Humans do not create Large Language Models.
I think there's a bit of nuance. A language model is just a probability distribution over sequences of words. For an LLM, a human never went in and assigned any weights to that probability distribution; instead we programmed an algorithm to do that based on the large dataset we fed it. So in that sense it wasn't created by a human. But the same could really be said about anything produced by any computer algorithm. So in the broader sense, it's meant to sound like a grand claim and be technically true, but it's a fairly typical claim when dealing with any computer-generated output.
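To make that concrete, here's a toy "language model" in Python. This is purely my own illustrative sketch (a bigram counter, nothing like the scale or architecture of a real LLM, and the corpus is made up), but it shows the key point: no human types in the probabilities; the counting algorithm derives them from whatever corpus we feed it.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the dragon hoards gold the dragon guards the lair".split()

# "Training": tally which word follows which. No human assigns weights.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# The resulting model is literally a probability distribution over next words.
model = {
    prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
    for prev, nxts in counts.items()
}

print(model["the"])  # {'dragon': 0.666..., 'lair': 0.333...}
```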

We program bots that program them based on a series of "tests". We give them a task to do, tell a creator bot to make bots for tackling that problem, and eventually through trial and error, get a Large Language Model that has somehow figured out how to solve the task we asked it to do.
Not quite accurate. A language model is just a probability distribution over sequences of words; the task it is used to solve is predicting the next word (or words).
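For example, given a next-word distribution, "solving the task" is just picking from it, or sampling from it. The numbers below are invented for illustration:

```python
import random

# Hypothetical next-word distribution for some context like "the dragon".
next_word_probs = {"hoards": 0.5, "guards": 0.3, "sleeps": 0.2}

# Greedy prediction: take the single most probable next word.
print(max(next_word_probs, key=next_word_probs.get))  # -> hoards

# Or sample from the distribution for more varied output.
words, probs = zip(*next_word_probs.items())
print(random.choices(words, weights=probs, k=1)[0])
```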

ChatGPT is obviously not sentient, but we also do not know the inner workings/logic that it uses to generate text.
Sure. But while we don't know the precise inner workings, with some knowledge of computer science and a basic understanding of LLMs we can certainly make educated guesses.

Due to the black-box nature of the logic these AIs use to solve the problems we give them, it would be very difficult to tell if an AI developed in this way ever did "evolve" into something more than just a text generator that was decently good at pretending to be a person.
At a high level we do 'know' the logic they use: a probability distribution over words is created by an algorithm reading from a large dataset. The specifics of that algorithm are a mystery, but not because of emergent properties; it's simply not public information.

Then once we have the model, the next step in the process is how the response algorithm is programmed to respond to prompts based on that probability distribution. Again, a black box, because we aren't privy to the precise algorithm being used.
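For what it's worth, the general shape of that response step is roughly the loop below. This is a deliberately crude sketch with an invented probability table; the actual sampling strategy (temperature, top-k, and so on) is exactly the part of the algorithm we aren't privy to.

```python
import random

# Invented next-word table standing in for the trained model.
model = {
    "describe": {"the": 1.0},
    "the": {"dragon": 0.7, "lair": 0.3},
    "dragon": {"guards": 0.6, "sleeps": 0.4},
    "guards": {"the": 1.0},
    "sleeps": {},
    "lair": {},
}

def respond(prompt: str, max_words: int = 6) -> str:
    out = prompt.split()
    for _ in range(max_words):
        dist = model.get(out[-1], {})
        if not dist:  # no known continuation: stop generating
            break
        words, probs = zip(*dist.items())
        out.append(random.choices(words, weights=probs, k=1)[0])
    return " ".join(out)

print(respond("describe"))  # e.g. "describe the dragon guards the lair"
```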

IMO there's a lot of demystification that needs to happen around LLM-based AIs.
 

I edited my last post. I misunderstood which tangent of the discussion this was and originally thought you were referring to the video I had posted as the one that was "clickbait". Apologies for the misunderstanding.
 

Ah. Understood. Yours was a fun video, and not overly technical, so accessible to many. I won't say it was inaccurate, but I could see places in it where people might come away with some misleading ideas.
 

It will be interesting to see how closing off this method of distribution will affect publishers who want to use these tools. Will they just lie about it? Will they distribute independently, or through channels that don't add the restriction, like Patreon and Kickstarter? Or will the systems themselves become more accessible directly to consumers, rather than through published products?
 



I think AI art is amazing and would happily use it; in many cases, in my opinion, a human couldn't duplicate what it does in anything like the time it can churn it out.

Not least, Sci-Fi Pantheon on the DMs Guild uses a lot of AI art, which in my mind is an invaluable tool for those wanting to minimise costs.
 

