D&D General: Can ChatGPT create a campaign setting?

Clint_L

Hero
Do I own this?

Describe Stankwater, a small swamp village with an alligator problem, for a D&D setting

Stankwater is a small, isolated village situated in the heart of a vast swamp. The village is known for its lush vegetation, teeming wildlife, and, most notably, its alligator problem.

The villagers of Stankwater live in wooden huts raised off the ground on stilts, and their main source of income is fishing and hunting. The swamp surrounding the village is home to an abundance of alligators, which frequently attack the villagers and their livestock. This has made life difficult for the inhabitants of Stankwater, and they are constantly on the lookout for adventurers who are willing to help them rid the swamp of its dangerous reptiles.
Despite the alligator problem, the people of Stankwater are a close-knit and hospitable community. They are always willing to offer aid and support to those who are willing to help them, and they are known for their delicious Cajun-style cooking.

Visitors to Stankwater should be cautious when entering the swamp, as alligators are not the only danger lurking in the murky waters. There are also rumors of a powerful swamp witch who dwells deep in the heart of the marsh, and who is said to be able to control the alligators.

Adventurers who are brave enough to take on the alligator problem will find themselves well-rewarded by the grateful villagers of Stankwater. With their help, the village could finally be free of its dangerous reptiles, and the swamp could once again be a place of peace and prosperity.


Or what about this, after 5 iterations?

The Gloom Bog is a treacherous and dark swamp, home to all manner of dangerous creatures, including rumored powerful swamp witches. Despite the danger, the villagers of Stankwater have learned to make a living by fishing and hunting the teeming wildlife.

Stankwater is a small, isolated village, known for its close-knit and hospitable community. The village is led by a female goblin named Grummla, who rules with a firm but fair hand. Grummla is a shrewd and cunning leader, respected by the villagers for her ability to navigate the dangerous swamp and protect them from its inhabitants. The villagers live in wooden huts raised off the ground on stilts, and they are always willing to offer aid and support to those who are willing to help them rid the swamp of its dangerous inhabitants, such as swamp hags, will-o'-wisps, and giant mosquitoes. Visitors to Stankwater should be cautious, but well-rewarded, as Grummla and the villagers are known for their hospitality and aid to those who would help protect their swamp home.


At what point does the text become mine?
 


joluoto

Villager
Why not?

It seems to me the question is not whether AI is used, but how much and in what capacity. And these are very pertinent questions that are being heavily litigated.

For example, let's say I mostly write a campaign guide, but use AI to fill in some of the basic, repetitive stuff. Can I copyright that? Okay, what if I write a campaign guide, and I write quite a bit of it, but for the gazetteer section I just create the place names and a few details (e.g. Stankwater, a small swamp village with an alligator problem) and get the AI to fill in the details. Can I copyright that?

With other AI-assisted art, the AI is basically doing what I described above: the artists are making the key decisions, but much of what you actually see on screen is being filled in by AI.

It seems to me that the question over AI and copyright, in writing or any other art form, is likely to come down to what degree of unique creative expression is required for an artist to claim a work as their own. The question won't be about whether the use of AI automatically excludes a written work from copyright; it'll be about where the line should be drawn.

I suspect much relevant litigation will come from the music industry, where many court cases have explored questions of originality and ownership (e.g. sampling cases).
The more AI you use, the less of the result you can copyright, because it's "human creativity" that is copyrightable. If you just use AI to fill in some small parts, it's probably copyrightable, since it's still mainly based on your creativity. But the less human creativity there is in a work, the harder it will be to get a valid copyright. This can of course vary depending on which country's copyright law we are talking about. So your analysis that the debate will be over where the line should be drawn is probably correct.
 


Andvari

Hero
The more AI you use, the less of the result you can copyright, because it's "human creativity" that is copyrightable. If you just use AI to fill in some small parts, it's probably copyrightable, since it's still mainly based on your creativity. But the less human creativity there is in a work, the harder it will be to get a valid copyright. This can of course vary depending on which country's copyright law we are talking about. So your analysis that the debate will be over where the line should be drawn is probably correct.
Isn't that kind of moot, considering they'll have no idea how much is human and how much is AI? You could set up a process where using AI requires the tool to log everything you do, and the copyright service would then have a tool to compare the logs against your text for matches (a rough sketch of that comparison step follows below). But then the author can just use another tool to go through the writing and rearrange sentence structures to prevent matching, and you end up in the same place, only with a lot of wasted time and effort in the middle.
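
For what it's worth, here is what that comparison step might look like, using Python's standard difflib. The 0.8 threshold is an arbitrary assumption on my part, not anything a real copyright service uses, and the point is exactly the loophole above: even light rewording drops the similarity score.

```python
# Rough sketch of a log-vs-manuscript comparison. The 0.8 threshold
# is an arbitrary assumption, not a real copyright-office rule.
from difflib import SequenceMatcher

logged_ai_output = (
    "The villagers of Stankwater live in wooden huts "
    "raised off the ground on stilts."
)
verbatim = logged_ai_output
reworded = (
    "Perched on stilts above the muck, Stankwater's huts "
    "keep the fisherfolk clear of the wet ground."
)

def matches_log(log_text, manuscript_text, threshold=0.8):
    """Flag a manuscript passage that closely matches a logged AI output."""
    return SequenceMatcher(None, log_text, manuscript_text).ratio() >= threshold

print(matches_log(logged_ai_output, verbatim))  # True: a verbatim copy is caught
print(matches_log(logged_ai_output, reworded))  # False: light rewriting slips past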
 

EzekielRaiden

Follower of the Way
I don't think anyone thinks there isn't more to human language than what ChatGPT outputs. But those limitations theoretically can be overcome. It wouldn't strictly be an LLM, but if we're going for "more human," then we'd have to expand past that anyways.
The biggest issue with treating this as limits to be overcome is that you are talking about two different kinds of data, not just larger amounts of data. But I'll spoiler-block the rest of this as being merely topic-adjacent, rather than strictly on-topic.
Syntax and semantics are not the same. No amount of syntax, no matter how great it becomes, can be equivalent to even the smallest amount of semantic content. But it is that very thing, the semantic content, that is the main "more to human language" element you speak of.

A GPT model that has trained on a hundred thousand times more data than the most advanced model currently in research, with a hundred thousand times as many nodes (or whatever internal structure GPT uses for its statistical model), would still be infinitely far away from picking up any semantic content. It is simply not trained, in any way, to identify the meaning of words; it can only identify, to whatever the limit of technology and training currently is, statistical correlations, aka, syntax.
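
To make the syntax-only point concrete, here is a toy sketch of my own. It is vastly simpler than a transformer and not what GPT does internally, but it is the same in kind: a bigram model that "learns" language purely as co-occurrence counts. Nothing in it represents what any word means; generation is just a weighted dice roll over observed successors.

```python
# Toy bigram language model: all "knowledge" is co-occurrence counts.
import random
from collections import Counter, defaultdict

corpus = (
    "the swamp is dark . the swamp is deep . "
    "the witch is dark . the village is small ."
).split()

# "Training": count which word follows which. This is all the model knows.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(start, length=10):
    """Emit words by sampling from the counted successors of the last word."""
    words = [start]
    for _ in range(length):
        counts = successors[words[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the village is dark . the swamp is deep ."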

Hence, I share the skepticism about this form of AI doing much more than what it's already doing: generating bland but effective boilerplate content that remixes stuff other people already wrote/drew/etc. Even there, it's still very early days. The drawing side still has a long way to go, what with the eldritch-horror edge cases (especially eyes, digits, and teeth), and the non-Euclidean geometries inserted into buildings and natural locations. That doesn't mean these things are useless; they can be very helpful for quickly generating boilerplate text, which can otherwise be a tedious and time-consuming process for little benefit. For example, travel-agency-type businesses have apparently found ChatGPT to be incredibly powerful for generating quick, descriptive summaries of travel info, to the point that some aren't sure how they managed without it. Instead of spending hours every day drafting repetitive copy, they can focus on other things.

My expectation is that the ultimate form of GPT-type "AI" is going to be highly efficient "Virtual Intelligences," to borrow a term from Mass Effect. A Virtual Intelligence, "VI," is not properly speaking intelligent. It acts more like a hybrid database and personal assistant, and has a deep library of input-response associations (again, purely statistical models of syntax, no meaning is stored here) allowing it to handle "mundane" activities. For example, checking emails, filtering down to only those which need a personal response, and using one of several self-written but VI-filled template responses for all the ones that don't. Or drafting meeting notes that can be shared with the team, or summarizing long sections of text into punchy paragraph-length statements. Etc. All the many little ways humans need to condense or process data that can be tedious to do by hand but nearly effortless for a computer.
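
As a toy sketch of that kind of triage: the keywords and canned replies below are invented for illustration, not any real product's behavior, and the routing is purely associative; no meaning is understood anywhere.

```python
# Toy "VI" email triage: associative routing from message features
# to self-written templates. Keywords and templates are made up.
TEMPLATES = {
    "meeting": "Thanks! Added to my calendar; I'll confirm by end of day.",
    "invoice": "Received, thank you. Forwarding to accounts payable now.",
    "newsletter": None,  # archive silently, no reply needed
}

def triage(subject):
    """Match a subject line against keyword rules; escalate anything unmatched."""
    lowered = subject.lower()
    for keyword, reply in TEMPLATES.items():
        if keyword in lowered:
            return reply if reply is not None else "(archived, no reply)"
    return "(flagged for a personal response)"

for subject in ["Meeting on Thursday?", "Invoice #4471 overdue", "Why did the demo crash?"]:
    print(subject, "->", triage(subject))
```
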
We would need a new algorithm to imitate human creativity, but maybe not an insurmountably complex one.
As said above in the spoiler (since, as you say, slightly off-topic), it's not just a new algorithm. It's an entirely different approach to analyzing and processing data, looking at the actual content of the message, not just the structure. For exactly the same reason that no amount of analyzing the parts inside cars can give you an understanding of why humans choose to break speed limits, no amount of analyzing the parts of sentences in any language can tell you why humans choose to speak some sentences that they know are false.

I mean, human creativity is also based on inputs and association.
Yes. Inputs and association of meaning, not of structure. Humans are actually pretty weird about the syntax of the languages we use. As I've cited elsewhere, nearly every English speaker knows by heart the correct order of adjectives for describing nouns, but could not ever actually identify it for you. It's just pure instinct. You know not to say, "brick old beautiful several houses," even though there's nothing formally "wrong" with that sentence, because that's just...not how adjectives are ordered in English. The correct ordering, despite (almost surely) never having been formally taught to you, is "several beautiful old brick houses." Or, as a famous tweet puts it, English adjective order is very rigidly: "opinion-size-age-shape-colour-origin-material-purpose Noun."
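
Amusingly, the rule from that tweet is mechanical enough to write down as a sort key. The category labels in this toy are hand-assigned assumptions for these four words only, not a general part-of-speech tagger:

```python
# Sorting adjectives by the tweeted English ordering rule.
ORDER = ["opinion", "size", "age", "shape", "colour", "origin", "material", "purpose"]
CATEGORY = {
    "beautiful": "opinion",
    "old": "age",
    "brick": "material",
    # "several" is a quantifier, which precedes the whole sequence
}

def order_adjectives(adjectives):
    quantifiers = [a for a in adjectives if a not in CATEGORY]
    rest = sorted((a for a in adjectives if a in CATEGORY),
                  key=lambda a: ORDER.index(CATEGORY[a]))
    return quantifiers + rest

print(" ".join(order_adjectives(["brick", "old", "beautiful", "several"]) + ["houses"]))
# -> several beautiful old brick houses
```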

Even if the inputs are a few degrees apart, having an algorithm query similarities to two concepts and merge them together contextually wouldn't be impossible. Nor would taking how many times it's seen a concept and trying to put a twist on it. Although they might be pretty difficult to program in.
The GPT type of model cannot do what you describe; it cannot understand "similarities" at all. It is incapable of even seeing semantic data, which is absolutely required for handling things like concepts and context.

In order to do what you are describing, we would need an entirely new branch of computer programming, something truly revolutionary, not simply evolutionary.
 


Cadence

Legend
Supporter
RE: ChatGPT on Homework.

As an aside, I wondered if ChatGPT would readily give really bad answers about more obscure things.

It lied about Haiku 302 by Richard Wright and made up the last two lines. When corrected on the second line, it lied again on the third.

It was asked who cooked better among Nero Wolfe, Fritz Brenner, and Marko Vukčić. It lied about which book the third character was in, lied again, and then finally pulled it up. Asked again later, it was once more unaware of the character.

When asked about the Thirty Years' War in Oldenburg-Münsterland, it said many key battles of it were fought there. When asked which battles, it was unable to say.
 

joluoto

Villager
RE: ChatGPT on Homework.

As an aside, I wondered if ChatGPT would readily give really bad answers about more obscure things.

It lied about Haiku 302 by Richard Wright and made up the last two lines. When corrected on the second line, it lied again on the third.

It was asked who cooked better among Nero Wolfe, Fritz Brenner, and Marko Vukčić. It lied about which book the third character was in, lied again, and then finally pulled it up. Asked again later, it was once more unaware of the character.

When asked about the Thirty Years' War in Oldenburg-Münsterland, it said many key battles of it were fought there. When asked which battles, it was unable to say.
ChatGPT is not programmed to know the difference between truth and untruth. It just generates language from words it thinks belong together, so for homework it has a good chance of not giving the correct answer.
 

