WotC: Would you buy WotC products produced or enhanced with AI?

Would you buy WotC products with content made by AI?

  • Yes
    Votes: 45 (13.8%)
  • Yes, but only using ethically gathered data (like their own archives of art and writing)
    Votes: 12 (3.7%)
  • Yes, but only with AI generated art
    Votes: 1 (0.3%)
  • Yes, but only with AI generated writing
    Votes: 0 (0.0%)
  • Yes, but only if- (please share your personal clause)
    Votes: 14 (4.3%)
  • Yes, but only if it were significantly cheaper
    Votes: 6 (1.8%)
  • No, never
    Votes: 150 (46.2%)
  • Probably not
    Votes: 54 (16.6%)
  • I do not buy WotC products regardless
    Votes: 43 (13.2%)

This is incorrect. They can return exact sequences of tokens that are highly represented in the training data. You can get "tomorrow, and tomorrow, and tomorrow" out, because it is one of the most famous passages in English literature. You can't get The House on the Borderland, because it is more obscure.

It just takes more work to filter out. It's a matter of the ratio of needles to hay. The relationships between tokens are the record of what was written. In my puzzle analogy, it's just mixing lots of puzzles together to make it harder to put the less common puzzles together.
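For concreteness, here's a rough way to see that effect with a small open model. A sketch only, using GPT-2 via the Hugging Face transformers library as a stand-in; larger models trained on more heavily duplicated text recall far more this way, and the prompts are just examples.

```python
# Sketch: probing a small open model for verbatim recall of a famous passage.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def greedy_continuation(prompt: str, max_new_tokens: int = 40) -> str:
    """Return the model's single most likely continuation of `prompt`."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,                      # greedy: most probable token each step
        pad_token_id=tokenizer.eos_token_id,
    )
    # Drop the prompt tokens and decode only what the model added.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:])

# A heavily duplicated passage tends to come back near-verbatim...
print(greedy_continuation("Tomorrow, and tomorrow, and tomorrow,"))
# ...while an obscure novel's text usually drifts into paraphrase or invention.
print(greedy_continuation("The House on the Borderland begins:"))
```

Greedy decoding just asks "what is the single most likely next token," so the more often a sequence appears in the training data, the more likely it is to come back intact.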
 



It just takes more work to filter out. It's a matter of the ratio of needles to hay. The relationships between tokens are the record of what was written. In my puzzle analogy, it's just mixing lots of puzzles together to make it harder to put the less common puzzles together.
No, it doesn't. The data have been fundamentally transformed and are no longer present.

You can prove me wrong: get it to output The House on the Borderland. (Well, not really, because even if you managed to do that, it wouldn't change how LLMs work. But I think this is a straightforward challenge that will demonstrate they do not work the way you think they do.)
 

An LLM is not a library. Judging its value based on whether or not it can return the text of works to you is missing the point.

What is that point?


[Attached image: ChatGPT.JPG]



[Attached image: QueenOfTheBlackCoast.JPG]
 

What is that point?
Well, first off it looks like the original poem is 4 lines; the LLM gets the 4th line wrong, then hallucinates an additional four. I'm not sure what you think this is showing.

As for use cases: programming, translation, brainstorming, editing, spellchecking, tutoring, and so forth... especially anything where the user can iterate and modify/verify the output is a good bet.
 

Well, first off it looks like the original poem is 4 lines; the LLM gets the 4th line wrong, then hallucinates an additional four. I'm not sure what you think this is showing.

Yes, it got 3 lines correct, probably didn't like where the poem was headed, and then made up 4 more (quality).

It then got the actual question right after it corrected itself, spitting out the first sentence.

I've never used the chat tool before, but figured what's 5 seconds to confirm that it can spit out verbatim text? Which it did.

The fact that it then gets high on its own supply and makes some other things up just proves it cannot be relied upon.
 

No, it doesn't. The data have been fundamentally transformed and are no longer present.

You can prove me wrong: get it to output The House on the Borderland. (Well, not really, because even if you managed to do that, it wouldn't change how LLMs work. But I think this is a straightforward challenge that will demonstrate they do not work the way you think they do.)
Nope. You're just using the claim they hope the judges buy in court.

It's why one of the big risks in LLMs is that the model data will get extracted. The transformation is in the vector sense rather than the polymorph sense.
 

Yes, it got 3 lines correct, probably didn't like where the poem was headed, and then made up 4 more (quality).

It then got the actual question right after it corrected itself, spitting out the first sentence.

I've never used the chat tool before, but figured what's 5 seconds to confirm that it can spit out verbatim text? Which it did.

The fact that it then gets high on its own supply and makes some other things up just proves it cannot be relied upon.
Yes, you should not expect LLM output to be completely reliable.
Nope. You're just using the claim they hope the judges buy in court.

It's why one of the big risks in LLMs is that the model data will get extracted. The transformation is in the vector sense rather than the polymorph sense.
Some model data can be extracted. That's not the same as the model storing all of the training data.
 

Yes, you should not expect LLM output to be completely reliable.

Some model data can be extracted. That's not the same as the model storing all of the training data.

So let's say I cared enough, and I certainly don't, to ask for the 2nd sentence. Then the 3rd. On and on.

Do you think I couldn't rebuild the story, verbatim?
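The loop being described would look roughly like this. A sketch only, assuming the OpenAI Python client; the model name and prompt wording are placeholders, and whether the replies come back verbatim, paraphrased, or refused is exactly the point in dispute.

```python
# Sketch: asking for a work one sentence at a time and stitching the answers together.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_for_sentence(title: str, n: int) -> str:
    """Ask the model to quote sentence n of a named work."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Quote sentence {n} of '{title}' exactly, with no commentary.",
        }],
    )
    return response.choices[0].message.content

# Attempt to rebuild the opening of the story sentence by sentence.
reconstruction = [ask_for_sentence("The House on the Borderland", n) for n in range(1, 6)]
print(" ".join(reconstruction))
```

Nothing in the loop guarantees the output matches the book; checking each reply against the actual text is the only way to tell recall from confabulation.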
 

Yes, you should not expect LLM output to be completely reliable.

Some model data can be extracted. That's not the same as the model storing all of the training data.
It converts the training data into model data. It's essentially using spatial relationships and indirect references to store data instead of raw text, but the information is still there even if it's extremely difficult to untangle it precisely. Whether or not the alteration is sufficient for legal purposes, it's still storing the information.
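One concrete way to see that record of relationships between tokens is to compare the loss a model assigns to a well-known passage against the same words shuffled. A sketch only, again using GPT-2 via the transformers library as a stand-in; the passage is just an example.

```python
# Sketch: the statistical trace a passage leaves in the weights shows up as loss.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def per_token_loss(text: str) -> float:
    """Average next-token loss the model assigns to `text` (lower = more expected)."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        out = model(ids, labels=ids)  # labels=input_ids yields the average cross-entropy
    return out.loss.item()

passage = "To be, or not to be, that is the question."
shuffled = "question the be, is To or to that not be,"
print(per_token_loss(passage), per_token_loss(shuffled))  # the original scores far lower
```

The original line gets a much lower per-token loss than the shuffled version: that gap is the trace the text leaves in the weights, even though no raw copy of the sentence sits anywhere in the model file.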
 
