WotC Would you buy WotC products produced or enhanced with AI?

Would you buy a WotC product with content made by AI?

  • Yes

    Votes: 45 13.8%
  • Yes, but only using ethically gathered data (like their own archives of art and writing)

    Votes: 12 3.7%
  • Yes, but only with AI generated art

    Votes: 1 0.3%
  • Yes, but only with AI generated writing

    Votes: 0 0.0%
  • Yes, but only if- (please share your personal clause)

    Votes: 14 4.3%
  • Yes, but only if it were significantly cheaper

    Votes: 6 1.8%
  • No, never

    Votes: 150 46.2%
  • Probably not

    Votes: 54 16.6%
  • I do not buy WotC products regardless

    Votes: 43 13.2%

I absolutely do not. If your ethical code hurts others then it can and should be condemned.

So you should be morally condemned in your own system, since your imposition of your ethical code on me hurts me. Artists, too, should be morally condemned in this system for restricting access to nonrival goods, thereby hurting everyone by depriving them of the art. I am not convinced by your ethical position, but as I said, I don't judge you for your ethical code.

If you're interested, my ethics revolves around maximizing the common good, and sometimes, actually quite often, this can mean hurting others in some way. Taxes to fund public services hurt the rich more than they hurt the poor, but they hurt everyone, yet I find them morally justified. Private property hurts the poor most (homeless people can't sleep in your house), yet as long as it's collectively better to have private ownership of houses (so they are maintained, for example), I am all for private ownership of housing units despite it hurting people -- same with intellectual property: when applied carefully to ensure the common good, it can make sense despite hurting others.

I find it more useful to have a moral system that helps adjudicate between competing harms than one that simply avoids harm, despite the latter's apparent appealing simplicity. But I won't try to convince you that my system is better than yours, because I think you should be respected and it would be far outside the scope of this particular thread.
 


That's the hard part. And I agree with what @Morrus suggested above: it won't be "we" who decide this; it will be the courts.
I disagree that "we" won't be deciding this. It reminds me of friends who don't vote in political elections because their votes "don't matter".

Yes, your voice matters.

"We" get to decide how much Gen AI takes over the art world, by being informed and voting with our dollars. If most folks are just fine with AI generated art, or don't care, or don't pay attention . . . then yes, it's going to take over. The big corps are certainly gung ho on this. But if most folks take a minimal effort to be informed on the issues surrounding AI generated art and decide not to purchase products that use it . . . then it won't.

And the voting analogy also means . . . will we vote for politicians who will pass laws restricting how AI generated art can be used, or for politicians who will pass laws encouraging it? Will we contact our politicians and let them know how we feel if/when important bills are being voted on? Again, stay informed and vote . . . this time with your ballot.

If AI tools that scrape artists' content without their permission or compensation are unethical . . . does it matter if there is no enforcement, either from the law or from public backlash?

The choices I make as an individual won't move the needle much . . . I'm just one grain of sand on a beach filled with thousands of grains . . . collectively however, our choices do matter.
 

It literally does. All of society's customs, rules, and laws are an attempt to balance benefit and harm. And all of these have exceptions, and they get changed with the times.

If your position is that for AI no amount of benefit is enough to outweigh the damage, I don't have any issue with it. I actually mostly agree.

But some posters were also specifically discussing whether AI is useful or not. You yourself took part in that discussion, so your reply here that the whole argument is irrelevant because AI is stealing feels like a cop-out. I feel that if one chooses to engage with the merits of a given discussion, then one can't just refute dissenting positions simply by stating the whole discussion is irrelevant.
"Here's a thing that is useful. But to use it, you have to steal other people's artwork and writing. Also, it tends to produce stuff that is sub-par and often has big mistakes in it."

I feel like the usefulness of the product is far outweighed by the other two parts of it. And, sure, one of these days AI may not produce stuff that is sub-par and has big mistakes, but you're still left with that question: is it OK to use a device trained on other people's creative endeavors in order to avoid paying people for their time and effort, especially when it's for a company to use in their publications?

To me, it doesn't matter if it's useful.

It was sort of a hyperbole meant to exemplify a nonsensical position. And to me the idea that AI is not useful because you can just learn to draw/write/paint better is as nonsensical as claiming that calculators are useless because you can just learn math better.
And again, those are two incredibly different things because calculators are not trained on stealing.
 


I'm not talking about the ethics here.

Leaving the ethics (piracy) out of a discussion about LLMs, aka "ai", is like discussing which deadly poisons taste the best.
Does it really matter how good they taste unless you (the general you, not you specifically) want to slip one into someone's pie? And if you do, why should we be discussing it with someone who does?

These "ai" tools are based on bad faith and in many cases illegal methodology, ignoring that should end the discussion. The ends justify the means, and greater good arguments are a poor platform to debate topics like this.

This is for commercial products, i.e. the topic of this thread; what you do at your table is between you and your group.
 

This is for commercial products, i.e. the topic of this thread; what you do at your table is between you and your group.

Actually the topic of the thread is whether one WOULD buy an AI product, not whether one CAN (legally) or SHOULD (ethically). The whole ethics discussion is a tangent and doesn't necessarily have the same lines of opposition. Both pro- and anti-AI advocates said they wouldn't. Both pro- and anti-AI advocates said they would.
 

I disagree that "we" won't be deciding this. It reminds me of friends who don't vote in political elections because their votes "don't matter".
The "we" in question doesn't refer to 'society', it's a reference to literally us in this thread and our endless circular arguing.

And this is being decided via civil lawsuits, not political office. We're not talking law enforcement; we're talking companies suing each other.
If most folks are just fine with AI generated art, or don't care, or don't pay attention . . . then yes, it's going to take over. The big corps are certainly gung ho on this. But if most folks make a minimal effort to be informed on the issues surrounding AI generated art and decide not to purchase products that use it . . . then it won't.
Don't underestimate the other big corps: the ones who own IP and don't want to see Star Wars and Marvel and Mickey Mouse and Harry Potter left unprotected. The main conflict will be between the powerful IP holders and the tech companies. They have opposing interests in this matter.
If AI tools that scrape artists' content without their permission or compensation are unethical . . . does it matter if there is no enforcement, either from the law or from public backlash?
There will be. In the form of lawsuits. There already is, and it will grow as the large entertainment companies feel their hold on IP is under threat. Wait until it's revealed that Star Wars has been scraped, and then watch the fireworks.
 


Topics drift. That's how conversations work. This is a feature of human interaction, not a bug.

Sure. I was responding to a post that wanted to recenter the discussion on commercial products, from which we drifted a long time ago toward a more general discussion.

And this is being decided via civil lawsuits, not political office. We're not talking law enforcement; we're talking companies suing each other.

Would you mind elaborating? Civil courts still apply the law, so I am not sure I understand how you discount the political action that establishes the law. The TDM exception was a political act, and it prevents someone from claiming damages, even civil damages, for data collection carried out under the terms of the act. So it will clearly be decided via political action, even if the political will is expressed through courts' rulings.
 

Actually the topic of the thread is whether one WOULD buy an AI product, not whether one CAN (legally) or SHOULD (ethically). The whole ethics discussion is a tangent and doesn't necessarily have the same lines of opposition. Both pro- and anti-AI advocates said they wouldn't. Both pro- and anti-AI advocates said they would.
The OP was WotC-specific, so it's about commercial products; thus piracy being illegal is a part of the OP discussion.

The poll brought ethics into the OP, so ethics have been a part of the OP discussion.

That said, I stand by my post in its entirety, but I do agree with Morrus' post above about thread drift.
 
