AI is stealing writers’ words and jobs…

Nice way to deflect. Are you going to address the elephant in the room, or just pretend you didn't read my last post?

Since you didn't want to engage, I normally wouldn't push it.

Since you are now aware that gen-ai is trained via exploitative child labor, I don't think that you are interested in talking about what I am bringing to the discussion.

Ignorance is bliss, right?

However, since you're pushing for it, I will do what I generally don't do, which is point out the logical fallacies in your post.

First, you present some elements pointing to the existence of countries with much lower social standards, which is known to absolutely everybody on earth with a modicum of common sense. So yes, the task of captioning will be outsourced to low-wage countries. This is called the mechanical turk, in reference to the 18th-century hoax of the "mechanical" chess player, and it was already mentioned in this thread (at least by me) as the logical alternative to training on a massive amount of data with poor captioning. Experts think it is an effective way to train models: less data, better captions. So of course the captioning will go to countries where the minimum wage for unskilled labor is 10 dollars a day rather than to Luxembourg (roughly 250-300 dollars a day at minimum wage). This is especially true of prison labor, which is often compensated even less, irrespective of country (for example, in the US federal prison system, according to Wikipedia, the pay is a ghastly less than a single dollar an hour, putting it in the lower range of the horrible wages cited in your blurb). If anything, this was the expected outcome of the copyright holders' backlash against scraping.

Then, you try to conflate low-wage countries with child labor, presenting reports that several AI-related economic actors (namely, the US Department of Defense, Meta, Microsoft, OpenAI, Google, Amazon) are relying on child labor. Connecting the two is the first logical error, as if outsourcing low-margin services were the same as using child labor.

The second logical fallacy is in your conclusion: "The global south, impoverished children, displaced refugees in camps, and prisoners are all included in the labor pool that has been exploited to create your shiny gen-ai tools. How you deal with that is your choice, but personally, I wouldn't touch gen-ai tools with a ten foot pole." The argument here is "a handful of AI actors are guilty of using child labor, therefore all AI actors are guilty of using child labor, therefore you shouldn't use generative AI." Or, more exactly, "therefore I don't use generative AI, and you can do whatever you want," which implies that using it would condone child labor.

This is a fallacy because clothing is a sector widely known to use child labor, yet nobody in their right mind runs around stark naked saying "I wouldn't touch textile technology with a ten foot pole". They would be arrested quickly, especially if going commando around schools. And why does nobody do that? Why aren't jails full of naked people? Are all wearers of clothing supporters of using children as modern S-words? Has humanity stooped so low? No, they don't walk around in the nude, because everyone can see that this argument is a fallacy; at most, people try to avoid the specific brands that rely on child labor, not the whole technology.

"Some AI actors rely on child labor, therefore all AI actors must rely on AI child labor" is a hasty generalization. "Some X are Y, therefore all X are Y" is blatantly unsound logic. Try to explain that with X = Black People and Y = any negative quality and see if your reasoning stands, it should be apparent to all that it is not. Or rather, "the Caravaggio and Cellini were murderers, Picasso committed theft, Egon Schiele exploited child girls... therefore artists are criminals, all of them." You must see that it doesn't work, mustn't you? The logical error is blatant. So blatant that I find difficult to imagine that you're doing this in good faith and not trying to do that deliberately to insinuate that all AI users condone child labor (much like others point at "art theft" in the training of a model to criticize all generative AI, even those clean of "art theft"), which would mark the end of our discussion. But if you're really eschewing clothing out of contempt for the child-exploiting clothing industry, than I apologize for thinking you're using rhetorics here. But in that case, I'd strongly advise against applying the same logic to the Domino pizza case @trappedslider mentionned above and sever your link with the eating technology. Please do continue eating nonetheless and focus your legitimate wrath on a more narrow target, like... the specific stores that were found guilty. Especially those that served ananas-coverd pizza, because that's the true evil.

So, once the fallacies have been addressed, what's left? The accusations that some entities are encouraging exploitation of children and other vulnerable people, and the question about my personal stance on them.

If you're really interested in my purchasing behaviour, which I doubt anybody is: I have no contractual relationship with the US Department of Defense, Meta, Microsoft, Amazon, Boeing or Google. I prefer Qwant, but I will use Google if it is set up as the default search engine on a public computer. I have an OpenAI subscription as part of my job, which I don't really have any say in, and I have an indirect relationship with Boeing (my flights to Asia may be on Boeing aircraft, as I don't go out of my way to select Airbus A350s over Boeing 787s despite the added comfort and reduced noise). So basically, I am already not patronizing those entities, which is probably rare, because very few people are able to boycott Microsoft, Google and, especially, the US government, and I can't really do anything more than not give any of them a single dime. I don't claim any moral high ground here; I didn't know they were encouraging child labor before reading your posts.
 


Art Waring

halozix.com
I didn't know they were encouraging child labor before reading your posts.
And that's why I tried explaining that we can't really discuss the true implications of gen-ai without understanding how the business works.

The business functions by relying on exploitive labor (including child labor), and all of the big companies are involved: Microsoft, Google, Meta, OpenAI, and so on. This is not a limited occurrence; it is widespread across the entire industry.

This isn't a hasty generalization. For you to make accusations of generalizing, you will need to provide some kind of proof, something which you have failed to present to support your side of the argument.

I have spent countless hours doing extensive research on the subject, providing sources, and presenting the facts. So far you have offered nothing except your own personal opinions, none of which are supported by verifiable facts.

The fact that other industries also use exploited labor does not excuse companies that choose to exploit people for the sake of profit. Furthermore, new companies like OpenAI have the choice of acting responsibly and not repeating the past mistakes of their predecessors. Continuing to use child labor isn't just a bad look for the industry; it is an obvious sign that they are not interested in anything but profit.

And this still ignores the fact that the industry doesn't just run on exploited labor; it exists solely by stealing the work of writers and artists wholesale, funneling billions in profits to the already-rich while taking away work (and the ability to earn a living) from the very people who were exploited to create the training data that allows gen-ai to exist at all.

Face it, gen-ai is a toxic, exploitive, and potentially disastrous piece of tech. It is a new technology, and development takes time, but in the meantime companies should be held accountable for bad behavior, and not get a pass because it is "inevitable."

If gen-ai is inevitable, and by your own admission we can do nothing to stop it (or stop the industry from exploitive practices), then what you are saying is that exploitive child labor and below-minimum-wage pay for refugees in camps are not just inevitable but justified? Please explain to me how you are justifying this at an industry-wide level.
 

Scribe

Legend
Like it's the only industry that exploits child labor

Nice, nice. So we go from "Not all Men" to whataboutism?

 

This isn't a hasty generalization. For you to make accusations of generalizing, you will need to provide some kind of proof, something which you have failed to present to support your side of the argument.

No. It is the canonical hasty generalization, as I explained by providing several illustrations of this fallacy. You looked into a few companies (some of which are quite removed from the field of AI; I mean, OK, Boeing or the US government are certainly looking a lot into AI, but they are not really seen as leaders in the field...), said they are evil for relying on child labor, and concluded that since the subset you selected is evil, all AI-related entities must be evil as well. I tried to warn you that this line of thought leads to (or stems from) prejudice, and I think some of the illustrations provided were enough to show it. You chose to ignore them, and I don't want to spend more time discussing this since you're not adhering to logical debate. Logically, disproving "all X are Y" is done by showing a single example of an X that isn't Y, but since you don't accept logic, what would be the point? You'll say "hey, Mistral AI or the University of California or the French National Library or the Technical University of Munich may be free of child labor, but this proves nothing".

I don't think it makes sense to spend time discussing if you're not adhering to the principles of logic, much like when you keep rehashing "this tech exists solely by stealing from artists" despite that being disproven both by the existence of models trained on public domain datasets and by the fact that TDM (text and data mining) exceptions do allow it, making it not "stealing" but a legal use of published IP. Or when you keep clinging to the latter part of your statement, "funneling billions in profit to the already rich", despite it being disproven by the existence of non-profit models, some even open-source. I don't see anything positive emerging from discussing further with you under these conditions.
 



Art Waring

halozix.com
So apparently blocking someone and then quoting them is allowed?

This is honestly beyond immature. Having the last word via blocking, then putting words in my mouth is as dishonest as you can get.

Mods can address this if they want, but this is an obvious attempt to shut down any disagreement on the subject by silencing any and all opposition.

I never stated any company is "evil," and twisting my words to suit your agenda is completely unfair, especially when you are justifying the use of exploitive child labor.

If this is the future of the forums, where artists have no way to voice their concerns about gen-ai tools without being attacked, and no way to respond to those who are attacking us, then this is not what I signed up for.

This affects our ability to work and earn a living. We have a right to voice our concerns as long as we show respect to other forum users. Up to now, I think I have done my best to keep the conversation civil while also taking into account the other person's views.

Cutting artists out of the conversation only further reinforces the fact that some people don't want artists talking about this, because they stand to benefit in every way from using gen-ai tools, which put artists & writers out of work and on the street, and which further grow inequality by using exploitive labor and potentially illegal data scraping to steal every artist's lifetime of experience without their consent.
 

Cutting artists out of the conversation only further reinforces the fact that some people don't want artists talking about this, because they stand to benefit in every way from using gen-ai tools, which put artists & writers out of work and on the street, and which further grow inequality by using exploitive labor and potentially illegal data scraping to steal every artist's lifetime of experience without their consent.
So, here's the thing: some artists ARE making money with AI tools, and some artists are ALSO losing out on work due to AI tools.

Also, data scraping ISN'T illegal. Web scraping for training data is unquestionably a protected and legal activity as long as it does not violate the CFAA or the DMCA, neither of which is infringed by scraping data that was legally placed in publicly accessible spaces on the internet.
 

Art Waring

halozix.com
So, here's the thing: some artists ARE making money with AI tools, and some artists are ALSO losing out on work due to AI tools.
I am not talking about individual artists who choose to use it at their own risk; I am specifically talking about the title of the thread: Companies using it in place of hiring an artist or a writer.

Also, data scraping ISN'T illegal. Web scraping for training data is unquestionably a protected and legal activity as long as it does not violate the CFAA or the DMCA, neither of which is infringed by scraping data that was legally placed in publicly accessible spaces on the internet.
FFS, thanks again for taking my quote out of context. I said "potentially illegal," as the laws will likely change in the future. That means it's potentially going to come back to bite them in the ass because they chose to use the LAION-5B dataset, one of the most widely used datasets, which scraped millions of copyrighted images without permission for non-commercial purposes (and is now being used in commercial products). They are claiming fair use when the laws have not yet had time to address these issues.

We have covered it a hundred times by now.

Furthermore, I don't see this conversation continuing productively now that underhanded tactics are being used to get in the last word and to silence those who are directly affected.

Peace.
 

FFS, thanks again for taking my quote out of context. I said "potentially illegal," as the laws will likely change in the future. That means it's potentially going to come back to bite them in the ass because they chose to use the LAION-5B dataset, one of the most widely used datasets, which scraped millions of copyrighted images without permission for non-commercial purposes (and is now being used in commercial products). They are claiming fair use when the laws have not yet had time to address these issues.
EDIT: So basically, in the spirit of this thread: companies are evil because they are going to use tech to keep from paying people. (Is that it?)

Maybe I should have said no, there is NOTHING even potentially illegal about the data scraping. I mean, if you wanna future-proof the law, then we should also worry about hologram technology NOW instead of waiting for the actual tech to happen. But that's not how the real world works.

Sometimes, when working with large amounts of public data, you end up with some evidence or material that exists because of illegal activities or malicious actors using your service. What we do in these situations is not shut the service down, but require that the operator take appropriate, reasonable measures to prevent abuse.

If datasets are getting information from illegal sources, then rather than requiring that they shut down all their production, we should demand that they have reasonable measures in place to check for illegal content and scrub it prior to training. If things happen to slip through those measures and filters, we should not be holding them liable for imperfection.
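To make "scrub it prior to training" concrete, here is a minimal, purely illustrative sketch (not any company's actual pipeline): it assumes a list of caption/image records and a blocklist of known-bad content hashes, and simply drops flagged entries before anything reaches the training set. The record layout and the blocklist value are invented for illustration.

```python
import hashlib

# Hypothetical blocklist of SHA-256 hashes for known illegal or abusive images,
# e.g. supplied by a third-party hash-matching service. Purely illustrative.
KNOWN_BAD_HASHES = {
    # SHA-256 of the bytes b"test", standing in for a flagged image
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_clean(image_bytes: bytes) -> bool:
    """Return True if the image's hash is not on the blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() not in KNOWN_BAD_HASHES

def scrub(records: list[dict]) -> list[dict]:
    """Drop flagged records before they ever reach the training set."""
    return [r for r in records if is_clean(r["image"])]

# Usage: the flagged record is removed, everything else passes through.
dataset = [
    {"caption": "a cat on a sofa", "image": b"\x89PNG...cat"},
    {"caption": "flagged content", "image": b"test"},
]
clean_dataset = scrub(dataset)  # keeps only the first record
```

Real pipelines tend to use perceptual hashing and classifier-based filters rather than exact hashes, but the principle is the same: filter before training and keep improving the filters, rather than shutting the whole service down.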

Research does not mean non-commercial. Commercial research purposes are still research, especially in cases like LAION where the results of the research are released to the public.

LAION wasn't created to be a purely academic offering. It was created to offer an open-source alternative to development in AI spaces by the likes of Google and Microsoft. Open source does not mean non-commercial either. Free as in Freedom, not Free as in Beer.

And lastly, even academic research is used for commercial purposes after the research is complete. Just because something was discovered by a non-commercial entity doesn't mean that no one can ever come along and make a profit using the technology that was discovered.
 

