AI/LLMs: AI art bans are going to ruin small 3rd party creators

Even if their initial prompt were a perfect description of the composition and contents, the generator would almost certainly fail to deliver it in one try. But by masking out various regions (either to alter or to protect), meticulously calibrating the settings for how much the changed portion may differ from the input, and iterating; and/or by bringing the image into GIMP, doing a paintover in places, then feeding it back in with refinement prompts (masking sections in or out for re-rendering), they could eventually get there. The iterative process they described was doable before these cloud generators existed; I tested it on my aging GPU. That's why I assumed Maxperson was running an image generator on their own hardware: it would be impractical to do all that through cloud generation services.
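The loop is easier to see in code. Here is a minimal pure-Python sketch of how a mask plus a strength setting constrain each refinement pass; all names are invented for illustration, and real inpainting pipelines (e.g. Stable Diffusion's inpaint mode) do this in latent space rather than on raw pixel lists:

```python
# Hypothetical sketch of the masked, iterative refinement loop described
# above. Plain lists of floats stand in for images so the control flow
# is visible; nothing here is a real generator API.

def masked_refine(image, proposal, mask, strength):
    """Blend a generator's proposal into the image, but only where the
    mask allows it, and only as far as `strength` (0..1) permits."""
    assert 0.0 <= strength <= 1.0
    out = []
    for pixel, new, editable in zip(image, proposal, mask):
        if editable:
            # `strength` plays the role of the denoising-strength
            # setting: how far the re-rendered region may drift from
            # what's being fed in.
            out.append(pixel + strength * (new - pixel))
        else:
            # Masked-out pixels pass through untouched.
            out.append(pixel)
    return out

# One pass of the loop: re-render only the middle region, halfway.
image    = [0.2, 0.2, 0.2, 0.2]
proposal = [0.9, 0.9, 0.9, 0.9]   # what the generator proposes this pass
mask     = [False, True, True, False]
step1 = masked_refine(image, proposal, mask, strength=0.5)
# Outside the mask nothing changes; inside it, each value moves
# halfway toward the proposal (≈0.55 here). Repeating with new masks,
# new proposals, and paintover edits is the iterative process above.
```

Each pass narrows the gap between what the generator produces and the intended composition, which is why the whole workflow takes many iterations rather than one prompt.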
I had just reasoned it out. Personally, I have no interest in creating artwork, whether by hand or on a computer. The only reason I pulled it up the way I did was that people kept telling me it couldn't be done, so I took a few minutes to prove that it could. The way you describe seems much better. :)
 


The only reason I pulled it up like I did was that people were constantly telling me that it couldn't be done, so I took a few minutes to prove that it could be.
You have spent far, far more than a few minutes on these forums arguing vehemently and at length in favour of generative AI in many, many threads. Which is certainly odd for somebody who claims to have no desire to use it.
 

Yeah, I would rather keep working my 40-hour-a-week county job until I retire, and never sell a copy of my life's work, than put generative AI slop in it.
A couple of things. If I'm making something for myself, I'll make it myself. Most people's life's work is something personal that they will make themselves, with no participation from others (including AI).

Harshness will now ensue:
But just because it's your life's work does not mean it's good or something others will want. Chances are good that you will never sell a copy of your life's work, with or without AI. But that's imho all right, as you're making it for yourself in the first place. That said, when you start making things to sell, for others, you need to take into consideration other factors: people, processes, and efficiency.

Example:
There are times when you're too tired and lack the time, so you go for instant meals, be that microwave/oven meals, instant noodles, or some fast food. Your only concerns are generally time, price, and whether it'll make you sick or not. Other times you'll take the time to make a very good meal: healthy, almost completely self-made, and tasty. And other times you'll go out to eat at a proper restaurant, where you don't actually check how the chef is making your meal... As long as the meal was awesome and the setting was great, you'll be happy, with no $&*% given about how they achieved that.

I see AI usage in the same light. DMing a D&D campaign is NOT my life's work. Neither is it for the people playing in that campaign. We already use rules, and often even adventures/campaigns, made by others. What makes the pen-and-paper RPG experience lives between those 'walls' (or framework). And where you spend your time as a DM depends on what you want to run and how much time you have... Sometimes AI usage is great, sometimes it's not wanted. But please leave it up to the individual to decide when and where they use it or not, and respect people's decisions even if you don't agree with them.
because courts are reliable
I'll just add the '/sarcasm' myself. Courts are bound by laws, location, culture, and time, and all of those change. What one court says in one location, another court will say differently in another, due to the passage of time and differing cultures and laws. And even the laws change, and especially how they're interpreted.

I would argue that AI is being used in a LOT of places/situations where you wouldn't expect it to be used at all, especially traditionally.
 

But please leave it up to the individual to decide when and where they use it or not, and respect people's decisions even if you don't agree with them.
But, again, that's not how ethics and morality work. Leaving others to do unethical and/or immoral things is not enough; it requires good people to speak up. Now, you can argue whether or not it's unethical (it is, but that's the other tedious conversation and not the point here) but asking that people should just ignore or, worse, respect what they consider to be unethical or immoral behaviour is probably a non-starter.

I’d say the same if the topic was piracy (which it kind of is but hey ho) or any number of other things. As it happens, we’re talking generative LLMs.
 

🤣🤣 I'm not familiar with that one. But certainly, if you can put the logic you want cleanly onto paper, without an IDE filling in blanks for you and correcting your errors, then you understand what you've built.


I am not sure what you're referring to. Is that a punch-card programming joke? Or a comment about ASM instruction length limits? I may be a little too young for this one myself. lol
The tests for the class I took in VAX BASIC were "Here's a chunk of code. Execute it by hand. Show all work." Simple five-function calculators were allowed, but you risked points by using them: transcription errors made in longhand were marked off less than transcription errors made with a calculator.
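For anyone who never sat one of those tests, hand-execution means tracing every statement and writing down each variable change. A made-up example (Python standing in for VAX BASIC, since the original snippets aren't available):

```python
# A tiny "execute it by hand" exercise: trace each line, recording
# every variable change, exactly as such a test would ask.
total = 0
for i in range(1, 5):        # i takes the values 1, 2, 3, 4
    total = total + i * i    # accumulate the square of i

# Longhand trace ("show all work"):
#   i=1: total = 0  + 1  = 1
#   i=2: total = 1  + 4  = 5
#   i=3: total = 5  + 9  = 14
#   i=4: total = 14 + 16 = 30
print(total)  # 30
```

The point of the exercise is exactly the one made above: if you can run the logic on paper, you actually understand what the code does.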
 

But, again, that's not how ethics and morality work. Leaving others to do unethical and/or immoral things is not enough; it requires good people to speak up. Now, you can argue whether or not it's unethical (it is, but that's the other tedious conversation and not the point here) but asking that people should just ignore or, worse, respect what they consider to be unethical or immoral behaviour is probably a non-starter.

I’d say the same if the topic was piracy (which it kind of is but hey ho) or any number of other things. As it happens, we’re talking generative LLMs.
What are 'good' people? Imho there are none; there are just people, with all the good and bad they do. There are monsters, but very few saints, and none are here on this forum. ;)

As for unethical or immoral behaviour: that's the same thing people have said about different sexualities, religions, politics, and racial issues. Most xyz-ists do not consider themselves bad or evil people, but good, ethical, and moral people speaking out against what they perceive as unethical or immoral behaviour...

Just because you believe abc about generative AI and LLMs and I believe xyz does not mean either of us is actually right... That you hear your opinion echoed in your bubble does not make you more right, nor does it mean that my bubbles are right either. We all have opinions, just as we all have other things in common.

As for respecting others: does that behaviour hurt someone directly? No. Indirectly, maybe. But maybe so does a butterfly flapping its wings on the other side of the world... Generative AI/LLMs and piracy do not take anything away the way actually stealing something does. If you take your car to another shop where the people do a better/faster job than your current car shop, some could consider that taking a job from people (your old car shop). Depending on the part of the world you live in, there are laws and protections in place that protect older people's jobs, so they can't easily be replaced by younger, cheaper people. In many parts of the 'civilized' world those protections are lacking...

In the last ~6 years (GitHub Copilot entered technical preview in June 2021) I've not worked with LLMs professionally. I do have certain professional views on the use of LLMs, mostly that a client's legal/security departments should first sign off on a specific solution (vendor) before I would even use it. I also think that many professional implementations of LLMs have not been thought out very well, just as with most other new technologies that the consumer market has (almost) no use for. In a similar way we had the 'cloud' movement many years ago, and in a similar vein in the IT space we had things like virtual machines, remote desktops, laptops, terminals, mainframes, etc. Every time there was a technology shift, people lost their jobs; most were able to adapt to the new technologies, though. Much has changed since I started in IT 25-30 years ago. I view generative AI and LLMs the same way: certain jobs will be gone, new ones will appear, and most people will make the transition from one to the other.

New technology shows up, lawmakers lag behind, and laws change while still not fully understanding how the technology works. As an example: how computers work with memory, storage, networking, etc. is not compatible with how copyright laws are written. Initially there was much confusion, but people didn't want to give up on their convenient new invention; what changed was how the law was interpreted by the folks enforcing it. I see the same happening with generative AI and LLMs. I suspect that the hype will die down, it will become less intrusive (compared to now), and people will start accepting it as part of their daily lives.

Let me give the example of the 80s anti-nuclear movement, against both weapons and energy production. Today we still have such movements, just much smaller (imho due to the end of the Cold War in '89-'91). We're now starting to produce more nuclear weapons again due to global issues. And we still have nuclear power. Most folks protesting then are either dead or don't understand that the power grid doesn't care where the power comes from, no matter what your energy contract says. My point: people move on, the world changes, other things become more important, and out of ignorance people will accept things they previously fought against... And the next generation doesn't know any better and embraces the future, as so many generations before them have.
 

What are 'good' people? Imho there are none; there are just people, with all the good and bad they do. There are monsters, but very few saints, and none are here on this forum. ;)

As for unethical or immoral behaviour: that's the same thing people have said about different sexualities, religions, politics, and racial issues. Most xyz-ists do not consider themselves bad or evil people, but good, ethical, and moral people speaking out against what they perceive as unethical or immoral behaviour...

Just because you believe abc about generative AI and LLMs and I believe xyz does not mean either of us is actually right... That you hear your opinion echoed in your bubble does not make you more right, nor does it mean that my bubbles are right either. We all have opinions, just as we all have other things in common.

As for respecting others: does that behaviour hurt someone directly? No. Indirectly, maybe. But maybe so does a butterfly flapping its wings on the other side of the world... Generative AI/LLMs and piracy do not take anything away the way actually stealing something does. If you take your car to another shop where the people do a better/faster job than your current car shop, some could consider that taking a job from people (your old car shop). Depending on the part of the world you live in, there are laws and protections in place that protect older people's jobs, so they can't easily be replaced by younger, cheaper people. In many parts of the 'civilized' world those protections are lacking...

In the last ~6 years (GitHub Copilot entered technical preview in June 2021) I've not worked with LLMs professionally. I do have certain professional views on the use of LLMs, mostly that a client's legal/security departments should first sign off on a specific solution (vendor) before I would even use it. I also think that many professional implementations of LLMs have not been thought out very well, just as with most other new technologies that the consumer market has (almost) no use for. In a similar way we had the 'cloud' movement many years ago, and in a similar vein in the IT space we had things like virtual machines, remote desktops, laptops, terminals, mainframes, etc. Every time there was a technology shift, people lost their jobs; most were able to adapt to the new technologies, though. Much has changed since I started in IT 25-30 years ago. I view generative AI and LLMs the same way: certain jobs will be gone, new ones will appear, and most people will make the transition from one to the other.

New technology shows up, lawmakers lag behind, and laws change while still not fully understanding how the technology works. As an example: how computers work with memory, storage, networking, etc. is not compatible with how copyright laws are written. Initially there was much confusion, but people didn't want to give up on their convenient new invention; what changed was how the law was interpreted by the folks enforcing it. I see the same happening with generative AI and LLMs. I suspect that the hype will die down, it will become less intrusive (compared to now), and people will start accepting it as part of their daily lives.

Let me give the example of the 80s anti-nuclear movement, against both weapons and energy production. Today we still have such movements, just much smaller (imho due to the end of the Cold War in '89-'91). We're now starting to produce more nuclear weapons again due to global issues. And we still have nuclear power. Most folks protesting then are either dead or don't understand that the power grid doesn't care where the power comes from, no matter what your energy contract says. My point: people move on, the world changes, other things become more important, and out of ignorance people will accept things they previously fought against... And the next generation doesn't know any better and embraces the future, as so many generations before them have.
Goodness me. You're right. There is no such thing as ethical or moral behaviour. We should all just do what we want.

Also, I'm gonna steal your car.
 

You have spent far, far more than a few minutes on these forums arguing vehemently and at length in favour of generative AI in many, many threads. Which is certainly odd for somebody who claims to have no desire to use it.
No, I haven't. I don't at all speak in favor of just using a prompt to get cheap, easy, and unethical art.

From the beginning I have differentiated between the two methods and disavowed the first. Only the first method is generative AI. The second is not using it as generative AI, but using it as a tool to achieve an artist's vision. So far I don't think I've seen a single example of AI used to create art like I've been talking about here.
 


A couple of things. If I'm making something for myself, I'll make it myself. Most people's life's work is something personal that they will make themselves, with no participation from others (including AI).
What a weird idea. No, my life's work is not just for me, and already includes participation by a dozen or so people.
Harshness will now ensue:
Lol
But just because it's your life's work does not mean it's good or something others will want.
Okay?
Chances are good that you will never sell a copy of your life's work, with or without AI. But that's imho all right, as you're making it for yourself in the first place.
Nope.
That said, when you start making things to sell, for others, you need to take into consideration other factors: people, processes, and efficiency.
And that never ethically involves AI.
Example:
There are times when you're too tired and lack the time, so you go for instant meals, be that microwave/oven meals, instant noodles, or some fast food. Your only concerns are generally time, price, and whether it'll make you sick or not.
The ubiquity of fast food and ready meals that are practically poison is an immoral practice by megacorporations.

As is the fact that it is so much cheaper to eat garbage, which is a direct result of our economy focusing on cash crops over sustainable food crops, and of most food production being owned by megacorporations.
Other times you'll take the time to make a very good meal: healthy, almost completely self-made, and tasty. And other times you'll go out to eat at a proper restaurant, where you don't actually check how the chef is making your meal... As long as the meal was awesome and the setting was great, you'll be happy, with no $&*% given about how they achieved that.
If you think I wouldn't care whether the restaurant pays its workers or otherwise engages in safe and ethical business practices... you are off in wonderland.
I see AI usage in the same light. DMing a D&D campaign is NOT my life's work. Neither is it for the people playing in that campaign. We already use rules, and often even adventures/campaigns, made by others. What makes the pen-and-paper RPG experience lives between those 'walls' (or framework). And where you spend your time as a DM depends on what you want to run and how much time you have... Sometimes AI usage is great, sometimes it's not wanted. But please leave it up to the individual to decide when and where they use it or not, and respect people's decisions even if you don't agree with them.
No. Because it is not disagreement; it is recognition that the use of generative AI is unethical, inherently and unavoidably. It is never "great", no matter what ridiculous excuses you make.
I'll just add the '/sarcasm' myself. Courts are bound by laws, location, culture, and time, and all of those change. What one court says in one location, another court will say differently in another, due to the passage of time and differing cultures and laws. And even the laws change, and especially how they're interpreted.

I would argue that AI is being used in a LOT of places/situations where you wouldn't expect it to be used at all, especially traditionally.
Firstly, distinguish between generative "LLM" AI and less unethical, less environmentally harmful techs.

Second, lots of unethical crap is already happening. It should be stopped. 🤷‍♂️
 
