AI/LLMs AI art bans are going to ruin small 3rd party creators

The program produces the output you see on your screen. The programmer doesn't necessarily produce any of it. You claimed to be the equivalent of a programmer. But a programmer doesn't create the output produced by their program unless they also created all the input the program requires when producing that output.

If I create an internet browser, I didn't also create the websites my browser displays when I run it. The instructions for displaying those websites, which my browser required as input, were created by people other than myself. Those other people are the creators of the websites my browser displays.
Baldur's Gate doesn't happen just because I input some things. It happens because the programmers created it to produce all of the possible outputs based on all of the possible inputs. There's nothing I input that produces output they didn't program and intend to happen. Other than bugs/exploits anyway, but those are errors to be fixed.
 


That’s a bold claim.
I provided a concrete example supporting my claim, which you left out when quoting me.

If you disagree with my example, I'd be genuinely curious to hear your argument. I didn't think it was controversial to say a programmer who creates a web browser isn't the creator of all websites their browser displays on the end user's screen.
 

Baldur's Gate doesn't happen just because I input some things. It happens because the programmers created it to produce all of the possible outputs based on all of the possible inputs. There's nothing I input that produces output they didn't program and intend to happen. Other than bugs/exploits anyway, but those are errors to be fixed.
So which of your two, contradictory arguments should I believe?

Is a program's output created by the programmer, because the program was created to produce all possible outputs based on all possible inputs? Or is a program's output created by the end user, because the end user produced the output they envisioned in their head by providing instructions chosen from the program's possible inputs?

Per your arguments, either gen AI is producing output created by the AI's creator, who programmed the AI to respond appropriately to any possible input, or Baldur's Gate is just a tool an end user uses to create a visible game state of their own creation.
 

I provided a concrete example supporting my claim, which you left out when quoting me.

If you disagree with my example, I'd be genuinely curious to hear your argument. I didn't think it was controversial to say a programmer who creates a web browser isn't the creator of all websites their browser displays on the end user's screen.
Let's simplify.

If I design and write a program in BASIC that consists mostly of lines starting with PRINT, and when run it prints out some ASCII art on a page, did I create that art, or did the computer?

If, instead, someone tells me to write that program and says "make it look like an Easter bunny" and I then write it and it prints out an Easter bunny image, who's the creator?

Your example complicates things by inserting a third party: the website creators. Here, the website creators create their sites but your browser determines how they are displayed and what commands are required to get from one page to another. A loose analogy is that you designed the engine that goes under the hood of various other people's car designs.
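For concreteness, the PRINT-statement program described above can be sketched in a few lines of Python (a stand-in for the BASIC original; the bunny art here is made up for illustration). The point it demonstrates: every line of the output is fixed in the source, so the output is entirely the programmer's creation, with no contribution from the machine.

```python
# Minimal analogue of a BASIC program made of PRINT statements:
# the "art" is hard-coded, so the programmer authored all of it.
ART = [
    r" (\_/) ",
    r" (o.o) ",
    r" (> <) ",
]

def render():
    """Return the ASCII art exactly as the programmer wrote it."""
    return "\n".join(ART)

if __name__ == "__main__":
    print(render())
```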
 

If you disagree with my example, I'd be genuinely curious to hear your argument. I didn't think it was controversial to say a programmer who creates a web browser isn't the creator of all websites their browser displays on the end user's screen.

That is not at all the sort of example that came to mind for me, but I can see why the same words might also describe the web browser example.

I was thinking, "Huh, if I write a program that ingests a data set and then outputs a visualization, I'm pretty sure I'm the creator of that visualization, even if I didn't create the data. Even if I use a library like d3.js instead of writing it all from scratch."
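That "ingest a data set, output a visualization" case can be sketched in plain Python (the sample data and labels here are invented; a real program would read the data from a file or API, and might use a library like d3.js or matplotlib instead of a text chart):

```python
# Sketch: a program that ingests data it did not create and emits a
# visualization. The rendering logic is the programmer's creation,
# even though the input data is not.
def bar_chart(data, width=20):
    """Render a dict of label -> value as a text bar chart."""
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(width * value / peak)
        lines.append(f"{label:>8} | {bar} {value}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical input data, standing in for an external data set.
    sales = {"elves": 12, "dwarves": 7, "orcs": 3}
    print(bar_chart(sales))
```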
 

Heh. I just pulled up an image and started modifying it just to see. It can follow instructions, as I have been saying. However....

1. It takes a good amount of time to make the image with the instructed change.
2. There's a limit to the number of images I can make per day.

I could do what I am arguing here, but it would take months at AI's current ability, not hours. I think it will need to wait for the future, if for no other reason than that AI needs to improve to make creating AI art practical.
Just FYI, that is because you're using cloud services to do it.

The process you're describing is doable right now if you are running a model on your own video card with sufficient RAM. From your comments about this workflow, I assumed that was what you were doing.

Run a model on your prompt on your own hardware; then feed the result back in and do many image-to-image passes, masking out different areas; export it to an image editor and do partial edits of your own; then feed it back in again and do image-to-image changes with masking so only certain parts of the image are changed; and so on. I don't know exactly what the tools look like now, but I'm sure the software is more fleshed out than when Stable Diffusion was new and I tried running it on my video card through a Python app I found online, just to see what it was. It's not hypothetical; it just requires you to use your own hardware.
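The masking step in that workflow can be sketched in plain Python (real tools do this per color channel with libraries like Pillow or numpy; the nested lists here are a toy stand-in for pixel grids):

```python
# Sketch of the masked image-to-image step: keep the original pixel
# wherever the mask is 0, take the newly generated pixel wherever
# the mask is 1, so only the masked region gets re-rendered.
def masked_composite(original, generated, mask):
    """Blend two equal-sized pixel grids according to a binary mask."""
    return [
        [g if m else o for o, g, m in zip(orow, grow, mrow)]
        for orow, grow, mrow in zip(original, generated, mask)
    ]

if __name__ == "__main__":
    original  = [[1, 1], [1, 1]]   # toy "source image"
    generated = [[9, 9], [9, 9]]   # toy "newly generated image"
    mask      = [[0, 1], [1, 0]]   # only re-render the masked cells
    print(masked_composite(original, generated, mask))  # [[1, 9], [9, 1]]
```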

I assume companies running their own customised models on art they own (say, Hasbro having fine-tuned some model on MtG and D&D art; I don't know whether any contractual terms would prevent them from doing so, and for the sake of this argument I am assuming not) are doing something like that, with their employees connecting to a server with a GPU and adding jobs to a queue, not connecting to an external cloud service.
 

A skilled enough programmer can create a simple program using only a white board, with no computer involved in the process at all.
That's how all our exams worked when I started my CompSci degree in 2008, FYI: we wrote code on paper with a pen. So that's not some hypothetical expert writing code; it's what you do as a student. A nice IDE that points out your typos and other errors and has autocomplete is certainly convenient, but it's not required. I spent most of the last decade writing my code in Notepad++, which basically just has basic syntax colour-coding, none of the fancy convenience features. Granted, I have been out of the game of making a living from code for a while now, but I still do it pretty regularly from my home office, like any other programmer I've known, whether that's what they do for a living or not.
 

The old saying is, "Hardware has limitations. Software doesn't. Real computer scientists use pencil and paper."

When I was a youngster we had to program with just 1's and 0's, and sometimes we ran out of 1's. But tell that to kids nowadays and they don't believe you.
 

If you didn't know what color eyes you wanted on an elf with a bow, that's an incomplete vision. If you knew but didn't have the correct words to describe it, that is an issue with the verbiage.
Even if his initial prompt were a perfect description of the composition and contents, the generator would almost certainly fail to deliver it in one try. But if he masked various regions (either to alter or to preserve), meticulously calibrated how much the changed portion could differ from what's being fed in, and did iterative work, and/or brought it into GIMP, did a paintover in some parts, put it back in, and gave it refinement prompts (masking sections in or out to re-render), he could eventually get there. The iterative process he described was doable before these cloud generators existed, back when I tested it out on my aging GPU. That's why I assumed Maxperson was running an image generator on their own hardware; it would be impractical to do all that through cloud generation services.
 

The old saying is, "Hardware has limitations. Software doesn't. Real computer scientists use pencil and paper."
🤣🤣 I'm not familiar with that one. But certainly, if you can put the logic you want cleanly onto paper, without an IDE filling in blanks for you and correcting your errors, then you understand what you've built.

When I was a youngster we had to program with just 1's and 0's, and sometimes we ran out of 1's. But tell that to kids nowadays and they don't believe you.
I am not sure what you're referring to. Is that a punch-card programming joke? Or a comment about ASM instruction length limits? I may be a little too young for this one myself. lol
 
