California bill (AB 412) would effectively ban open-source generative AI

Why are you all assuming people are actually using paid online AI services? There is strong evidence that paid services account for a minority (albeit a large one) of image-generation use compared to models people run on their own computers for the price of electricity.
I'd love to see a citation or two for this.
 



That's a question that deserves a complicated, thought-out answer, but my immediate thought is that since (IIRC) there are very few degrees of separation between them and the for-profit ones (they share employees, board members, or both), I'd say yes, to a lesser extent.
 

That's a question that deserves a complicated, thought-out answer, but my immediate thought is that since (IIRC) there are very few degrees of separation between them and the for-profit ones (they share employees, board members, or both), I'd say yes, to a lesser extent.
Reading up on LAION, it seems they don't host any copyrighted material themselves, only their own work. They might qualify for the same exception that allows that page to keep operating, or they might not if there's considerable overlap in the people involved. Either way, we need a lawsuit to mature far enough to get there.
 

I'd love to see a citation or two for this.

Last year it was Stability AI that was presented as the leader in image generation, beating MJ, DALL-E, and Adobe in various sources, and this year BFL's Flux model is touted to account for 42% of generations on its own. While I don't necessarily trust these industry reports alone, I find it rational that enthusiasts, like the people who are interested enough in the topic to argue about it on this board, would tend to use the cheapest solution, even if it involves more complexity than using a web service with a higher cost per generation (installing a computer program isn't exactly rocket science, after all), since they'd be generating far more images than average. At the very least, they'd use free models and just use a website to rent GPU time to run them.
 

Reading up on LAION, it seems they don't host any copyrighted material themselves, only their own work. They might qualify for the same exception that allows that page to keep operating, or they might not if there's considerable overlap in the people involved. Either way, we need a lawsuit to mature far enough to get there.
 

Last year it was Stability AI that was presented as the leader in image generation, beating MJ, DALL-E, and Adobe in various sources, and this year BFL's Flux model is touted to account for 42% of generations on its own. While I don't necessarily trust these industry reports alone, I find it rational that enthusiasts, like the people who are interested enough in the topic to argue about it on this board, would tend to use the cheapest solution, even if it involves more complexity than using a web service with a higher cost per generation (installing a computer program isn't exactly rocket science, after all), since they'd be generating far more images than average. At the very least, they'd use free models and just use a website to rent GPU time to run them.
I was answering @Bohandas specifically, who claimed to be able to easily afford both an Artbreeder and a NovelAI subscription. Paying for both is about $50 to $60 a month, enough to pay for a few art commissions.

Locally run AIs produce smaller images with more defects and artifacts, and they require monster PCs to work, so I'm less worried about them.
 

Locally run AIs produce smaller images with more defects and artifacts, and they require monster PCs to work, so I'm less worried about them.
Compared to a $500 five-year-old laptop, yes, they are monsters. Otherwise, you can run a model on a prebuilt, depending on how much VRAM and regular RAM you have and what model you're running.

I run Stable Diffusion and Flux models locally on my 2021 rig with a 12 GB 3060 and 32 GB of RAM. I don't know where you got the idea that running locally requires a monster PC, but it's incorrect.
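For anyone wondering what "running it locally" actually involves, here's a minimal Python sketch using the Hugging Face diffusers library. The checkpoint, prompt, and settings are just illustrative assumptions, not a recommendation, but half-precision Stable Diffusion like this fits comfortably on a 12 GB card.

# Minimal local Stable Diffusion sketch.
# Assumes: a CUDA GPU with ~8+ GB VRAM and `pip install torch diffusers transformers accelerate`.
import torch
from diffusers import StableDiffusionPipeline

# Checkpoint is illustrative; any compatible model on the Hugging Face Hub works.
MODEL_ID = "stabilityai/stable-diffusion-2-1"

# Load the pipeline in half precision to keep VRAM use well under 12 GB.
pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate one image; the only per-image cost is electricity and a few seconds of GPU time.
image = pipe(
    "a watercolor painting of a castle on a cliff",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("castle.png")

Once the checkpoint is downloaded, there is no per-generation fee at all, which is the whole point of the cost argument above.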
 


It indeed does. That's bias in the model, and it is a real thing. Not equating that to the rest of it, but just pointing out that this is incorrect.
I have to second this, @MoonSong. That is in fact the literal point of training models. Training a model is the process of feeding information into the model so that future queries relying on the model are interpreted in light of that new information.
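To make that concrete, here is a toy Python/PyTorch sketch of a training loop. The model and data are made up purely for illustration, and real LLM training happens at vastly larger scale, but the principle is the same: the training data nudges the weights, and later queries reflect what was fed in.

# Toy illustration of "training = feeding information into the model".
# Assumes `pip install torch`; the tiny linear model and the 2 -> 4 example are hypothetical.
import torch
import torch.nn as nn

model = nn.Linear(1, 1)                               # a tiny "model" with two weights
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

x, y = torch.tensor([[2.0]]), torch.tensor([[4.0]])   # new "information": 2 should map to 4

print("before training:", model(x).item())            # whatever the random init happens to say

for _ in range(200):                                   # feed the example in repeatedly
    opt.zero_grad()
    loss = loss_fn(model(x), y)                        # how wrong is the model on this fact?
    loss.backward()                                    # compute how to adjust the weights
    opt.step()                                         # nudge the weights toward the data

print("after training:", model(x).item())             # now the query reflects what was fed in

After enough of those updates, whatever was in the training data is, in that mechanical sense, "in" the model.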

I understand the point you are trying to make. But the answer to a query put to an LLM is not, in most ways, that different from the answer to a query put to a human being in that regard. Both the LLM and the human have a 'perception' (to use your term) and give a response based on their experiences and what they know about the world.
 
