California bill (AB 412) would effectively ban open-source generative AI




Not just companies but EVERYONE including the guy who put together a little model in his bedroom.
Good, the more flies it catches the better.

The guy in his bedroom should be spending his time creating his own ideas, not programming a computer to steal other people's.

Better still, he should go outside and get some fresh air, whilst there is still some left.
 


Man, this bill was obviously written by someone who knows nothing about the technology (as is par for the course with this sort of thing).

I decided to ponder a few of the bill's points via ChatGPT.
Here's a link to the conversation if anyone would like it.



I'd like to draw your attention to a few specific lines:


This line that includes the phrase "derived synthetic content" seems a bit... inaccurate.
What dictates what "synthetic" content is? A picture/text/etc. either exists or it does not.
It's synthetic if it's something that has been created when prompted, as opposed to just a straight copy of a single existing work. Any new item created by such a system is synthetic by definition. They're just using the word to differentiate between generating and copying.

The important part in the sentence isn't "synthetic", it's "derived".
 


Check out the related article about the Meta lawsuits, where they argue it's "hyperbolic" to call Meta's grabbing of the books from BitTorrent "piracy".
It would be interesting to see what would happen if somebody pirated a bunch of Meta's stuff and Meta took them to court. I mean, it wouldn't affect this court case, but it would be interesting to see how it played across the media.
 

I think the barrier to entry is so high already that the "little guy" is still someone worth at least seven figures. No way I'm training anything close to a usable model on my $500 five-year-old laptop. I don't even think a top-of-the-line gaming rig is anywhere close to enough. (Though maybe I could run Stable Diffusion locally on my laptop.)
Actually the little guy is whoever can afford a decent GPU. If I had the knowledge and the desire, I could train a limited model, at least for Stable Diffusion, on my rig with its Nvidia 3060. It's not going to be a large one, but I could do it.

Civitai: The Home of Open-Source Generative AI has hundreds of models, all made by various users.
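
For anyone wondering what "running Stable Diffusion locally" actually involves, here is a minimal sketch using the Hugging Face diffusers library; the checkpoint name and prompt are only illustrative assumptions, and it presumes a few GB of VRAM (or a lot of patience on CPU).

```python
# Minimal sketch: running Stable Diffusion locally with Hugging Face diffusers.
# Assumes the diffusers, transformers, and torch packages are installed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a widely used SD 1.5 checkpoint
    torch_dtype=torch.float16,         # half precision to fit consumer VRAM
)
pipe = pipe.to("cuda")  # on a laptop without a GPU, use float32 and "cpu" instead

image = pipe("a watercolor dragon circling a ruined castle").images[0]
image.save("dragon.png")
```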
 


Actually the little guy is whoever can afford a decent GPU. If I had the knowledge and the desire, I could train a limited model, at least for Stable Diffusion, on my rig with its Nvidia 3060. It's not going to be a large one, but I could do it.

Civitai: The Home of Open-Source Generative AI has hundreds of models, all made by various users.

TBH those are mostly (rather small-scale) modifications to base models. The cost of training base models is falling, however: that was the reason the Chinese text model DeepSeek got a lot of attention. Not because it was better than the existing top players, but because its compute cost was only about 6 million USD instead of the (presumably) billions spent by OpenAI.

An image model would need less training. AuraFlow is the leading model for prompt adherence and was developed by a little guy (a student at the time), albeit helped by a small compute grant. Pony v7 cost 300,000 USD in compute time and is a community-funded initiative paid for by little guys, and it's an ambitious base model. The cost might also decline in the long term because one doesn't need to start from scratch when a decent base model is available. As you pointed out, a good base model might only need a limited finetune to be improved, for which a gaming computer is enough.
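
To make the "limited finetune" point concrete, here is a minimal sketch of how a typical Civitai-style community finetune is applied: a small LoRA file loaded on top of a big base checkpoint via Hugging Face diffusers (with PEFT installed). The base model named below is real; the LoRA filename is a hypothetical placeholder, not a specific upload.

```python
# Minimal sketch: applying a community LoRA finetune on top of a base model.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # the big, expensive part
    torch_dtype=torch.float16,
).to("cuda")

# The community contribution is usually just a small file like this (tens of MB),
# trainable on a single consumer GPU, rather than a whole new base model.
pipe.load_lora_weights("my_style_lora.safetensors")  # hypothetical local file

image = pipe("a knight resting in a neon-lit tavern").images[0]
image.save("knight.png")
```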

Also, an H200 right now starts at 100,000 USD. People routinely pay that price for cars. They are not "little" guys, but the type of system needed went down from extremely costly to within the means of an enthusiast. And it won't become more expensive with time.
 
