California bill (AB 412) would effectively ban open-source generative AI

I wouldn't phrase it as choosing to ignore. When we give orders to humans, we are capable of understanding what they mean and how to interpret them, and therefore what it means to violate them.

The LLM doesn't. Prompts can increase the probability that it answers in a certain way, or can induce it to format in a particular way. But it isn't going to follow orders as a human would.

The solution here is to recognize that and build some safeguards between the LLM and key bits of data. Perhaps it should only access a copy of the database.
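To make that concrete, here's a minimal Python sketch of the "copy of the database" idea (the SQLite choice and the file names are placeholders I made up, not anything from the thread): the only query path exposed to the model opens a throwaway snapshot in read-only mode, so even a prompt-injected DROP TABLE bounces off the real data.

```python
import shutil
import sqlite3

# Hypothetical paths for illustration only.
LIVE_DB = "records.db"
SNAPSHOT = "records_snapshot.db"

def make_readonly_snapshot() -> sqlite3.Connection:
    """Copy the live database, then open the copy read-only so no
    query the LLM issues can ever touch production data."""
    shutil.copyfile(LIVE_DB, SNAPSHOT)
    # mode=ro makes SQLite reject INSERT/UPDATE/DELETE outright.
    return sqlite3.connect(f"file:{SNAPSHOT}?mode=ro", uri=True)

def run_llm_query(sql: str) -> list[tuple]:
    """The only database entry point ever exposed to the model."""
    conn = make_readonly_snapshot()
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```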
Or... hear me out here... perhaps there should be a human safeguarding any final decisions? Reminds me of DOGE and what that developer said: he built the waste-finding AI (and he isn't experienced at doing so) to give recommendations for a human to take a deeper look at. I personally think that's a totally valid use of AI; there's a rough sketch of what I mean at the end of this post.

They wanted to save money now, so instead of having a human look through the smaller sample set and evaluate it, they took the AI's output as gospel.

And to the people who just say "no AI" and that the solution is to dismantle it: there's a lot of scut work that leads to burnout, especially with heavy workloads in bursts. If you hire people to handle the heavier workload, you have too many people when the workload lessens. If you don't hire (which is what most companies choose), you burn out your workers. Instead, you use AI to help you manage that workload, and you concentrate on the part of the job that requires your full attention instead of multitasking on stuff that's really beneath your pay grade.
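And here's that toy sketch of what "a human safeguarding any final decisions" can look like in code (the Recommendation fields are invented for illustration): the model can flag whatever it likes, but nothing downstream acts until a person approves each item.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    item_id: str   # what the AI flagged
    reason: str    # why it flagged it

def human_review(recs: list[Recommendation]) -> list[Recommendation]:
    """Gate between the model's output and any real action:
    only explicitly approved items pass through."""
    approved = []
    for rec in recs:
        answer = input(f"{rec.item_id}: {rec.reason} -- approve? [y/N] ")
        if answer.strip().lower() == "y":
            approved.append(rec)
    return approved

# Whatever acts on the recommendations only ever sees the output of
# human_review(...), never the raw model output.
```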
 


Or... hear me out here... perhaps there should be a human safeguarding any final decisions? Reminds me of DOGE and what that developer said: he built the waste-finding AI (and he isn't experienced at doing so) to give recommendations for a human to take a deeper look at. I personally think that's a totally valid use of AI.
I think this is exactly right.
 

I wouldn't phrase it as choosing to ignore. When we give orders to humans, we are capable of understanding what they mean and how to interpret them, and therefore what it means to violate them.

The LLM doesn't. Prompts can increase the probability that it answers in a certain way, or can induce it to format in a particular way. But it isn't going to follow orders as a human would.
Correct. An AI can't 'choose' any more than it can 'panic' or 'feel fear'. It all comes down to the series of steps between the prompt and the outcome, and the probabilistic path that got it there. And too many people using AI for too many critical things don't seem to grasp that.
 

And to the people who just say "no AI" and that the solution is to dismantle it: there's a lot of scut work that leads to burnout, especially with heavy workloads in bursts. If you hire people to handle the heavier workload, you have too many people when the workload lessens. If you don't hire (which is what most companies choose), you burn out your workers.
It's a matter of balancing workloads and looking to remove pointless work, instead of just looking to do it faster.
 

