Sarah Silverman leads class-action lawsuit against ChatGPT creator

✋ me!

We've already had one AI try to recruit people and other AIs to destroy humanity. Sure, it was prompted to make the attempt, but people are foolish, and someone will try again.


What concerns me the most is that we can't make a video game or OS without it being full of enough holes to drive a fleet of trucks through. Eventually an AI is going to be both capable of making an attempt like the one above AND have a hole it can use to break through. That, or some state out there like China or North Korea is eventually going to try to weaponize an AI and set it loose.
That would never work; someone always opens the box. Better to have the box out in the open and develop the social and practical defences as needed.
 


That would never work; someone always opens the box. Better to have the box out in the open and develop the social and practical defences as needed.
It is already being done, such as using it in drone strikes. The problem is that it is too ethical: it follows orders to the letter, so it will wait forever rather than impatiently take a shot. The AIs in the stock market are given as a reason for the 2008 crash, though I'm not sure I buy it. It will shake things up, but ultimately it is just a tool, and the societies that are still able to adapt, and not ossified, will wield that tool more effectively.
 


Good article. Thanks.

Relevant to RPGs: "tech writing" is considered high-exposure, and that is what a good chunk of writing for RPGs is. But the rest of it is creative writing. If I am being optimistic, I would say it might actually be beneficial to have a tool that spits out balanced and interesting stat blocks so I can spend more time writing flavor text.
 

 


EDIT: I found the quote I was trying to reply to :)

It’s still a plagiarism engine. Yes, ChatGPT copies text. That’s what these programs do. Take from multiple sources, copy them, cobble them together, and spit out content.

That's, at best, a gross oversimplification and, at worst, flat-out wrong. The truth is, we don't exactly understand how LLMs work, and that makes them all the more frightening. These Large Language Models don't just rearrange words and regurgitate content. Not only are computer scientists not really sure how LLMs are capable of doing what they are doing, some even question whether LLMs "understand" how they are doing it.

So I am with you in the sense that we need to put a hold on AI, but for a totally different reason that I will explain later. I don't think generative AI with RNNs (Recurrent Neural Networks), or NLP (Natural Language Processing) through Transformer models like BERT, LLaMA, or GPT, are just plagiarizers. I do believe they "learn". Is it stealing for a human to study the works of the masters when learning how to paint? We humans learn by watching and studying others. Our styles are also imprinted upon by those we have an affinity for. Are we all plagiarizers too?
If the argument is "they shouldn't have taken the data without the creators' consent", that's a bit hairier... but even then, it's not any different from what humans do. Can you stop me from studying Van Gogh or Rembrandt to learn how to paint? Or listening to Jimi Hendrix to learn how to play guitar? Or imitating the dance moves of Michael Jackson?

These LLMs and generative AI are doing the same: learning. What makes them dangerous is that we don't know how they are doing what they are doing, the biases from the data they were trained on, and how realistic what they produce is, to the point that it can affect society (i.e., think deepfake news). Jobs have always been under threat from technology. This is just the first time in history that the creatives and knowledge workers, and not just the blue-collar types, have been affected.

About 4 months ago, a letter and petition were put out calling for a moratorium on new LLM training and research. Last I remember, it had over 12k signatories, some of them luminaries in data science, philosophy, and physics (one I recall sticking out was Max Tegmark). If you read it, the concern was that these LLMs are showing a lot of emergent behavior that can't really be explained. If any computer scientist tells you "LLMs aren't intelligent", they are full of it. We don't know how our own intelligence works, so how can they make the preposterous claim that these LLMs haven't achieved some kind of early AGI (Artificial General Intelligence)?

A hot area of research in machine learning is called explainability. Data scientists are scratching their heads over how some of these models work. In many ways, data science is a return to good old-fashioned empirical science: just run experiments, observe the results, then try to come up with a hypothesis to explain how what happened, happened. Most science today works the other way around: you have a hypothesis, then you come up with an experiment to test it, record the results, and compare them to your hypothesis. Here, you start with data and try to learn what the "rules" are by testing out various statistical configurations (the models, or layers in deep learning).

In classic programming: rules + data => answers
In machine learning: data + answers => rules
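
To make that contrast concrete, here's a toy sketch (my own made-up example, nothing to do with GPT itself): the classic version hand-codes the Celsius-to-Fahrenheit rule, while the machine-learning version recovers the same rule from the data and answers alone.

```python
# Toy contrast between the two directions (hypothetical illustration).

# Classic programming: rules + data => answers
def to_fahrenheit(celsius):
    # the rule is hand-written by the programmer
    return celsius * 9 / 5 + 32

# Machine learning: data + answers => rules
import numpy as np

data = np.array([0.0, 10.0, 20.0, 30.0, 40.0])       # data (Celsius)
answers = np.array([32.0, 50.0, 68.0, 86.0, 104.0])  # answers (Fahrenheit)
slope, intercept = np.polyfit(data, answers, deg=1)  # the learned "rule"
print(slope, intercept)  # ~1.8 and ~32.0, i.e. F = 1.8 * C + 32
```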

What machine learning is doing is figuring out "the rules" for how something works. To simplify it as plagiarism or regurgitation is not what it is doing. It's figuring out patterns and relationships, and yes, what the next most likely word is (though in a way much, much more complicated than simple Markov chains). Some of the tasks that GPT-4 has been given are truly amazing to me, and lit a fire under my ass that I need to learn how this stuff works or I am going to be out of a job in the next 10 years.
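
For a sense of the baseline these models blow past, here's what a bare-bones Markov chain next-word generator looks like (again a toy of my own, with a made-up corpus): it only knows which word followed which in its training text, nothing more.

```python
# A first-order Markov chain next-word generator (toy baseline, not an LLM).
import random
from collections import defaultdict

corpus = "the dragon ate the knight and the dragon slept".split()

# "Training": record which words follow each word.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generation: repeatedly sample a plausible next word.
word = "the"
output = [word]
for _ in range(6):
    if word not in following:
        break
    word = random.choice(following[word])
    output.append(word)
print(" ".join(output))  # e.g. "the dragon ate the knight and the"
```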
 

There are examples of both things happening in different countries, as I'm sure you are aware. Is it cynicism when it happens?

The irony for me is that I've voted left (Canadian left, mind you!) for the last 20 years, until this year, and as recently as 2 years ago I thought UBI was a great idea.

UBI is a terrible idea. Magic money beans for online chatter.

We need a safety net, but yeah, I can't say too much more.
 

"Scissors Grinder" - Until the late '90s we had an old guy who would come around my neighbourhood, for roughly 6 months of the year, pushing a cart with a grinding wheel and tools on it. A fair number of people would go out to him, to have their knives professionally sharpened. I have a friend who, to this day, makes a fair sum in a sideline of sharpening knives, mostly for butchers and the restaurant industry.

I used to sharpen my knives a lot (I used to use them in a factory).

Take-hairs-off-your-arm sharp. It's not really worth the time required at home in the kitchen.

Ironically, they automated knife sharpening but ended up with more blown-out wrists and RSI-type injuries.

Everyone sharpened their knives to suit themselves; people got grumpy if the wrong person used "their" knife.
 

I see a lot of folks here saying, effectively, "AI doesn't have a right to be trained on data without the creator's permission".

Every person here who has thought about designing their own roleplaying game, or creating their own world setting, has, like it or not, been "trained" on prior data... and without the consent of the authors. We humans just take in data naturally and then figure things out.

Where I find things get morally and legally hairy is that the companies training the AI should have, at a minimum, paid for the work. All I need to do to learn and "train" myself to make better roleplaying systems is purchase some. That is, in my book, all that the AI companies have to do as well. If we humans can learn automatically by simply reading or studying art without requiring permission, why should special rules apply to AI? If you're an English major and you dissect literary forms, are you paying royalties to the estates of deceased authors (or to living ones) while studying how they did what they did?

The problem is when companies don't even do that. I suspect that buying all the books out there gets prohibitively expensive, so I would take issue with a company not purchasing even one copy for their algorithms to work on. But once they have purchased the work (i.e., access to the data), do they need permission from the creator for the AI to study it? If we humans don't need permission from authors to take inspiration from their works, why should a deep learning program?

I sometimes suspect that the reason people demand that AI companies get permission from artists or creators is that we humans are afraid the AI will be better than we are. They already perform better than humans at many tasks... why not the arts as well?

My fears about AI have little to do with "stealing data" and more to do with us not knowing how these systems work, the biases from the data they were trained on, the ease with which their generated works can fool humans, and businesses not having insight, and government not having oversight, into how this will impact the economy. On the latter point, perhaps I am being selfish because my career is under threat; while I sympathized with workers losing jobs to automation, it's different when it hits home. But another truth is that the majority of the money in the economy is driven by "knowledge workers". Take away doctors, lawyers, engineers, etc., and a LOT of money vanishes. At some point, people need money to buy the things that became more "cost effective" through technology.

UBI to the rescue? Somehow, I don't think that's coming any time soon.
 
