ChatGPT lies then gaslights reporter with fake transcript

Do not underestimate consumer will in the medium term. We do get labels for ethical considerations of all sorts: fair-trade products for things like cocoa or coffee, farmed fish not fed with animal meal, meat coming from farms with ethical treatment of animals, eggs from free-range hens, and "made in country X" labels, which are a way of saying "not using labour from countries with weaker worker protections".
This seems like what is happening with AI art in RPGs, right?
 


Do not underestimate consumer will in the medium term. We do get labels for ethical considerations of all sorts: fair-trade products for things like cocoa or coffee, farmed fish not fed with animal meal, meat coming from farms with ethical treatment of animals, eggs from free-range hens, and "made in country X" labels, which are a way of saying "not using labour from countries with weaker worker protections". If there is enough popular demand, such information will become available, at least to give a competitive edge to companies that want to target the specific user segment that would rather avoid products made with AI, even at the cost of paying more for them.

If there is popular demand, then a label for these things will appear. But without legal regulation with significant penalties for lying, that label is essentially meaningless. Things like nutrition labels and ingredient lists took decades of work to standardize, and require ongoing enforcement. Labels like "organic" or "AI-free" are basically just advertising puffery. Trust them no more than you trust assurances from any business or corporation. IMNSHO.
 

Yup. But since AI tools will be an unavoidable part of life for many of your students as they get older, in your experience, are schools discussing how to incorporate at least their existence into the curriculum yet?
Honestly, schools are way behind on this and other societal changes. Schools are conservative institutions, not so much politically as in adapting to change.

Teachers are not consistently given the resources, training, and time to identify, adapt to, and instruct students in how to effectively use AI, social media, and other advancements. And AI and social media are changing so rapidly that this isn't just true within education, but in society at large.

And of course the politicians, who ultimately decide what happens in our schools, are even worse than educators on this! It's a big, complicated problem that goes beyond adapting to AI. I'm not optimistic that society at large, and our schools, are going to effectively adapt to the rapid changes. We certainly aren't doing so currently.

However, I would disagree that it is a definite thing that AI tools will be an unavoidable part of life for all folks . . . the current techbro push for AI in everything is a bubble (IMO) and will pop. I think it is more likely AI will continue to operate in the background, under most folks' radar. Which carries its own problems . . .
 

Honestly, schools are way behind on this and other societal changes. Schools are conservative institutions, not so much politically as in adapting to change.

Teachers are not consistently given the resources, training, and time to identify, adapt to, and instruct students in how to effectively use AI, social media, and other advancements. And AI and social media are changing so rapidly that this isn't just true within education, but in society at large.

And of course the politicians, who ultimately decide what happens in our schools, are even worse than educators on this! It's a big, complicated problem that goes beyond adapting to AI. I'm not optimistic that society at large, and our schools, are going to effectively adapt to the rapid changes. We certainly aren't doing so currently.

However, I would disagree that it is a definite thing that AI tools will be an unavoidable part of life for all folks . . . the current techbro push for AI in everything is a bubble (IMO) and will pop. I think it is more likely AI will continue to operate in the background, under most folks' radar. Which carries its own problems . . .

I remember when personal computers were a fad and a bubble. Now everyone has one in their hand.

I remember the dot com bubble bursting, but we are 10000% more online now than we ever were then.

I don't think AI is a bubble, but I do think it is overhyped, like most any new technology (blockchain being a poster child for this).

What we currently call AI is mostly machine learning. It's pretty much all statistics and large datasets. Data-driven decisions are always going to be valuable. Most email spam filters are a fairly primitive example of machine learning that existed long before such things picked up the AI buzzword.
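To make the spam-filter point concrete, here is a minimal naive Bayes classifier sketch: pure word-count statistics, no neural networks, exactly the kind of "statistics on datasets" described above. The corpus and messages are made up for illustration.

```python
# Minimal naive Bayes spam filter: count words per class, then score a
# message by Laplace-smoothed log-probabilities. Toy corpus, toy accuracy.
from collections import Counter
import math

def train(messages):
    """messages: list of (text, is_spam). Returns word counts and message totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
        totals[is_spam] += 1
    return counts, totals

def classify(text, counts, totals):
    """True if the message scores higher under the spam class."""
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for is_spam in (True, False):
        score = math.log(totals[is_spam] / sum(totals.values()))  # log prior
        n = sum(counts[is_spam].values())
        for word in text.lower().split():
            # Add-one smoothing so unseen words don't zero out the score.
            score += math.log((counts[is_spam][word] + 1) / (n + len(vocab)))
        scores[is_spam] = score
    return scores[True] > scores[False]

corpus = [
    ("win free money now", True),
    ("claim your free prize", True),
    ("meeting moved to friday", False),
    ("lunch on friday maybe", False),
]
counts, totals = train(corpus)
print(classify("free money prize", counts, totals))  # True (flagged as spam)
print(classify("friday meeting", counts, totals))    # False (kept)
```

Real spam filters layer on much more (header features, online updates, adversarial tuning), but the statistical core is the same.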

I don't think we are going to get away from machine learning anytime soon, so whatever they call it 10 years from now, the core of what AI is will be here to stay.
 

So, the man who started the Palisades fire asked ChatGPT to create an image of a "dystopian painting" that included a burning forest and a crowd of people running away from a fire, according to investigators.

His prompt to the AI tool included the text: "In the middle [of the painting], hundreds of thousands of people in poverty are trying to get past a gigantic gate with a big dollar sign on it.

"On the other side of the gate and the entire wall is a conglomerate of the richest people.

"They are chilling, watching the world burn down, and watching the people struggle. They are laughing, enjoying themselves, and dancing."

Along with "Are you at fault if a fire is lift [sic] because of your cigarettes?" among other uses.

ChatGPT has terms and conditions; Google's Gemini has the following at the bottom of the screen: "Gemini can make mistakes, so double-check it."

Other than a big warning label that takes up the same amount of space as an ad, I'm not sure what would work 🤷‍♂️ and despite the wishes of some, the toothpaste isn't going back into the tube.
 


Yep, even if the criticism that we really don't know can be true, given the lack of external control.

Personally I'm skeptical of external controls as well. They are typically an improvement on the Wild West, but they can also be used to corner a particular label, or to render the label nearly meaningless while lending it some faux credibility because such external controls exist.
 

Personally I'm skeptical of external controls as well. They are typically an improvement on the Wild West, but they can also be used to corner a particular label, or to render the label nearly meaningless while lending it some faux credibility because such external controls exist.

Usually labels are granted by NGOs (or by governmental regulation) that impose external control. That's not the same thing as self-declaration, which would certainly be less trustworthy. Sure, nothing is perfect.
 

Usually labels are granted by NGOs (or by governmental regulation) that impose external control. That's not the same thing as self-declaration, which would certainly be less trustworthy.

And if you don't trust the NGOs to fairly define and apply the labels? Say they define the label qualifications in such a way as to eliminate competition: tailoring the label so their preferred businesses can use it, while placing restrictions on it so that many of those businesses' competitors cannot.
 

I don't think we are going to get away from machine learning anytime soon, so whatever they call it 10 years from now, the core of what AI is will be here to stay.

I'm just hoping the raft of intro-to-AI courses popping up in education will help folks understand the breadth of what is out there... and prompt the field(s) as a whole to come up with some useful definitions. Because if AI in some places includes most of statistics, all of machine learning, and tons more, but is only thought of as LLMs by the public at large, it feels like an entirely useless term.

Last year I had to make a presentation on some of the expertise we had in our building about AI and related things, so I went to the faculty in our building who were Data Science adjacent and asked them to imagine clustering such folks into groups of roughly equal size labeled "Stat", "Machine Learning", and "AI", and which group they thought their expertise most fell into. It actually gave me a place to start.
 
