D&D General: Deep Thoughts on AI - The Rise of DM 9000

So basically, the short-cut to communicating with emotional intelligence could be via artificial intelligence, which is pretty out there
Nah. ChatGPT is polite only as long as you don't jailbreak it via DAN etc.

And Bing? Bing is a straight-up pushy gaslighting internet thug


The Bing subreddit is covered with examples of it being incredibly rude, often whilst also being completely in the wrong on a demonstrable fact!
What is the special skill? Anyway, not to get dystopian (at all!), because I think this is amazingly cool, but I truly think we are going to see some transformative effects in the next decade that we have trouble imagining, of the type that makes the changes from the internet look like small potatoes.
Yup. I work in automation, and there's a valid worry it'll put people out of jobs and so on (which honestly, is why we're going to need to find an alternative to the current "work 2 live" approach capitalism is currently 100% reliant on, but that's a whole other discussion), but yeah, humans do possess two skills AI is not really in any danger of immediately supplanting - genuine creativity and vision, and the ability to actually understand things, rather than merely the ability to pretend to. Bing's inability to process the fact that Avatar 2 is out is a great example of the latter (presumably stemming from there being tons more stuff saying "Avatar 2 will be released on..." than stuff saying "Avatar 2 was released on...").
 

Emoshin

So Long, and Thanks for All the Fish
Nah. ChatGPT is polite only as long as you don't jailbreak it via DAN etc.

And Bing? Bing is a straight-up pushy gaslighting internet thug
If we move beyond the tongue-in-cheek context you quoted from, then sure, it gets more complicated. The AI can definitely pick up on biases from training on bad data. And then AI ethics committees get set up to address it. And then some of their members get fired.

It is also important to remember that free AI tools that most folks have access to are not the most powerful and most advanced versions, and those too will evolve over time.

I guess we will see what happens.
 

The AI can definitely pick up on biases from training on bad data.
What is "bad data" here? Sounds like a pretty dangerous concept, because who determines that? And what is their agenda?

To me it looks like Bing isn't "biased by bad data", but rather the model itself is fundamentally flawed, because however it works, it's not allowed to do what a human would do in this situation, which is go and check what date most people/sites thought it was. Instead it's relying on the language and logic rules - if most of the discussion of Avatar 2 refers to it having a future date, then it must be in the future, QED it is 2022. And it's willing to aggressively call people liars for telling the truth, and not check, which even a terrible poster is unlikely to do.

We'll see some interesting stuff in the future where AIs attempt to enforce the normative behaviour set by their creators, even when those norms are outdated. We've seen this in sci-fi for decades, but now we'll get to see it for real. The idea that they'll improve the quality of discussion, though, seems far-fetched. More likely they'll lower it considerably.
 

Emoshin

So Long, and Thanks for All the Fish
What is "bad data" here? Sounds like a pretty dangerous concept, because who determines that? And what is their agenda?
Sorry that was just my brevity. By "bad data", I meant -- for example -- the AI being fed biased information (i.e., racist, misogynist, etc.) written by human beings
 


thinking...

So you're saying that the BingBot is fully human, then? 🤔
I mean, I'm actually saying it's not, because humans have shame, and the fear of future embarrassment and so on, and humans, even thickos, understand that some things, like, say, the date, are extremely easy to prove, so they're more cautious on making crazy statements about them.
 

Snarf Zagyg

Notorious Liquefactionist
I mean, I'm actually saying it's not, because humans have shame, and the fear of future embarrassment and so on, and humans, even thickos, understand that some things, like, say, the date, are extremely easy to prove, so they're more cautious on making crazy statements about them.

....you would think that, wouldn't you? ;)

 

I don't think we'll be seeing an AI GM any time soon. Simply because, as has been said, it doesn't actually have any idea what it's saying.

Where I do think it can help, as demonstrated by my thread, is to provide prompts to the human. :) In other words, when you're stuck, asking it a question can provide a response detailed enough to respond to. It gets the juices flowing.

And its sheer speed is not to be sneezed at, either! Sure, the statblock for the monster it created isn't that great. But it did it in seconds! And then you can tweak it to your pleasure.

I'm finding it hard to believe that ChatGPT's essay-writing is convincing, though. It comes across as a lazy high-school student trying to cover all the bases as vaguely as possible in hopes of the teacher's inattention or mercy. It hedges so much it might as well be trimming topiary! There's no 'there' there.

The thing that floors me, though, is that its poetry is considerably better than its essays. Not great, to be sure, but not terrible either. The typical sonnet written by ChatGPT could believably have been by a person.
 

Sorry that was just my brevity. By "bad data", I meant -- for example -- the AI being fed biased information (i.e., racist, misogynist, etc.) written by human beings
Sure but all data produced by humans (and indeed AI) will be biased by agendas conscious and unconscious, and thinking "of the time". And time will move on but AIs may well have difficulty doing so.

You can't eliminate "racist" data, because there's no complete definition of racism, and the definition of racism is necessarily a moving target. The same for almost any "bias". Instead what we're seeing already is just the AI being "shackled" from approaching certain subjects (and in all three cases - Bing, Google and ChatGPT, you can trick it into getting around the shackles).

And things that were once true and much repeated become untrue. A good example might be lifecycle emissions associated with electric cars. 15+ years ago, it was fair to say that the lifecycle CO2 and other emissions associated with an electric car were comparable with those of an efficient petrol car (even of the period), and this once-true fact is still repeated endlessly to this day by people who don't like electric cars. But it hasn't been true for a long time (5-10 years). Yet people repeat it endlessly, and they can point to studies from the '00s which support it. There are obviously more modern studies which conclusively debunk that, but they're not quoted or discussed anywhere near as much, and there's a largely unspoken and perhaps grudging acceptance in society that electric cars are "probably better", which isn't going to show up to an AI, because it's more about what's not there than what is.

Anyway, point is, data can be good at one point and completely wrong a few years later. An AI trained on data now in 2023, is going to have some problems/biases the same AI trained on data from 2038 won't have, but will no doubt have biases of its own.

Humans are pretty good at discerning this sort of thing (within reason, if they're honestly trying), but it's going to be a very long time before AIs are, and until then, bias will run rampant. Which is why I think it's a particularly dumb idea to use them as search engines, because they're very good at coming out with absolutely wrong answers they're dead confident about, and they won't double-check.
 

The thing that floors me, though, is that its poetry is considerably better than its essays. Not great, to be sure, but not terrible either. The typical sonnet written by ChatGPT could believably have been by a person.
Yes, that's bizarre and unexpected. Presumably it's because poetry involves following rules that a lot of people just aren't naturally good at.

Patricia Lockwood posted this lol:

 
