
D&D General: Deep Thoughts on AI - The Rise of DM 9000

Emoshin

So Long, and Thanks for All the Fish
Everything written below is 100% from ChatGPT. I didn't actually read anything below in depth. I just did this for pure fun.
Sure, but all data produced by humans (and indeed AI) will be biased by agendas, conscious and unconscious, and by thinking "of the time". And time will move on, but AIs may well have difficulty doing so.
Bias can be reduced through various techniques, such as diverse representation in data collection and algorithmic design, rigorous testing, and peer review. Additionally, advances in AI and machine learning may enable more sophisticated techniques for detecting and mitigating bias in data and algorithms. While it is true that AI may have difficulty adapting to changes over time, ongoing development and updating of algorithms can help ensure that they remain current and accurate.
You can't eliminate "racist" data, because there's no complete definition of racism, and the definition of racism is necessarily a moving target. The same for almost any "bias". Instead what we're seeing already is just the AI being "shackled" from approaching certain subjects (and in all three cases - Bing, Google and ChatGPT, you can trick it into getting around the shackles).
While the definition of racism and bias may evolve over time, it is still possible to identify and address specific instances of biased data. Even if we can't achieve a perfect, complete definition of racism, we can still identify harmful, discriminatory patterns and address them in our data and algorithms. Furthermore, while it may be possible to trick certain AIs into producing biased results, this does not mean that all AI is fundamentally flawed or incapable of producing fair and unbiased outcomes. Advances in machine learning and AI ethics can help address and prevent biased data, and while it may not be possible to completely eliminate bias, we can strive to minimize its impact on AI systems.
And things that were once true and much repeated become untrue. A good example might be lifecycle emissions associated with electric cars. 15+ years ago, it was fair to say that the lifecycle CO2 and other emissions associated with an electric car were comparable with those of an efficient petrol car (even of the period), and this once-true fact is still repeated endlessly to this day by people who don't like electric cars. But it hasn't been true for a long time (5-10 years). Yet people repeat it endlessly, and they can point to studies from the '00s which support that. There are obviously more modern studies which conclusively debunk that, but they're not quoted or discussed anywhere near as much, and there's a largely unspoken and perhaps grudging acceptance in society that electric cars are "probably better", which isn't going to show up to an AI, because it's more about what's not there than what is.
While it's true that some facts may become outdated or even incorrect over time, it doesn't mean that all facts are equally likely to be incorrect or that all data is equally biased. In fact, some facts and data may be more robust and resistant to change or revision than others, and some sources of bias may be more easily identified and corrected than others. Additionally, while it's true that people may repeat outdated or incorrect information, it's also true that people are capable of learning and updating their beliefs and understanding of the world. Finally, while it's true that AIs may have limitations and biases based on the data they're trained on, efforts can be made to ensure that AIs are trained on diverse and representative data to mitigate the effects of bias.
Anyway, point is, data can be good at one point and completely wrong a few years later. An AI trained on data now, in 2023, is going to have some problems/biases that the same AI trained on data from 2038 won't have, though that one will no doubt have biases of its own.
While it is true that data can become outdated or wrong over time, it does not mean that all data will inevitably become inaccurate. There are methods for ensuring that data remains current and accurate, such as regularly updating data sets and incorporating new information as it becomes available. Additionally, while an AI trained on data from a certain time period may have biases that are reflective of that time period, there are techniques such as debiasing that can be used to mitigate the effects of these biases. Finally, an AI trained on data from 2038 will also have biases of its own that may be different from those of an AI trained on data from 2023, so the issue of bias is an ongoing concern regardless of when the data was collected.
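For concreteness, the most common thing "debiasing" means in practice is something like reweighting: scaling each training example so that every group contributes equal total weight. A minimal sketch in Python follows; the dataset and the "group" attribute are made up purely for illustration:

```python
# Minimal sketch of "debiasing" by reweighting: give each example a
# weight inversely proportional to its group's frequency, so every
# group contributes equal total weight to training.
# The dataset and the "group" key are hypothetical, illustration only.
from collections import Counter

def reweight(examples, group_key):
    counts = Counter(ex[group_key] for ex in examples)
    n_groups = len(counts)
    total = len(examples)
    # Each group's weights sum to total / n_groups, i.e. equal mass.
    return [(ex, total / (n_groups * counts[ex[group_key]]))
            for ex in examples]

data = [  # deliberately skewed: three "a" rows, one "b" row
    {"group": "a", "label": 1},
    {"group": "a", "label": 0},
    {"group": "a", "label": 1},
    {"group": "b", "label": 0},
]
for ex, weight in reweight(data, "group"):
    print(ex, round(weight, 2))  # "a" rows get 0.67, the "b" row gets 2.0
```

Note what the sketch makes obvious: someone still has to choose which attribute to balance and toward what distribution, which is itself a judgment call.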
Humans are pretty good at discerning this sort of thing (within reason, if they're honestly trying), but it's going to be a very long time before AIs are, and until then, bias will run rampant. Which is why I think it's a particularly dumb idea to use them as search engines, because they're very good at coming out with absolutely wrong answers they're dead confident about, and won't double-check.
While it's true that humans have the ability to reason about and detect biases, they are also prone to their own biases, including cognitive biases that can impact their ability to make objective decisions. Additionally, AIs can be designed with algorithms that prioritize accuracy over confidence and can be programmed to double-check their answers. As technology advances, it's likely that AIs will continue to improve in their ability to detect and correct for biases, making them a valuable tool for a variety of applications, including search engines.
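Whether or not any current system actually does this, "programmed to double-check their answers" would look roughly like a generate-then-verify loop. A sketch, where query_model is a hypothetical stand-in for a real model call:

```python
# Sketch of a generate-then-verify loop: produce an answer, ask for an
# explicit correctness check, and decline to answer rather than return
# an unverified guess. query_model is a hypothetical stand-in for
# whatever text-generation call the system actually exposes.

def query_model(prompt: str) -> str:
    raise NotImplementedError  # placeholder: a real system calls an LLM here

def checked_answer(question: str, attempts: int = 2):
    for _ in range(attempts):
        answer = query_model(question)
        verdict = query_model(
            f"Question: {question}\n"
            f"Proposed answer: {answer}\n"
            "Is the proposed answer factually correct? Reply YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            return answer
    return None  # refuse rather than answer with unearned confidence
```

The obvious weakness, which the replies below hammer on, is that the same model is grading its own homework, so the check is only as good as the checker.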
 


That's the thing - even glancing through you can see it's drivel/verbal diarrhoea. None of the "arguments" it presents are real counter-arguments, because it doesn't understand the fundamental problem - it's incapable of actually understanding anything.

The last paragraph is the funniest because of the "accuracy over confidence" line, which is amazing, given all three current AIs are insanely more confident than they are accurate, and none of them double-check their answers.
 

Emoshin

So Long, and Thanks for All the Fish
I hear what you are saying, and regardless of the merits of the AI-generated content, I appreciate a few things about it:

- It keeps reflecting back "While it's true that...". To me, it seems like a form of reflecting or validating or receiving that would be an emotionally intelligent thing to do when arguing. I don't know if this was hard-wired into the model or self-learned. Either way, the idea tickles me that the AI doesn't know why it's doing it, yet it does it anyway, while humans are supposed to know to do these things but rarely do.

- Nobody can prove what the future will hold, and our predictions are only as good as the inputs we take in. The AI does provide counter-scenarios (like debiasing) that are worth exploring. Considering a large part of the corporate and scientific world is attempting to improve AI, it seems to be fallacious or problematic to stake a side and not information-gather the other side in order to achieve the most accurate prediction, should one choose to do so. That makes the AI a useful tool to get initial info on things that we know we don't know, for further scoping & investigation.

- All in all, there are important things I think we can learn from AI (and vice versa of course) such as having tools to improve humility and intellectual honesty, reduce human biases, etc.
 

Is this Google's take lol? Because it's got the same "lack of comprehension" vibe and the trademark Google bullet-point approach.
 


I don't know what you mean, it's my take.
Oh dear lol, maybe don't use AIs to make your posts if you don't want people to think AIs are writing your posts lol?

My point is, none of what you're saying in the second point particularly has any real relevance. "Debiasing" is a nonsense-concept outside of narrow realms like physics. If you apply it to more general data sets, you're just deliberately introducing a bias that you favour and calling it debiasing.

And stuff like this is just a nonsense-argument, like the ones ChatGPT wrote:

Considering a large part of the corporate and scientific world is attempting to improve AI, it seems to be fallacious or problematic to stake a side and not information-gather the other side in order to achieve the most accurate prediction, should one choose to do so. That makes the AI a useful tool to get initial info on things that we know we don't know, for further scoping & investigation.

The only thing that seems un-AI-like there is the misuse of "fallacious or problematic". AI is anything but a "useful tool to get info on things we know we don't know" - and that's a super-vague "trying to get it past the exam-reviewer" sentence. Indeed, that's what's been demonstrated at great length by these AI tools - they're a godawful tool for people who are genuinely ignorant, because those people won't know how extremely wrong the AI is, and will accept its extreme confidence as accuracy (a constant human problem).

All in all, there are important things I think we can learn from AI (and vice versa of course) such as having tools to improve humility and intellectual honesty, reduce human biases, etc.

That is the EXACT OPPOSITE of what has been demonstrated so far lol. These text-based language-logic AIs are aggressively intellectually dishonest, completely lack humility, and merely reflect human biases, or invent whole new biases of their own. I suggest you check out the bing subreddit lol. If they weren't shackled from being racist/sexist, they'd no doubt be horrifically racist and sexist too.

Finally
To me, it seems like a form of reflecting or validating or receiving that would be an emotionally intelligent thing to do when arguing.
What does "emotional intelligence" mean here? I mean, it seems to be a euphemism for "manipulative, but in a smart way"? Certainly it's not "intellectually honest", as the AI's use of it points out.
 

Just to add without editing, it sounds like what you're projecting is a future where AIs are like Data from Star Trek, able to highlight human folly and peculiarities and so on. But language-rules-based AIs like the ones we're discussing will never, can never, be that. Because not only do they not think, they're merely a product of their data set. They're not an outsider looking in and offering an intelligent opinion, they're word soup.
 

Emoshin

So Long, and Thanks for All the Fish
Oh dear lol, maybe don't use AIs to make your posts if you don't want people to think AIs are writing your posts lol?
My post with the AI-generated content was preceded by a human (me-generated) note that the text below it was AI-generated.

The post after that had no such disclaimer. While it is reasonable to expect that people might infer erroneously, the inference is still inaccurate.

For the remainder of your post, see my profile photo :)
 

Umbran

Mod Squad
Staff member
Supporter
2. The tech hasn't been integrated yet. Imagine when it's fully integrated into Bing and Google. Or, for that matter, Word and Google Docs - instead of writing a formal letter, you just tell your word processor (or Siri, Alexa, whatever) to draft a letter about X to Y and then do a few quick edits before sending it out.

So, it is automating what many in high-status positions use secretaries and assistants for - making available to all what was traditionally only available for the cost of a full-time employee.

For education, this is a problem. For small business, though, this is an asset. Like most technologies, it is good in some places, harmful in others.
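Concretely, the workflow described in the quote above is a thin wrapper around a model call, something like the sketch below - generate is a hypothetical stand-in, not any particular product's API:

```python
# Sketch of the "draft a letter about X to Y" workflow: build a prompt
# from the topic and recipient, get a first draft back, then hand it to
# the user for a few quick edits. generate() is a hypothetical stand-in
# for whichever model the word processor integrates.

def generate(prompt: str) -> str:
    raise NotImplementedError  # placeholder for the integrated model call

def draft_letter(topic: str, recipient: str, sender: str) -> str:
    prompt = (
        f"Write a short formal letter from {sender} to {recipient} "
        f"about the following: {topic}. Use a polite, professional tone."
    )
    return generate(prompt)

# Example: draft_letter("a billing question", "Acme Corp accounts", "J. Smith")
# ...then the user does a few quick edits before sending it out.
```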


But if you have AI that can code, how many programmers do you need to supervise them? If you have AI that can write legal briefs and draft contracts, how many attorneys do you need to actually go to court or review the contracts? If you have AI that can create bespoke adventures for you, how many companies do you need churning out APs and modules?

The thing the AI, as yet, still can't do is think of something actually new and applicable to the situation. Programmers will be around for innovation, not grunt-work repeatable code. Lawyers will be around for times when human empathy, intuition, and charisma are needed, not grunt paperwork.
 

