Critical Role removes hundreds of YouTube videos and podcast episodes.

Status: Not open for further replies.

Vaalingrade

Legend
2023 AI is clunky and horrible at what it's meant to do (as in, it will just produce lies when asked for information), and a lot of 2023 humans wouldn't be able to pass the Turing Test.

AI can't even recognize the work of AI when specifically built to recognize AI.
 



FrogReaver

As long as i get to be the frog
I believe that the current technology can never become conscious, in principle.
I’m not even worried about consciousness. When I’m in the car, I don’t want to tell it no three times and have it end up understanding all three as yeses and close my account - because that’s happened before.

When I tell it that the info it’s given didn’t help, I don’t want it to start over - I want it to get me to someone who can provide different info.


That said, I suspect future technological mediums, such as organic computers and perhaps quantum computers, will open up the possibility of actual consciousness.
Quantum computing seems vastly misunderstood by most. It’s a different kind of computing, not a vastly superior kind. It will be terrible at some regular computational tasks and much, much better at some of today’s computationally intensive tasks. They are just different.
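To put rough numbers on that difference: for unstructured search, a classical scan needs about N/2 lookups on average, while Grover's quantum algorithm needs roughly (π/4)·√N oracle queries - a huge win there, and no help at all for ordinary sequential work. A back-of-the-envelope sketch (plain arithmetic, not a quantum simulation):

```python
import math

# Rough query counts for unstructured search over N items:
# a classical scan needs ~N/2 lookups on average, while Grover's
# algorithm needs ~(pi/4) * sqrt(N) oracle queries. This only
# illustrates the asymptotic gap -- it is not a quantum simulation.
for n in (1_000, 1_000_000, 1_000_000_000):
    classical = n / 2
    grover = (math.pi / 4) * math.sqrt(n)
    print(f"N={n:>13,}  classical ~{classical:>13,.0f}  Grover ~{grover:>9,.0f}")
```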
 

Yaarel

He Mage
I’m not even worried about consciousness. When I’m in the car, I don’t want to tell it no three times and have it end up understanding all three as yeses and close my account - because that’s happened before.

When I tell it that the info it’s given didn’t help, I don’t want it to start over - I want it to get me to someone who can provide different info.



Quantum computing seems vastly misunderstood by most. It’s a different kind of computing, not a vastly superior kind. It will be terrible at some regular computational tasks and much, much better at some of today’s computationally intensive tasks. They are just different.
Yeah, nothing is more infuriating than a computer doing something that one didn't tell it to do.
 

Oofta

Legend
In other words, 2023 AI is amazing but not yet able to pass the Turing Test.
I think we need a more advanced Turing test. :) There's a lot of discussion about how you really test AI because a good LLM can potentially fool a human even if it is just using pattern recognition without comprehension. True intelligence is a difficult thing to test for, especially when it could think in ways we don't understand.
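To see why pattern matching alone muddies the test: even a 1960s-style ELIZA script - a handful of canned rules with zero comprehension - fooled some people. A minimal sketch of the idea (the rules here are invented for illustration):

```python
import re

# Toy ELIZA-style responder: a few canned regex rules and no
# comprehension whatsoever. The rules are invented for illustration.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i think (.*)", "What makes you think {0}?"),
    (r".*\?$", "What do you think?"),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, text.lower().strip())
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I feel like these chatbots are overhyped"))
# -> Why do you feel like these chatbots are overhyped?
```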

How close we are to the singularity is open to debate. We likely won't know until we actually create one. Hopefully it's not like a short story I remember reading once. The scientists working on the project finally flipped the switch on an all-powerful thinking machine. The first question they asked was "Is there a God?" The response? "There is now."
 

Yaarel

He Mage
I think we need a more advanced Turing test. :) There's a lot of discussion about how you really test AI because a good LLM can potentially fool a human even if it is just using pattern recognition without comprehension. True intelligence is a difficult thing to test for, especially when it could think in ways we don't understand.

How close we are to the singularity is open to debate. We likely won't know until we actually create one. Hopefully it's not like a short story I remember reading once. The scientists working on the project finally flipped the switch on an all-powerful thinking machine. The first question they asked was "Is there a God?" The response? "There is now."
I like the Turing Test because it is pragmatic. It is a real-life, conceptually salient challenge.

In my mind, it is like the first time an engine matched the speed of a horse - just before the era of leaving the horse far behind in the dust.

At a certain point, an AI will be about as good as a human. Just a few years away.
 

Clint_L

Hero
AI in the form of LLMs (large language models) doesn't understand anything. It's just pattern recognition of previously used words, spat back out as organized word salad. Useful in some situations - for example, it helps people who work at help desks get up to speed more quickly - but it doesn't think in any meaningful way.
Eh...I argued the same until I read some of the recent research reports that are starting to come out re: LLMs. They are going well beyond their original design, such as by figuring out how to jury-rig themselves a working memory in order to solve problems that should not have been solvable within their original design limitations. These are evolving programs, and no one in the field has a perfect understanding of how they operate or what their limitations are.
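For the curious, one way that kind of jury-rigged working memory can work in practice is an external scratchpad: intermediate results get appended to the prompt as plain text, since the model itself retains nothing between calls. A minimal sketch - `complete()` here is a hypothetical stub standing in for any real model call:

```python
# External "scratchpad" as jury-rigged working memory: each step's
# output is appended to the prompt text, because the model itself
# retains nothing between calls.

def complete(prompt: str) -> str:
    # Hypothetical stub: a real system would call a language model here.
    return f"(model output given {len(prompt)} chars of context)"

def solve_with_scratchpad(task: str, steps: int = 3) -> str:
    scratchpad = ""
    for i in range(1, steps + 1):
        prompt = f"Task: {task}\nNotes so far:\n{scratchpad}\nNext step:"
        # Feed every earlier step back in as plain text.
        scratchpad += f"{i}. {complete(prompt)}\n"
    return scratchpad

print(solve_with_scratchpad("multiply 87 by 46"))
```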

IMO, it is risky to anthropomorphize them by comparing them to human benchmarks. AI sentience, if it happens, might not look anything like human sentience. Plus, we don't really know how human sentience works, or have firm agreement on how to measure it. Most people, for example, think of humans as having a sort of cohesive core or mind, but research shows that how our minds actually work and how we perceive them working are utterly distinct.

As a Language and Literature and Theory of Knowledge teacher with decades of experience, one thing LLMs have done is make me question what I thought I knew about human creativity, specifically in the context of how and why I assess writing...which is a huge part of my job. We used to regard the composition as a sort of gold-standard method of assessing human intelligence, but it turns out that what we were mostly assessing was probably less about creative problem solving and more about memory and repetition.
 


Umbran

Mod Squad
Staff member
Supporter
Permanence is an illusion. Control is an illusion.

Those together mean that "ownership" is an illusion, a social construct, a figment of our collective consensus that has no basis in physical reality. So, there's a limit to the angst I'm going to have about owning information related to an entertainment game - be it a game book, or a video of a bunch of people playing the game.

Yeah, I'm probably going to lose some of it now and then. Indeed, it has already happened. Right now, for example, I don't know where my 1e D&D Monster Manual is! It may exist no more - a victim of water damage - or may have wandered, unrecoverable, into someone else's collection by accident. Whoops! But then, I'm unlikely to run a 1e D&D campaign ever again. So, functionally, it hardly matters.

I am not Marie Kondo, but I recognize that gripping tightly to things that I'm unlikely to use is wasted energy. With only so many hours in the day, one has to pick one's battles.
 

Oofta

Legend
Eh...I argued the same until I read some of the recent research reports that are starting to come out re: LLMs. They are going well beyond their original design, such as by figuring out how to jury-rig themselves a working memory in order to solve problems that should not have been solvable within their original design limitations. These are evolving programs, and no one in the field has a perfect understanding of how they operate or what their limitations are.

IMO, it is risky to anthropomorphize them by comparing them to human benchmarks. AI sentience, if it happens, might not look anything like human sentience. Plus, we don't really know how human sentience works, or have firm agreement on how to measure it. Most people, for example, think of humans as having a sort of cohesive core or mind, but research shows that how our minds actually work and how we perceive them working are utterly distinct.

As a Language and Literature and Theory of Knowledge teacher with decades of experience, one thing LLMs have done is make me question what I thought I knew about human creativity, specifically in the context of how and why I assess writing...which is a huge part of my job. We used to regard the composition as a sort of gold-standard method of assessing human intelligence, but it turns out that what we were mostly assessing was probably less about creative problem solving and more about memory and repetition.

Maybe? I will agree that we may not even realize a true general AI when we create it. We may not really know why it does what it does or what its motivations truly are. People always assume it's going to want to continue its existence, take over the world, be fruitful and multiply, so to speak. Maybe it will. Or maybe it won't, because it doesn't really care and it's just happy playing D&D and DMing games all day long.

On a side note, assessing writing is always going to be tricky. I remember an English lit class in college where I attempted to do actual analysis of a book we were reading. I thought I did a good analysis (I have no clue now whether I did or not), but I got a poor grade. Why? Because it wasn't the teacher's interpretation. I learned from then on to just regurgitate his ideas in my own words, and I got a good grade after that.

We are products of evolution and culture who don't even realize our own illusions and rarely understand our built-in biases and assumptions. We didn't evolve to understand the world as it really is; we evolved to understand it in a way that served to continue the existence of our genetic code. What will AI evolve into? We just don't know.
 
