How Generative AIs work

Fair enough. Seems like the energy consumption issue is not a real thing to worry about.

So I will continue to focus on the exploitative and undemocratic nature of centralizing power through automation without checks and balances to ensure the output is used for the public good, or to share the wealth produced with society as a whole. But that's getting into politics, which generally doesn't make for productive discussion on gaming boards.
 


Umbran

Mod Squad
Staff member
Supporter
Concerns about the energy use of GenAI are disingenuous at best and insipid at worst.

Do those things before getting your panties in a bunch

Mod Note:
EN World requires folks to comport themselves in a respectful manner. But in this post, you insult those who disagree with you and treat them with disdain.

This is not acceptable. You shall not be posting in this thread again. In the future, treat people better.
 

UngainlyTitan

Legend
Supporter
Fair enough. Seems like the energy consumption issue is not a real thing to worry about.

So I will continue to focus on the exploitative and undemocratic nature of centralizing power through automation without checks and balances to ensure the output is used for the public good, or to share the wealth produced with society as a whole. But that's getting into politics, which generally doesn't make for productive discussion on gaming boards.
I think it is quite fair to be concerned about such things. But perhaps not in a thread about how it works.
 

I think it is quite fair to be concerned about such things. But perhaps not in a thread about how it works.
'How it works' includes the social context of its operation.

If we were talking about how internal combustion engines work, once we got past the initial mechanics and maybe chemistry, I think it would prove useful to talk about the extraction of fossil fuels, and the designs of different vehicles and what they're used for, and the consequences thereof.
 

UngainlyTitan

Legend
Supporter
'How it works' includes the social context of its operation.

If we were talking about how internal combustion engines work, once we got past the initial mechanics and maybe chemistry, I think it would prove useful to talk about the extraction of fossil fuels, and the designs of different vehicles and what they're used for, and the consequences thereof.
Pretty much every thread we have had that has touched on the topic of AI has been largely a discussion of the social and economic impacts of AI.
The trouble with this is that a lot of that type of discussion will violate the politics rules of this forum. On the other hand, while I have some intuition about how this works on a technical level, I really have no idea how it might evolve. I am not even sure how one gets from where we are now, which is a really effective text predictor, to something that is useful. And by useful, I mean useful to ordinary folks, not to the corporates currently hyping the technology.
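For anyone following along, "text predictor" can be made concrete with a toy sketch. Everything below is invented for illustration; a real LLM replaces the lookup table with a neural network that scores every token in a vocabulary of tens of thousands against thousands of tokens of context:

```python
# Toy next-token predictor. A real LLM runs the same loop, but the
# "table" is a learned neural network, not a hand-written dict, and
# the probabilities here are invented for illustration.
toy_model = {
    ("once", "upon"): {"a": 0.95, "the": 0.05},
    ("upon", "a"): {"time": 0.90, "midnight": 0.10},
    ("a", "time"): {",": 0.60, "there": 0.40},
}

def predict_next(context):
    """Return the most likely next token given the last two tokens."""
    dist = toy_model.get(tuple(context[-2:]), {})
    return max(dist, key=dist.get) if dist else None

tokens = ["once", "upon"]
for _ in range(3):
    nxt = predict_next(tokens)
    if nxt is None:
        break  # context never seen in "training": the model has nothing to say
    tokens.append(nxt)

print(" ".join(tokens))  # once upon a time ,
```

A real model also samples from the distribution rather than always taking the top choice, which is why the same prompt can give different outputs on different runs.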
 

Pretty much every thread we have had that has touched on the topic of AI has been largely a discussion of the social and economic impacts of AI.
They also tend to turn quickly into "generative AI is bad", which works as a stealthy way to shut down most general discussion of it.

I think it would prove useful to talk about the extraction of fossil fuels
Only if you want to get into climate change and all that. I've seen a number of discussions about various engines without once touching on the extraction of fossil fuels. It's only useful if you're aiming to make a point about the environment, IMO.
 

Well, okay, the original poster asked us to keep this focused on the tech, not the ethics. My bad. I've got concerns, but I should start a separate thread about those.

Sooooo, like, what other interesting discussions are to be had about how the technology works?

Like, I heard a podcast talk about how one version of ChatGPT was asked to find a stable way to stack a book, a wine bottle, a dozen eggs, and a nail, and it managed to describe a 3 x 4 pattern of the eggs, atop which the book could sit flatly, and then the wine bottle could be put atop that, with the nail stuck into the cork.

And the fact it could do that, despite it being likely such an arrangement had not occurred previously, made the podcasters curious how close the large language model's ability to label and associate words might be to how the human brain understands reality.

My understanding is that current large-scale generative software does not in any way 'understand' things, but is it possible to make a computer that could? Would it be useful if, like, a text-generating program like ChatGPT could build a 3D map and have an understanding of how weights and surfaces interact when it comes to stacking things? Would it yield better writing that way? How could such a model be 'trained' on that stuff?
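To make the question concrete, here's the kind of explicit physical check a 'grounded' model could consult rather than inferring everything from text. The objects, widths, and the one-rule "physics" are all invented cartoons:

```python
# Cartoon stability check for the stacking puzzle: an object counts as
# stable if it is no wider than whatever it rests on. Real physics cares
# about centres of mass, friction, etc.; the widths below are invented.
def stable(stack):
    """stack: list of (name, width_cm) from bottom to top."""
    for below, above in zip(stack, stack[1:]):
        if above[1] > below[1]:
            return False
    return True

eggs_3x4 = ("3x4 layer of eggs", 30)
book     = ("book", 25)
bottle   = ("wine bottle", 8)
nail     = ("nail", 1)

print(stable([eggs_3x4, book, bottle, nail]))  # True: narrower as we go up
print(stable([nail, bottle, book, eggs_3x4]))  # False: wine bottle on a nail
```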
 

ichabod

Legend
My understanding is that current large-scale generative software does not in any way 'understand' things, but is it possible to make a computer that could?
This is an old problem in AI, going back to Terry Winograd's SHRDLU in the late '60s. It let you refer to different blocks by shape and color and ask to have them moved around a simulated world.
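For a flavour of what that looked like, here's a toy blocks world in that spirit. Purely illustrative: the real SHRDLU was written in Lisp and Micro-Planner and handled far richer commands and questions.

```python
# Toy blocks world in the spirit of SHRDLU (illustrative only).
# The world maps each block to whatever it is resting on.
world = {
    "red pyramid": "table",
    "green cube": "table",
    "blue cube": "green cube",  # the blue cube sits on the green cube
}

def clear(block):
    """A block is clear if nothing is resting on top of it."""
    return all(support != block for support in world.values())

def put_on(block, target):
    if not clear(block):
        return f"I can't move the {block}; something is on top of it."
    if target != "table" and not clear(target):
        return f"I can't put anything on the {target}; it is occupied."
    world[block] = target
    return f"OK, the {block} is now on the {target}."

print(put_on("blue cube", "table"))         # OK
print(put_on("red pyramid", "green cube"))  # OK, the green cube was clear
print(put_on("green cube", "blue cube"))    # refused: the pyramid is on it
```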
Would it be useful if, like, a text-generating program like ChatGPT could build a 3D map and have an understanding of how weights and surfaces interact when it comes to stacking things? Would it yield better writing that way? How could such a model be 'trained' on that stuff?
I don't think you could really connect them directly. Where ChatGPT might help is in interpreting natural-language commands given by a user and translating them into commands SHRDLU could understand. That was a problem with SHRDLU: it worked if you understood how it expected commands to be phrased. So the guys who wrote it could make it work well, but not a random Joe who just started typing commands.
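A sketch of that division of labour: the LLM only turns free-form English into a rigid command the blocks-world code can execute, and the planner stays deterministic. The client call below is the standard OpenAI chat API, but the model name and the "PUT <block> ON <target>" command grammar are placeholders I've made up:

```python
# Sketch: an LLM as a natural-language front end for a SHRDLU-like
# planner. Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name and command grammar
# are placeholders, not anything SHRDLU actually used.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Translate the user's request into exactly one command of the form "
    "'PUT <block> ON <target>'. Known blocks: red pyramid, green cube, "
    "blue cube. 'table' is also a valid target. Output the command only."
)

def to_command(user_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user_text},
        ],
    )
    return resp.choices[0].message.content.strip()

# e.g. "could you pop the little blue one down on the table?"
#      -> "PUT blue cube ON table", which the toy world can execute
```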

But I think the problem being solved would have to be a significant part of the data the generative program is trained on. If your training data includes sufficient passages on picking up blocks and moving them around (or something equivalent), it would probably work okay. If not, the task could fall outside the generative AI's "experience" and the output would be unpredictable. This is because the generative AI's predictions are based on what it has seen in the training data.
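Here's a tiny worked illustration of that last point, with an invented two-sentence "training set". Counting which word follows which is a caricature of real training, but it shows why coverage of the problem matters:

```python
# "Training" here is just counting successors; the corpus is invented.
# Contexts seen in training get sensible guesses; unseen ones get nothing.
from collections import Counter, defaultdict

corpus = "pick up the red block . put the red block on the green block .".split()

follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def guess(prev):
    options = follows.get(prev)
    return options.most_common(1)[0][0] if options else "<no idea>"

print(guess("red"))   # 'block' -- followed 'red' twice in training
print(guess("eggs"))  # '<no idea>' -- never seen, no basis to predict
```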
 

Clint_L

Hero
Well, okay, the original poster asked us to keep this focused on the tech, not the ethics. My bad. I've got concerns, but I should start a separate thread about those.

Sooooo, like, what other interesting discussions are to be had about how the technology works?

Like, I heard a podcast talk about how one version of ChatGPT was asked to find a stable way to stack a book, a wine bottle, a dozen eggs, and a nail, and it managed to describe a 3 x 4 pattern of the eggs, atop which the book could sit flatly, and then the wine bottle could be put atop that, with the nail stuck into the cork.

And the fact it could do that, despite it being likely such an arrangement had not occurred previously, made the podcasters curious how close the large language model's ability to label and associate words might be to how the human brain understands reality.

My understanding is that current large-scale generative software does not in any way 'understand' things, but is it possible to make a computer that could? Would it be useful if, like, a text-generating program like ChatGPT could build a 3D map and have an understanding of how weights and surfaces interact when it comes to stacking things? Would it yield better writing that way? How could such a model be 'trained' on that stuff?
There's a developing body of research exploring the possibility that Generative AI, ChatGPT in particular, exceeds its original design parameters in ways that suggest a level of understanding.

 

 
