How Generative AIs Work

I'm no expert on this stuff, but I found this article confusing. The argument seems to be something like 'the LLM is stochastically parroting a selection of stochastic parrot skills, and is therefore unlikely to be a stochastic parrot.'
I think it's almost certainly a Norwegian Blue.
 


FrogReaver

As long as i get to be the frog
I am a professional mathematician by training, so I am very dubious of any attempt to prove anything that does not have a solid definitional structure. My only proof is evident to myself alone: my self-knowledge that I have consciousness. I cannot apply that to others.
How do you know you have consciousness?
 




Clint_L

Hero
  • They state: “What [the team] proves theoretically, and also confirms empirically, is that there is compositional generalization, meaning [LLMs] are able to put building blocks together that have never been put together. This, to me, is the essence of creativity.” Is it, though? Is it really creative to randomly put together stuff that has never been put together before? I feel there needs to be a bit more to creativity than that.
I think what is most compelling about generative AI is that it does not "randomly" put stuff together.

Here's an example from when I was first playing with Chat 3, more than a year ago, so a much weaker version of Chat:
write a memorandum for the office on not drinking my milk from the office fridge

Memorandum

To: Office Staff

From: [Your Name]

Subject: Office Fridge Usage

I am writing to remind all staff members that the office fridge is for communal use and should be treated with respect. Specifically, I would like to bring attention to the fact that several individuals have been seen taking and consuming milk from the office fridge without replacing it.

Please be mindful of your actions and refrain from consuming communal items without replacing them. If you do consume something, please ensure that it is replaced or restocked as soon as possible.

We all have a responsibility to maintain a clean and organized office environment, and this includes being conscientious about our use of the office fridge.

Thank you for your cooperation and understanding.

Sincerely,
[Your Name]


can you make it more passive aggressive and add sarcasm

Memorandum

To: Office Staff

From: [Your Name]

Subject: Office Fridge Usage (or lack thereof)

Greetings esteemed colleagues,

I would like to take a moment to address the elephant in the room, or should I say, the missing milk in the office fridge. It appears that some individuals have been under the impression that the office fridge is their personal dairy cow, and have been helping themselves to the milk without a second thought about replacing it for the rest of the office to enjoy.

I mean, who needs milk anyways, right? It's not like it's an essential part of a balanced breakfast or anything. I'm sure we can all survive on dry cereal and black coffee.

But, in all seriousness, can we please show some common courtesy and respect for our communal spaces and items? If you do decide to indulge in the office's milk supply, please make sure to replace it as soon as possible.

Let's all work together to maintain a clean and organized office environment, and not just for the sake of the milk, but for the overall well-being of the office.

Sincerely,
[Your Name] (who is now officially lactose intolerant)
There is nothing random about how these words have been put together, and Chat was able to take the first effort and apply the "passive aggressive and sarcasm" instruction to it. In conventional terms, it understood what I was asking it to do, and did it. That doesn't mean that it is sentient or conscious, but it seems clearly able to understand and execute effective responses to prompts written in plain language, thereby producing something new in the world. Thus, creative.

Also, the "lactose intolerant" double entendre at the end is legitimately amusing.
 

I think what is most compelling about generative AI is that it does not "randomly" put stuff together.
“Randomly” should not be thought of as synonymous with “completely at random”. Most modern data algorithms work “randomly” but their outputs converge to a state that is definitely not completely random. GenAI is no different; the process of convergence from a very random state to something that is useful is indeed “random”, but weighted towards outputs that are consistent with the input data the model was trained on.
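The "random but weighted" idea can be sketched in a few lines. This is a toy illustration, not any actual model's sampling code: the function name, the three-token vocabulary, and the scores are all invented. Real LLMs do the same kind of weighted draw over tens of thousands of tokens, with scores produced by the trained network, often rescaled by a "temperature" knob that controls how random the draw is.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from a softmax over raw scores (logits).

    The draw uses a random number, but it is weighted: tokens the
    model scores highly are chosen far more often than low-scoring
    ones. Higher temperature flattens the weights (more random);
    lower temperature concentrates them on the top token.
    """
    rng = rng or random.Random()
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted draw: walk the cumulative distribution until we
    # pass a uniform random threshold.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1

# Invented scores for a 3-token vocabulary: token 0 is strongly preferred.
rng = random.Random(0)
draws = [sample_next_token([5.0, 1.0, 0.0], rng=rng) for _ in range(1000)]
```

Every individual draw is random, but over many draws the output converges to the distribution implied by the scores, which is the sense in which the process is "random" without being "completely at random".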

Essentially, the majority of machine learning and AI algorithms want to produce outputs that could hide among the training data without being recognized as different. Even a basic linear regression model does the same. If you feed it data on heights and weights of people as training data, then if you give it a height, it tries to produce a weight that would look plausible given the inputs. When you ask a GenAI model to produce text for an input, it does exactly the same thing — pick the most plausible sequence of words. The two differences are:
  • The linear regression model has two parameters that were trained; a large GenAI model has on the order of 70,000,000,000.
  • Linear regression can be solved exactly, and so its output is deterministic. This is not the case for most other modern data models.
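The two-parameter regression in the first bullet can be written out in full. This is a minimal sketch using the closed-form least-squares solution; the height/weight numbers are made up for illustration. Because the fit is solved exactly, the same data always produces the same line, which is the determinism the second bullet contrasts with larger models.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, solved in closed form.

    The two trained "parameters" here are just the slope a and
    intercept b; no random initialization or iteration is involved,
    so the result is fully deterministic.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Made-up training data: heights in cm, weights in kg.
heights = [150, 160, 170, 180, 190]
weights = [50, 58, 66, 74, 82]
a, b = fit_line(heights, weights)

# Given a new height, the model emits a weight that would look
# plausible alongside the training data.
predicted = a * 175 + b
```

A GenAI model is doing the same kind of "produce something plausible given the inputs" prediction, just with billions of parameters instead of two, and without an exact closed-form solution.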
 
