Charlaquin
Goblin Queen (She/Her/Hers)
Eh, thought better of it.
> Or perhaps, a predictive text generator is actually an even more powerful tool than we initially realized.

I mean, let's consider some of the crazy things that bees or ants can do: all of the things they can recognize, the communication, the pattern recognition, the navigation, etc. And their brains are TINY in comparison.
> I'm not sure if you would disagree with any point in the video, but it is an explanation of the process of creating/training LLMs.

Okay. I watched the video.
> Humans do not create Large Language Models.

I think there's a bit of nuance. A language model is just a probability distribution over sequences of words. For an LLM, a human never went in and assigned any weights to that probability distribution; instead, we programmed an algorithm to do that based on the large dataset we fed it. So in that sense it wasn't created by a human. But the same could really be said about anything produced by any computer algorithm. So in the broader sense, it's meant to sound like a grand claim while being technically true, but it's a fairly typical claim to make about any computer-generated output.
> We program bots that program them based on a series of "tests". We give them a task to do, tell a creator bot to make bots for tackling that problem, and eventually through trial and error, get a Large Language Model that has somehow figured out how to solve the task we asked it to do.

Not quite accurate. A language model is just a probability distribution over sequences of words. The task a language model solves is predicting the next word or words.
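To make that concrete, here's a minimal sketch of the idea (a toy illustration, not how any production LLM actually works): a bigram model whose "weights" are next-word probabilities that an algorithm estimates from a tiny made-up corpus. No human assigns them by hand.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "large dataset" (invented example text).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": an algorithm, not a human, derives the weights by
# counting which word follows which in the data.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Normalize the counts into a probability distribution over next words.
model = {
    prev: {word: n / sum(nxt.values()) for word, n in nxt.items()}
    for prev, nxt in counts.items()
}

# The task the model solves: predict the next word.
print(model["the"])  # e.g. {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(max(model["sat"], key=model["sat"].get))  # -> 'on'
```

A real LLM swaps the counting for gradient descent over billions of parameters, but the division of labor is the same: humans write the training algorithm and supply the data, and the algorithm sets the weights.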
> ChatGPT is obviously not sentient, but we also do not know the inner workings/logic that it uses to generate text.

Sure. But while we don't know the precise inner workings, with some knowledge of computer science and a basic understanding of LLMs we can certainly make educated guesses.
> Due to the black-box nature of the logic these AI use to solve the problems we give them, it would be very difficult to tell if an AI developed in the way it was ever did "evolve" into something more than just a text generator that was decently good at pretending to be a person.

At a high level we do "know" the logic they use: a probability distribution over words is created by an algorithm reading from a large dataset. The specifics of that algorithm are a mystery, but not because of emergent properties; it's simply because it's not public information.

Then once we have the model, the next step in the process is how the response algorithm is programmed to respond to prompts based on that probability distribution. Again, it's a black box there because we aren't privy to the precise algorithm being used.

IMO there's a lot of demystification that needs to happen around LLM-based AIs.
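In the spirit of that demystification, here's an equally hedged sketch of that response step: one simple way a system could turn such a distribution into text is to repeatedly sample a next word and append it. The model dict and function here are invented for illustration; whatever ChatGPT actually does at this step is more elaborate and isn't public.

```python
import random

# Toy next-word distribution (same shape as the bigram model sketched earlier).
model = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"on": 1.0},
    "on":  {"the": 1.0},
}

def generate(model, prompt_word, max_words=8, seed=0):
    """Repeatedly sample a next word from the model's distribution."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(max_words):
        dist = model.get(out[-1])
        if not dist:  # no known continuation; stop
            break
        words = list(dist)
        out.append(rng.choices(words, weights=[dist[w] for w in words], k=1)[0])
    return " ".join(out)

print(generate(model, "the"))  # e.g. "the dog sat on the cat sat on the"
```

Sampling rather than always taking the single most likely word is one reason the same prompt can produce different responses.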
> Okay. I watched the video.

I edited my last post. I misunderstood which tangent of the discussion this was and originally thought you were referring to the video I had posted as the one that was "clickbait". Apologies for the misunderstanding.
> I edited my last post. I misunderstood which tangent of the discussion this was and originally thought you were referring to the video I had posted as the one that was "clickbait". Apologies for the misunderstanding.

Ah, understood. Yours was a fun video, and not overly technical, so it's accessible to many. I won't say it was inaccurate, but I could see places in it where people might come away with some misleading ideas.