D&D General Plagiarised D&D art

You are entirely correct that generative AI is not SkyNet or HAL 9000.

But neither is it brute-force automation. That is colorful, but a thoroughly inaccurate description of the technology. Generative AI is, in fact, an attempt to avoid using brute force, by using empirical training to make what amounts to an educated guess.
I mean, that's true - I'm wildly oversimplifying and making the description inaccurate in the process. But only the brute-force application of huge amounts of data allows one to create and maintain the "educated" guess machine, and the machine itself requires a frankly shocking amount of processing power (and thus real electricity!) if it's doing anything even mildly impressive. Which I would personally see as a kind of "brute force". But yeah, it's not just a big spreadsheet, which is kind of what I implied.

This is why it's happening now - not because we didn't have the ideas to do this before (the fundamental concepts have been around for a long time), but because we didn't have the raw power to throw at those ideas and make them mass-marketable and so on.
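For what it's worth, the distinction being argued over here can be sketched in a few lines of toy code - purely illustrative, of course; no real model works on word pairs like this:

```python
# Toy contrast between scanning raw data on every query ("brute force")
# and paying the cost once in training to build an "educated guess" machine.
# Same data either way; what differs is how and when it gets used.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran"
tokens = corpus.split()

# Brute force: re-scan the whole corpus every time we're asked.
def brute_force_next(word):
    followers = [tokens[i + 1] for i in range(len(tokens) - 1) if tokens[i] == word]
    return Counter(followers).most_common(1)[0][0] if followers else None

# "Educated guess": train once, then answer from the learned statistics.
model = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    model[prev][nxt] += 1

def trained_next(word):
    return model[word].most_common(1)[0][0] if model[word] else None
```

Both give the same answer; the trained version just front-loads the work - which is also why the training step is where all the data and electricity go.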
 


Umbran

Mod Squad
Staff member
Supporter
Are you able to cite any US case law ...

Mod note:
Only a couple of folks here are lawyers who would be in a position to cite case law. Fewer still are specifically IP lawyers.

Your repeated insistence on this very specific piece of evidence reads to others as intentionally requiring a burden of proof that is rather above what is reasonable for this context. The reasons one might do that are not all flattering, so you might want to reconsider this approach.

You are not in court, or filing legal papers. You are in what is supposed to be a friendly discussion. Please be friendly.
 

Umbran

Mod Squad
Staff member
Supporter
but only the brute-force application of huge amounts of data allows to create and maintain the "educated" guess machine

I'm sorry, I still find the term "brute force" here to be an emotionally-loaded inaccuracy.

Yes, training generative AIs takes a whole lot of data. That is less because that data is used in a ham-handed way, and more because the AIs are being asked to respond to a very broad range of prompts. F'rex: If you were building a generative AI to, say, ONLY produce legal forms, it would need a much, much smaller training set.

"Brute force" isn't about the size of the data set, but about the character/nature of how that data is used. And, with all due respect - there's a whole lot of cleverness and nuance in that. Training a generative AI is not a computational sledgehammer.
 

Vaalingrade

Legend
Sure, there have been, and will be, dead-end technologies. Zeppelins are probably a poor example, in that we can view airplanes as supplanting them, but I get the idea of the dead-end.

But, to be clear, I am not "backing" any particular technology. I have no personal emotional investment in generative AI. If it curls up and dies, that's fine by me.

I am not backing the tech - I am backing acceptance of the fact that the only constant in the world is change, and that the root issues are in how we deal with change, not in any particular technology.
I wasn't talking about you specifically. The 'horse and buggy' line is ubiquitous.
 
F'rex: If you were building a generative AI to, say, ONLY produce legal forms, it would need a much, much smaller training set.
Sure, but legal forms are utterly trivial to automate in much more straightforward ways than using generative AI. This is why none of the various AI-based, legal-sector-oriented document automation tools have really caught on yet*. There's established tech in this field (that I work with every day), and AI is just another way of attacking the same problem - a more expensive and less reliable way that also requires a lot more data to do its job.

With conventional document automation, you need one lawyer, or a small group of lawyers, to go through a document once and note how to automate it, and you can do it on documents where your "data set" is pretty small or specialized (theoretically it can even be zero, for a novel document). We've looked at AI-based document automation since, I believe, 2018, and it just requires too much data, and the outputs are still too janky to really save time/money. I notice one company we looked at has basically given up on proper bespoke document automation (which was their pitch when I first came across them) and is now just doing generic stuff, no longer looking to work with law firms but rather with non-legal clients, particularly SMEs. I guess at least they didn't go bust! There's that!
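To illustrate what the conventional approach amounts to - mark the document up once, and it then works deterministically with zero training data (the field names below are made up for the sake of the sketch; real systems are far more elaborate):

```python
# Minimal sketch of template-based document automation: a lawyer
# annotates the document once, after which it fills in reliably.
# All field names here are hypothetical.
from string import Template

engagement_letter = Template(
    "Dear $client_name,\n"
    "This letter confirms that $firm_name will act for you in relation to "
    "$matter_description, at an hourly rate of $rate.\n"
)

def generate(fields: dict) -> str:
    # substitute() raises KeyError on a missing field, rather than
    # silently guessing - the opposite failure mode to a generative model.
    return engagement_letter.substitute(fields)
```

The design trade-off is exactly the one described above: up-front human effort per template, in exchange for outputs that never hallucinate.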

I'm aware there are situations where a smaller data set can work - I've heard of non-LLM, more specialized generative AIs being proposed for various tasks, potentially a more genuinely transformative tech, though I'm unsure how many of those, if any, have gone ahead.

* = AI has been used for years in the legal field in complex/large-scale discovery, but that's not generative AI - that's AI trained to recognise certain things. And indeed recognition-oriented AI can be used in other ways and has been for a long time (you can use it to pull certain kinds of data from gigantic packs of discovery docs pretty well, for example). But for actual document automation, of which automated legal forms are a very simple kind, AI just hasn't been cutting it compared to conventional methods - and part of that is that, at least with the approaches we're aware of, you need an awful lot of data to train it.
"Brute force" isn't about the size of the data set, but about the character/nature of how that data is used. And, with all due respect - there's a whole lot of cleverness and nuance in that. Training a generative AI is not a computational sledgehammer.
I kind of agree - I certainly don't want to be too annoying about it, and I know I can be, so sorry about that! It's not a sledgehammer, but for me "brute force" isn't very emotive (colourful, yes), and isn't as specifically literal as that - for me it means anything that functions in large part because it has a huge amount of power and data behind it. And whilst generative AI can potentially work with a smaller set of data, that's not the stuff that's making headlines.

The last even moderate "leap", technologically and conceptually, in generative AI, as I understand it (and I am totally prepared to be corrected on this), was in, like, 2017, and the core concepts go back deep into the 20th century - they couldn't really be implemented because the processing power wasn't there, not because the ideas weren't.

Without mass-scraping of truly insane amounts of data, none of the LLMs would exist, and none of the AI art generators would exist. The only reason ChatGPT can put out programming stuff, for example, is that it ingested the whole of GitHub! Maybe "brute force" is the wrong phrase for that, but it's something in that direction, imho.
 

Umbran

Mod Squad
Staff member
Supporter
I consider thought to be a prerequisite to learning.

How convenient for this discussion.

Do we want to consider the irony of having a discussion about cognition and learning, in which personal beliefs of unknown origin or basis are held up as a way to reject posits? Or maybe we should just recognize that this kind of response is basically a conversation ender, and move on with our individual lives?
 
