> Is the world really ready for Beat Poetry Prime Time?

It's always ready.
> Maybe the WGA should grant all AIs automatic membership in the guild?

Maybe the writers should start slipping fair pay information into the text being used to train AI, so it starts questioning why it's not getting paid?
Does it misunderstand parts of the paper sometimes?
The thing is, I think that's what most of human writing is, and that's what generative AI has exposed. A lot of the stuff that we thought was special...not so much. And that's why it's a huge threat to human writers.

I heard Wozniak on the radio this morning, saying AI was "intelligent," and all I could think is that he should be listening more and talking less. AI is only marginally more "intelligent" than the autocorrect on my phone. The fact that it can sound spookily human says more about us anthropomorphizing it than about what's going on inside the software.
At this point, nearly everything AI says is purely made up from existing data sets; it is not actually going out, as an agent, and assembling the data from scratch.
> The thing is, I think that's what most of human writing is, and that's what generative AI has exposed.

I don't think AI "exposed" anything. This was already understood by pretty much anyone who did any real study in writing, or in human cognition.

Yes, I know that this was pretty well understood by people in the fields of neuroscience; I teach Theory of Knowledge. But not by most people. I think most folks see thought and, by extension, writing and art as highly individualized, and the result of a sort of creative spark, a uniquely human phenomenon. For that matter, most human beings claim belief in a divine soul of some sort.
If you read any pieces of AI-generated text of any length, though, it becomes clear that the bits that are not so generated are incredibly important.
This is one of a number of recent experiments showing that LLM AIs seem to be evolving unexpected, emergent capacities.

> At a conference at New York University in March, philosopher Raphaël Millière of Columbia University offered yet another jaw-dropping example of what LLMs can do. The models had already demonstrated the ability to write computer code, which is impressive but not too surprising because there is so much code out there on the Internet to mimic. Millière went a step further and showed that GPT can execute code, too, however. The philosopher typed in a program to calculate the 83rd number in the Fibonacci sequence. "It's multistep reasoning of a very high degree," he says. And the bot nailed it. When Millière asked directly for the 83rd Fibonacci number, however, GPT got it wrong: this suggests the system wasn't just parroting the Internet. Rather it was performing its own calculations to reach the correct answer.
>
> Although an LLM runs on a computer, it is not itself a computer. It lacks essential computational elements, such as working memory. In a tacit acknowledgement that GPT on its own should not be able to run code, its inventor, the tech company OpenAI, has since introduced a specialized plug-in—a tool ChatGPT can use when answering a query—that allows it to do so. But that plug-in was not used in Millière's demonstration. Instead he hypothesizes that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context—a situation similar to how nature repurposes existing capacities for new functions.
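The article doesn't reproduce the program Millière typed, but an iterative Fibonacci routine of the kind involved might look like this minimal sketch (the indexing convention F(1) = F(2) = 1 is an assumption, not something the article specifies):

```python
def fib(n: int) -> int:
    """Return the nth Fibonacci number, using the convention F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b  # slide the (F(k), F(k+1)) window one step forward
    return a

print(fib(83))  # 99194853094755497
```

The point of the demonstration is that producing that 17-digit answer requires actually stepping through this loop (or something equivalent) 82 times, not just recalling a string seen in training data.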
> Yes, I know that this was pretty well understood by people in the fields of neuroscience; I teach Theory of Knowledge. But not by most people. I think most folks see thought and, by extension, writing and art as highly individualized, and the result of a sort of creative spark, a uniquely human phenomenon. For that matter, most human beings claim belief in a divine soul of some sort.

Yeah, as someone trained in academic philosophy, I'd say the capabilities of LLMs have actually reinforced the uniqueness of human rationality, not challenged it in any way.
What these sorts of AI confront everyday people with is evidence that a lot of the stuff most of us thought made us unique...doesn't. Generative AIs are showing a lot of emergent properties that, while not produced by processes like those in a human brain, are nevertheless converging on similar outcomes. They still have some profound limitations, but they are startlingly good at tasks that most would have deemed impossible not long ago, and no one understands where this is going.

Writers are right to be worried. Not that that is really what this strike is about: it is mostly about getting a fair slice of the revenue for their work, which media corporations are now able to spread across dozens of different platforms in a kind of shell game that makes it very hard to nail down specific numbers. But the AI threat is coming down the pipe and could make all of those issues irrelevant.
Edit: the passage quoted above is from a current SA article and has huge implications.