Effects of writers strike on Sci Fi & Fantasy genre


Clint_L

Hero
I heard Wozniak on the radio this morning, saying AI was "intelligent" and all I could think is that he should be listening more and talking less. AI is only marginally more "intelligent" than autocorrect on my phone. The fact that it can sound spookily human is more about us anthropomorphizing it than what's going on inside the software.

At this point, everything the AI says is, for the most part, purely made up from existing data sets; it is not actually going out, as an agent, and assembling the data from scratch.
The thing is, I think that's what most of human writing is, and that's what generative AI has exposed. A lot of the stuff that we thought was special...not so much. And that's why it's a huge threat to human writers.

But...going on strike to prevent it is the same as going on strike to prevent the car company from purchasing robots to take over a bunch of factory jobs. You might even win. But then someone builds the automated factory somewhere else, and your whole factory shuts down or moves.

I think that's what we are looking at here - an existential threat to most people who write for a living. The truly creative elite will survive, but those doing the grunt work are going to have to find something new. I might reach retirement before it eats my job; we'll see.

I don't think a strike is going to change where this is going, is what I'm saying.
 


Umbran

Mod Squad
Staff member
Supporter
The thing is, I think that's what most of human writing is, and that's what generative AI has exposed.

I don't think AI "exposed" anything. This was already understood by pretty much anyone who did any real study in writing, or in human cognition.

If you read any pieces of AI-generated text of any length, though, it becomes clear that the bits that are not so generated are incredibly important.
 

Clint_L

Hero
I don't think AI "exposed" anything. This was already understood by pretty much anyone who did any real study in writing, or in human cognition.

If you read any pieces of AI-generated text of any length, though, it becomes clear that the bits that are not so generated are incredibly important.
Yes, I know that this was pretty well understood by people in neuroscience and related fields; I teach Theory of Knowledge. But not by most people. I think most folks see thought and, by extension, writing and art as highly individualized, the result of a sort of creative spark, a uniquely human phenomenon. For that matter, most human beings claim belief in some sort of divine soul.

What these sorts of AI confront everyday people with is evidence that a lot of the stuff most of us thought made us unique...doesn't. Generative AI is showing a lot of emergent properties that, while not arising from processes like those in a human brain, are nevertheless converging on similar outcomes. It still has some profound limitations, but it is startlingly good at tasks that most would have deemed impossible not long ago, and no one understands where this is going.

Writers are right to be worried. Not that that is really what this strike is about - it is mostly about getting a fair slice of the revenue for their work that media corporations are now able to spread out amongst dozens of different platforms in a kind of shell game that makes it very hard to nail down specific numbers. But the AI threat is coming down the pipe and could make all of those issues irrelevant.

Edit: this is from a current SA article and has huge implications:
At a conference at New York University in March, philosopher Raphaël Millière of Columbia University offered yet another jaw-dropping example of what LLMs can do. The models had already demonstrated the ability to write computer code, which is impressive but not too surprising because there is so much code out there on the Internet to mimic. Millière went a step further, however, and showed that GPT can execute code, too. The philosopher typed in a program to calculate the 83rd number in the Fibonacci sequence. “It’s multistep reasoning of a very high degree,” he says. And the bot nailed it. When Millière asked directly for the 83rd Fibonacci number, however, GPT got it wrong: this suggests the system wasn’t just parroting the Internet. Rather, it was performing its own calculations to reach the correct answer.

Although an LLM runs on a computer, it is not itself a computer. It lacks essential computational elements, such as working memory. In a tacit acknowledgement that GPT on its own should not be able to run code, its inventor, the tech company OpenAI, has since introduced a specialized plug-in—a tool ChatGPT can use when answering a query—that allows it to do so. But that plug-in was not used in Millière’s demonstration. Instead he hypothesizes that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context—a situation similar to how nature repurposes existing capacities for new functions.
This is one of a number of recent experiments showing that LLM AIs seem to be developing unexpected, emergent capacities.
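The article doesn't reproduce the exact program Millière typed in, but a minimal version of that kind of Fibonacci routine might look like the sketch below (an illustration of the task, not the actual prompt). The point is that getting the 83rd number right requires carrying state through dozens of update steps, which is exactly the "working memory" an LLM supposedly lacks.

```python
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number (1-indexed: fib(1) = fib(2) = 1)."""
    a, b = 0, 1
    # Each pass carries the two previous values forward - the state an
    # LLM would have to track across ~80 steps to trace this correctly.
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(83))  # → 99194853094755497
```

Tracing this loop step by step is what makes the demo striking: the model apparently improvised somewhere to hold the intermediate values, rather than recalling a memorized answer.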
 
Last edited:

Parmandur

Book-Friend
Yes, I know that this was pretty well understood by people in neuroscience and related fields; I teach Theory of Knowledge. But not by most people. I think most folks see thought and, by extension, writing and art as highly individualized, the result of a sort of creative spark, a uniquely human phenomenon. For that matter, most human beings claim belief in some sort of divine soul.

What these sorts of AI confront everyday people with is evidence that a lot of the stuff most of us thought made us unique...doesn't. Generative AI is showing a lot of emergent properties that, while not arising from processes like those in a human brain, are nevertheless converging on similar outcomes. It still has some profound limitations, but it is startlingly good at tasks that most would have deemed impossible not long ago, and no one understands where this is going.

Writers are right to be worried. Not that that is really what this strike is about - it is mostly about getting a fair slice of the revenue for their work that media corporations are now able to spread out amongst dozens of different platforms in a kind of shell game that makes it very hard to nail down specific numbers. But the AI threat is coming down the pipe and could make all of those issues irrelevant.

Edit: this is from a current SA article and has huge implications:

This is one of a number of recent experiments showing that LLM AIs seem to be developing unexpected, emergent capacities.
Yeah, as someone trained in academic philosophy, I find that the capabilities of LLMs have actually reinforced the uniqueness of human rationality, not challenged it in any way.

Trying to talk the chatbot into actually reaching valid logical conclusions about anything shows its limitations really quickly.
 
