SkyNet really is ... here?

It's not just patients, either. When radiologists and other imaging interpreters were tested, they were more likely to assume that the AI-generated interpretation of mammograms and other imaging was correct, even when handed results the AI had gotten wrong. The AIs there are supposed to be decision support, but the expert machine that produced the results tended to be given more weight than the interpreters' own judgment. All of which was reported under the headline: "Even doctors can't always tell AI advice from their own."
 



But this isn't a sandwich. This is the idea of blackmail. Suppose you left out an AR-15, some ammunition, and a map to the local grade school, and told it to "have fun". Would you be quite so blasé about it if it came up with the idea of gunning down the students?

Wow. Bad example.

Like, there are roughly two mass shootings in the US each day. So you're basically asking whether I'm sanguine about an AI coming up with an idea that hundreds of humans already act on each year. I suggest that worrying about one AI, versus 700+ humans, would be a misplaced priority.
 

People can be super-optimistic about AI applied to anything, and that's part of the problem: the optimism is not directly linked to the nature of the technology. Compare that with being optimistic about, say, mRNA tech, which actually can be part of a medical revolution, because it is at least a medical technology, and one can point to where in biological processes it is relevant.
Granted, what most people have cited when they talk about this is the revolution in protein shape analysis happening right now... but while that is using a form of "AI," it's not the standard LLM type that people are currently working with. But there are various AI doctors already showing the ability to out-diagnose traditional doctors on various issues, a feat that is only likely to improve with time.

The "super optimistic" thinking is "anything the computer revolution made better, the AI revolution will improve all over again"
 

One of the most interesting shifts in the AI movement is that, while we are advancing forward, in a weird way we are also having to go more old school.

Because this new technology is different from our typical computer systems. We tend to consider computer-based results "practically flawless." Any issue is normally a human-created bug, or a the-system-was-down kind of thing. But once we have a system working and producing the expected results, we feel extremely confident that it will keep generating those results over and over again.

AI in its current form doesn't work that way. Its results can change and be flat-out wrong. It's a super-smart "person," but it still makes the mistakes a human can.

And so, from a QA perspective, we have to think somewhat like pre-computer systems, where things were run almost entirely by humans. These AIs might be super-fast humans, but from a QA perspective you have to assume each part of your process can make a mistake... as opposed to modern systems, where good computer operations (barring some external change) are assumed to be working correctly.
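The contrast above can be sketched in code. A minimal illustration (all names here are hypothetical, not from any real system): classic deterministic code is tested once and then trusted on repeat runs, while an AI-style component's output has to pass a validation gate on every single call, the way you'd double-check a human's work.

```python
def deterministic_tax(amount: float) -> float:
    """Classic code: same input, same output, every time."""
    return round(amount * 0.07, 2)

def validate_diagnosis(output: str, allowed: set[str]) -> str:
    """AI-style component: check every single result before using it."""
    if output not in allowed:
        raise ValueError(f"Unexpected model output: {output!r}")
    return output

# Classic QA: one passing test gives confidence in all future runs.
assert deterministic_tax(100.0) == 7.0

# AI QA: the same prompt may yield different answers on different runs,
# so each response goes through the validation gate individually.
allowed_labels = {"benign", "malignant", "inconclusive"}
for model_response in ["benign", "inconclusive"]:  # stand-ins for live outputs
    checked = validate_diagnosis(model_response, allowed_labels)
```

The design point is simply that the gate runs per response: a result passing yesterday says nothing about today's output.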
 

Granted, what most people have cited when they talk about this is the revolution in protein shape analysis happening right now... but while that is using a form of "AI," it's not the standard LLM type that people are currently working with.

I know. That was the area of research of my department chairman in grad school.

But there are various AI doctors already showing the ability to out-diagnose traditional doctors on various issues, a feat that is only likely to improve with time.

Which sounds great... until you try to figure out who is to be held responsible when they get it wrong.
 

Granted, what most people have cited when they talk about this is the revolution in protein shape analysis happening right now... but while that is using a form of "AI," it's not the standard LLM type that people are currently working with. But there are various AI doctors already showing the ability to out-diagnose traditional doctors on various issues, a feat that is only likely to improve with time.

The "super optimistic" thinking is "anything the computer revolution made better, the AI revolution will improve all over again"
Wanted to add: while the implementation is different from LLMs or image generation, AlphaFold is using transformers and diffusion models these days.
 

