(+) AI in general

This is meant just to be a general discussion without delving into the ethics or other controversies surrounding the rollout of the tech.




I think it's going to come down to how well Red Panda can be run locally.
 



I've seen a lot of these-


I expect to see a lot more.

If you're gonna use AI, check the work. People check the cases, you know!

I know this is a [+] thread, but at this point, I’m seeing AI as being partially deeply flawed and partially being intentionally sabotaged, and as such, is inherently unreliable for many tasks people are trying to use it for. I once joked that the real “Skynet” would be messed up AI that gets us to kill ourselves. Right now, that feels almost prescient.

I think its best use at this point is inspirational. It’s just not ready for commercial deployment, and some of the “live betas” are really dangerous.
 

I think its best use at this point is inspirational. It’s just not ready for commercial deployment, and some of the “live betas” are really dangerous.

The other day, I was googling to find another quick source on a legal issue (I know, but I didn't feel like opening Westlaw).

So that AI summary at the top of Google? I normally just skip right over it, but I looked. And somehow, the AI had conflated two legal issues* and was stating something that I knew was 100% wrong.

*In the form of: you can do A in Court B, and you can do C in Court D. The AI spat out that you can do A in Court D, which I can assure you .... ya better not do.
 

I've spent the last couple weeks working on a project that uses an LLM to extract and classify data. It's been a fascinating process. "Prompt engineering" sounds pretentious as hell, but it's a real skill -- particularly when you need the bot to provide output in a form that can be parsed by a traditional computer program.

Many chatbot quirks are really just human quirks reflected back at us. The way you phrase a question can have a big impact on the answer. If you load it down with too many fiddly conditions and rules, it starts to forget some. You get better results by forcing the bot to consider arguments for both sides. And providing context is essential, as is thinking carefully about exactly what you're trying to find out.

One really "non-human" thing I've noticed is that you need to give it permission to say "I don't know." If I ask it to explain how a quantum probability recalculator (some technobabble I just made up) works, it will spout off an answer... unless I add, "If you don't know what that is, say UNKNOWN." Then it says "UNKNOWN." Some humans will admit they don't know an answer, some will try to BS their way through, but very few BSers will stop when told to.
 

I know this is a [+] thread, but at this point, I’m seeing AI as being partially deeply flawed

Not quite. It isn't that generative AI is flawed, so much as corporations are trying to sell it to us as a tool for things that it isn't particularly good at.

If you don't know how to use an adze, and what it is good for, you'd be tempted to say it is a flawed axe. But you'd be wrong.

and partially being intentionally sabotaged, and as such, is inherently unreliable for many tasks people are trying to use it for.

Are you familiar with the Monty Python String Sketch? That's what is going on with generative AI.


The things we are being told Generative AI is good for, and what it is actually good for, are not the same thing.

Simply: generative AI creates data that resembles real material broadly, but it will fail in the details. So, in effect, generative AI is good at simulating a thing, not actually making a thing.

If you are, say, a physicist, and you want a body of data to tune your particle accelerator's analysis software on, generative AI might be a good place to turn to gin up a large mass of fake data to test the system.

Or, if you are creating an automated printing layout tool, and need to come up with masses of test data for novels, poetry, and brochures, it can make stuff that looks like the real thing. The text may be gibberish, but you don't care about the detailed content, only the general form.
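As a toy stand-in for that "form over content" idea (not an actual generative model, just a few lines I made up), here's a generator that produces text with the right *shape* for a layout tool, words, sentences, paragraphs, while the content itself is gibberish:

```python
import random

# Toy illustration of generating test data that only needs to *resemble*
# prose: correct sentence and paragraph structure, meaningless content.
# A real generative model does this far better, but the principle is the same.

WORDS = ["lorem", "ipsum", "dolor", "sit", "amet", "consectetur", "elit"]

def fake_sentence(rng: random.Random, n_words: int = 8) -> str:
    words = [rng.choice(WORDS) for _ in range(n_words)]
    return " ".join(words).capitalize() + "."

def fake_paragraph(rng: random.Random, n_sentences: int = 4) -> str:
    return " ".join(fake_sentence(rng) for _ in range(n_sentences))

rng = random.Random(42)  # seeded so the fake "data" is reproducible
print(fake_paragraph(rng))  # plausible-looking prose shape, gibberish content
```

A layout tool fed this output exercises exactly what it needs to (line breaking, hyphenation, page flow) without anyone caring what the text says.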
 

I've spent the last couple weeks working on a project that uses an LLM to extract and classify data. It's been a fascinating process. "Prompt engineering" sounds pretentious as hell, but it's a real skill
It's one of those new skills that are coming into existence due to new technology.
 

It isn't that generative AI is flawed, so much as corporations are trying to sell it to us as a tool for things that it isn't particularly good at.
I can accept that correction for the most part. I still think there are problems in the ways certain AIs are being taught to do things, which can lead to ethical and practical pitfalls.

But yeah, a HUGE chunk of my concerns are business sectors (and scammers) using it for purposes where mistakes could have legal, life-threatening, or livelihood-destroying implications.
 

What I want is a personal AI that can do the things I want it to.

I want an AI that I can feed my own writing into, and then use to write a book with the right extensive prompts. (If I do that, is that a book written by me or not, since the only sources would be my own writing?)

I want an AI that I can feed the type of art I like (such as only Elmore and Sweet) and then have it give me accurate artwork for my own personal enjoyment (not to be shared, as I don't want copyright issues over it).

Stuff like that. It also has to be good! None of this "Freddy got 18 fingers and his thumbs meld together when he holds them close" type of stuff.
 

