ChatGPT lies then gaslights reporter with fake transcript

I see a lot of people who say that it can't produce anything useful. Here's how a friend of mine uses it for coding in the real world, summarized from a longer back-and-forth.

He uses it in a highly iterative manner: lots of cut-and-paste of code snippets, and lots of questions and answers. A bit like pair programming, but without needing the second developer. When he doesn't understand a detail, he asks.

While the process can be educational, it also tends to surface bugs and misunderstandings effectively. He also usually prompts the AI to add unit tests and/or a testing harness, which he expands with questions like "what other edge cases are there for this function that we aren't testing?"
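That edge-case dialogue tends to turn into concrete tests quickly. Here's a minimal sketch of the pattern; the function and the specific cases are hypothetical, not taken from the actual exchange:

```python
def parse_pair(line: str) -> tuple[str, str]:
    """Split a 'key=value' config line into (key, value). Hypothetical example."""
    key, sep, value = line.partition("=")
    if not sep or not key.strip():
        raise ValueError(f"malformed pair: {line!r}")
    return key.strip(), value.strip()

# The obvious happy-path test you'd write first:
assert parse_pair("name=alice") == ("name", "alice")

# Edge cases a "what aren't we testing?" prompt tends to surface:
assert parse_pair("url=http://x?a=b") == ("url", "http://x?a=b")  # '=' inside the value
assert parse_pair("flag=") == ("flag", "")                        # empty value
assert parse_pair(" key = value ") == ("key", "value")            # surrounding whitespace

# Malformed input should raise, not silently return garbage:
try:
    parse_pair("no separator here")
except ValueError:
    pass
else:
    raise AssertionError("missing '=' should raise ValueError")
```

The value isn't these particular tests so much as the habit: asking the model to enumerate untested cases surfaces assumptions you didn't know you were making.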

Dialog with the AI is key for him, not a one-shot solution. At the beginning of a project, he doesn't know enough to even try to put together an ideal one-shot prompt. Instead he describes the problem and potential solutions in terms of approach, algorithm, or language, and uses what he gets back as a jumping-off point.

He questions design choices, just as he would working with a collaborator. Everything from "do we need to include this big library to get just a couple of functions?" to clarity of code, to efficiency, to making it secure. Again, he's skilled in what he does and is evaluating and giving feedback on the AI's responses. Again, it's like working with a collaborator or doing pair programming, two well-known and demonstrably effective ways to improve code.

He creates new dev branches in the repository so he can abandon an approach he doesn't like. Because the AI is generating the code and getting the syntax right, he can try a variety of approaches in much less time than if he were coding them himself. Basically, he's applying a "fail fast, fail often" approach, and is willing to switch approaches if one isn't paying off without needing to abandon a large investment of time.

Usually after a lengthy back-and-forth, he asks the AI to review the whole process and come up with an ideal prompt that would have generated the result from the beginning. He usually gets something thorough and concise. He then drops that into a fresh dialog with the AI to see what it comes up with.

And then he drops the same prompt into different engines, because they are trained differently, hallucinate different things, and have different strengths and blind spots. He's got subscriptions to several, and you can get a limited number of free tokens per month on a bunch of others.

From there he reviews the results from all of them and determines where to go.
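That fan-out-and-compare step can be sketched as a small harness. The "engines" below are stand-in lambdas, not real vendor client code; in practice each callable would wrap one provider's API:

```python
from typing import Callable

def fan_out(prompt: str, engines: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Send one prompt to several engines and collect replies for side-by-side review."""
    results = {}
    for name, ask in engines.items():
        try:
            results[name] = ask(prompt)
        except Exception as exc:  # one flaky engine shouldn't sink the whole comparison
            results[name] = f"<error: {exc}>"
    return results

# Stub engines standing in for real API clients:
engines = {
    "engine_a": lambda p: f"A says: {p.upper()}",
    "engine_b": lambda p: f"B says: {p[::-1]}",
}

replies = fan_out("sort a list of tuples by the second field", engines)
for name, reply in replies.items():
    print(name, "->", reply)
```

Reading the replies side by side is where the "different blind spots" pay off: a hallucination in one engine's answer usually stands out against the others.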

Again, he's already skilled and that's a requirement for what he's doing, which is using the AI as a tool to make him much more productive in the same amount of time.

These generative AIs are tools, and knowing how to leverage a tool and use it safely can take it from useless to excellent at what that tool does.
 


The only task I could find that would help in my workplace is summarizing long emails, and that's at best a "nice to have," not a "need to have."
It's quite helpful for learning new things and for reading and summarizing literature. I use those functions as an academic. The key is being able to get outputs you can check. It's more useful than Scholar was, because it handles context better.

My wife works in medicine and it's been integrated into their knowledge retrieval platforms with pretty successful results.

Will add to these: I've found LLMs a great learning aid when used alongside traditional textbooks or lectures. Honestly I feel it's doubled or tripled the rate at which I pick up new topics, because it's so much easier to get answers to the kinds of questions you used to need office hours for.
 

I see a lot of people who say that it can't produce anything useful. Here's how a friend of mine uses it for coding in the real world, summarized from a longer back-and-forth.

He uses it in a highly iterative manner: lots of cut-and-paste of code snippets, and lots of questions and answers. A bit like pair programming, but without needing the second developer. When he doesn't understand a detail, he asks.
Exactly. Same.

While the process can be educational, it also tends to surface bugs and misunderstandings effectively. He also usually prompts the AI to add unit tests and/or a testing harness, which he expands with questions like "what other edge cases are there for this function that we aren't testing?"

Dialog with the AI is key for him, not a one-shot solution. At the beginning of a project, he doesn't know enough to even try to put together an ideal one-shot prompt. Instead he describes the problem and potential solutions in terms of approach, algorithm, or language, and uses what he gets back as a jumping-off point.
Same.

He questions design choices, just as he would working with a collaborator. Everything from "do we need to include this big library to get just a couple of functions?" to clarity of code, to efficiency, to making it secure. Again, he's skilled in what he does and is evaluating and giving feedback on the AI's responses. Again, it's like working with a collaborator or doing pair programming, two well-known and demonstrably effective ways to improve code.

He creates new dev branches in the repository so he can abandon an approach he doesn't like. Because the AI is generating the code and getting the syntax right, he can try a variety of approaches in much less time than if he were coding them himself. Basically, he's applying a "fail fast, fail often" approach, and is willing to switch approaches if one isn't paying off without needing to abandon a large investment of time.

Usually after a lengthy back-and-forth, he asks the AI to review the whole process and come up with an ideal prompt that would have generated the result from the beginning. He usually gets something thorough and concise. He then drops that into a fresh dialog with the AI to see what it comes up with.
Same.

And then he drops the same prompt into different engines, because they are trained differently, hallucinate different things, and have different strengths and blind spots. He's got subscriptions to several, and you can get a limited number of free tokens per month on a bunch of others.

From there he reviews the results from all of them and determines where to go.
Fascinating. I do not typically do that but will give it a try. Tell them thank you please!

Again, he's already skilled and that's a requirement for what he's doing, which is using the AI as a tool to make him much more productive in the same amount of time.
Yes!

These generative AIs are tools, and knowing how to leverage a tool and use it safely can take it from useless to excellent at what that tool does.
 

It may also lead to a cure for cancer, practical cold fusion, and other epic scientific and medical advances. At the end of the day, will it have been worth it? That depends on who you're asking. If you're asking a future human? Maybe not!
And here on this forum the general feeling is: it's not worth it unless it's applied only to those uses. But for some folks around here, opening the internet to everyone and every business was also a mistake. Like you say, it depends on who you ask...

Yeah, that's what's been happening to people in every other industry for decades now, which elicited "huh huh, learn to code" responses from the tech industry, which now turns out to have been short-sighted as career advice goes.
So was the "go to college and get a degree and you'll get a great job" advice, and now the drumbeat I hear from folks is "learn a trade and you'll make a lot of money."
 

And here on this forum the general feeling is: it's not worth it unless it's applied only to those uses. But for some folks around here, opening the internet to everyone and every business was also a mistake. Like you say, it depends on who you ask...
I'd probably also say it wasn't worth the risk, but they didn't survey me before they invented it. Since we're stuck with it, I hope they at least do something cool with it before it destroys us.
 

So was the "go to college and get a degree and you'll get a great job" advice, and now the drumbeat I hear from folks is "learn a trade and you'll make a lot of money."
The more money those trades make, the better the economic case for automating them. Some stuff will take longer than others, but I'd bet we see robot trash trucks within 15 years, which will flag some stops for a smaller human backup crew to come by as needed.

That's a traditionally great paying job (because it's hard and unpleasant) that's going to go poof.
 


I see a lot of people who say that it can't produce anything useful. Here's how a friend of mine uses it for coding in the real world, summarized from a longer back-and-forth.

He uses it in a highly iterative manner: lots of cut-and-paste of code snippets, and lots of questions and answers. A bit like pair programming, but without needing the second developer. When he doesn't understand a detail, he asks.

While the process can be educational, it also tends to surface bugs and misunderstandings effectively. He also usually prompts the AI to add unit tests and/or a testing harness, which he expands with questions like "what other edge cases are there for this function that we aren't testing?"

Dialog with the AI is key for him, not a one-shot solution. At the beginning of a project, he doesn't know enough to even try to put together an ideal one-shot prompt. Instead he describes the problem and potential solutions in terms of approach, algorithm, or language, and uses what he gets back as a jumping-off point.

He questions design choices, just as he would working with a collaborator. Everything from "do we need to include this big library to get just a couple of functions?" to clarity of code, to efficiency, to making it secure. Again, he's skilled in what he does and is evaluating and giving feedback on the AI's responses. Again, it's like working with a collaborator or doing pair programming, two well-known and demonstrably effective ways to improve code.

He creates new dev branches in the repository so he can abandon an approach he doesn't like. Because the AI is generating the code and getting the syntax right, he can try a variety of approaches in much less time than if he were coding them himself. Basically, he's applying a "fail fast, fail often" approach, and is willing to switch approaches if one isn't paying off without needing to abandon a large investment of time.

Usually after a lengthy back-and-forth, he asks the AI to review the whole process and come up with an ideal prompt that would have generated the result from the beginning. He usually gets something thorough and concise. He then drops that into a fresh dialog with the AI to see what it comes up with.

And then he drops the same prompt into different engines, because they are trained differently, hallucinate different things, and have different strengths and blind spots. He's got subscriptions to several, and you can get a limited number of free tokens per month on a bunch of others.

From there he reviews the results from all of them and determines where to go.

Again, he's already skilled and that's a requirement for what he's doing, which is using the AI as a tool to make him much more productive in the same amount of time.

These generative AIs are tools, and knowing how to leverage a tool and use it safely can take it from useless to excellent at what that tool does.
I don't know, that shotgun approach to coding sounds like it goes against everything they teach about software engineering.
 

And here on this forum the general feeling is: it's not worth it unless it's applied only to those uses. But for some folks around here, opening the internet to everyone and every business was also a mistake. Like you say, it depends on who you ask...
The amount of resources it consumes makes it not worthwhile for humanity at large if it's restricted to niche uses.
 

So was the "go to college and get a degree and you'll get a great job" advice, and now the drumbeat I hear from folks is "learn a trade and you'll make a lot of money."

I mean, both of these were true. "Learn a trade" was a valid option for a very, very long time, right up until there was a "labour shortage" and somehow we needed TFWs...

As I've been saying for a long time regarding this topic, I'm sure you remember. "Great, go pick up a shovel and start digging."

Progress?
 
