I see a lot of people who say that it can't produce anything useful. Here's a summary of how a friend of mine uses it for coding in the real world, summarized from a longer back-and-forth.
He uses it in a highly iterative manner, lots of cut-n-paste of code snippets, and lots of questions and answers. A bit like pair programming, but without needing the second developer. When he doesn't understand a detail, he asks.
While the process can be educational, it also tends to surface bugs and misunderstandings effectively. He also usually prompts the AI to add unit tests and/or a testing harness, which he expands with questions like "what other edge cases are there for this function that we aren't testing?"
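To make the edge-case questioning concrete, here's a sketch of the kind of test file that back-and-forth can produce. The function and every test case here are illustrative inventions, not something from his actual sessions:

```python
# Hypothetical example: a small function plus the test harness the AI
# might generate, then expand after being asked "what other edge cases
# for this function aren't we testing?"

def parse_port(value):
    """Parse a TCP port number from a string; raise ValueError if invalid."""
    port = int(value.strip())
    if not 0 < port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# Tests from the first round of generation:
def test_typical_port():
    assert parse_port("8080") == 8080

def test_whitespace_is_tolerated():
    assert parse_port(" 443 ") == 443

# Cases surfaced only after asking about untested edge cases:
def test_zero_is_rejected():
    try:
        parse_port("0")
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_out_of_range_is_rejected():
    try:
        parse_port("70000")
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_non_numeric_is_rejected():
    try:
        parse_port("http")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

The point isn't this particular function; it's that the second batch of tests only exists because a human kept asking the follow-up question.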
Dialog with the AI is key for him, not a one-shot solution. At the beginning of the project, he doesn't know enough to even try to put together an ideal one-shot prompt. Instead he describes the problem and potential solutions in terms of approach, algorithm, or language, and uses what he gets back as a jumping-off point.
He questions design choices, just as he would working with a collaborator. Everything from "do we need to include this big library to get just a couple of functions" to code clarity, efficiency, and security. Again, he's skilled in what he does and is evaluating and giving feedback on the AI's response, much like working with a collaborator or doing pair programming, two well-known and demonstrably effective ways to improve code.
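The "big library for a couple of functions" question often has a stdlib answer. A hypothetical instance (the scenario and function are made up for illustration): the AI proposes a third-party slug library, and pushing back yields a few lines of standard-library code instead:

```python
# Illustrative only: instead of adding a dependency for one helper,
# ask whether the standard library already covers it. Here a
# hypothetical third-party "slugify" package is replaced by re.
import re

def slugify(title):
    """Lowercase the title and collapse runs of non-alphanumerics to hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# slugify("Hello, World!") -> "hello-world"
```

Fewer dependencies means a smaller attack surface and less upgrade churn, which is exactly the kind of trade-off a skilled reviewer raises with a collaborator, human or otherwise.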
He creates new dev branches in the repository so he can abandon an approach he doesn't like. Because the AI is generating code and getting the syntax right, he can try a variety of approaches in much less time than if he were coding them himself. Basically, he's applying a "fail fast, fail often" approach, and is willing to switch directions if one isn't paying off, without needing to abandon a large investment of time.
Usually after a lengthy back-and-forth he asks the AI to review the whole process and come up with an ideal prompt that would have generated the result from the beginning. What he gets back is usually thorough and concise. He then drops that into a fresh dialog with the AI to see what it comes up with.
And then he drops the same prompt into different engines, because they are trained differently, hallucinate different things, and have different strengths and blind spots. He's got subscriptions to several, and you can get a limited number of free tokens per month on a bunch of others.
From there he reviews the results from all of them and determines where to go.
Again, he's already skilled and that's a requirement for what he's doing, which is using the AI as a tool to make him much more productive in the same amount of time.
These generative AIs are tools, and knowing how to leverage a tool and use it safely can take it from useless to excellent at what that tool does.