It's not just about better prompting, but about using better tools: tools that will turn a bad prompt into a good prompt.
For example, there is plan mode in Cursor. Or just ask the AI to "make a plan to do this task", then review the plan yourself before asking it to implement. Configure the AI to ask you clarifying questions instead of assuming things.
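As a rough, purely illustrative sketch of what I mean (the exact wording is up to you; Cursor and Claude Code both read project-level instruction files, .cursorrules and CLAUDE.md respectively), something like:

    Before writing any code:
    1. Ask me clarifying questions about anything ambiguous in the task.
    2. Post a short implementation plan and wait for my approval.
    3. Only implement once I've signed off, one small step at a time.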
It's still evolving pretty quickly, so it's worth staying up to date with that.
I have not been as aggressive as the GP in trying new AI tools, but over the last few months I have been trying more and more, and I'm just not seeing it.
On one project I tried recently, I took a test-driven approach: I built out the test suite while asking the AI to do the actual implementation. This was one of my more successful attempts, and it may have saved me 20-30% of the time overall, but I still had to throw out 80% of what it built because the agent just refused to implement the architecture I was describing.
It's at its most useful if I'm trying to bootstrap something new on a stack I barely know, OR if I decide I just don't care about the quality of the output.
I have tried various CLI and IDE tools. Overall I've had the best success with Claude Code, but I'm open to trying new things.
Do you have any good resources you would recommend for getting LLMs to perform better, or for staying up to date on the field in general?