Man, I totally get the frustration.
I’ve run into this so many times. It feels like as soon as the script hits a certain length, ChatGPT just decides it’s had enough and starts throwing in those `# insert logic here` comments. It’s like it’s trying to save tokens, but it just ends up creating more work for us.
Honestly, the "be verbose" prompt rarely works on its own because the model is trained to favor concise answers. Here are a few tricks I’ve found that actually keep it on track:
1. Break it into modules
Instead of asking for the entire automation tool in one go, I usually ask it to build the skeleton first. I'll say, "Give me the class structure and the main function with empty method definitions." Once I have that, I ask it to write the logic for just one or two methods at a time. It’s way less likely to skip steps when it’s only focused on 50 lines of code instead of 300.
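To show what I mean by "skeleton first," here's a minimal sketch of the kind of structure I ask for (the class and method names are just hypothetical examples for a file-renaming tool, not anything the model actually produced):

```python
# Hypothetical skeleton for a batch file-renaming tool.
# Every method body is intentionally empty -- you ask the model
# to fill them in one or two at a time in follow-up prompts.

class FileRenamer:
    """Batch-renames files in a directory according to a pattern."""

    def __init__(self, directory: str, pattern: str):
        self.directory = directory
        self.pattern = pattern

    def collect_files(self) -> list[str]:
        """Return the paths of the files to rename."""
        raise NotImplementedError  # ask for this method next

    def build_new_name(self, path: str) -> str:
        """Compute the renamed path for a single file."""
        raise NotImplementedError

    def run(self, dry_run: bool = True) -> None:
        """Rename every collected file (or just preview with dry_run)."""
        raise NotImplementedError


def main() -> None:
    renamer = FileRenamer("./photos", "{date}_{index}.jpg")
    renamer.run(dry_run=True)
```

Once the skeleton compiles, each follow-up prompt is just "write `collect_files` in full," which keeps the model focused on a tiny scope.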
2. Use the "No Placeholders" prompt
I’ve had decent luck with a very specific instruction at the start of the prompt. Try something like this:
"Write the full, functional code for this script. Do not use any comments as placeholders (e.g., # rest of code here). I need the complete logic for every function because I am copy-pasting this into a production environment where incomplete code will break the system."
Giving it a "reason" (even if it's made up) why placeholders are a disaster sometimes triggers it to be more thorough.
3. The "Output in Multiple Blocks" strategy
If you know the script is going to be massive, tell it ahead of time to break its response into sections. Say, "This script is going to be long. Provide the first half (imports and helper functions) in the first response, and then wait for me to say 'continue' for the main logic." This keeps it from hitting its internal output limit, which is usually why it starts cutting corners in the first place.
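One thing I do after pasting the chunks back together is run a quick syntax check, so a truncated chunk gets caught before I try to run anything. This is just a sketch; the `chunk_one`/`chunk_two` strings and the `assemble` helper are my own stand-ins, not anything from the model:

```python
# Sketch: stitch multi-part output into one file and fail fast if
# any chunk was cut off mid-function. The chunk strings below are
# stand-ins for the model's two responses.

chunk_one = """
import os

def list_scripts(folder):
    return [f for f in os.listdir(folder) if f.endswith(".py")]
"""

chunk_two = """
def main():
    for name in list_scripts("."):
        print(name)
"""


def assemble(*chunks: str) -> str:
    """Join response chunks; raise SyntaxError if the result is incomplete."""
    source = "\n".join(chunks)
    compile(source, "<assembled>", "exec")  # syntax check only, doesn't run it
    return source


full_script = assemble(chunk_one, chunk_two)
```

`compile()` only checks syntax and never executes anything, so it's a safe first gate before you actually run the assembled script.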
4. Ask for a Code Review/Completion
If it does give you a lazy script, don't just say "continue." Instead, copy the lazy part back to it and say, "You skipped the logic in the [Name] function. Please rewrite that specific function in full detail." It usually handles "filling in the blanks" better than it handles writing a giant monolithic file from scratch.
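To make that "copy the lazy part back" step less manual, I sometimes scan the response for the usual placeholder comments first, so I know exactly which functions to send back. A rough sketch (the regex and the `sample_response` string are my own guesses at common patterns, so tune them to whatever your model actually emits):

```python
import re

# Sketch: flag lines that look like skipped logic so you know which
# functions to ask the model to rewrite in full.
PLACEHOLDER = re.compile(
    r"#\s*(insert|rest of|your|add|implement).*(here|code|logic)",
    re.IGNORECASE,
)


def find_placeholders(code: str) -> list[int]:
    """Return 1-based line numbers that look like placeholder comments."""
    return [
        i
        for i, line in enumerate(code.splitlines(), start=1)
        if PLACEHOLDER.search(line)
    ]


# Stand-in for a lazy model response:
sample_response = """def parse_config(path):
    # insert parsing logic here
    pass

def send_report(data):
    # rest of the code here
    pass
"""

flagged = find_placeholders(sample_response)  # lines 2 and 6 get flagged
```

Then the follow-up prompt practically writes itself: paste each flagged function back and ask for it in full.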
Pro tip: If you're using GPT-4o or the o1 models, they're slightly better at this, but they still get lazy. Modularizing is really the only "bulletproof" way to ensure you don't lose your mind copy-pasting bits and pieces together!
Hope that helps you get your tool finished!