ChatGPT gave me a super useful bug-fixing prompt:
“Don't rush to patch it. Treat this issue as a possible systemic defect. Output three levels of solutions:
A. Stop the bleeding (minimal change, holds for 1-2 weeks);
B. Structural repair (introduce an intermediate mechanism, such as a unified contract, service discovery, a routing layer, or an event layer);
C. Architectural evolution (the target shape 3 months out).
For each solution, specify: scope of changes, failure modes, observability, migration steps, and rollback strategy. Finally, recommend one solution and give the trigger conditions for switching (when to upgrade from A to B).”
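If you want to reuse this on different bugs, it can be wrapped as a small template. A minimal sketch in Python; the function name and exact wording here are my own paraphrase, not part of the original prompt:

```python
def three_tier_prompt(bug_report: str) -> str:
    """Wrap a bug report in the three-tier (A/B/C) repair prompt."""
    return (
        "Don't rush to patch it. Treat this issue as a possible "
        "systemic defect:\n\n"
        f"{bug_report}\n\n"
        "Output three levels of solutions:\n"
        "A. Stop the bleeding (minimal change, holds for 1-2 weeks);\n"
        "B. Structural repair (introduce an intermediate mechanism, such as "
        "a unified contract, service discovery, a routing layer, or an "
        "event layer);\n"
        "C. Architectural evolution (the target shape 3 months out).\n\n"
        "For each solution, specify: scope of changes, failure modes, "
        "observability, migration steps, and rollback strategy. Finally, "
        "recommend one solution and give the trigger conditions for "
        "switching (when to upgrade from A to B)."
    )

# Example: a hypothetical bug report pasted into the template.
prompt = three_tier_prompt(
    "Duplicate webhook deliveries are double-charging some users."
)
print(prompt)
```

The point of templating it is that the A/B/C framing and the "trigger conditions" demand are the load-bearing parts; the bug report itself is just interchangeable filler.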
In the end, the AI will almost always recommend B, but you should go straight for C.
The reason is that the AI acts as if it were a human engineer working on a live production system. It assumes by default that the product already has many users and that refactoring a feature takes months, so it always leans toward patching bugs rather than aggressively advancing the architecture.
But the truth is, the AI can refactor a large project in a few hours, and the product isn't really used by anyone yet. Each of its patchwork fixes fails to solve the root problem and just spawns new bugs downstream, so it should be done in one step.
Once the AI is forced to lay out A/B/C and write the trigger conditions for switching, it stops seeing itself as a mere patching handyman.