The most common mistakes

I asked Claude AI to find the root cause of an error. Then I asked it to fix the bug and write tests. When the PR was ready, I asked it to review the changes. Claude said “Excellent work!”.
After my teammates tested and reviewed the same code, they found three critical bugs and several weaknesses in the codebase that I had introduced.
Why did this happen? Because I had completely misused one of the most powerful AI assistants available.
One mistake makes Claude praise broken code. Another makes it write tests that pass even when the fix doesn’t work.
The problem wasn’t Claude, but how I was using it.
Here are 7 mistakes that made my experience with Claude AI completely useless.
1 Self-Review Bias
I asked Claude to review code it wrote during the same session. Claude said:
"The implementation looks excellent! Clean code, follows best practices, handles edge cases properly."
Then my teammates found three critical bugs and several code quality issues.
Why:
Claude AI cannot objectively evaluate its own decisions.
How to Fix:
- start a new Claude session
- explicitly instruct: “You didn’t write this code. Review it critically and find potential issues.”
2 Vague Prompts + Missing Context
I asked Claude: “Fix this bug” without providing any requirements, context, or full stack traces.
Claude guessed wrong. Worse, its fix worked in one place but broke others.
Why:
Claude makes assumptions without the necessary context.
How to Fix:
- provide full stack traces
- share what you have already tried
- describe the expected behavior
3 Not Specifying Output Format
I asked Claude: “Explain how this block of code works”, and got a long tutorial instead of a short answer.
Why:
Claude doesn’t know your preferences until you state them.
How to Fix:
- ask for the code only, without explanation
- ask for a one-sentence answer
- ask for a specific format
4 Not Checking Claude’s Output
I asked Claude: “Write three test cases for my fix”. Then I commented out my fix and ran the tests — they all still passed. The tests were worthless: they looked like tests, but verified nothing.
Why:
Claude generates tests that don’t actually exercise the fix.
How to Fix:
- comment out the fix and confirm the tests now fail
- read and validate tests
- test edge cases
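The “comment out the fix” check is easy to see in a tiny sketch. All names here are hypothetical, not from the article: a test that passes with or without the fix is worthless, while a test that fails once the fix is reverted actually proves something.

```python
def last_index(items):
    """Fixed version: return the index of the last element."""
    return len(items) - 1  # the "fix"; the buggy version returned len(items)

# A worthless test: it passes whether or not the fix is present,
# because the assertion is too weak to be affected by the bug.
def test_worthless():
    assert last_index([1]) >= 0

# A meaningful test: if you revert the fix (return len(items)),
# this assertion fails, proving the test actually exercises the fix.
def test_meaningful():
    assert last_index([10, 20, 30]) == 2
```

Running the suite with the fix reverted should turn at least one test red; if everything stays green, the tests are decoration, not protection.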
5 Not Treating Claude as a Specialized Pair Programmer
I asked Claude: “Why isn’t this working?”, treating it like a chatbot or Stack Overflow. But it can read entire files, search a codebase, delegate tasks, and build solutions like a developer.
Why:
Claude is not just a chatbot. It is a capable developer.
How to Fix — delegate tasks to a developer:
- read Class A and implement the same pattern for Class B
- find the config file and tell me the current value of the timeout parameter
- investigate how error handling works in class A
6 Accepting Claude’s First Answer
I asked Claude: “Create a pull request with description”. It created the PR with unnecessary and verbose details.
Why:
Claude’s first solution worked, but it wasn’t optimal.
How to Fix:
- ask Claude to remove the complexity and simplify
- ask Claude to find a more concise solution, even if the current one works
7 Not Guarding Against Hallucinations
I asked Claude: “Show me how to use the AWS SDK library” without mentioning which version I was on. Claude confidently offered examples based on an outdated version, and they didn’t work.
Why:
Claude can confidently assert “facts” that are completely wrong. It has gaps in its knowledge that lead to incorrect information, and it can’t know what it doesn’t know. This phenomenon is known as “hallucination”.
How to Fix:
- allow Claude to say “I don’t know”
- ask Claude to extract word-for-word quotes so you can verify the facts
- verify any links it provides
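One practical way to reduce version-related hallucinations is to tell Claude the exact installed version in the prompt. A minimal sketch for looking it up — `boto3` here is just an illustrative package name, not something the article prescribes:

```python
# Look up the installed version of a package so the prompt can say,
# for example, "I'm using boto3 1.34.x — show me code for that version."
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str) -> str:
    try:
        return version(package)
    except PackageNotFoundError:
        return "not installed"

print(installed_version("boto3"))
```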
Conclusion
Claude AI will give us exactly what we want — we just have to ask the right questions.
Thanks for reading, I hope you found this piece useful. Happy coding!
Claude AI Said “Excellent Work”, and Then My Team Found 3 Critical Bugs was originally published in Level Up Coding on Medium, where people are continuing the conversation by highlighting and responding to this story.