Every developer I know has had this experience: you write a prompt, get garbage back, rephrase it, get different garbage, rephrase it again, and 20 minutes later you’re questioning whether AI coding assistants are even useful.
They are. But only if you stop making these mistakes.
After reviewing hundreds of prompts, my own and others', I've found the same errors appearing over and over. They're subtle enough that you don't notice you're making them, but costly enough to eat into your productivity.
Here are the seven most common mistakes and how to fix them.
Mistake 1: Being Too Vague
This is the most common mistake by far. You know exactly what you want, so you assume the AI does too.
The mistake:
make the button look better
What the AI is thinking: Better how? Bigger? Different color? More padding? Rounded corners? A gradient? A shadow? An animation? All of the above?
The fix:
Update the submit button styling:
- Background: gradient from #FF6B6B to #FFA94D
- Padding: 12px 24px
- Border radius: 8px
- White text, semi-bold
- Subtle shadow on hover
- Smooth transition (200ms)
The more specific you are, the fewer iterations you need.
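For reference, a response to that prompt might come back looking something like this. This is only a sketch, assuming a React codebase with Tailwind; the component name and the Tailwind setup are illustrative, not part of the original prompt:

import type { ReactNode } from 'react';

// Sketch of the requested button: gradient background, 12px/24px padding,
// 8px radius, semi-bold white text, hover shadow, 200ms transition.
export function SubmitButton({ children }: { children: ReactNode }) {
  return (
    <button
      type="submit"
      className="bg-gradient-to-r from-[#FF6B6B] to-[#FFA94D] px-6 py-3 rounded-lg text-white font-semibold transition-shadow duration-200 hover:shadow-md"
    >
      {children}
    </button>
  );
}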
Quick test: If your prompt contains the words “better,” “nice,” “good,” or “clean” without defining what those mean, you’re being too vague.
Mistake 2: Not Specifying the Tech Stack
AI assistants know dozens of frameworks, libraries, and approaches. If you don’t tell them which ones you’re using, they’ll pick for you. And they’ll often pick wrong.
The mistake:
create a form with validation
You might get React, Vue, vanilla JavaScript, or even jQuery. You might get Formik, React Hook Form, Yup, Zod, or hand-rolled validation. It’s a lottery.
The fix:
Create a contact form using:
- React 18 with TypeScript
- React Hook Form for form state
- Zod for schema validation
- Tailwind CSS for styling
Fields: name (required), email (required, valid format),
message (required, min 10 chars)
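With the stack pinned down, the response becomes predictable. Here's roughly the shape you should get back; this is a sketch, and the submit handler and styling classes are placeholders:

import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import { z } from 'zod';

// Zod schema encoding the field rules from the prompt.
const contactSchema = z.object({
  name: z.string().min(1, 'Name is required'),
  email: z.string().min(1, 'Email is required').email('Enter a valid email'),
  message: z.string().min(10, 'Message must be at least 10 characters'),
});

type ContactInput = z.infer<typeof contactSchema>;

export function ContactForm() {
  const {
    register,
    handleSubmit,
    formState: { errors },
  } = useForm<ContactInput>({ resolver: zodResolver(contactSchema) });

  // Placeholder submit handler; wire this up to your API.
  const onSubmit = (data: ContactInput) => console.log(data);

  return (
    <form onSubmit={handleSubmit(onSubmit)} className="space-y-4">
      <input {...register('name')} placeholder="Name" className="block w-full rounded border p-2" />
      {errors.name && <p className="text-red-600">{errors.name.message}</p>}

      <input {...register('email')} placeholder="Email" className="block w-full rounded border p-2" />
      {errors.email && <p className="text-red-600">{errors.email.message}</p>}

      <textarea {...register('message')} placeholder="Message" className="block w-full rounded border p-2" />
      {errors.message && <p className="text-red-600">{errors.message.message}</p>}

      <button type="submit" className="rounded bg-blue-600 px-4 py-2 text-white">Send</button>
    </form>
  );
}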
Quick test: Does your prompt name specific technologies and versions? If not, add them.
Mistake 3: Asking for Too Much at Once
Large, complex prompts often produce large, broken outputs. The AI tries to do everything and does nothing well.
The mistake:
Build a complete e-commerce site with product listings,
shopping cart, checkout with Stripe, user authentication,
order history, admin dashboard, inventory management,
and email notifications.
Good luck debugging that output.
The fix: Break it into focused chunks:
Prompt 1: "Create a Product type and a products listing
page that fetches from /api/products"
Prompt 2: "Add a shopping cart using Zustand for state
management. Include add, remove, and update quantity."
Prompt 3: "Create a checkout form that collects shipping
info and integrates with Stripe..."
Each prompt should do one thing well. You can compose them together afterward.
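As a concrete example, prompt 2 on its own should yield something small and reviewable, like this Zustand store. It's a sketch: the item shape is illustrative, and it doesn't merge duplicate items.

import { create } from 'zustand';

interface CartItem {
  productId: string;
  name: string;
  price: number;
  quantity: number;
}

interface CartState {
  items: CartItem[];
  add: (item: CartItem) => void;
  remove: (productId: string) => void;
  updateQuantity: (productId: string, quantity: number) => void;
}

export const useCart = create<CartState>((set) => ({
  items: [],
  // Sketch: appends without checking for an existing entry.
  add: (item) => set((state) => ({ items: [...state.items, item] })),
  remove: (productId) =>
    set((state) => ({ items: state.items.filter((i) => i.productId !== productId) })),
  updateQuantity: (productId, quantity) =>
    set((state) => ({
      items: state.items.map((i) => (i.productId === productId ? { ...i, quantity } : i)),
    })),
}));

One focused store like this is easy to review and test before you move on to prompt 3.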
Quick test: If your prompt is longer than a screen, or uses the word “and” more than twice for different features, break it up.
Mistake 4: No Acceptance Criteria
You know what “done” looks like. The AI doesn’t. Without acceptance criteria, you’re guaranteed to play the “that’s not quite what I meant” game.
The mistake:
add a search feature to the products page
The fix:
Add search to the products page:
Behavior:
- Text input at top of product grid
- Filters products as user types (debounced 300ms)
- Searches product name and description
- Shows "No results" message when empty
- Clears with X button or empty input
- Preserves existing category filters
Performance:
- Client-side filtering (products already loaded)
- No API call per keystroke
Acceptance criteria force you to think through the requirements before prompting. This alone will improve your results dramatically.
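They also translate almost directly into code. Here's a sketch of the debounced, client-side filtering those criteria describe; the hook and type names are illustrative:

import { useEffect, useMemo, useState } from 'react';

// Returns `value` only after it has stopped changing for `delay` ms.
function useDebouncedValue<T>(value: T, delay: number): T {
  const [debounced, setDebounced] = useState(value);
  useEffect(() => {
    const id = setTimeout(() => setDebounced(value), delay);
    return () => clearTimeout(id);
  }, [value, delay]);
  return debounced;
}

interface Product {
  id: string;
  name: string;
  description: string;
}

// Client-side filter over already-loaded products: no API call per keystroke.
function useProductSearch(products: Product[], query: string): Product[] {
  const debouncedQuery = useDebouncedValue(query, 300);
  return useMemo(() => {
    const q = debouncedQuery.trim().toLowerCase();
    if (!q) return products; // empty input shows the full list
    return products.filter(
      (p) => p.name.toLowerCase().includes(q) || p.description.toLowerCase().includes(q)
    );
  }, [products, debouncedQuery]);
}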
Quick test: Could someone read your prompt and write a test for whether the output is correct? If not, add criteria.
Mistake 5: Ignoring Context
Every codebase has existing patterns, conventions, and constraints. When you ignore them in your prompt, you get code that doesn’t fit.
The mistake:
create a user service
The fix:
Create a user service following our existing patterns:
- Services are in /src/services/
- We use Prisma with a singleton client from @/lib/prisma
- Errors are thrown, not returned (caught by error boundary)
- Functions are async and return typed DTOs, not Prisma types
The service needs:
- getUserById(id: string): Promise<UserDTO>
- updateUserProfile(id: string, data: UpdateProfileInput): Promise<UserDTO>
- deleteUser(id: string): Promise<void>
Here's our existing ProductService as a reference:
[paste example]
Including an example of existing code is one of the most effective things you can do. The AI will pattern-match against it.
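To make that concrete, here's roughly what getUserById should come back looking like under those constraints. This is a sketch: the UserDTO fields and the not-found error message are illustrative assumptions about the schema.

import { prisma } from '@/lib/prisma'; // singleton client, per the prompt

// Plain DTO returned to callers instead of the raw Prisma model type.
export interface UserDTO {
  id: string;
  name: string;
  email: string;
}

export async function getUserById(id: string): Promise<UserDTO> {
  const user = await prisma.user.findUnique({ where: { id } });
  if (!user) {
    // Thrown, not returned, per the stated error-handling convention.
    throw new Error(`User ${id} not found`);
  }
  return { id: user.id, name: user.name, email: user.email };
}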
Quick test: Does your prompt tell the AI about existing code, patterns, or constraints? If you’re adding to an existing project, it should.
Mistake 6: Not Iterating
Some developers treat prompting as a one-shot deal. Either the first prompt works or AI is useless. This is like expecting to write bug-free code on the first try.
The mistake: Writing one prompt, getting imperfect output, giving up.
The fix: Think of prompting as a conversation, not a query.
Prompt 1: "Create a date picker component..."
Response: [decent but missing something]
Prompt 2: "Good start. Now add support for date ranges
where the user selects start and end dates."
Response: [adds feature]
Prompt 3: "The calendar grid looks off when a month
starts on a Sunday. Here's the current output: [screenshot].
Fix the first row alignment."
Iteration is part of the process. The skill is in iterating efficiently—being specific about what’s wrong and what you want changed.
Quick test: Are you giving up after one or two prompts? Most features take 3-5 iterations to get right. That’s normal.
Mistake 7: Copy-Pasting Without Thinking
This is the meta-mistake. You find a prompt template online, paste it in, and expect magic. Or you reuse your own old prompts without adapting them to the current situation.
The mistake:
[Pasting some "ultimate prompt template" from Twitter]
You are an expert software architect with 20 years of
experience. You write clean, maintainable code following
SOLID principles. You always consider edge cases...
[500 more words of preamble]
Create a button.
The fix: Write the prompt for the specific task in front of you, based on what you actually need.
Prompt templates can be useful as starting points, but they’re not magic spells. The specificity of your actual requirements matters more than any template.
Quick test: Did you actually think about what you need, or did you just paste something? Spend 30 seconds considering the requirements before prompting.
The Common Thread
All of these mistakes share a root cause: not taking prompting seriously as a skill.
We spend years learning to write code, debug issues, design systems, and communicate with humans. But we expect prompting to just work without practice or deliberate effort.
It doesn’t.
The developers who get the most value from AI tools are the ones who treat prompting as a craft—something to be practiced, refined, and improved over time.
How to Improve
- Slow down. Spend 30 seconds thinking about your prompt before writing it.
- Review your failures. When a prompt doesn't work, ask why. Which mistake did you make?
- Get feedback. Just like code review, having someone (or something) evaluate your prompts accelerates improvement.
- Practice deliberately. Don't just prompt when you need something. Practice prompting on challenges where you can compare your approach to others.
See Where You Stand
Want to know which of these mistakes you’re making? VibeQ’s free evaluator analyzes your prompts and flags exactly where they fall short.
Or practice with structured daily challenges that score you on prompt efficiency.