
How to Write Better Prompts for AI: Zero-Shot, Few-Shot & Chain-of-Thought

Learn how to write better prompts for AI models step by step. Covers zero-shot vs few-shot, chain-of-thought reasoning, avoiding hallucinations, and starting a career.

Muhammad Kamran Sharif·January 26, 2026·18 min read


There is a moment every new AI user goes through. You type something into ChatGPT or Claude, you get back something that is technically correct but completely useless for what you actually needed, and you sit there wondering what went wrong.

Nothing went wrong with the model. What went wrong was the prompt.

I have spent a significant chunk of the last three years studying this problem — testing prompts, breaking them, fixing them, and helping teams do the same. And the honest truth is that writing prompts well is a learnable skill. Not a mysterious art, not something reserved for engineers, and definitely not something that requires a computer science degree.

This article covers five things that will genuinely change how you communicate with AI: how to write better prompts step by step, how chain-of-thought prompting works, the real difference between zero-shot and few-shot prompting, how to stop AI from making things up, and how to get started in this field even if you have never written a line of code.

How to Write Better Prompts for AI Models, Step by Step

Let's start with the foundation. Most people write prompts the same way they type a Google search query - short, vague, keyword-heavy. That approach made sense for search engines. It does not work for language models.

A language model is not retrieving a pre-written answer. It is generating one from scratch, token by token, based entirely on what you give it. The more clearly you communicate what you want, the better that generation process goes. Here is how to do it step by step.

Step 1 - Start With the Role

The single fastest improvement most people can make is adding a role to the beginning of their prompt. Before you ask your question, tell the model who it is supposed to be.

Compare these two prompts:

Without a role: "Explain supply chain disruptions."

With a role: "You are a supply chain consultant who advises mid-sized manufacturers. Explain supply chain disruptions to a business owner who understands operations but has no economics background."

The second prompt tells the model three things: the level of expertise to draw from, the audience to write for, and the frame to use. The output is incomparably better.
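The role-audience-task structure above can be captured in a small template. This is a sketch; the helper name `build_prompt` and its exact wording are illustrative, not part of any library:

```python
def build_prompt(role: str, audience: str, task: str) -> str:
    """Assemble a role-framed prompt: who the model is supposed to be,
    who it is writing for, and what it should do."""
    return (
        f"You are {role}. "
        f"Your audience is {audience}. "
        f"{task}"
    )

prompt = build_prompt(
    role="a supply chain consultant who advises mid-sized manufacturers",
    audience="a business owner who understands operations but has no economics background",
    task="Explain supply chain disruptions.",
)
print(prompt)
```

Templating the role this way also makes it trivial to A/B test different roles against the same task.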

Step 2 - Be Specific About the Task

Vague requests produce vague responses. This sounds obvious, but most people undersell how specific they need to be.

Instead of "write me an email," write "write a 150-word follow-up email to a potential client who attended our demo last Tuesday but hasn't responded. The tone should be warm but professional. Reference the specific problem they mentioned (reducing invoice processing time) and suggest a 20-minute call this week."

The model now has everything it needs. Length, purpose, context, tone, relevant detail, and a clear call to action.

Step 3 - Specify the Output Format

If you need a table, say table. If you need bullet points, say bullet points. If you need plain prose with no formatting, say that. If you need JSON, specify the exact structure.

Models default to whatever format feels natural for the content type. For a list of recommendations, that might be bullet points. For a comparison, it might be prose. If you need something specific - especially for anything you are going to paste into another tool or system - always state the format explicitly.
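When the output feeds another system, the safest pattern is to spell out the exact JSON structure in the prompt and then validate whatever comes back before using it. A sketch, where the schema and the helper `parse_response` are hypothetical examples, not a real API:

```python
import json

# Hypothetical schema for a tool-recommendation task; adapt to your own fields.
FORMAT_INSTRUCTION = (
    "Respond with JSON only, matching exactly this structure:\n"
    '{"recommendations": [{"name": "...", "reason": "...", "score": 0}]}\n'
    "Do not include any text outside the JSON."
)

prompt = (
    "Recommend three project management tools for a five-person team.\n\n"
    + FORMAT_INSTRUCTION
)

def parse_response(raw: str) -> dict:
    """Validate a model reply before passing it downstream."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    assert "recommendations" in data, "model ignored the schema"
    return data
```

The validation step matters because even with an explicit format instruction, models occasionally wrap JSON in prose or drop a field.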

Step 4 - Give the Model Context It Cannot Assume

The model only knows what you tell it in the current session. It does not know your industry, your company, your audience, your constraints, or your previous conversations unless you include that information in the prompt.

A very common mistake is asking the model to help with something and leaving out the context that would make the help actually useful. If you are asking for marketing copy, tell the model who the customer is and what their main pain point is. If you are asking for a code review, tell the model what the code is supposed to do and what standards it needs to meet.

Step 5 - Iterate, Don't Just Accept

A first-draft prompt is a starting point. When you get an output that isn't quite right, resist the urge to scrap everything and start over. Instead, diagnose specifically what is wrong and adjust that element.

Too long? Add a word limit.
Wrong tone? Describe the tone you want more precisely.
Missing something important? Add that context.
Too generic? Give an example of the kind of thing you are looking for.

The best prompt engineers I know treat every output like a diagnostic - it tells them exactly what their prompt communicated, which is often subtly different from what they intended.

Chain of Thought Prompting Explained Simply

Of all the prompting techniques, chain-of-thought is probably the one with the best return on investment for the least complexity. It is dead simple to apply and it meaningfully improves the quality of responses on anything requiring reasoning.

What Chain of Thought Actually Is

When you ask a language model a complex question, the default behavior is to jump directly to an answer. This works fine for factual recall and simple tasks. For anything that requires multiple steps of reasoning - math, logic puzzles, strategic decisions, cause-and-effect analysis - jumping straight to the answer is where models most often go wrong.

Chain-of-thought prompting asks the model to show its work. Instead of going straight to an answer, the model reasons through the problem out loud, step by step, before arriving at a conclusion.

The result is consistently more accurate on complex reasoning tasks. When the model has to articulate each step, it is far less likely to skip over a step that would have changed the answer.

How to Use It in Practice

There is no complicated syntax here. The most reliable version is simply adding one of these phrases to your prompt:

- "Think through this step by step before giving your answer."
- "Walk me through your reasoning before reaching a conclusion."
- "Work through this problem out loud, then give me your final answer."

That is genuinely it. Three extra words - "step by step" - are one of the highest-leverage additions you can make to a reasoning-heavy prompt.
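Because the technique is just an appended instruction, it is easy to wrap in a one-line helper. A minimal sketch, where the function name and suffix wording are illustrative:

```python
COT_SUFFIX = "\n\nThink through this step by step before giving your answer."

def with_chain_of_thought(prompt: str) -> str:
    """Append a chain-of-thought instruction to a reasoning-heavy prompt."""
    return prompt.rstrip() + COT_SUFFIX

question = (
    "A bat and a ball together cost $1.10. "
    "The bat costs $1 more than the ball. How much does the ball cost?"
)
print(with_chain_of_thought(question))
```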

A Real Example of the Difference It Makes

Here is a classic example that illustrates why it matters.

Without chain of thought: "A bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?"

Most people's intuitive answer is 10 cents. It is wrong. The ball costs 5 cents, the bat costs $1.05, and together they are $1.10.

Without CoT prompting, language models often give the intuitive-but-wrong answer of 10 cents.

With CoT: "A bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost? Think through this step by step."

The model will reason: if the ball costs X, the bat costs X + $1, and together they equal $1.10. So 2X + $1 = $1.10, which means 2X = $0.10, so X = $0.05. The ball costs 5 cents.

The chain of thought forced the model to set up the algebra properly rather than pattern-matching to the intuitive wrong answer.
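The algebra in that walkthrough is easy to verify mechanically, which is exactly the kind of check worth running on any numeric answer a model gives you:

```python
# Check the reasoning from the example: if the ball costs x,
# the bat costs x + 1.00, and 2x + 1.00 = 1.10, so x = 0.05.
total = 1.10
difference = 1.00

ball = (total - difference) / 2  # x = (1.10 - 1.00) / 2 = 0.05
bat = ball + difference          # 0.05 + 1.00 = 1.05

assert abs((ball + bat) - total) < 1e-9   # together they cost $1.10
assert abs((bat - ball) - difference) < 1e-9  # bat costs $1 more
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")
```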

When Chain of Thought Matters Most

Use it for: math problems, logical reasoning, multi-step analysis, strategic recommendations, diagnosing root causes of problems, evaluating trade-offs.

Skip it for: simple factual questions, creative writing, formatting tasks, summarization. Adding "think step by step" to a request for a poem is unnecessary overhead.

The Difference Between Zero-Shot and Few-Shot Prompting

This is a distinction that trips up a lot of people new to prompt engineering because the names sound more technical than the concepts actually are. Once you understand them, you will find yourself reaching for them constantly.

Zero-Shot Prompting: Asking Without Examples

Zero-shot means you ask the model to do something without giving it any examples of what a good output looks like. You are relying entirely on the model's general training to understand and execute the task.

Most casual AI usage is zero-shot. You type a question or request, you hit send, the model responds from its own training.

Zero-shot works extremely well for tasks that are common and well-represented in the model's training data: writing emails, explaining concepts, summarizing text, answering general knowledge questions, translating languages.

Where it starts to break down is when you need something specific and unusual. If you need output in a format the model hasn't seen much of, or in a highly specific voice that is difficult to describe, or with classification logic that is particular to your business, zero-shot often produces something that is in the right ballpark but off in ways that are hard to fix through description alone.

Few-Shot Prompting: Learning From Examples

Few-shot means you give the model two to five examples of the exact input-output pattern you want before asking it to perform the task itself. You are not explaining the pattern in words, you are showing it.

Here is a simple example. Say you are building a tool that classifies customer support tickets into categories, and your categories are specific to your business.

Without few-shot (zero-shot):

"Classify this support ticket into one of these categories: Billing, Technical, Account Access, Feature Request, or Complaint."

This works reasonably well, but "Billing" and "Complaint" can overlap, "Technical" can mean many things, and the model will make judgment calls you might not agree with.

With few-shot:

"Classify support tickets into categories. Here are examples:

Ticket: 'My invoice shows a charge I didn't authorize' → Billing
Ticket: 'The app crashes every time I try to export a report' → Technical
Ticket: 'I can't log in, it just keeps spinning' → Account Access
Ticket: 'It would be great if you could add dark mode' → Feature Request
Ticket: 'This is the third time I've had this problem, I'm very frustrated' → Complaint

Now classify: 'I was charged twice for last month's subscription.'"

The model now has a concrete reference for how to handle edge cases. When a ticket could be classified as either Billing or Complaint, the example of the unauthorized charge guides it toward Billing.
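In a real application you would typically build that few-shot prompt programmatically from a list of labeled examples. A sketch using the tickets from above (the builder function itself is illustrative, not a library call):

```python
# Example tickets and labels from the article; extend with your own edge cases.
EXAMPLES = [
    ("My invoice shows a charge I didn't authorize", "Billing"),
    ("The app crashes every time I try to export a report", "Technical"),
    ("I can't log in, it just keeps spinning", "Account Access"),
    ("It would be great if you could add dark mode", "Feature Request"),
    ("This is the third time I've had this problem, I'm very frustrated", "Complaint"),
]

def build_few_shot_prompt(ticket: str) -> str:
    """Render labeled examples, then the new ticket to classify."""
    lines = ["Classify support tickets into categories. Here are examples:", ""]
    for text, label in EXAMPLES:
        lines.append(f"Ticket: '{text}' -> {label}")
    lines.append("")
    lines.append(f"Now classify: '{ticket}'")
    return "\n".join(lines)

print(build_few_shot_prompt("I was charged twice for last month's subscription."))
```

Keeping examples in a list like this makes it easy to add a new one whenever you hit a misclassification, which is exactly the iteration loop described above.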

When to Use Which

Use zero-shot when the task is common and well-defined, when you need a quick response and approximate accuracy is fine, or when you are exploring what the model can do.

Use few-shot when you need consistent formatting or classification, when your task has specific nuances that are hard to describe in words, when zero-shot keeps giving you outputs that are close but not quite right, or when you are building a production application where reliability matters.

A useful rule of thumb: if you find yourself correcting the model's outputs in the same way more than twice, that pattern is a signal to add a few-shot example demonstrating what you want instead.

How to Avoid AI Hallucinations Using Prompt Engineering

Hallucination is the term used for when an AI model confidently states something that is factually wrong. It is probably the most common complaint from people who rely on AI for information, and it is also one of the most misunderstood.

Models do not "know" they are wrong. They are generating the most probable next tokens given the context. If the context makes a false claim seem probable - because of how the question is framed, because the model's training contained inaccuracies, or because the model is filling gaps with plausible-sounding content - it will state it with the same confidence it would use for something completely accurate.

The good news is that prompt engineering can substantially reduce hallucinations, even if it cannot eliminate them entirely.

Technique 1 - Ask for Sources and Evidence

When you ask a model to make factual claims, explicitly ask it to back them up. "Cite your sources" does not work perfectly (the model will sometimes fabricate citations), but "tell me what evidence you are drawing on and where you are uncertain" prompts a very different kind of response.

Better still: "Answer this question and explicitly flag any part of your answer where you are not highly confident or where the answer depends on information that might have changed since your training."

This prompts the model to surface its own uncertainty, which is a genuinely useful signal. When a model tells you it is not confident about something, pay attention.

Technique 2 - Constrain the Model to Information You Provide

One of the most reliable anti-hallucination techniques is to not ask the model to recall facts from training at all. Instead, give it the information yourself and ask it to reason from that.

Rather than "what did the Fed do with interest rates in 2024," paste in the relevant paragraph from a reliable source and ask "based on this text, what did the Fed do with interest rates in 2024?"

The model is now grounding its answer in a specific source you have provided, not in its training data. This is the same principle that powers Retrieval-Augmented Generation (RAG) in production AI systems.
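The grounding pattern can be packaged as a small template that wraps any passage you supply. A sketch; the delimiters and fallback wording are one reasonable choice among many:

```python
def grounded_prompt(source_text: str, question: str) -> str:
    """Constrain the model to a passage you supply rather than its training data."""
    return (
        "Answer using ONLY the text below. If the text does not contain "
        "the answer, say 'The provided text does not answer this.'\n\n"
        f"--- TEXT ---\n{source_text}\n--- END TEXT ---\n\n"
        f"Question: {question}"
    )

print(grounded_prompt(
    source_text="In 2024 the central bank held its policy rate steady across all meetings.",
    question="What did the central bank do with interest rates in 2024?",
))
```

The explicit fallback instruction is the important part: it gives the model a sanctioned way to decline instead of filling the gap with a plausible-sounding guess.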

Technique 3 - Ask the Model to Identify What It Doesn't Know

This is counterintuitive but works well. After asking a question, follow up with: "What would you need to know to be more confident in that answer? What did you assume? Where might you be wrong?"

This meta-question forces the model to audit its own output, and it surfaces gaps and assumptions that would otherwise be invisible in a confident-sounding answer.

Technique 4 - Use the Model's Uncertainty Against Overconfidence

If you are asking about something where accuracy really matters - legal questions, medical information, financial figures, historical facts - add an instruction like this to your prompt: "If you are not certain about something, say so explicitly rather than making your best guess. I would rather have an honest 'I don't know' than a confident wrong answer."

This does not make models perfectly honest about their uncertainty, but it meaningfully shifts the default behavior in the right direction.

What You Should Always Do Regardless

Verify anything important independently. AI models are useful research accelerators, first-draft generators, and reasoning partners. They are not primary sources. If the output is going to be published, used in a decision, or shared with someone who will rely on it - check it.

How to Become a Prompt Engineer With No Coding Experience

This is probably the question I get most often from people who are genuinely interested in working with AI but feel shut out because they don't have a technical background. My answer is always more encouraging than people expect.

The Truth About Coding and Prompt Engineering

For a significant portion of prompt engineering work, coding is not required. The skill at the core of this work is clear thinking and precise communication. You need to understand what a language model is doing well enough to work with it effectively - and that understanding does not require writing code.

Where coding becomes relevant is when you move into building applications - connecting a model to an API, building automated workflows, creating tools that other people use. That work does require at least basic programming. But there is a large and legitimate space of prompt engineering work that is entirely accessible without it.

Content teams, marketing departments, legal teams, healthcare organizations, financial services firms - all of them are integrating AI into their workflows, and all of them need people who can figure out how to get reliable, high-quality output for their specific domain. Those people need domain knowledge and prompting skill far more than they need Python.

Where to Actually Start

Start with the tool you are going to use most. Pick one - Claude, ChatGPT, Gemini, whichever - and use it seriously for something you genuinely care about getting right. Not experiments for the sake of experiments, but real work in your field.

Your domain expertise is your biggest asset. A nurse who learns to use AI well in a clinical context is more valuable than a generalist who knows prompting theory but knows nothing about medicine. A financial analyst who knows how to get reliable, accurate, well-formatted output for the specific reports her team produces every week has a skill that is immediately valuable.

The Practice That Matters Most

Keep a prompt journal. Sounds old-fashioned, but it works. Every time you write a prompt that produces a particularly good or particularly bad result, save both the prompt and the output and write a sentence about why it worked or didn't. After a few weeks of doing this seriously, patterns emerge. You start to see your own recurring mistakes. You build a library of approaches that work for your specific use cases.

This habit, more than any course or certification, is what separates people who use AI well from people who use it adequately.

The Skills That Transfer Directly

If you are coming from writing, editing, communications, or any field that requires clear thinking and precise language, you are already closer to this work than you think. The core challenge in prompt engineering is articulating exactly what you want in a way that leaves no room for ambiguity. That is a writing problem, not a coding problem.

If you are coming from a field with deep domain expertise - law, medicine, finance, engineering, education - that expertise is enormously valuable. The hard part of building AI applications in those fields is not the technology. It is knowing enough about the domain to know whether the output is actually good.

A Realistic Timeline

With serious, deliberate practice - meaning you are using AI tools on real work problems and actively reflecting on what works - most people develop genuinely useful prompting skills within two to three months. Getting to the level where you can reliably improve others' prompts, build prompt libraries for teams, or evaluate production AI systems takes longer, maybe six months to a year of focused work.

There is no shortcut. The only reliable path is doing the work, paying attention to what happens, and iterating.

Putting It All Together

These five ideas - structured prompt writing, chain-of-thought reasoning, zero-shot versus few-shot selection, hallucination reduction, and the accessible career path - are not isolated techniques. They build on each other.

A well-structured prompt that includes the right examples (few-shot), asks the model to reason step by step (chain-of-thought), and explicitly instructs the model to flag uncertainty (anti-hallucination) is a fundamentally different instrument than a vague question typed into a chat window.
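Stacked together, that combination is just a template with a slot for each technique. A sketch in which every string is illustrative and meant to be adapted:

```python
def full_prompt(role, task, examples, question):
    """Stack role framing, few-shot examples, chain of thought,
    and an uncertainty instruction into one prompt."""
    parts = [f"You are {role}.", task, ""]
    for inp, out in examples:                       # few-shot examples
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append("")
    parts.append(f"Input: {question}")
    parts.append("Think through this step by step before answering.")  # chain of thought
    parts.append("If you are not certain about something, say so "
                 "explicitly rather than guessing.")                   # anti-hallucination
    return "\n".join(parts)

print(full_prompt(
    role="a customer support analyst",
    task="Classify the sentiment of each input as Positive, Negative, or Mixed.",
    examples=[("Great product, works perfectly!", "Positive")],
    question="It does the job, but setup was painful.",
))
```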

The gap in output quality between those two approaches is large. The gap in effort required to bridge them is smaller than most people think.

Start with one technique. Apply it to something you are already working on. See what changes. Then add another.

That is really all this is. Not a mystery, not a magic system, just the work of communicating clearly with a very capable tool.

Frequently Asked Questions

How long does it take to get good at writing prompts?

With intentional practice on real tasks in a domain you know, most people notice significant improvement within a few weeks. Real proficiency, where you can reliably engineer prompts for complex, production use cases, typically takes a few months of consistent, reflective practice.

Is few-shot always better than zero-shot?

No. Few-shot prompting adds complexity and uses more of your context window. For common, well-defined tasks, zero-shot is faster and just as effective. Use few-shot when you need to demonstrate a specific pattern that is difficult to describe in words.

Can you completely eliminate hallucinations with better prompting?

No. Hallucinations are a fundamental characteristic of how language models work, not a bug you can prompt your way around entirely. Good prompting substantially reduces them and helps surface uncertainty, but for anything where accuracy is critical, independent verification is always necessary.

Do I need to know AI or machine learning theory to do this?

A basic working model of how language models operate - they predict tokens based on context, they don't "know" things the way people do, they are sensitive to framing - is genuinely useful. You do not need to understand the math or the architecture in any depth.

What is the best way to practice prompt engineering?

Use real tools on real problems in a domain you care about. Keep notes on what works and what doesn't. Iterate deliberately. The most effective learning comes from genuine use, not from studying prompting theory in the abstract.

Last updated: March 2026