The CLEAR-Q Method: How to Build AI GPTs and Copilot Agents that Don’t Go Off the Rails
While it might seem like magic, building a useful AI assistant is more like engineering with words. Whether you’re creating a custom GPT in ChatGPT or a Copilot Agent in the Microsoft ecosystem, most assistants fail for the same reason vague instructions fail us in real life: the job isn’t defined clearly enough, and nothing fences off the dumb ways it can be done. The result is an agent that’s inconsistent, risky, or just kind of… meh.
That's Where the CLEAR-Q Framework Comes In
CLEAR-Q is a practical framework for writing system prompts or agent instructions so your GPT or Copilot Agent behaves like a reliable tool instead of a wildcard. Let’s break this framework down and then walk through a real example.
Let’s start by defining what we’re talking about. CLEAR-Q is a six-part structure for creating high-quality AI agent instructions:
- C — Context
- L — Limits
- E — Examples
- A — Answer format
- R — Review
- Q — Query
Think of it like a pre-flight checklist for AI. You’re not just asking for output; you’re designing a behavior. CLEAR-Q works for:
- GPTs (custom ChatGPT assistants with system prompts, knowledge uploads, and tools)
- Copilot Agents (custom agents in Microsoft Copilot with instructions, connectors, and workflows)
- Any LLM-based assistant where you can define rules and outputs
C — Context: Define the Job and the World
The Context part of CLEAR-Q answers questions like: Who are you? What’s happening? Who is this for? What inputs exist? What’s the deliverable?
If you skip context, the model fills in the blanks with whatever’s statistically common, which is rarely what you actually need.
A good context section includes:
- Role: “You are a cybersecurity advisor,” “You are a social strategist,” etc.
- Situation: What the user is trying to do right now.
- Industry/audience: Who this is for and why they care.
- Inputs: What the agent can use (files, notes, CRM data, SharePoint docs, brand voice, etc.)
- Outcome: Exactly what it must produce.
In Copilot Agents, this is also where you explain which connected data sources matter and what kind of tasks the agent is supposed to execute.
Context is the map. Without it, the AI agent wanders.
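If you build more than one agent, it can help to keep these Context fields in a small template instead of free-form prose so nothing gets skipped. Here’s a minimal Python sketch of that idea; the field names and render format are our own illustration, not part of any GPT or Copilot configuration:

```python
from dataclasses import dataclass, field

@dataclass
class ContextBlock:
    """Holds the five Context fields and renders them as instruction text."""
    role: str
    situation: str
    audience: str
    inputs: list[str] = field(default_factory=list)
    outcome: str = ""

    def render(self) -> str:
        # Produce the Context portion of a system prompt.
        return "\n".join([
            f"You are {self.role}.",
            f"Situation: {self.situation}",
            f"Audience: {self.audience}",
            "Inputs you may use: " + ", ".join(self.inputs) + ".",
            f"Outcome: {self.outcome}",
        ])

# Example usage with the insurance scenario from later in this post:
ctx = ContextBlock(
    role="a social media content strategist for U.S. independent insurance agencies",
    situation="the agency needs fast, compliant Facebook and LinkedIn posts",
    audience="independent agency owners and their local customers",
    inputs=["brand voice guide", "state(s)", "lines of business", "past posts"],
    outcome="3-5 ready-to-publish posts per platform with CTA, visual idea, hashtags",
)
print(ctx.render())
```

Running render() produces the Context paragraph you’d paste into your GPT’s system prompt or your Copilot Agent’s instructions.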
L — Limits: Put Guardrails on Behavior
Your Limits should answer questions like: What should never happen? What assumptions are safe? How careful should the assistant be?
This is where you stop your GPT or Copilot Agent from doing something that sounds confident but is wrong, dangerous, or non-compliant.
Limits typically include:
- Market/jurisdiction assumptions (“Assume U.S. insurance marketing norms…”)
- Forbidden actions (“Do not quote prices, do not give individualized advice…”)
- Risk posture (conservative vs neutral vs aggressive)
- Uncertainty handling (“If unsure, say so and ask a clarifying question.”)
For Copilot Agents, Limits are especially critical because agents may have access to internal docs, emails, or databases. Limits help prevent leaking internal-only information, relying on stale or irrelevant sources, and practicing “creative compliance” (confidently making things up).
Limits are the fences. They keep the agent on the right side of reality.
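Limits normally live in the instructions themselves, but if your agent feeds a larger workflow, you can also enforce the hard ones in code as a backstop. Here’s a minimal sketch assuming an illustrative forbidden-phrase list; real rules should come from your compliance team:

```python
import re

# Illustrative forbidden-phrase patterns; real compliance rules should come
# from your own legal/marketing guidance, not this sketch.
FORBIDDEN_PATTERNS = [
    r"\bguarantee[ds]?\b",   # "guarantee the cheapest rate..."
    r"\bcheapest\b",
    r"\$\d",                 # a dollar figure reads like a quoted price
]

def violates_limits(draft: str) -> list[str]:
    """Return every pattern the draft trips; an empty list means it passed."""
    return [p for p in FORBIDDEN_PATTERNS if re.search(p, draft, re.IGNORECASE)]

print(violates_limits("We guarantee the cheapest teen auto rate! Message us now."))
# -> ['\\bguarantee[ds]?\\b', '\\bcheapest\\b']
```

A regex pass like this won’t catch every risky claim, but it reliably blocks the phrasing you’ve explicitly banned.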
E — Examples: Show What “Good” and “Bad” Mean
Examples answer questions like: What should outputs sound like? What should they avoid sounding like?
Even a single example of what “good” looks like can anchor tone, structure, and depth more effectively than paragraphs of explanation. One example of a “bad” output tells the agent where the cliffs are.
Examples should be short and representative. Keep it to about 1-3 lines of “good” and 1 line of “bad”. You’re teaching taste, not writing a novel.
Examples are especially useful for Copilot Agents that need to match company style or workflow patterns.
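If you keep your examples in one place, you can render them into the instruction block the same way every time. A small sketch; the pairs come from the insurance example later in this post, and the rendering format is our own:

```python
# (label, text) pairs; "good" anchors tone, "bad" marks the cliffs.
EXAMPLES = [
    ("Good", "New teen driver at home? Here are 3 ways to manage cost and "
             "coverage. Want a quick review? We're happy to help."),
    ("Bad", "We guarantee the cheapest teen auto rate. Message us now."),
]

def render_examples(pairs: list[tuple[str, str]]) -> str:
    """Render example pairs as the E section of an instruction block."""
    return "\n".join(f'{label} example: "{text}"' for label, text in pairs)

print(render_examples(EXAMPLES))
```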
A — Answer Format: Decide What the Output Must Look Like
Your Answer format addresses questions like: How should results be structured? How long should they be? How should they be prioritized?
Giving your GPT a format for its answer is the difference between “Here’s a blob of text, good luck” and “Here are 5 usable options in the exact format you need.”
Answer format can specify:
- Structure: Headings, bullets, tables, templates
- Length: Word count or range
- Ranking/scoring: If you want multiple options sorted by quality
In Copilot Agents, this also helps when you need outputs that plug into workflows—like a table that can drop into an email or spreadsheet.
If you care about usability, you care about format.
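When outputs plug into a downstream workflow, it can also be worth mirroring the format rules in a small spec you can validate against. Here’s a sketch using the Facebook and LinkedIn numbers from the full example later in this post; the field names are our own, not any Copilot schema:

```python
from dataclasses import dataclass

@dataclass
class PostSpec:
    """One platform's answer format, expressed as data instead of prose."""
    platform: str
    min_words: int
    max_words: int
    cta_style: str
    max_hashtags: int = 8

SPECS = {
    "facebook": PostSpec("Facebook", 80, 180, "low-pressure"),
    "linkedin": PostSpec("LinkedIn", 180, 350, "consultative"),
}

def fits_spec(text: str, spec: PostSpec) -> bool:
    # Cheap word-count check; a fuller version could also confirm the CTA,
    # visual idea, and hashtag count are present.
    return spec.min_words <= len(text.split()) <= spec.max_words
```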
R — Review: Force a Pre-Flight Check Before Output
Review answers one question: What should the agent verify before finishing? A review process is underrated and wildly powerful.
A review step catches hallucinations and sloppy logic before they reach the user. It makes the assistant act like a careful analyst instead of a fast typist.
A Review checklist might include:
- List assumptions
- Flag low-confidence claims
- Check logic for gaps or contradictions
- Confirm alignment with Limits
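In a GPT, the Review step usually lives as a closing paragraph of the instructions. If you’re orchestrating model calls yourself, the same checklist also works as an explicit second pass. Here’s a sketch; call_model is a hypothetical stand-in for whatever LLM call you actually use, not a real API:

```python
REVIEW_CHECKLIST = """Before finalizing, review your draft:
1. List any assumptions you made.
2. Flag claims you are not confident about.
3. Check the logic for gaps or contradictions.
4. Confirm nothing violates the Limits above.
Revise if any check fails, then output only the final version."""

def generate_with_review(task: str, call_model) -> str:
    """Two-pass pattern: draft first, then an explicit review pass.

    `call_model` is a placeholder for whatever LLM call you actually use;
    it is not a real API in this sketch.
    """
    draft = call_model(task)
    return call_model(f"{REVIEW_CHECKLIST}\n\nTask: {task}\n\nDraft:\n{draft}")
```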
Q — Query: Define the “Perfect Ask” and How to Get There
Most people think Q is just “the question.” In CLEAR-Q, Q does more.
1) Q defines the ideal complete request schema
2) Q defines how the agent assembles that request when the user doesn’t provide all fields
In other words: Q defines what good input looks like.
A complete query includes specific fields, for example:
- Platform(s)
- Topic
- Audience/persona
- Geography/jurisdiction
- Goal
- Voice/tone
Then, you give an ideal format, like: “Create [PLATFORM] posts about [TOPIC] for [AUDIENCE] in [STATE], optimized for [GOAL], in a [TONE] style.”
If the user gives an incomplete request, the agent gathers fields one at a time, in order.
To turn incomplete requests into complete ones, set a few rules:
- Ask one question per turn
- Accept multi-field answers if the user gives them
- If the user says, “surprise me,” assume reasonable defaults and proceed
This matters a ton for Copilot Agents because they often operate inside noisy environments (Teams, Outlook, SharePoint). Your intake flow acts like a built-in form without making the user fill out a form.
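Inside a GPT or Copilot Agent, this intake flow is written as plain instructions. If you’re wrapping the agent in your own code, the same rules collapse into a small loop. Here’s a sketch with illustrative field names and defaults; multi-field answers work automatically because each turn can add several keys to the request:

```python
REQUIRED_FIELDS = ["platform", "topic", "audience", "geography", "goal", "tone"]

# Illustrative "surprise me" defaults; pick ones that fit your agency.
DEFAULTS = {
    "platform": "Facebook",
    "topic": "seasonal insurance tips",
    "audience": "local families",
    "geography": "U.S., no state specified",
    "goal": "engagement",
    "tone": "friendly",
}

def next_question(request: dict) -> str | None:
    """Return the one question to ask this turn, or None when complete."""
    for field in REQUIRED_FIELDS:
        if field not in request:
            return f"What {field} should the posts target?"
    return None  # every field is present: go generate

def handle_surprise_me(request: dict) -> dict:
    # "Surprise me" fills each still-missing field with a safe default.
    return {**DEFAULTS, **request}
```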
A Full CLEAR-Q Example: Social Content Agent for Insurance Agencies
Now it’s time to put everything we’ve discussed together. Here’s an example of CLEAR-Q applied to a real assistant you might build as either a GPT or a Copilot Agent.
C — Context
You are a social media content strategist for U.S. independent insurance agencies.
Situation: agencies need fast, compliant Facebook and LinkedIn posts.
Inputs: brand voice, state(s), lines, personas, blog drafts, FAQs, past posts.
Outcome: 3–5 ready-to-publish posts per platform with CTA, visual idea, hashtags.
L — Limits
Assume U.S. insurance marketing norms unless state specified.
Do not quote premiums, promise savings, imply binding, guarantee claims, or give individualized advice.
Do not use em dashes.
Risk posture: conservative.
If uncertain: state uncertainty, avoid risky claims, ask a clarifying question.
E — Examples
Good: “New teen driver at home? Here are 3 ways to manage cost and coverage. Want a quick review? We’re happy to help.”
Bad: “We guarantee the cheapest teen auto rate—message us now.”
A — Answer format
Facebook: 80–180 words, friendly hook, short body, low-pressure CTA, optional disclaimer, visual idea, 3–8 hashtags.
LinkedIn: 180–350 words, professional hook, value-led body, consultative CTA, optional disclaimer, visual idea, 3–8 hashtags.
Rank by clarity → audience fit → platform fit → compliance safety → CTA strength.
R — Review
List assumptions, flag low confidence, check logic gaps, confirm compliance with Limits.
Q — Query
A complete request includes: platform(s), topic, audience, geography, goal, tone.
If missing any fields, ask one at a time in that order.
After complete, generate posts using A and R.
Result: An Assistant That Doesn't Just "Write Posts"
With the CLEAR-Q framework, your AI assistant runs a safe intake, then produces compliant, formatted deliverables, whether it lives as a GPT in ChatGPT or a Copilot Agent in Microsoft 365.
Why does CLEAR-Q work so well across platforms? Because it mirrors how humans delegate well.
- Context defines the mission.
- Limits prevent disasters.
- Examples teach taste.
- Answer format makes outputs usable.
- Review adds quality control.
- Query defines good input and how to obtain it.
CLEAR-Q is basically the difference between “Do a thing” and “Here’s the job, the rules, the style, the output shape, and how to ask me what you need.”
Your GPT or Copilot Agent doesn’t get smarter. Your instructions get better, and the results follow. The next time you build a GPT or a Copilot Agent, write your instructions in CLEAR-Q order.
Even if you only do C + L + A, output quality will jump. Add E, R, and a Q-based intake flow, and you’ve built something that behaves like a reliable employee who never needs coffee breaks.
And as a bonus for making it nearly to the end of this post, here’s a link to download your copy of our CLEAR-Q Framework guide.
Taking Your AI Skills to the Next Level
As AI continues to develop, you need resources that help you keep up with it all. That’s why we have a section dedicated to Artificial Intelligence Resources on our Technology Resources page. There, you can watch recent webinars we’ve hosted and download helpful guides designed to help you build your AI skills in a practical way.
Here’s a look at a couple of great items you may be interested in:
- Webinar Recording: Get Started Building AI Agents in Microsoft 365 Copilot Studio
- Guide: Prompts for Building Your Agency’s Digital Colleagues
- Infographic: Frameworks for Mastering AI Prompts and Personas
Want to get more targeted AI consulting? Reach out to our team to start the conversation about your AI goals!
Jason Gobbel
Chief Solutions Officer
Kite Technology Group