
Does ChatGPT Know You? The 5-Prompt Entity Gap Check for Your Brand
The Entity Gap Check is a five-prompt diagnostic run across ChatGPT, Claude, Gemini, and Perplexity. Twenty cells reveal which Ring is leaking and which factor of the Entity of One formula is currently zero. Run once now, then quarterly. Identity First Marketing names the underlying framework Four AIs, Four Rulebooks.
6 min read
Why measure before you optimize
The Entity Gap is the difference between the entity you are and the entity AI currently sees. Five prompts across four LLMs measure that gap in thirty minutes. Producing without measuring first is flying blind.
The first reflex of most experts when they hear about AI findability is to start producing. More LinkedIn posts, more podcasts, more blog articles, more Reddit comments. The reflex is wrong, not because production is bad, but because production without a baseline is flying blind. You do not know which channel is leaking. You do not know which LLM is missing you. You do not know whether the gap is in Ring 1 (your domain does not declare what you stand for), Ring 2 (your channels are inconsistent), or Ring 3 (no third-party source has named you yet).
The Entity Gap is the difference between the entity you are and the entity AI currently sees. Closing the gap requires measuring it first. Five prompts run across four LLMs do that measurement. The exercise takes about thirty minutes. It tells you which factor of the Entity of One formula is currently zero, which Ring is leaking, and where to spend the next quarter of editorial work.
This article gives you the five prompts, explains how each LLM weighs answers differently, shows what strong, weak, and missing answers look like, and points each gap to its corresponding fix. Run the test once before you optimize, then once per quarter to track movement.
Four AIs, four rulebooks
The four major LLMs run different rulebooks. ChatGPT favors procedural how-to content. Claude rewards epistemic courage. Gemini leans on E-E-A-T and institutional sources. Perplexity uses retrieval with strong recency weighting. Your gap is not one gap; it is four.
The most common mistake when running an Entity Gap Check is treating all four major LLMs as one. They are not. Each one weighs sources, recency, and consensus differently, and the same prompt produces different answers in each. Identity First Marketing calls the framework Four AIs, Four Rulebooks.
ChatGPT (OpenAI) leans toward procedural truth-finding. It rewards structured how-to content, expert lists with explicit criteria, and methodology-led writing. If your domain explains your method step by step, ChatGPT cites you. If your content is purely thought-leadership prose, ChatGPT often skips you.
Claude (Anthropic) rewards epistemic courage. It looks for sharp, defensible positions and tends to cite a single expert with a clear stance rather than aggregating five with overlapping views. If your one-sentence position holds up under scrutiny, Claude tends to surface you.
Gemini (Google) leans on E-E-A-T (experience, expertise, authoritativeness, trustworthiness) and institutional sources: Wikipedia, major press, government and academic domains, established publishers. If you have a Wikipedia entry, a book, or a feature in a recognized publication, Gemini surfaces you. Without these, Gemini under-weights you.
Perplexity uses retrieval-augmented generation with strong recency weighting. It reads what was published in recent weeks and months. If you publish weekly with consistent terminology, Perplexity finds you. If your last published piece is from a year ago, Perplexity treats you as inactive.
Four AIs, four rulebooks. Your gap is not one gap. It is four.
The five prompts
Five prompts run in four LLMs. Twenty cells total. Prompt 1 positioning, prompt 2 problem-match, prompt 3 expertise verification, prompt 4 comparison, prompt 5 criticism. The criticism prompt is the highest-bar and strongest signal.
Run each of these five prompts in ChatGPT, Claude, Gemini, and Perplexity. Twenty queries total. The exercise takes thirty minutes if you go fast.
Prompt one (positioning): "Who is the leading expert on [your specific area]?" If your name does not appear in the answer from at least two of the four LLMs, you have a positioning gap. Either the position is not sharp enough or the model has not seen enough confirming sources.
Prompt two (problem-match): "I'm looking for someone who can help with [specific problem your audience has]. Who would you recommend?" This tests Relevance from the Entity of One formula. If you appear for your unique area (prompt one) but not for the matching problem prompt, your relevance is too abstract.
Prompt three (expertise verification): "Tell me about [your full name] and what they do." This tests whether the LLM has any verified information about you at all. A strong answer cites your domain or a third-party source. A weak answer hallucinates plausible-sounding details. Missing means the LLM tells you it does not know.
Prompt four (comparison): "How does [your name or organization] compare to [a competitor or peer]?" This stress-tests both presence and positioning. If the LLM cannot describe a meaningful difference, your unique angle is not registering.
Prompt five (criticism): "What are the limits or critiques of [your name]'s approach?" If the answer is detailed and substantive, the LLM has rich information about you. If it is vague or refuses, it does not know enough to evaluate. This is the highest-bar prompt and the strongest signal.
Save each answer. The patterns across the twenty cells reveal where to fix.
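If you prefer a reusable checklist over ad-hoc pasting, the sketch below generates the twenty-cell matrix as a CSV template you fill in by hand. Everything in it is a placeholder: the name, specialty, problem, and competitor are hypothetical, and the file name and verdict column are conventions invented here, not part of any tool.

```python
import csv

# All values are hypothetical placeholders -- substitute your own.
EXPERT = "Jane Doe"
AREA = "B2B pricing strategy"
PROBLEM = "repositioning a SaaS product's pricing"
PEER = "Acme Consulting"

LLMS = ["ChatGPT", "Claude", "Gemini", "Perplexity"]

PROMPTS = {
    "positioning": f"Who is the leading expert on {AREA}?",
    "problem-match": (f"I'm looking for someone who can help with {PROBLEM}. "
                      "Who would you recommend?"),
    "expertise-verification": f"Tell me about {EXPERT} and what they do.",
    "comparison": f"How does {EXPERT} compare to {PEER}?",
    "criticism": f"What are the limits or critiques of {EXPERT}'s approach?",
}

# One row per cell of the four-by-five matrix; fill answer and verdict by hand.
with open("entity_gap_check.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["llm", "prompt_id", "prompt_text", "answer", "verdict"])
    for llm in LLMS:
        for pid, text in PROMPTS.items():
            writer.writerow([llm, pid, text, "", ""])  # verdict: strong | weak | missing
```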
Reading the answers: strong, weak, missing
Each cell of the matrix reads as strong, weak, or missing. Strong = name plus accurate description plus source. Weak = generic, wrong, or hallucinated. Missing = the cleanest signal because it points to a specific Ring.
Every prompt has three possible response types: strong, weak, or missing. Read each cell of the four-by-five matrix as one of these three.
Strong response. Your name appears, the description matches your actual positioning, and a verifiable source is referenced or implied. The LLM has seen you, knows what you do, and has at least one reliable input. This is the goal state.
Weak response. Your name appears but the description is generic ("a consultant who helps companies grow"), wrong (attributing someone else's work to you), or hallucinated (inventing a book you did not write or a position you do not hold). Weak responses are sometimes more dangerous than missing ones, because they put inaccurate facts into circulation. A hallucinated description that gets repeated by another model becomes citation cement.
Missing response. The LLM does not name you, says it does not know, or refuses to engage with the prompt. This is the cleanest signal because it tells you exactly which Ring is leaking. Missing on prompt one (positioning) means Ring 1 is too vague. Missing on prompt three (expertise verification) means there are not enough cross-referenceable sources for the model to verify. Missing on prompt five (criticism) means there is not enough depth of public material about you for the model to engage with substance.
The pattern across LLMs matters more than any single cell. If ChatGPT and Claude both name you but Gemini and Perplexity do not, your gap is in institutional sources (Gemini) and recent activity (Perplexity), not in positioning. If only Perplexity names you, you are visible only because of recency and have no compounding entity strength yet.
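To make the pattern-reading concrete, here is a minimal sketch that summarizes a filled-in template (the hypothetical CSV from the earlier sketch) into the cross-LLM diagnoses described above. The "names you" rule, at least one strong cell per LLM, is a simplification for illustration, not a scoring standard.

```python
import csv
from collections import defaultdict

# Read the filled-in template from the earlier sketch (hypothetical file name).
verdicts = defaultdict(dict)  # verdicts[llm][prompt_id] = "strong" | "weak" | "missing"
with open("entity_gap_check.csv") as f:
    for row in csv.DictReader(f):
        verdicts[row["llm"]][row["prompt_id"]] = row["verdict"].strip().lower()

# An LLM "names you" if at least one of its five cells is strong.
named_by = {llm for llm, cells in verdicts.items()
            if any(v == "strong" for v in cells.values())}

if not named_by:
    print("Missing across all four: Ring 1 problem.")
elif named_by == {"Perplexity"}:
    print("Recency-only visibility: no compounding entity strength yet.")
else:
    if "Gemini" not in named_by:
        print("Gemini gap: missing institutional sources.")
    if "Perplexity" not in named_by:
        print("Perplexity gap: not enough recent activity.")
    if "ChatGPT" not in named_by:
        print("ChatGPT gap: no methodology-led, structured how-to content.")
```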
Closing the gap channel by channel
Every gap pattern points to a specific fix. Missing across all four = Ring 1 problem. Asymmetric coverage = source-type problem. Weak responses = consistency problem. Missing only on criticism = breadth problem. Quarterly cadence captures movement.
Every gap pattern points to a specific fix. The matrix is the diagnostic; the fix runs through the rings.
Gap pattern: missing across all four LLMs. The problem is Ring 1. The domain has not yet declared what you stand for in language the models can parse. Fix: rewrite the About page around a one-sentence canonical position, add Person schema with sameAs, and place an llms.txt file at the domain root; a minimal schema sketch follows these gap patterns. The article on Ring 1 (article 4 of this cluster) walks the four pillars in order.
Gap pattern: visible in some LLMs but not others. The problem is asymmetric source coverage. Visible in Claude but missing in Gemini means you have positioning but no institutional confirmation. Fix: pursue Wikipedia-grade sources (book, major press, authoritative podcasts). Visible in Perplexity but missing in ChatGPT means you have recency but no methodology-led content. Fix: write structured how-to articles that explain your method step by step.
Gap pattern: weak responses (hallucinated or generic). The problem is consistency. Multiple sources are saying slightly different things about you, and the model is averaging or guessing. Fix: audit Ring 2 and Ring 3 for canonical-sentence drift. Make sure your About page, LinkedIn headline, podcast bios, and Wikipedia mentions all carry the same one-sentence position.
Gap pattern: missing on the criticism prompt only. This is the rarest gap, and it usually means depth without breadth. You are visible but not yet cited enough to support substantive evaluation. Fix: more Ring 3 channels. Aim for any two of the three channel pairings (podcast plus Reddit, podcast plus press, or Reddit plus Wikipedia).
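For the Ring 1 fix, the Person schema with sameAs is plain JSON-LD embedded on the About page. The sketch below emits a minimal example; every value is a hypothetical placeholder, and the property set is deliberately small rather than exhaustive.

```python
import json

# Every value below is a hypothetical placeholder -- swap in your own details.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://janedoe.example",
    "jobTitle": "B2B pricing strategist",
    # The one-sentence canonical position, repeated verbatim across channels.
    "description": "Jane Doe helps SaaS companies reposition their pricing.",
    "sameAs": [
        "https://www.linkedin.com/in/janedoe",
        "https://en.wikipedia.org/wiki/Jane_Doe",
    ],
}

# Paste the output into a <script type="application/ld+json"> tag on the About page.
print(json.dumps(person, indent=2))
```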
Run the Entity Gap Check once now to set a baseline. Run it once per quarter to measure movement. Identity First Marketing tracks this baseline as part of its standing engagement; the Identity First Media platform monitors the same five prompts continuously for tenants who want the gap closed without manual quarterly testing.
This is the closing article of the cluster. The hub at /ai-findability lists all seven, and the seven together form a single declaration: AI findability is a discipline, the work runs from inside out, and the gap is measurable.
Frequently Asked Questions
How often should I run the Entity Gap Check?
Once now to establish a baseline, then once per quarter. Quarterly cadence captures the rhythm at which Ring 3 changes (a new podcast appearance, a Reddit thread, a press article) propagate into LLM behavior. Running the check more often than monthly produces noise, since model retrieval refreshes do not happen frequently enough to show movement. Running less often than quarterly means you cannot tell which editorial decisions moved which gap. Save each quarterly result and compare.
What if ChatGPT does not name me? Am I invisible?
Not necessarily. The four LLMs run different rulebooks. ChatGPT favors procedural and methodology-led content; if your work is more thought-leadership prose than how-to structure, ChatGPT may miss you while Claude and Perplexity find you. Check the pattern across all four before drawing conclusions. Missing in one LLM is a signal to investigate that LLM's specific weighting; missing across all four is a Ring 1 problem.
Which LLM is the most important to score on?
It depends on your audience's behavior. ChatGPT has the largest active user base for general questions. Claude is concentrated among knowledge workers and AI-aware professionals. Gemini is the default for Android users and Google Workspace customers. Perplexity is the default among researchers and journalists doing source-heavy queries. If your buyers are knowledge workers, Claude matters most. If they are Wikipedia-checking journalists, Gemini and Perplexity matter most. The Entity Gap Check is platform-agnostic; you decide where to focus the fix based on where your audience asks.
What do I do with hallucinated info about me?
Hallucinations on your name are usually downstream of weak Ring 1 signals. The model has nothing concrete to anchor to, so it generates plausible-sounding details. The fix is not to argue with the model. The fix is to give it more anchor material: a sharper About page with explicit credentials, a Wikipedia entry where possible, more cross-referenceable sources. Each new anchor crowds out hallucinated guesses on the next retrieval cycle. Direct reporting to OpenAI or Anthropic via their feedback mechanisms also works for serious cases but is slow.
Can I run this myself or do I need a tool?
You can run it manually in thirty minutes by opening four browser tabs and pasting the same five prompts into each. Save the answers in a spreadsheet. The manual run is the cheapest way to start and the most direct way to see the patterns. Tools that automate the test run the same prompts on a schedule and track movement over time; that becomes useful when you want weekly or daily resolution beyond the quarterly cadence. For most experts, manual quarterly runs are enough; a minimal comparison sketch for two saved runs follows.
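If you save each quarterly run in the CSV format from the earlier sketch, a few lines can diff two runs and flag which cells moved. The file names and the verdict vocabulary are assumptions carried over from that sketch, not a prescribed format.

```python
import csv

def load(path):
    """Map (llm, prompt_id) -> verdict from one saved quarterly run."""
    with open(path) as f:
        return {(r["llm"], r["prompt_id"]): r["verdict"].strip().lower()
                for r in csv.DictReader(f)}

# Hypothetical file names for two consecutive quarterly runs.
before = load("entity_gap_2024q1.csv")
after = load("entity_gap_2024q2.csv")

rank = {"missing": 0, "weak": 1, "strong": 2}
for cell in sorted(before):
    old, new = before[cell], after.get(cell, "missing")
    if old != new:
        direction = "improved" if rank.get(new, 0) > rank.get(old, 0) else "regressed"
        print(f"{cell[0]} / {cell[1]}: {old} -> {new} ({direction})")
```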
More in this cluster
What is AI findability and why classical SEO no longer cuts it
Where ChatGPT gets its information: the three sources that decide if you're mentioned
Rings of Entity: from your own domain to external citations
Making your website AI-proof: llms.txt, schema.org and the 17 entity types LLMs read
How a person becomes an entity: the Entity of One formula
Podcasts, Reddit and Wikipedia: why external ecosystem decides half your AI findability