How AI models choose which brands to recommend

AI recommendations come from retrievable evidence, not vibes. Here is how ChatGPT, Perplexity, Google, and Claude choose brands.

Updated May 12, 2026 · 11 min read

When ChatGPT recommends three competitors and leaves your brand out, the useful question is not "what prompt hack did they use?" It is "what evidence did the model have when it built the shortlist?" AI recommendations are not a private media buy. They are a synthesis of retrievable sources, entity confidence, category consensus, and platform-specific source preferences.

Soar is a community marketing agency that has run 4,200+ community campaigns across 280+ brands since 2017. The pattern we see across AI visibility audits is consistent: brands are rarely excluded because their homepage is missing a clever schema tag. They are excluded because the public web does not contain enough trusted, category-specific evidence that the brand belongs in the recommendation set.

How does ChatGPT decide which brands to recommend?

ChatGPT recommends brands from two broad inputs: what the model already associates with a category and what ChatGPT Search can retrieve when a prompt needs current or source-backed information. OpenAI's current help docs say ChatGPT Search can rewrite a user's question into one or more targeted queries and send those queries to search partners (OpenAI Help Center).

That means "best payroll software for a 70-person agency" is not one query. It can fan out into category, use-case, comparison, pricing, and recency searches before the answer is written. Your brand has to be present in the surfaces those fan-out queries retrieve. A homepage that says "we are best for agencies" is not enough if comparison pages, Reddit threads, review profiles, and analyst lists all name competitors instead.
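The fan-out idea can be made concrete with a short sketch. This is an illustrative assumption about the kinds of queries an engine might generate, not OpenAI's actual expansion logic; the templates and function name are hypothetical.

```python
# Hypothetical sketch of query fan-out: one buyer prompt expanded into
# the kinds of targeted searches an answer engine might issue before
# writing its answer. Templates are illustrative, not OpenAI's logic.

def fan_out(prompt: str, category: str, use_case: str) -> list[str]:
    """Expand a single recommendation prompt into several retrieval queries."""
    return [
        f"best {category}",                # category search
        f"{category} for {use_case}",      # use-case search
        f"{category} comparison",          # comparison search
        f"{category} pricing",             # pricing search
        f"best {category} 2026",           # recency search
    ]

queries = fan_out(
    prompt="best payroll software for a 70-person agency",
    category="payroll software",
    use_case="a 70-person agency",
)
for q in queries:
    print(q)
```

The practical takeaway is that each generated query hits a different surface: your brand needs to be retrievable on several of them, not just the category homepage query.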

For a marketing leader, the actionable interpretation is simple: ChatGPT is less like a directory and more like a compressed buyer research assistant. It recommends the brands that the available evidence makes easiest to defend. If your evidence is thin, inconsistent, or only self-authored, the model has safer options.

What signals matter before a model names a brand?

The signals that matter most are entity clarity, category fit, source corroboration, freshness, and third-party trust. Entity clarity means the model can tell exactly what your brand is and what category it belongs to. Category fit means sources repeatedly connect the brand to the buyer's use case. Source corroboration means multiple independent pages make compatible claims. Freshness means recent sources still make those claims, and third-party trust means the corroborating sources are independent of the brand itself.

Ahrefs' December 2025 study across 75,000 brands found branded web mentions had strong correlations with AI visibility across ChatGPT, AI Mode, and AI Overviews: 0.664 for ChatGPT, 0.709 for AI Mode, and 0.656 for AI Overviews (Ahrefs). Branded anchors and branded search volume also mattered, but raw domain metrics were weaker than named presence across the web.

The Soar read: models reward consensus. A brand mentioned by customers on Reddit, compared on G2, cited in a trade article, and explained clearly on its own site gives the answer engine a defensible pattern. A brand with 80 blog posts and no third-party footprint gives it a press release.

How do the major AI engines differ?

Each engine has a different source mix, which is why a brand can be visible in Perplexity and absent in ChatGPT. Google says AI Mode and AI Overviews may use query fan-out, issuing related searches across subtopics and data sources, then showing a wider set of helpful links than classic search (Google Search Central). Google also says its core web ranking systems are integrated into AI Overviews (Google PDF).

OpenAI documents separate crawlers for search and training. OAI-SearchBot is used to surface websites in ChatGPT search answers, while GPTBot relates to training controls (OpenAI crawler docs). Anthropic documents ClaudeBot, Claude-User, and Claude-SearchBot, separating training, user-requested fetching, and search-quality work (Anthropic). Perplexity documents PerplexityBot and Perplexity-User for search and user-requested fetches (Perplexity).
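Because search and training use separate crawlers, a brand can opt out of training while staying visible in AI search answers. A minimal robots.txt sketch under that assumption might look like the following; the crawler tokens come from the vendors' public docs, but verify the current names and your own policy before deploying.

```
# Example robots.txt: allow search/fetch crawlers, opt out of training
# crawlers. Verify current user-agent tokens against vendor docs.
User-agent: OAI-SearchBot
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: Claude-SearchBot
Allow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Allow: /
```

Blocking a search crawler like OAI-SearchBot removes the brand from that engine's retrievable evidence, which is the opposite of what an AI visibility program wants.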

ChatGPT

Best lever: earned mentions plus pages accessible to OAI-SearchBot. ChatGPT can mix model memory with live search, so both long-horizon brand presence and current retrievable sources matter.

Google AI

Best lever: indexable, high-quality pages plus corroborating third-party sources. Google explicitly ties AI features to Search fundamentals and query fan-out, so classic SEO still matters, but it is not enough alone.

Perplexity

Best lever: credible, recent, cited sources on domains it can fetch. Profound and Semrush both show that Perplexity's source preferences vary by domain category, with community and editorial sources carrying real weight.

Claude

Best lever: clear long-form sources and crawler access. Claude uses web search for current grounding and cites sources when search is invoked, but not every brand recommendation prompt triggers the same search path.

Why do community sources keep showing up in recommendations?

Community sources show up because they contain the exact pattern recommendation prompts ask for: people comparing options, explaining tradeoffs, naming alternatives, and describing failures in plain language. A product page says what the brand wants to be. A Reddit thread, Quora answer, YouTube transcript, or review page says what buyers think the brand is.

Semrush's 13-week, 100M-citation study found Reddit and LinkedIn among the top five cited domains across ChatGPT Search, Google AI Mode, and Perplexity, while Reddit and Wikipedia remained two of ChatGPT's most-cited domains after a sharp September retrieval change (Semrush). Profound's platform analysis also emphasizes that each AI platform has different source preferences, with Perplexity especially oriented toward community and peer-to-peer information (Profound).

This is the mechanical reason community marketing now belongs in AI visibility planning. The goal is not to spam brand names into threads. The goal is to create legitimate, sourced, useful community evidence in the places models already retrieve. We cover the broader source-pipeline model in how community marketing drives AI visibility.

How can a brand become recommendation-worthy?

A brand becomes recommendation-worthy by building a source portfolio that a model can synthesize without stretching. The portfolio needs four layers: an unambiguous owned explanation, independent third-party validation, community evidence, and current measurement. Missing one layer does not always kill visibility, but it makes the recommendation easier to displace.

Start with the owned explanation. The model needs a clear page that says who the product is for, what category it belongs to, what problems it solves, and where it is not a fit. Then build independent validation: review profiles, credible comparisons, customer stories, analyst or trade mentions, and transparent pricing where possible. Then build community evidence in buyer-research surfaces, not generic social channels.

The tactical mistake is treating this as content volume. More pages do not necessarily create more confidence. Ahrefs' work suggests named brand presence across the web is more predictive than simply adding owned pages. The strategic move is to build fewer, stronger sources that agree with one another. For the measurement side, use the share-of-voice model in how to measure AI visibility for your brand.

What should this cost, and how long should it take?

For a $5M to $50M brand, a credible AI recommendation program usually needs 6 months and a blended budget of $6,000 to $20,000 per month, depending on how much community, content, PR, and measurement are in scope. A narrow audit and owned-content cleanup can be cheaper. A program that builds Reddit, Quora, review, and editorial evidence at the same time costs more because it is operational work, not a dashboard subscription.

The timeline matters. Crawlers can discover a page quickly, but durable recommendation movement usually needs repeated evidence across sources and prompt sets. The first 30 days should establish the benchmark: which brands are recommended, which sources are cited, which prompts trigger search, and which competitor pages keep recurring. Months 2 through 4 build the evidence. Months 5 and 6 are where share-of-voice movement becomes defensible enough for leadership.
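The 30-day benchmark described above is easiest to run if each prompt check is captured as a structured record. A minimal sketch follows; the field names are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass

# Illustrative record for the first-30-days benchmark: one row per
# prompt x platform run. Field names are assumptions, not a standard.

@dataclass
class BenchmarkRun:
    prompt: str                     # fixed buyer prompt
    platform: str                   # e.g. "chatgpt", "perplexity"
    brands_recommended: list[str]   # brands named in the answer
    sources_cited: list[str]        # cited domains, if any
    search_triggered: bool          # did the engine invoke live search?

run = BenchmarkRun(
    prompt="best payroll software for a 70-person agency",
    platform="perplexity",
    brands_recommended=["Gusto", "Rippling"],
    sources_cited=["reddit.com", "g2.com"],
    search_triggered=True,
)
print(run.platform, run.brands_recommended)
```

Keeping the prompt set and competitor set fixed across runs is what makes month-over-month comparison defensible.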

Do not let a vendor sell this as a one-month "GEO sprint" unless the problem is purely technical. The foundational GEO paper showed that content changes can raise visibility in generative answers, but those gains are stronger when the content already has credible evidence and source support (arXiv).

Who is this strategy for?

This strategy is for brands whose buyers ask AI tools comparison and recommendation questions before they talk to sales. That includes B2B SaaS, professional services, fintech, high-consideration DTC, healthcare-adjacent consumer products, education, and any category where buyers compare tradeoffs instead of buying on impulse.

It is also for brands whose competitors already appear in ChatGPT, Perplexity, Google AI Overviews, or Claude for category prompts. If a competitor is recommended consistently and you are absent, the gap is no longer theoretical. It is a distribution problem visible to prospects before they reach your website.

It is not always the right first investment. If your site cannot be crawled, your category page is vague, your reviews are stale, or your brand positioning changes every quarter, fix those basics before funding a large community program. If your product is early and has no credible proof, AI visibility work can amplify that weakness. Sarah should fund this when the product is real, the category has buyer research behavior, and leadership can commit to a 6-month evidence window.

What should not go in the plan?

Do not build the plan around prompt tricks, fake reviews, undisclosed seeded praise, mass AI-written comparison pages, or schema as a magic fix. Google explicitly says there is no special schema or machine-readable file required to appear in AI features (Google Search Central). Schema can help search hygiene. It does not replace source trust.

Also avoid measuring success with one-off screenshots. AI answers vary by platform, session, geography, query wording, and source freshness. A screenshot is a useful example, not a metric. Track a fixed prompt set and measure mention frequency, citation share, sentiment accuracy, and answer share of voice over time. The operating question is whether your brand is gaining share across the buyer questions that matter.
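Answer share of voice over a fixed prompt set reduces to simple counting. The sketch below assumes hypothetical logged runs and brand names; it shows the fraction of answers in which each brand is named at least once.

```python
from collections import Counter

# Minimal share-of-voice sketch over a fixed prompt set.
# Each record is one AI answer run: the prompt plus the brands named.
# Sample data and field names are illustrative assumptions.

runs = [
    {"prompt": "best payroll software for agencies", "brands": ["Gusto", "Rippling"]},
    {"prompt": "best payroll software for agencies", "brands": ["Gusto", "OurBrand"]},
    {"prompt": "payroll software comparison",        "brands": ["Rippling", "OurBrand"]},
    {"prompt": "payroll tools for 70-person teams",  "brands": ["Gusto"]},
]

def share_of_voice(runs: list[dict]) -> dict[str, float]:
    """Fraction of answers that mention each brand at least once."""
    mentions = Counter()
    for run in runs:
        for brand in set(run["brands"]):
            mentions[brand] += 1
    total = len(runs)
    return {brand: count / total for brand, count in mentions.items()}

sov = share_of_voice(runs)
print(sov)  # Gusto appears in 3 of 4 answers (0.75), OurBrand in 2 of 4 (0.5)
```

Tracked monthly against the same prompt set, this one number answers the operating question the paragraph above poses: is the brand gaining share across the buyer questions that matter?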

The final mistake is copying a competitor's surface map. If competitors win through Reddit but your category's buyers research in Quora, G2, or YouTube, you need a different source plan. The source portfolio should follow buyer behavior, not agency habit. The related budget shift is covered in backlinks vs brand mentions.

What should Sarah take to the board?

Sarah should take a simple three-slide model. Slide one: the current AI recommendation baseline across 50 to 100 buyer prompts, showing which competitors are named and where the model cites support. Slide two: the evidence gap by source type, separating owned pages, community discussions, review surfaces, and editorial mentions. Slide three: a 6-month plan with expected leading indicators by month.

The board does not need a lecture on model architecture. It needs to see that AI recommendation behavior is measurable, competitive, and tied to source gaps the company can close. The risk of waiting is that competitors keep accumulating evidence while the brand treats AI visibility as another SEO checklist item.

The strongest internal framing is this: AI models recommend brands they can defend with evidence. If the evidence does not exist, the model chooses a competitor. The job is to become the brand that is easiest to defend.

Frequently asked questions

Is there a public ChatGPT brand recommendation algorithm?

No. OpenAI does not publish a brand recommendation formula. The practical model is observable, though: ChatGPT combines model memory, search retrieval, source quality, and answer synthesis. Treat it as a source-evidence problem rather than a secret ranking factor problem.

Does ranking on Google guarantee that AI tools recommend us?

No. Google visibility helps, especially inside AI Overviews and AI Mode, but AI answers often use a broader source set. In its 2026 update, Ahrefs found that only 38% of AI Overview citations come from Google's top 10 organic results, so ranking well and being selected as a source are related but not identical.

Do Reddit mentions really affect AI recommendations?

They can, when they are legitimate and category-relevant. Reddit and other community platforms appear heavily in AI citation studies because they contain buyer language, comparisons, and peer validation. Spam does the opposite: it creates removal risk, reputation risk, and weak evidence.

How often should we measure AI brand recommendations?

Monthly is the right cadence for leadership, with weekly checks only for the operating team. Run the same prompt set across the same platforms, compare against a fixed competitor set, and rebalance the prompt list quarterly rather than reacting to every answer swing.

Can an in-house team do this without an agency?

Yes, if the team already has SEO, community, PR, and analytics capacity. Most failures happen because one team owns only one layer. AI recommendation visibility needs source creation, source cleanup, and measurement in one operating loop.