The Data at a Glance
On March 4, 2026, we ran a full AEO audit on raycast.com — 23 brand-specific queries sent to four AI engine APIs: OpenAI GPT-4o, Perplexity sonar-pro, Google Gemini 2.0 Flash, and Anthropic Claude Sonnet. Total: 92 API calls, all captured verbatim.
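The shape of that audit run is simple: every query goes to every engine. A minimal sketch of the call plan, where the engine/model names come from the audit but the query list is illustrative (three real examples from this article, the rest padded placeholders):

```python
from itertools import product

# The four engine/model pairs audited (names as reported in the audit).
ENGINES = {
    "openai": "gpt-4o",
    "perplexity": "sonar-pro",
    "google": "gemini-2.0-flash",
    "anthropic": "claude-sonnet",
}

# Placeholder stand-ins for the 23 brand-specific queries; three shown,
# the remainder padded so the counts match the audit.
QUERIES = [
    "what is the best macOS productivity launcher?",
    "Raycast vs Alfred: which should I use?",
    "is Raycast free?",
] + [f"brand query #{n}" for n in range(4, 24)]

def build_call_plan(engines, queries):
    """Cross every query with every engine: 23 queries x 4 engines = 92 calls."""
    return [(name, model, q) for (name, model), q in product(engines.items(), queries)]

plan = build_call_plan(ENGINES, QUERIES)
print(len(plan))  # 92
```

In the real run, each tuple becomes one API call whose response is stored verbatim for scoring.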
The overall AEO score is 65.2 / 100 — a B grade, which sounds reasonable until you look at the breakdown. Content Quality scores a perfect 35 / 35: Raycast has excellent documentation and clear product copy. But Authority & Trust scores 0 of 15, and that single category is why three of the four engines return zero.
| Category | Score | Max | Status |
|---|---|---|---|
| Technical Setup | 15.6 | 25 | 3 items failing |
| Content Quality | 35.0 | 35 | Complete |
| Entity & Brand | 14.6 | 25 | 2 items failing |
| Authority & Trust | 0.0 | 15 | Strategy needed |
| Overall | 65.2 | 100 | B — Good |
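The overall number is just the sum of the category points: the four maxima add to 100, so earned points read directly as a percentage. A quick sketch of that arithmetic, using the table above (the letter-grade bands are an assumption for illustration, not the audit's published rubric):

```python
# Category scores from the audit table: (earned, max).
CATEGORIES = {
    "Technical Setup":   (15.6, 25),
    "Content Quality":   (35.0, 35),
    "Entity & Brand":    (14.6, 25),
    "Authority & Trust": ( 0.0, 15),
}

def overall_score(categories):
    """Sum earned points; the maxima total 100, so the sum is a percentage."""
    earned = sum(e for e, _ in categories.values())
    return round(earned, 1)

def letter_grade(score):
    # Illustrative grade bands (assumed, not the audit's exact cutoffs).
    bands = [(90, "A"), (80, "B+"), (65, "B"), (50, "C"), (0, "D")]
    return next(grade for cutoff, grade in bands if score >= cutoff)

score = overall_score(CATEGORIES)
print(score, letter_grade(score))  # 65.2 B
```

This also makes the leverage obvious: Authority & Trust is the only category contributing nothing, so it is the cheapest 15 points on the board.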
Why Perplexity, Gemini, and Claude Return Zero
Each engine uses a different source of truth. Understanding that difference is the key to knowing what to fix.
Perplexity — Live Web Retrieval
Perplexity's sonar-pro model runs a live web search for every query before responding. It doesn't rely on training data — it reads pages right now. A zero score from Perplexity means: when someone asks "what is the best macOS productivity launcher?", Perplexity's live web search is not surfacing Raycast as an authoritative answer from third-party sources.
The fix is not on raycast.com itself. It's about the broader web signal: third-party reviews, comparison articles, Reddit threads where Raycast is explicitly named and recommended.
Gemini — Google Knowledge Graph + Search Grounding
Gemini 2.0 Flash with Search Grounding enabled cross-references Google's index in real time. Brands without a Google Knowledge Panel entry — the structured information box that appears when you search a brand name — tend to score zero here. Raycast does have a Knowledge Panel, but the sameAs schema links connecting their website's Organization schema to their Wikidata/Wikipedia entity are missing. That link is how Gemini confirms a brand is the entity it thinks it is.
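What that missing link looks like in practice: an Organization schema on the homepage whose sameAs array points at the brand's canonical entity pages. A minimal sketch, with placeholder entity URLs (the exact Wikipedia/Wikidata pages are not specified in the audit, so they are left elided here):

```python
# Illustrative Organization JSON-LD carrying the sameAs links the audit
# found missing. Entity URLs are placeholders, not verified identifiers.
ORG_SCHEMA = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Raycast",
    "url": "https://raycast.com",
    "sameAs": [
        "https://en.wikipedia.org/wiki/...",   # the brand's Wikipedia article
        "https://www.wikidata.org/wiki/...",   # the brand's Wikidata entity
        "https://github.com/raycast",
    ],
}

def has_entity_links(schema):
    """True if an Organization schema carries at least one sameAs entity link."""
    return schema.get("@type") == "Organization" and bool(schema.get("sameAs"))

print(has_entity_links(ORG_SCHEMA))  # True
```

Embedded in a `<script type="application/ld+json">` tag, this is the machine-readable handshake that lets Gemini match the website to the Knowledge Graph entity it already has.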
Claude — Training Data Confidence
Claude relies on training data from before its knowledge cutoff. A zero score from Claude means: Raycast was not discussed enough in high-authority text on the public web — tech media, Wikipedia, GitHub README files, academic mentions — to be consistently recalled during training. Claude knows Raycast exists (it appears in some responses) but does not consistently include it in category-level recommendations.
ChatGPT performs well because it was trained on a massive corpus that includes Hacker News discussions, Product Hunt threads, and developer blogs where Raycast is extensively discussed. The other three engines use different sources — live web (Perplexity), Google Knowledge Graph (Gemini), or a different training dataset snapshot (Claude) — and those sources under-represent Raycast.
The Five Fixes That Move the Needle Most
Of the 36 checklist items audited, five account for the bulk of the achievable improvement. Completing them moves the score from 65.2 to approximately 74.0, an 8.8-point gain.
Competitive Context
When AI engines are asked about macOS productivity tools and launcher applications, Raycast appears in the same responses as Alfred (mentioned 9 times in our query set), Spotlight (native Apple), and broader workspace tools like Notion, Slack, and Linear. The competitive map from this audit shows Raycast has strong brand recognition in category-specific queries but weaker presence in general productivity tool recommendations.
| Brand | AI Mentions | Threat Level |
|---|---|---|
| Raycast (subject) | 14 | — |
| Slack | 13 | Medium |
| Alfred | 9 | High |
| Spotlight (Apple) | 7 | Medium |
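One plausible way to produce a mention table like the one above is to count, for each brand, how many captured responses name it at least once. A sketch with two illustrative responses (not verbatim transcripts from the audit):

```python
import re

# Two illustrative captured responses (invented for this example).
RESPONSES = [
    "For macOS launchers, Raycast and Alfred are the usual recommendations; "
    "Spotlight is the built-in fallback.",
    "Alfred has a long track record, but Raycast's extension store is growing fast.",
]

def count_mentions(brand, responses):
    """Count responses naming the brand at least once (word-boundary match)."""
    pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
    return sum(1 for text in responses if pattern.search(text))

for brand in ("Raycast", "Alfred", "Spotlight"):
    print(brand, count_mentions(brand, RESPONSES))
```

Word-boundary matching avoids false positives from substrings, and counting per response (rather than total occurrences) keeps one enthusiastic answer from skewing the table; the audit's exact counting rule may differ.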
The Takeaway
A 65.2 AEO score with excellent content but zero authority is a common profile. Most well-built product websites look like this: clear copy, good design, but no structured signals pointing AI engines toward them specifically. The gap is not about product quality — it is about machine-readable brand infrastructure.
The good news: the five fixes above are almost entirely free to implement. They require engineering time, not budget. Most can be completed in a single sprint. After implementation, re-running the audit 4–6 weeks later will show measurable score movement — particularly from Perplexity and Gemini where the barriers are most structural.
Submit your URL for a full AEO audit — 23 queries across 4 engines, 92 captured responses, a 36-item checklist, delivered in 48 hours. From $79 — see pricing.