
Raycast AEO Audit: Why 3 Out of 4 AI Engines Return Zero Results

We sent 23 brand-specific queries to each of four AI engines (ChatGPT, Perplexity, Gemini, and Claude), capturing 92 responses in total. ChatGPT mentions Raycast in 61% of them. The other three engines return zero results on every single query. Here is exactly why that happens, and the five fixes that close most of the gap.

Junwoo Kim
Founder, Algotraction

The Data at a Glance

On March 4, 2026, we ran a full AEO audit on raycast.com — 23 brand-specific queries sent to four AI engine APIs: OpenAI GPT-4o, Perplexity sonar-pro, Google Gemini 2.0 Flash, and Anthropic Claude Sonnet. Total: 92 API calls, all captured verbatim.
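The per-engine visibility numbers below come down to a simple ratio: the share of an engine's responses that name the brand. A minimal scoring sketch (illustrative only, not Algotraction's actual pipeline; the `mention_rate` function and sample data are assumptions for this example):

```python
import re

def mention_rate(responses: list[str], brand: str) -> float:
    """Percentage of responses that mention the brand (case-insensitive, whole word)."""
    pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
    hits = sum(1 for r in responses if pattern.search(r))
    return round(100 * hits / len(responses), 1) if responses else 0.0

# Illustrative: 23 queries per engine, as in this audit.
chatgpt_responses = (
    ["Raycast is a popular macOS launcher."] * 14
    + ["Alfred and Spotlight lead this category."] * 9
)
print(mention_rate(chatgpt_responses, "Raycast"))  # 14/23 -> 60.9
```

Run per engine, this yields the four percentages in the chart; a 0% score means the brand name never appeared in any of that engine's 23 responses.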

AI Engine Visibility — raycast.com · March 4, 2026

ChatGPT (GPT-4o)     61%
Perplexity Pro        0%
Gemini 2.0 Flash      0%
Claude Sonnet         0%

The overall AEO score is 65.2 / 100 — a B grade, which sounds reasonable until you look at the breakdown. Content quality scores 100%: Raycast has excellent documentation and clear product copy. But Authority and Trust scores 0 out of 15 points. That single category is dragging three engines to zero.

Category            Score   Max   Status
Technical Setup     15.6    25    3 items failing
Content Quality     35.0    35    Complete
Entity & Brand      14.6    25    2 items failing
Authority & Trust    0.0    15    Strategy needed
Overall             65.2    100   B (Good)

Why Perplexity, Gemini, and Claude Return Zero

Each engine uses a different source of truth. Understanding that difference is the key to knowing what to fix.

Perplexity — Live Web Retrieval

Perplexity's sonar-pro model runs a live web search for every query before responding. It doesn't rely on training data — it reads pages right now. A zero score from Perplexity means: when someone asks "what is the best macOS productivity launcher?", Perplexity's live web search is not surfacing Raycast as an authoritative answer from third-party sources.

The fix is not on raycast.com itself. It's about the broader web signal: third-party reviews, comparison articles, Reddit threads where Raycast is explicitly named and recommended.

Gemini — Google Knowledge Graph + Search Grounding

Gemini 2.0 Flash with Search Grounding enabled cross-references Google's index in real time. Brands without a Google Knowledge Panel entry — the structured information box that appears when you search a brand name — tend to score zero here. Raycast does have a Knowledge Panel, but the sameAs schema links connecting their website's Organization schema to their Wikidata/Wikipedia entity are missing. Those links are how Gemini confirms a brand is the entity it thinks it is.

Claude — Training Data Confidence

Claude relies on training data from before its knowledge cutoff. A zero score from Claude means: Raycast was not discussed enough in high-authority text on the public web — tech media, Wikipedia, GitHub README files, academic mentions — to be consistently recalled during training. Claude knows Raycast exists (it appears in some responses) but does not consistently include it in category-level recommendations.

Key Insight

ChatGPT performs well because it was trained on a massive corpus that includes Hacker News discussions, Product Hunt threads, and developer blogs where Raycast is extensively discussed. The other three engines use different sources — live web (Perplexity), Google Knowledge Graph (Gemini), or a different training dataset snapshot (Claude) — and those sources under-represent Raycast.

The Five Fixes That Move the Needle Most

Of the 36 checklist items audited, five account for the bulk of the achievable improvement. Completing these moves the score from 65.2 to approximately 74.0.

01
Define service scope clearly on key pages
AI engines struggle to categorize Raycast because product pages describe features but not the category. Adding a clear declarative sentence — "Raycast is a macOS application launcher and AI-powered productivity platform" — near the top of the homepage and key landing pages gives AI models an unambiguous signal. This is the highest-impact fix in the entire audit.
+5.0 pts
02
Deploy llms.txt to site root
llms.txt is a plain-text file at /llms.txt that gives AI crawlers a structured summary of what your product does, who it's for, and how to describe it. Anthropic's crawler, Perplexity's bot, and others check this file. It is a 30-minute implementation with outsized impact. See our complete llms.txt guide.
+3.8 pts
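A minimal sketch of what an llms.txt for raycast.com could look like, following the common llms.txt convention (an H1 brand name, a blockquote summary, then link sections). The links and descriptions below are illustrative placeholders, not Raycast's actual page inventory:

```markdown
# Raycast

> Raycast is a macOS application launcher and AI-powered productivity platform
> for launching apps, running extensions, and automating workflows.

## Product

- [Homepage](https://www.raycast.com/): product overview and download
- [Developer docs](https://developers.raycast.com/): extension API reference

## Company

- [Blog](https://www.raycast.com/blog): announcements and changelogs
```

Note that the blockquote doubles as the declarative category sentence from fix 01, so the two fixes reinforce each other.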
03
Add sameAs links to Organization Schema
The Organization schema markup on raycast.com is present but missing sameAs links to Wikidata, Crunchbase, and LinkedIn. These links tell Gemini (and other engines) that the website entity and the knowledge graph entity are the same company. Without them, Gemini cannot confidently attribute information about Raycast to the correct entity.
+3.3 pts
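In JSON-LD terms, the fix is a sameAs array on the existing Organization node. The identifiers below are placeholders: the Wikidata QID in particular must be looked up and verified, not guessed, and the Crunchbase/LinkedIn slugs should be confirmed against the brand's actual profiles:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Raycast",
  "url": "https://www.raycast.com",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q0000000",
    "https://www.crunchbase.com/organization/raycast",
    "https://www.linkedin.com/company/raycast"
  ]
}
```

This block is served inside a script tag of type application/ld+json in the page head, alongside the Organization markup the site already has.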
04
Implement FAQPage schema markup
FAQ pages with structured markup give AI engines pre-packaged answers they can surface directly. Questions like "Is Raycast free?", "What platforms does Raycast support?", and "How is Raycast different from Spotlight?" directly map to common AI queries. The FAQ schema makes it easy for AI models to extract factual answers without interpretation.
+2.5 pts
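A sketch of the FAQPage markup using two of the questions above; the answer text is deliberately generic and should be replaced with verified product facts before deployment:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Is Raycast free?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Placeholder: state the free tier and any paid plans here."
      }
    },
    {
      "@type": "Question",
      "name": "How is Raycast different from Spotlight?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Placeholder: summarize extensions, AI features, and workflow automation here."
      }
    }
  ]
}
```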
05
Set canonical URLs sitewide
Several product pages do not have canonical URL tags. This means AI crawlers may index duplicate content across multiple URLs, diluting the authority signal for any single page. Canonical tags are a one-hour technical fix with consistent downstream benefit.
+1.2 pts
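The fix itself is one tag per page, emitted in the head. A page should normally point at itself (a self-referencing canonical), and any parameterized or duplicate variants should point at the clean URL; the path below is a placeholder:

```html
<link rel="canonical" href="https://www.raycast.com/your-product-page" />
```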

Competitive Context

When AI engines are asked about macOS productivity tools and launcher applications, Raycast appears in the same responses as Alfred (mentioned 9 times in our query set), Spotlight (native Apple), and occasionally Notion, Slack, and Linear. The competitive map from this audit shows Raycast has strong brand recognition in category-specific queries but weaker presence in general productivity tool recommendations.

Brand               AI Mentions   Threat Level
Raycast (subject)   14            n/a
Slack               13            Medium
Alfred              9             High
Spotlight (Apple)   7             Medium

The Takeaway

A 65.2 AEO score with excellent content but zero authority is a common profile. Most well-built product websites look like this: clear copy, good design, but no structured signals pointing AI engines toward them specifically. The gap is not about product quality — it is about machine-readable brand infrastructure.

The good news: the five fixes above are almost entirely free to implement. They require engineering time, not budget. Most can be completed in a single sprint. After implementation, re-running the audit 4–6 weeks later will show measurable score movement — particularly from Perplexity and Gemini where the barriers are most structural.

Want to see your brand's score?

Submit your URL for a full AEO audit — 92 queries, 4 engines, 36-item checklist, delivered in 48 hours. From $79 — see pricing.