Mike King, founder and CEO of iPullRank and twice-named Search Marketer of the Year by Search Engine Land, delivered a stark message during a January 2026 SparkToro Office Hours session: Optimizing content for large language models like ChatGPT, Gemini and Claude requires tactics distinct from traditional Google search engine optimization. “Standard SEO ranking factors explain only 4-7% of the citations in AI results,” King stated, citing research from Josh Blyskal at Profound, which analyzed 250 million AI responses and found just a 39% overlap between ChatGPT sources and Google rankings.
The core divergence stems from query fan-out, where AI engines decompose a single user prompt into multiple synthetic subqueries. For a query like “training program for New York Marathon,” models generate related questions such as “how to train for 26.2 miles” or “marathon training checklist.” AI Overviews use 5-10 such queries for speed, while ChatGPT deploys 3-7 and AI Mode up to 100, per King’s analysis. A Semrush study revealed that 28.3% of these queries have zero search volume, making them invisible to tools like Ahrefs or Semrush and creating massive data gaps for marketers.
King’s tool Qforia, built on the Gemini API, simulates this process by prompting the model to generate fan-out queries and even execute Google searches via function calls. “We were the first to give you QFO data,” he noted, promising to open-source the underlying code soon. Profound extracts actual fan-out queries from ChatGPT’s API JSON; accounting for them boosts the overlap estimate from 19% to 39%, as detailed in the firm’s 250-million-response analysis.
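King has promised to open-source the code, but it isn’t public yet. A minimal sketch of the same idea, prompting Gemini to decompose a query into synthetic subqueries, might look like this; the model name, prompt wording, and JSON output format are assumptions, not Qforia’s actual implementation:

```python
# Minimal sketch of query fan-out simulation via the Gemini API.
# Assumptions (not Qforia's actual code): model name, prompt wording,
# and the plain-JSON output format requested from the model.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

def fan_out(query: str, n: int = 7) -> list[str]:
    """Ask the model to decompose one query into n synthetic subqueries."""
    prompt = (
        f"A user searched: '{query}'. Generate {n} related subqueries an "
        "AI search engine might run to answer it (reformulations, implicit "
        "questions, comparisons). Return a JSON array of strings only."
    )
    response = model.generate_content(prompt)
    # Strip any markdown fencing the model may add before parsing.
    text = response.text.strip().removeprefix("```json").removesuffix("```")
    return json.loads(text)

print(fan_out("training program for New York Marathon"))
# Might yield: ["how to train for 26.2 miles", "marathon training checklist", ...]
```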
Technical Pitfalls Blocking AI Visibility
Unlike Google and Bing, which rely on pre-built indexes, platforms like ChatGPT fetch pages in real time, introducing unique failure modes. King highlighted HTTP 499 errors (the nonstandard code Nginx logs when a client closes the connection before the server responds) as a silent killer. “ChatGPT gives up if your page takes too long,” he explained, sharing a client case where spiking 499s in the logs correlated with plummeting visibility. Traditional SEO overlooks this Nginx-originated code, which is absent from most HTTP response guides.
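A quick log audit shows whether this is happening on your own stack. Below is a minimal sketch that counts 499s per day in an Nginx access log; the log path and the default combined log format are assumptions, so adjust both for your server:

```python
# Minimal sketch: count HTTP 499s per day in an Nginx access log to spot
# AI-fetcher timeouts. Log path and combined log format are assumptions.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path
# Captures the day from the timestamp and the status code in the
# default combined log format.
LINE_RE = re.compile(r'\[(\d{2}/\w{3}/\d{4}):[^\]]+\] "[^"]*" (\d{3}) ')

per_day = Counter()
with open(LOG_PATH) as f:
    for line in f:
        m = LINE_RE.search(line)
        if m and m.group(2) == "499":
            per_day[m.group(1)] += 1  # client closed the connection early

for day, count in sorted(per_day.items()):
    print(day, count)  # a sudden spike suggests fetchers are giving up
```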
Metadata emerges as a direct ranking signal for AI. “Your meta description is the advertisement to the LLM,” King said. While Google rewrites roughly 80% of meta descriptions, ChatGPT uses them verbatim to decide which pages to fetch. Profound data shows semantically rich URL slugs yield 11.4% more citations, with query-specific slugs adding another 5%. Schema and accessibility matter too: content that renders without JavaScript is easier to parse, and LLMs ingest structured data beyond what Google surfaces as rich results.
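These signals are straightforward to audit programmatically. Here is a minimal sketch using requests and BeautifulSoup that pulls a page’s meta description, slug terms, and JSON-LD presence; the URL and heuristics are illustrative assumptions, not a Profound methodology:

```python
# Minimal sketch: audit the metadata signals King describes.
# The URL and the slug heuristic are illustrative assumptions.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def audit(url: str) -> dict:
    html = requests.get(url, timeout=10).text  # AI fetchers time out; so do we
    soup = BeautifulSoup(html, "html.parser")
    meta = soup.find("meta", attrs={"name": "description"})
    slug = urlparse(url).path.rstrip("/").rsplit("/", 1)[-1]
    return {
        "meta_description": meta["content"] if meta else None,  # the "ad to the LLM"
        "slug_words": slug.split("-"),  # semantically rich slugs earn more citations
        "has_json_ld": bool(soup.find("script", type="application/ld+json")),
    }

print(audit("https://example.com/marathon-training-checklist"))
```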
Strategic shifts follow: Brands that hide pricing pages to force lead-gen forms lose narrative control, as AI simply scrapes third-party sites like Clutch or DesignRush instead. “It’s reputation management across the content ecosystem,” King advised, noting that ChatGPT favors cross-source consensus while AI Overviews pull primarily from individual sites.
Chunking’s Measurable Edge Over Walls of Text
Dismissing claims that “chunking is a scam,” King demonstrated that splitting a paragraph on “machine learning and data privacy” boosted cosine similarity scores from 0.6481 and 0.6948 to 0.7477 and 0.7634, lifts of 15.4% and 9.9%. This aligns content with the vector space models powering retrieval, where proximity in embedding space determines relevance. “Structure for humans first, but measure semantically,” he urged.
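King’s exact embedding setup wasn’t specified, but the same kind of measurement can be reproduced with any open embedding model. A minimal sketch using sentence-transformers (the model choice and example text are assumptions):

```python
# Minimal sketch: compare query-to-passage cosine similarity for a wall of
# text versus its chunks. Model choice and texts are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "How does machine learning affect data privacy?"
wall = (
    "Machine learning systems ingest vast amounts of personal data. "
    "Regulators have responded with frameworks like GDPR. "
    "Anonymization techniques such as differential privacy help."
)
chunks = [
    "Machine learning systems ingest vast amounts of personal data.",
    "Regulators have responded with frameworks like GDPR.",
    "Anonymization techniques such as differential privacy help.",
]

q_emb = model.encode(query)
print("wall  :", util.cos_sim(q_emb, model.encode(wall)).item())
for c in chunks:
    print("chunk :", util.cos_sim(q_emb, model.encode(c)).item())
# Tighter, single-topic chunks typically score closer to the query.
```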
Emerging research reinforces the point: Berkeley’s Ring Attention processes long sequences in chunks to build unified meaning; Meta’s MemWalker builds hierarchical memory trees; MIT’s Recursive Language Models and Google’s Mixture of Recursions favor atomic units. Even infinite-context approaches compress more cleanly from structured chunks, countering Google Search Liaison Danny Sullivan’s warnings. Rand Fishkin interjected that this “doesn’t align with how people read.” King agreed, likening the machines to one more accessibility persona to design for.
Style needn’t suffer: “You can blend voice with extractable data points,” King said, citing rap’s technical flair as an analogy. iPullRank’s 20-chapter AI Search Manual expands on GEO content production, emphasizing atomic units amid content collapse.
Proven Gains from GEO Overhauls
Case studies back this up: A vehicle sales client saw 661% ChatGPT visibility growth and 330% growth in AI Overviews via semantic gap-filling and technical fixes. A telecom client jumped 253% in AI Overviews; a financial services client gained 121% in signups, 52.6% in organic traffic, and 24% in AI Overviews after relevance engineering. “AI search is a raffle—rank for more synthetic queries to buy more tickets,” King framed it.
Measurement remains probabilistic, akin to the averaged data in Google Search Console. Profound scrapes AI interfaces multiple times daily for precision, and clickstream data could refine estimates further. Recency signals rely on explicit dates, since AI platforms lack Google’s Wayback-style history of page changes, so update timestamps on evergreen posts. Reddit dominates citations (its recency bias can be overridden by comprehensive older threads), followed by YouTube, which King called a “cheat code.”
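Profound’s scraping pipeline is proprietary, but the sampling logic behind probabilistic measurement can be sketched with any LLM API: ask the same question repeatedly and tally the domains cited. The model name and prompt below are assumptions, and a standard chat model without browsing only approximates what the consumer interfaces cite:

```python
# Minimal sketch of probabilistic visibility measurement: run the same
# question many times and tally cited domains. Real trackers like Profound
# scrape the consumer interfaces; this uses the API as a rough stand-in.
import re
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
PROMPT = "Best marathon training programs? Cite sources with URLs."
DOMAIN_RE = re.compile(r"https?://(?:www\.)?([\w.-]+)")

tally, runs = Counter(), 20
for _ in range(runs):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap for one with browsing
        messages=[{"role": "user", "content": PROMPT}],
    )
    tally.update(set(DOMAIN_RE.findall(resp.choices[0].message.content)))

for domain, n in tally.most_common(10):
    print(f"{domain}: cited in {n}/{runs} runs")  # rough share of voice
```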
The Q&A touched on priorities: Treat AI search as branding rather than performance, given its low referral ROI. PR and UGC build consensus; avoid rewriting existing URLs (catastrophic for SEO) but craft descriptive slugs going forward. Acronyms work if they’re defined on the page, so entity resolution can connect them.
Tools and Roadmaps for Practitioners
SEO software lags, King lamented, with no mainstream tool offering fan-out simulation or passage-level scoring. Start manually: split overbroad paragraphs, add clear headings, inject data points for authority, and test readability, as in the sketch below. His forthcoming open-source tools will help bridge the gap. Profound’s citation intelligence tracks share of voice; Semrush is evolving its attribution reporting.
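Until tooling catches up, even the manual audit can be semi-automated. Here is a minimal sketch that flags overlong, hard-to-read paragraphs for splitting, using the textstat library; the thresholds, the naive sentence split, and the input filename are illustrative assumptions:

```python
# Minimal sketch of the manual audit King suggests: flag paragraphs that
# are too long to be atomic and score poorly on readability.
# Thresholds and the naive sentence split are illustrative assumptions.
import textstat

MAX_SENTENCES = 4  # arbitrary "atomic unit" threshold for illustration

def audit_paragraphs(text: str) -> None:
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    for i, para in enumerate(paragraphs):
        sentences = [s for s in para.split(". ") if s.strip()]
        score = textstat.flesch_reading_ease(para)
        if len(sentences) > MAX_SENTENCES or score < 50:
            print(f"Paragraph {i}: {len(sentences)} sentences, "
                  f"readability {score:.0f} -> consider splitting")

audit_paragraphs(open("draft.md").read())  # hypothetical input file
```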
As AI Mode queries lengthen to 10-11 words per iPullRank data, and ChatGPT hits 400 million weekly users with 60-70% of responses involving web retrieval per Profound, GEO demands the discipline of a distinct channel, much like social: nuanced, platform-specific strategies despite shared tactics. Decision-makers’ focus on AI search offers SEO a reputation reboot.

