I’ve been saying for a while now that “being mentioned is the new click.” After attending Krishna Madhavan’s presentation at Tech SEO Connect, I’m more convinced than ever. Madhavan—who leads the team responsible for maintaining what he calls “the freshest, most comprehensive copy of the internet” for Bing, Copilot, and partners like OpenAI and Meta—pulled back the curtain on how AI engines actually decide what content gets surfaced.
His core message?
“Visibility belongs to content that AI can trust, understand, and ground.”
That’s the new SEO equation—or as I prefer to call it, the AEO equation. And if you’re not optimizing for it, you’re already falling behind.
The Grounding Pipeline: Where Visibility Actually Happens
Most conversations about AI optimization focus on generation—what the AI writes in response to a query. But Madhavan made it crystal clear: by the time you get to generation, it’s already too late. The real visibility game is being played much earlier in the pipeline.
Here’s the sequence as he laid it out: User query → Multi-source retrieval → Ranking → Evidence synthesis → Safety filtering → Factual alignment → Context integration → Answer generation → More safety checks → Citations.
The critical insight? If your content was never retrieved, nothing downstream matters. You won’t get cited. You won’t be mentioned. You simply won’t exist in the AI’s answer.
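To make that dependency concrete, here's a minimal, purely illustrative Python sketch of the pipeline shape. The stage names follow Madhavan's sequence; the function bodies are placeholders of my own, not anything Bing actually runs.

```python
# Illustrative only: stage names mirror the sequence above; bodies are stubs.

def retrieve(query: str) -> list[str]:
    """Multi-source retrieval: semantic, keyword, schema, and entity pathways."""
    return []  # placeholder: your content either shows up here or nowhere

def answer(query: str) -> str | None:
    candidates = retrieve(query)
    if not candidates:
        # Nothing retrieved means nothing to rank, ground, cite, or mention.
        return None
    ranked = sorted(candidates)        # ranking (placeholder)
    evidence = ranked[:5]              # evidence synthesis (placeholder)
    # ... safety filtering, factual alignment, context integration ...
    return f"Answer grounded in {len(evidence)} sources, with citations."
```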
Query Understanding: The New Top of Funnel
Madhavan walked through an example using the query “What are the cutest cat toys this year?” (He’s apparently aiming for Husband of the Year by filling his presentations with cat content.) What looks like a simple question triggers a sophisticated interpretation process.
The AI immediately identifies the topic (cat toys), the key attribute (new/trending), the freshness requirement (this year), the user intent (recommendations), and even the tone (positive). If your content doesn’t align with that interpreted intent, you’re out before retrieval even begins.
Then comes query normalization—the AI rewrites vague queries into explicit, actionable forms. “This year” becomes “2025.” The query fans out into multiple retrieval pathways: semantic, keyword-based, schema-based, entity-based, and more.
This is why I’ve been pushing clients to think about multi-intent content. Each fan-out pathway is a separate chance to be retrieved. If you’re only optimizing for one pathway, you’re leaving visibility on the table.
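To illustrate, here's how that interpretation and fan-out might look if you modeled it yourself. The field names and rewritten queries below are my own assumptions, not Bing's internal representation.

```python
# Hypothetical modeling of query understanding and fan-out (not Bing's internals).
interpretation = {
    "topic": "cat toys",
    "attribute": "new/trending",
    "freshness": "2025",          # "this year" normalized to an explicit year
    "intent": "recommendations",
    "tone": "positive",
}

# Each rewritten query is a separate retrieval pathway, and a separate
# chance for your content to be pulled into the answer.
fan_out = [
    "best new cat toys 2025",                                        # keyword pathway
    "trending cat toys released in 2025",                            # semantic pathway
    "Product schema: category=cat toys, datePublished>=2025-01-01",  # schema pathway
    "entity: cat toy brands 2025",                                   # entity pathway
]
```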
The Three Pillars: Freshness, Authenticity, and Semantic Richness
Madhavan broke down exactly which factors matter at which stages of the pipeline. I found this particularly useful for prioritizing optimization efforts.
Freshness
Freshness impacts retrieval, ranking, evidence synthesis, and context understanding. If your information is stale, grounding will not select it—no matter how well-written your content is. As Madhavan put it: “You can focus on the grammar, but please think of freshness as an imperative.”
This is especially critical during product launches, holiday seasons, and any fast-moving news cycle.
Authenticity
Authenticity plays a role in ranking, safety alignment, safety filtering, and factual alignment. The AI is constantly looking for evidence to support claims. Sites that make assertions without backing them up are handicapping themselves.
Show your work. Cite sources. Provide data. This builds the trust signals that AI engines need to feel confident using your content in their answers.
Semantic Richness
Semantic richness matters for query understanding, retrieval, and evidence synthesis. This is about depth over breadth—focusing on your area of expertise and creating content with unique insights.
Madhavan emphasized structured content: proper use of H1 and H2 headers, tables, FAQ blocks, and schema markup. Avoid “div soup”—those endlessly nested div tags that signal you didn’t think carefully about content architecture. Structure is part of storytelling, especially when everything gets tokenized.
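As one concrete piece of that structure, here's a minimal sketch of FAQPage schema markup generated with Python's standard json module. The question, answer, and wording are placeholders of mine; the output would be embedded on the page inside a script tag of type application/ld+json.

```python
import json

# Minimal FAQPage JSON-LD sketch; the question and answer text are placeholders.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What are the best new cat toys in 2025?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Our hands-on testing found three standouts this year...",
            },
        },
    ],
}

# Paste the output into a <script type="application/ld+json"> block on the page.
print(json.dumps(faq_schema, indent=2))
```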
IndexNow: The Freshness Accelerator
One of the most actionable parts of the presentation was about IndexNow, the open protocol that lets you notify search engines immediately when content is new, updated, or deleted.
The stat that got my attention: 50% of clicked, newly indexed URLs on Bing SERPs come from IndexNow. That’s not because IndexNow URLs get ranking preference—they don’t. It’s because fresh content gets retrieved over stale content. When you’re telling Microsoft about changes in real time rather than waiting for crawlers to discover them, you’re dramatically accelerating your path to visibility.
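If you want to try it, here's a minimal submission sketch in Python using only the standard library. Per the protocol docs at indexnow.org, you POST your host, a verification key file hosted on your own site, and the list of changed URLs; the domain, key, and URLs below are placeholders.

```python
import json
import urllib.request

# Placeholder values: use your own domain, key, and changed URLs.
payload = {
    "host": "www.example.com",
    "key": "your-indexnow-key",
    "keyLocation": "https://www.example.com/your-indexnow-key.txt",
    "urlList": [
        "https://www.example.com/blog/new-post",
        "https://www.example.com/products/updated-page",
    ],
}

req = urllib.request.Request(
    "https://api.indexnow.org/indexnow",      # shared endpoint per indexnow.org docs
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json; charset=utf-8"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200/202 indicates the submission was accepted
```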
Madhavan also mentioned an upcoming announcement about making crawling “super efficient” for IndexNow-enabled sites. Something to watch for in the new year.
And there’s an environmental angle: according to an Ahrefs study Madhavan cited, widespread IndexNow adoption could save the equivalent of 31 million acres of forest by eliminating billions of unnecessary crawls of unchanged pages.
Controlling Visibility: Blunt Force vs. Precision
The final section of the talk addressed a question I get from clients constantly: How do we control what AI sees and uses?
Madhavan distinguished between access (can content be retrieved?) and use (can it be used in answers?). For access, you have robots.txt, noindex, and enterprise permissions. For use, you have data-nosnippet, max-snippet, and schema scope boundaries.
The key comparison was between “blunt” controls like robots disallow—which removes your page from eligibility entirely—and precision controls like data-nosnippet, which lets you stay in the ranking while excluding specific sections from snippets and AI summaries.
Data-nosnippet is powerful because you can hide specific answers to encourage clicks, protect context-dependent content, shield proprietary information, or simply reduce noise from boilerplate sections. You maintain visibility while controlling exactly what appears in AI-generated answers. Both Microsoft and Google support it.
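For consistency with the other examples, here's a small Python sketch of the idea. The wrapper function and content are hypothetical; the data-nosnippet attribute in the rendered markup is the actual control, and the robots.txt comparison in the comments shows the blunt alternative.

```python
# Sketch: keep a section indexable but out of snippets and AI summaries by
# wrapping it in data-nosnippet. Function and content are illustrative.
def render_section(html: str, nosnippet: bool = False) -> str:
    attr = " data-nosnippet" if nosnippet else ""
    return f"<div{attr}>\n{html}\n</div>"

# Visible to crawlers and rankers, but held back from snippets and AI answers.
print(render_section("<p>Our proprietary pricing methodology: ...</p>", nosnippet=True))

# Compare with the blunt control, which removes the page from eligibility entirely:
#   User-agent: *
#   Disallow: /pricing-methodology/
```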
My Takeaways
After nearly 20 years in SEO, I’ve watched the game change many times. But this shift to AI-powered search is different in kind, not just degree. The pipeline Madhavan described requires us to think about content optimization much earlier in the process than most of us are accustomed to.
Here’s what I’m telling my clients:
- Focus upstream. Query understanding and retrieval are where visibility is won or lost. By the time you get to citations, the battle is over.
- Freshness is governance. This isn’t a gimmick or a nice-to-have. If you’re not actively maintaining content freshness, you’re practicing poor knowledge lifecycle management.
- Structure for tokenization. Your content will be chunked. Make sure those chunks make sense. Clean headers, clear sections, schema markup—all of it helps the AI understand and trust your content.
- Implement IndexNow. If you’re not using it yet, start. The freshness advantage is real and measurable.
- Use precision controls. Don’t block everything with robots.txt when data-nosnippet can give you visibility with control.
The visibility equation has changed. Content that AI can trust, understand, and ground is the content that wins. Everything else is just noise.



