Day 2 of Tech SEO Connect delivered a masterclass in operationalizing the AI-era insights from Day 1. Where Day 1 established the theoretical framework—structure beats prose, freshness matters, the bot agreement is broken—Day 2 focused on measurement, infrastructure, and organizational strategy. The through-line: we now have data showing exactly how AI systems cite content, but capturing that visibility requires fundamental changes to how we build platforms, analyze data, and structure our teams.
The Answer Engine Data Revolution
Mike King (iPullRank) and Josh Blyskal (Profound) presented complementary research that quantifies the AI citation landscape with unprecedented precision. King predicted 20-60% CTR losses from AI Overviews; NerdWallet has already seen a 37.3% drop. More striking: only 19% overlap exists between Google's traditional rankings and ChatGPT citations. We're dealing with fundamentally different systems, not variations of the same game.
King introduced “query fanout”—the process by which AI systems extrapolate user prompts into synthetic sub-queries. A single prompt might generate dozens of background queries, and 95% of these fanout queries have zero search volume in traditional keyword tools. His open-source tool Qforia generates these fanout queries so SEOs can optimize for them. The recommended stack: Ollama for local LLM inference, N8N for workflow automation, and Crew AI for agent orchestration.
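Qforia handles this at scale, but the core move is simple enough to sketch. As a rough illustration (not Qforia's actual implementation), a local model served through Ollama's REST API can expand one prompt into candidate fanout queries; the model name, prompt wording, and default endpoint below are assumptions to adapt:

```python
# Minimal sketch: ask a locally served model (via Ollama's REST API) to expand one
# user prompt into the kind of synthetic sub-queries a fanout step produces.
# Assumes Ollama is running on its default port with a model such as "llama3" pulled.
import json
import requests

PROMPT = (
    "You are simulating an AI answer engine's query fanout step. "
    "Given the user prompt below, list 15 distinct background search queries "
    "you would issue to research the answer. Return one query per line.\n\n"
    "User prompt: best project management software for remote agencies"
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": PROMPT, "stream": False},
    timeout=120,
)
resp.raise_for_status()

# Strip list markers/numbering from each line of the model's response.
fanout_queries = [
    line.strip().lstrip("0123456789.)-* ").strip()
    for line in resp.json()["response"].splitlines()
    if line.strip()
]
print(json.dumps(fanout_queries, indent=2))
```

Run at volume, a list like this becomes the keyword set that traditional tools report as "zero search volume" but that answer engines are actually researching.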
Blyskal’s dataset—250 million AI responses and 3 billion citations across eight answer engines—reveals that traditional SEO metrics predict only 4-7% of citation behavior. What actually drives citations: natural language URLs earn 11.4% more citations; semantic URL alignment to query format adds another 5-6%; content freshness dominates (50% of top-cited content is less than 13 weeks old, 75% less than 26 weeks); and meta descriptions that “spoil” the content perform better because answer engines are fundamentally lazy—they need frameworks of analysis handed to them.
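The exact scoring behind those numbers wasn't shared, but the URL finding suggests a simple self-audit: how much of the target query actually survives into the slug? A minimal, purely illustrative check (the scoring below is an assumption, not Profound's methodology):

```python
# Illustrative sketch: score how closely a URL slug reads like the natural-language
# query you want the page cited for. Stopword handling is deliberately crude.
import re

def slug_alignment(url: str, query: str) -> float:
    """Share of query words (longer than two characters) that appear in the final path segment."""
    slug = url.rstrip("/").rsplit("/", 1)[-1]
    slug_words = set(re.split(r"[-_]+", slug.lower()))
    query_words = [w for w in re.findall(r"[a-z0-9]+", query.lower()) if len(w) > 2]
    if not query_words:
        return 0.0
    return sum(w in slug_words for w in query_words) / len(query_words)

print(slug_alignment(
    "https://example.com/blog/best-crm-for-small-business",
    "what is the best CRM for a small business",
))  # roughly 0.71: 5 of the 7 content words appear in the slug
```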
The source hierarchy surprised everyone: Reddit is the #1 cited source, YouTube #2. Perplexity cites YouTube videos with under 1,000 views 37% of the time. Listicles have fallen to second place behind opinion content. FAQs appear 848% more often in top-cited product pages. And LLM attention spreads more equitably through the SERP than human attention—position 7 matters more to AI than it ever did to users.
Technical Infrastructure for the AI Era
Rachel Anderson (Weed Maps) and Giacomo Zecchini (Merge) tackled the infrastructure layer—how to actually monitor and serve content to these new crawlers.
Anderson’s thesis: stop fighting with log files and enterprise crawl platforms. Google Search Console’s Crawl Stats showed approximately 50,000 fewer requests than first-party Datadog logs over three months—accuracy degrades as domain size increases. The solution is partnering with Site Reliability Engineering teams who already maintain advanced monitoring tools. SRE teams are “god-tier” for crawl monitoring; approach them as the experts they are.
Her dashboard tracks Googlebot, Bingbot, ChatGPT-User, PerplexityBot, Claude-Web, and CCBot by path, status code, and geolocation. The immediate win: discovering that a third-party security tool was periodically blocking Googlebot (contract cancelled) and that most AI bots were blocked by default (easily fixed). Her verdict on llms.txt files: "Don't waste your time." After months of waiting, OpenAI finally crawled it in October 2025. The impact? None.
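For teams that don't yet have a Datadog-grade pipeline, the same idea can be prototyped against raw access logs. A minimal sketch, assuming a combined-format log file and the user-agent substrings below; verify against your own logs, since real bot verification also needs reverse-DNS or IP-range checks:

```python
# Roll up first-party log hits per crawler, status code, and path.
import re
from collections import Counter

# Crawlers from Anderson's dashboard; matching on user-agent substrings is an
# assumption, and production setups should also verify bots by IP range or reverse DNS.
BOTS = ["Googlebot", "bingbot", "ChatGPT-User", "PerplexityBot", "Claude-Web", "CCBot"]

# Combined log format with the user agent in the final quoted field (an assumption).
LINE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+)[^"]*" (?P<status>\d{3}) .*"(?P<ua>[^"]*)"$')

hits = Counter()
with open("access.log", encoding="utf-8", errors="replace") as fh:
    for raw in fh:
        m = LINE.search(raw)
        if not m:
            continue
        bot = next((b for b in BOTS if b.lower() in m["ua"].lower()), None)
        if bot:
            hits[(bot, m["status"], m["path"])] += 1

for (bot, status, path), count in hits.most_common(25):
    print(f"{count:6d}  {bot:15s}  {status}  {path}")
```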
Zecchini went deeper on rendering mechanics. Google’s viewport expansion technique—expanding the viewport to match page height to trigger lazy-loaded content—creates edge cases most developers don’t anticipate. Full-screen hero elements using 100vh become 100,000 pixels tall when Google expands to a 100,000-pixel viewport, pushing all meaningful content far down the page. If above-the-fold positioning influences content valuation, this is catastrophic.
His edge case catalog: lazy loading on scroll/click events fails (use Intersection Observer); CSS ::before/::after pseudo-elements aren’t in the DOM and won’t be indexed; heading importance is determined by visual styling, not HTML tag hierarchy; display:none elements have no position or dimension and may be devalued (use visibility:hidden instead); infinite scrolling with History API can index article 2’s content under article 1’s URL. AI agents add another wrinkle—they’re non-deterministic and might simply not scroll, returning partial content regardless of implementation quality.
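One cheap parity check follows from this (an illustration, not something Zecchini presented): confirm that the copy you care about is already present in the raw HTML response, since a crawler or agent that never renders or scrolls sees only that payload.

```python
# Illustrative check: does critical copy exist in the unrendered HTML?
# Anything that only appears after JavaScript or scrolling is invisible to
# non-rendering crawlers and to agents that return partial content.
import requests

def content_in_raw_html(url: str, must_have: list[str]) -> dict[str, bool]:
    html = requests.get(url, timeout=30, headers={"User-Agent": "content-parity-check"}).text
    return {snippet: snippet.lower() in html.lower() for snippet in must_have}

report = content_in_raw_html(
    "https://www.example.com/guide/",  # hypothetical URL and snippets
    ["Key takeaways", "Pricing comparison", "Frequently asked questions"],
)
for snippet, found in report.items():
    print(f"{'OK     ' if found else 'MISSING'} {snippet}")
```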
International SEO at Enterprise Scale
Max Prin (Condé Nast) delivered a reality check: “Nothing has changed in 14 years” since hreflang tags were introduced, yet Ahrefs found 67% of 375,000 domains have hreflang implementation issues. The most common error—18% missing self-referencing tags—breaks everything.
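The self-referencing error is also the easiest one to catch programmatically. A minimal sketch assuming requests and BeautifulSoup; a real audit would also confirm return links on every alternate and validate the language-region codes:

```python
# Check that a page's hreflang cluster includes the page itself.
import requests
from bs4 import BeautifulSoup

def check_self_referencing_hreflang(url: str) -> bool:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Collect href values from <link rel="alternate" hreflang="..."> tags.
    alternates = {
        (link.get("href") or "").rstrip("/")
        for link in soup.find_all("link", rel="alternate")
        if link.get("hreflang")
    }
    if not alternates:
        print(f"{url}: no hreflang annotations found")
        return False
    ok = url.rstrip("/") in alternates
    print(f"{url}: {'self-reference present' if ok else 'MISSING self-reference'}")
    return ok

check_self_referencing_hreflang("https://www.example.com/fr/")  # hypothetical URL
```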
His “French surfer” concept explains user behavior: people click local TLDs more because they signal both language and shipping expectations. But SERP composition varies dramatically by market. Germany’s SERPs are 78% .de domains (Condé Nast had to launch .de). Middle East SERPs are 80% .com (use .com). Check SERP composition before making domain strategy decisions.
Practical insights from running 63 websites across 20 brands and 11 markets: Condé Nast dropped x-default tags entirely—let Google pick the best version. Cross-submit sitemaps for franchise sites. John Mueller confirmed approximately two years ago that noindex combined with canonical is now acceptable. Hreflang has no impact on Discover traffic. Brand consistency matters more than speed-to-market when expanding internationally.
Data Analysis and Monitoring
Sam Torres (Gray Dot) and Dana DiTomaso (Kick Point) both addressed the same fundamental problem: we’re using the wrong tools to analyze our data.
Torres’s message was blunt: stop uploading structured data to LLMs. They make things up, they always tell you your question was good even when it wasn’t, and your data isn’t safe. Use LLMs for messy, unstructured, language-heavy tasks. Use machine learning for structured, tabular, numerical analysis. The bridge: use LLMs to write your ML code, then run it in Google Colab where your data stays private.
Her three starter models: Isolation Forest (fast, flexible, handles messy data), LOF (Local Outlier Factor: flags points that look unlike their nearest neighbors, excellent for keyword analysis and A/B tests), and Prophet (best for time series and gradual shifts, handles international holidays, but highest maintenance). A Shopify engineer's trick: end every LLM prompt with "are you sure?" and watch it rewrite everything more efficiently.
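As a concrete, hypothetical example of what that Colab notebook might contain, here are her three starter models pointed at an assumed daily Search Console export; the file name and column names are placeholders:

```python
# Structured data analyzed with ML in a notebook, not pasted into an LLM.
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

# Assumed export with columns: date, clicks, impressions, ctr, position.
df = pd.read_csv("gsc_daily.csv", parse_dates=["date"])
features = df[["clicks", "impressions", "ctr", "position"]]

# Isolation Forest: fast and tolerant of messy distributions; -1 marks anomalies.
df["iforest_flag"] = IsolationForest(contamination=0.05, random_state=42).fit_predict(features)

# Local Outlier Factor: compares each day to its nearest neighbors; -1 marks outliers.
df["lof_flag"] = LocalOutlierFactor(n_neighbors=20).fit_predict(features)

print(df[(df["iforest_flag"] == -1) | (df["lof_flag"] == -1)][["date", "clicks", "position"]])

# Prophet: trend/seasonality forecast for spotting gradual shifts (highest maintenance).
from prophet import Prophet
m = Prophet(weekly_seasonality=True)
m.fit(df.rename(columns={"date": "ds", "clicks": "y"})[["ds", "y"]])
forecast = m.predict(m.make_future_dataframe(periods=28))
```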
DiTomaso took a different angle: "It's almost 2026. Why am I looking for problems instead of having problems come to me?" Her solution uses Google Tag Manager to surface technical issues in real time by listening to what visitors actually experience. Track 404s with internal/external distinction, URL encoding issues, uppercase in URLs, UTMs on internal links (worse in GA4 than Universal Analytics—breaks attribution), trailing slash inconsistency, redirect chains, wrong hostname rendering, Core Web Vitals via Simo Ahava's GTM template (beats Search Console's data), JavaScript errors, and user agent strings.
Her architecture advice: separate page view tags from configuration tags, reuse triggers, use AI to generate the regex (“regex is horrible”), and consider a separate GA4 property purely for technical monitoring. The goal is automated alerting on issues that matter, not manual auditing.
Organizational Strategy and Frameworks
Bryan Casey (IBM) and Jess Joyce (Inbound Scope) addressed the human side—how to build teams and processes that actually deliver results.
Casey built IBM's "Inbound Empire" over six years: SEO, YouTube (1.5 million subscribers), newsletters (growing by tens of thousands a month), podcasts. His core insight: the biggest barrier is the advertiser mentality that treats every channel as a campaign vehicle rather than building native success in the channel. His strategy was aggressive: take all advertising KPIs and crush them at 10x efficiency, then raid the paid media budget.
He started from the "doghouse"—showing an Ahrefs traffic value chart with competitors and pointing to "hundreds of millions of dollars left on the table." His Disney-inspired synergy model creates a mutually reinforcing system where every node accelerates its complements: newsletter audience builds from website traffic, podcasts distribute across website topic sets, news team writes articles during podcast recordings, tutorials packaged with explainers triple distribution.
Casey’s organizational principles are non-negotiable: “No shared resources, no ad hoc BS. Failure rate 100% when relying on existing teams.” Either headcount reports directly to him or he negotiates dedicated allocation. “Content strategists? We don’t have them. You either write or you’re not on the team.” The evolution from explainers to news organization happened because hubs beat blog posts (their 70-page agents hub drives 50% higher returning traffic). Their investments in search are now 40% more valuable because of the newsletter flywheel.
Joyce presented the counterpoint: frameworks fail because people fail, and the frameworks we teach don’t account for reality. Her examples were painfully relatable: a team claimed they worked in sprints (they didn’t, discovered 8 months later); a developer got laid off mid-project; an internal linking sprint without site access devolved into spreadsheet hell; a month-long family medical emergency meant throwing out every framework and losing four clients.
Her framework principles: modular, documented, reviewed, contextualized (her favorite word), permissive, measured, trusted. Rigid for security, flexible for strategy, workflows, timelines, and tactics. The “keymaster problem”—one person holds all knowledge and access—makes everything fragile. CMO changes can make frameworks irrelevant overnight. Three diagnostic questions for any framework: Can the team modify it if context changes? Is flexibility built in? Are you measuring outcomes, not attachment to the framework itself?
Her closing observation connects to Day 1’s AI themes: prompts are frameworks 2.0. Don’t copy-paste without context. “Your framework should drive your team’s permission to deviate.”
Emerging Themes Across Day 2
Traditional metrics are increasingly irrelevant. King and Blyskal proved this with data: traditional SEO metrics predict only 4-7% of citation behavior, and there is only 19% overlap between Google rankings and ChatGPT citations. The tools and KPIs we've relied on don't measure what matters in AI search.
Freshness dominates AI citations. Blyskal’s finding that 75% of top-cited content is less than 26 weeks old reinforces Day 1’s emphasis on freshness as a first-class signal. This isn’t about gaming algorithms—it’s about AI systems needing current information to provide accurate answers.
Partner with infrastructure teams. Anderson’s SRE partnership and DiTomaso’s GTM monitoring both point to the same conclusion: SEOs need to work more closely with the people who own technical infrastructure. They have better tools, better data, and they’re already maintaining systems we need access to.
Use the right tool for the job. Torres on LLMs vs. machine learning, DiTomaso on automated monitoring vs. manual audits, Zecchini on understanding rendering outputs—the common thread is matching your approach to your actual problem rather than defaulting to whatever tool is newest or most familiar.
Organizational structure determines outcomes. Casey’s “no shared resources” rule and Joyce’s framework fragility analysis both argue that execution depends on having the right team structure. Technical excellence means nothing if your organization can’t sustain it.
Edge cases compound. Zecchini’s rendering edge cases, Prin’s hreflang implementation errors, DiTomaso’s tracking gaps—small technical issues create large visibility problems. The platforms that win will be the ones that fix these details systematically.
Actionable Items from Day 2
Implement query fanout analysis. Use King’s Qforia tool or similar approaches to understand what synthetic queries AI systems generate from user prompts. Optimize for these invisible queries.
Audit URL structure for natural language. Blyskal’s data shows 11.4% citation lift from natural language URLs with additional gains from semantic alignment. Review your URL patterns.
Rewrite meta descriptions to spoil content. Answer engines are lazy. Give them the framework of analysis upfront. Stop writing teaser copy.
Build SRE relationships and crawler dashboards. Anderson’s approach—collaborative dashboard creation with SRE—provides better data than any third-party tool and creates cross-functional partnerships.
Fix viewport-height elements. Set max-height on any element using vh units. Google’s viewport expansion will otherwise push your content down catastrophically.
Implement real-time technical monitoring. Use DiTomaso’s GTM approach to surface issues automatically rather than discovering them in audits.
Move structured data analysis to machine learning. Follow Torres’s workflow: use LLMs to generate ML code, run in Google Colab, keep your data private.
Audit hreflang implementation. With 67% of domains having issues and 18% missing self-referencing tags, this is likely broken on your site. Check SERP composition before domain strategy decisions.
Add FAQs to product pages. Blyskal's finding that FAQs appear 848% more often on top-cited product pages is too significant to ignore.
Evaluate your framework fragility. Apply Joyce’s diagnostic questions: Can your team modify it? Is flexibility built in? Are you measuring outcomes?
The Day 2 Mandate
If Day 1 was about understanding the new landscape, Day 2 was about building the infrastructure to compete in it. The data is now available—we know what drives AI citations, we know traditional metrics don’t predict AI behavior, we know freshness and structure dominate. The challenge is operational: monitoring the right crawlers, fixing the edge cases that break rendering, building teams that can execute without fragility, and using the right analytical tools for structured versus unstructured problems.
The speakers who presented measurement frameworks—King, Blyskal, Anderson, DiTomaso, Torres—all emphasized that we’ve been measuring the wrong things. The speakers who presented organizational frameworks—Casey, Joyce—emphasized that measurement means nothing without execution capability. And the speakers who presented technical frameworks—Zecchini, Prin—reminded us that edge cases compound into major visibility problems.
The synthesis: build monitoring infrastructure that tracks AI crawlers alongside traditional search, fix the technical edge cases that cause content mismatches, structure your teams for dedicated execution rather than shared resources, and continuously validate that your frameworks still match your context. The platforms that do this systematically will capture the AI search visibility that others forfeit through organizational and technical fragility.