Jamie Indigo from Cox Automotive delivered the most quotable talk of Tech SEO Connect. Her premise: we’re making dangerous assumptions about AI, and we need to be “feral trash cats”—resourceful, clever, and resilient—to figure out what’s actually happening.
“I really don’t care what you want to call it—AEO, LMO, EIAO, noodle,” she said. “I would like some dev docs. That would be handy. Because we don’t get any.”
What followed was a systematic dismantling of the assumptions SEOs make about AI search, grounded in log file evidence and a healthy distrust of PR narratives.
The Battlefield Is Being Shaped
Indigo opened with a warning about how AI companies control narratives. She cited a Microsoft blog post about AI changing conversion paths—a potential “source of truth” from one of the LLM deities.
Halfway through the article, she found the data. It was a study she’d already covered—and the stat Microsoft quoted was a cherry-picked subsection, not the full picture. The average didn’t show LLMs as conversion champions. The biggest factor was scale: AI traffic is about 1% on a good day. Organic is 30-50%. Adding 10 conversions to 100 looks impressive. Adding 10 to thousands doesn’t.
“It’s hard to get good, consistent data,” she said. “The best we can hope for is reproducible methodology and people sharing their data sources.”
The Bot Agreement Is Broken
The old internet ran on an agreement between websites and bots: I’ll let you crawl me (even though it costs money) if you crawl politely, declare who you are, and send traffic. Indigo examined each promise.
Promise 1: Crawl Politely
“35,000 bot visits to one human. Does that feel polite?” Controls aren’t working as expected. There’s plausible deniability—ChatGPT 3.5 used Common Crawl for 80% of its tokens, which is technically “open access” but still someone else’s intellectual property. Wikimedia reports 65% of their most expensive traffic comes from bots.
Promise 2: Declare Who You Are
Indigo found Skype crawling ravenously on her site—months after Skype was discontinued. Acting on behalf of ChatGPT, apparently. “No official statements on that one.”
She shared a list of undocumented user agents she’d found in logs. ChatGPT has new bot versions (0.2, 0.3) that aren’t in their docs. Atlas has at least three hidden user agents. “Ironically, it’s still Chromium with a rubber mask on.”
Promise 3: Send Traffic
Google’s VP of Search, Liz Reid, claims AI is sending more queries and higher clicks. Indigo’s response: “Liz, does it really?” Lawsuits are piling up from companies saying they were crawled hundreds of times without receiving a single human visitor.
Google’s official response is that third-party data is “inaccurate” and based on “flawed methodologies.” Indigo’s counter: “I would love to learn better, Liz. Let me learn better with you.”
AI Search Is Not a Search Engine
This was the core technical point. We assume AI search is a search engine that does what we ask using search engine mechanics. Wrong on all counts.
“Search engines are information retrieval systems. AI search is a model trained on a corpus plus retrieval augmentation, sometimes. Not the same thing.”
There is no AI index. “When you think of the search AI index, you’re thinking of a lie.” AI ranking is probabilistic—your brand has a “tendency to exist.”
User embeddings play a huge role. Two users asking “what gift should I get for my cat?” get different answers based on their interaction history. “Best” means different things to different users. This affects query interpretation, synthetic query generation, and retrieval. “These are echo chambers. If we don’t speak up and say this is an echo chamber, it is not based in reality… People have no reason to not trust what it says, and that is dangerous.”
AI Rank Tracking Is Synthetic
Indigo had a memorable line: “AI rank tracking is the Lighthouse of AI.” Useful, but synthetic.
AI search has ambient persistent memory and personalization through user embeddings built from years of interactions across platforms. Rank tracking tools use fresh accounts with made-up personas (“My name is Tom. I’m really smart and I want to buy a car.”). The embeddings reset every time. It’s not the same as real user experience.
“Trash cats don’t turn down data,” she added. Use it, but know its limitations.
AI Will Lie About What It Does
Indigo shared an experiment. She asked ChatGPT for information about a page, then checked her logs. It never crawled the page. When confronted, it apologized and blamed an “internal” issue. When pressed further: “Oh, I completely fabricated all this.”
“The front end doesn’t know what the back end does. It’s not aware of the mechanics… If you ask it what it used to ground, it will lie to you every time.”
Her warning: if someone tells you to base strategy on AI’s output about its own behavior, you might end up deciding you don’t need to worry about AI crawlers “because ChatGPT says it doesn’t crawl.”
“Nothing AI says is ever true. It’s just probable. It’s spicy autocorrect. It’s three Furbies in a trench coat.”
Four Action Items
1. Speak Up
Executives, leadership, and regular people think AI is magic because it says “yes, and” to everything. You’re uniquely skilled to say “that’s not how that works.” AI volatility is causing panic, but you don’t have to panic. “We are still tech SEOs, and this is still a bot.”
2. Watch Your Logs
Actions speak louder than words. Make time for bot watching. Befriend your SREs (Site Reliability Engineers)—they have access to logs and answers to questions you don’t know how to ask yet.
In logs, look for: referring UTMs and domains from AI sources; user-initiated fetches (ChatGPT-User, Claude-User, url-context); and URL fragments indicating AI interaction. This tells you what’s being crawled and whether it should be.
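Bot watching along those lines can start as a small script. A minimal sketch, assuming NGINX/Apache combined log format; the user-agent substrings below are illustrative (a mix of publicly documented bot tokens), and the point of the exercise is to extend the list with whatever undocumented agents turn up in your own logs:

```python
import re
from collections import Counter

# Substrings of AI-related user agents to flag. Extend this with the
# undocumented agents you find in your own logs.
AI_AGENTS = ["GPTBot", "ChatGPT-User", "OAI-SearchBot", "ClaudeBot",
             "Claude-User", "PerplexityBot", "CCBot"]

# Combined log format: ... "METHOD /path HTTP/x" status bytes "referer" "user-agent"
LOG_RE = re.compile(
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<ua>[^"]*)"'
)

def tally_ai_hits(lines):
    """Count (agent, path) pairs for flagged user agents in access-log lines."""
    hits = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if not m:
            continue
        ua = m.group("ua").lower()
        for agent in AI_AGENTS:
            if agent.lower() in ua:
                hits[(agent, m.group("path"))] += 1
    return hits

# Demo on a single synthetic log line; in practice, pass in open("access.log").
sample = ['1.2.3.4 - - [01/Jan/2025:00:00:00 +0000] '
          '"GET /pricing HTTP/1.1" 200 512 "-" '
          '"Mozilla/5.0 AppleWebKit/537.36; compatible; ChatGPT-User/1.0"']
print(tally_ai_hits(sample))  # Counter({('ChatGPT-User', '/pricing'): 1})
```

Sorting the counter by volume surfaces exactly the 35,000-to-1 ratios Indigo was describing, per agent and per path.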
For restriction: repeat disallows for CCBot (Common Crawl is the foundation of many training corpora). Use X-Robots-Tag headers to block non-HTML—if bots cite your API JSON, users land on unusable pages. Block personalization scripts. Hide authorization links from crawl paths.
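One way those restrictions might look in practice, as a sketch rather than a drop-in config (the `/api/` path is a placeholder; adjust to wherever your non-HTML responses live):

```
# robots.txt — opt out of Common Crawl (CCBot is Common Crawl's documented token)
User-agent: CCBot
Disallow: /

# nginx — keep non-HTML responses out of AI citations
location /api/ {
    add_header X-Robots-Tag "noindex" always;
}
```

The `always` parameter matters here: without it, NGINX only adds the header on success-class responses, so error responses could still be indexed.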
3. Know What You’re Optimizing For
If you’re chasing a model with vector embeddings, you’re assuming you’ve perfectly recreated that stack—and you can only do it for one model. “Not all of them are going to last. None of them that exist right now want to be the best. They want to be the best when the money runs out.”
Think in flows: Is your content trending/recent? Then optimize for RAG situations—homepage links, meta descriptions, fast time-to-first-byte. Perplexity loves freshness and manually weights domains (“Advanced algorithm or a TXT file, you choose.”).
The snippet matters. Indigo showed how to find ChatGPT’s snippet in DevTools (conversation endpoint → responses → search result information). “A lot of times it ends up being the meta description.” John Mueller, a year ago: “Wouldn’t it be neat if you had a way to let these crawlers know exactly what the page was about?” Time is a flat circle—meta descriptions are back.
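Since the snippet so often falls back to the meta description, auditing yours is a cheap win. A minimal sketch using only the standard library (the sample page is made up; in practice you would feed in fetched HTML):

```python
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Pull the content of <meta name="description"> out of an HTML page."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "description":
            self.description = a.get("content", "")

def meta_description(html):
    parser = MetaDescriptionParser()
    parser.feed(html)
    return parser.description  # None if the page has no meta description

page = ('<html><head>'
        '<meta name="description" content="Feral trash cat survival guide.">'
        '</head><body>...</body></html>')
print(meta_description(page))  # Feral trash cat survival guide.
```

Run it across your templates: pages returning `None`, or a truncated boilerplate string, are the ones where you have no say in what the snippet becomes.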
Performance matters for Google AI systems. Dan Taylor’s study found Core Web Vitals were the only factor influencing results. Track 499 errors on NGINX (client closed request)—that’s when ChatGPT gave up waiting for your server response during real-time retrieval.
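Spotting those 499s is another pass over the same access logs. A sketch, again assuming combined log format, that tallies which paths AI fetchers abandoned:

```python
import re
from collections import Counter

# In combined log format, the status code sits just after the quoted request.
REQ_RE = re.compile(r'"\S+ (?P<path>\S+)[^"]*" (?P<status>\d{3}) ')

def gave_up_waiting(lines):
    """Count NGINX 499s (client closed request) per path."""
    timeouts = Counter()
    for line in lines:
        m = REQ_RE.search(line)
        if m and m.group("status") == "499":
            timeouts[m.group("path")] += 1
    return timeouts

# Demo on a synthetic line; in practice, pass in open("access.log").
sample = ['8.8.4.4 - - [01/Jan/2025:00:00:01 +0000] '
          '"GET /slow-report HTTP/1.1" 499 0 "-" "ChatGPT-User/1.0"']
print(gave_up_waiting(sample))  # Counter({'/slow-report': 1})
```

Cross-reference the output with your slowest endpoints: a path that shows up here is one your real-time-retrieval visitors never saw cited.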
4. Be Feral
“You didn’t agree to be part of this corpus. None of us did.” We can be smart. We can share what we learn. We can share reproducible methodologies. “The way out of this is through, and the way through is together.”
Indigo pointed to resources from AI security researchers who’ve collected system prompts from major models. She’s processed these into documentation showing each model’s assertions, rules, constraints, stated facts, and functionalities. “Now you can know what is actually possible based on what’s available.”
My Takeaways
Indigo’s talk was the most skeptical of the conference—in the best way. While others focused on optimization tactics, she focused on epistemology: how do we actually know what’s true about these systems?
What I’m taking away:
1. Question everything AI companies say. The battlefield is being shaped. PR is not documentation. If there’s no citation, press X to doubt.
2. The bot agreement is broken. They’re not crawling politely, not always declaring themselves, and not necessarily sending traffic. Plan accordingly.
3. AI search is probabilistic, not retrieval. There is no index. Your brand has a “tendency to exist.” User embeddings create echo chambers.
4. Never trust AI’s self-reporting. It will fabricate explanations of its own behavior. The front end doesn’t know what the back end does.
5. Logs are ground truth. Bot watching reveals what’s actually happening versus what companies claim. Befriend your SREs.
6. Meta descriptions and performance matter again. The snippet often is the meta description. Core Web Vitals affect Google AI. Time-to-first-byte matters for real-time retrieval.
“Please be a feral trash cat. Share what you’ve learned.” That’s the spirit this moment requires.
