Tyler Gargula brought something different to Tech SEO Connect: a formal framework for making better decisions with data. Not just data-driven decisions—actually good decisions, using data the right way.
His presentation addressed a problem every SEO faces: What will actually move the needle? There are endless things we could do. How do we know we’re picking the right ones? And once we decide, how do we confirm it was the right choice?
Gargula’s answer is decision intelligence—a discipline for taking raw data and using it to answer specific questions: What happened? Why is it happening? How do we improve outcomes? The goal is arriving at the best options with all available data.
Is Your Organization Ready?
Before diving into the framework, Gargula offered questions to assess organizational readiness—useful for vetting potential clients:
What is their willingness to change? Can they pivot when data contradicts their assumptions? What is their current decision-making process—instinct, leadership consensus, or data? (“Probably largely the first two, unfortunately.”) Can their development team respond in a timely manner? How are they measuring decisions, and are they doing it accurately?
He cited a Gartner survey in which half of respondents said good decisions are defined by outcome, while the other half said they’re defined by process. Gargula’s take: not every decision leads to a good outcome, but a well-defined process always puts you in the best position to choose the best available option, even when the results don’t pan out.
The DECIDE Model
Gargula’s framework spells out DECIDE: Define, Extract, Clean, Integrate, Distribute, and Execute, with the Clean and Integrate steps together covering the transform stage. He walked through each step with practical examples.
D: Define Your Primary Goal
This is the most important step and where many SEOs get lazy. What does success look like for your analysis? Think of it as a function: inputs (data) go through a process and produce outputs (your goal).
Your goal should be: significant (enough data, right data), connected to business value (not vanity metrics), clear (you can tell a story about it), and actionable (it enables a decision).
The four common analysis types: Descriptive (what happened?), Diagnostic (why did it happen?), Prescriptive (what should we do?), and Predictive (what will happen?). Most SEO work falls into the first two.
Gargula was emphatic about bias: “Avoid that for real. Bias in, bias out.” He called out specific types: confirmation bias (seeking data that supports your hypothesis while ignoring contradicting evidence), selection bias (analyzing only successful or visible data—like exporting just 1,000 rows from GSC), outcome bias (redefining success criteria after seeing results), and recency bias (overweighting recent data while ignoring patterns).
“Cherry-picking harms our industry reputation,” he said. “Be honest. Use all available data.”
Example: For “crawled, not indexed” pages, the defined goal was analyzing correlations between content signals and indexation. The output: they discovered specific attributes that mattered—unique images vs. placeholders, stock status, availability—while reviews and specifications weren’t as strong.
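As a rough illustration of that kind of analysis, here is a minimal pandas sketch comparing indexation rates across content signals. The file name, column names, and signals are hypothetical stand-ins, not Gargula’s actual setup:

```python
import pandas as pd

# Hypothetical crawl export: one row per page, boolean content signals,
# plus a GSC coverage state (all column names are illustrative)
pages = pd.read_csv("crawl_with_coverage.csv")
pages["indexed"] = pages["coverage_state"].eq("Indexed")

# Indexation rate split by the presence or absence of each signal
for signal in ["has_unique_image", "in_stock", "has_reviews"]:
    print(signal, pages.groupby(signal)["indexed"].mean().to_dict())
```

Signals whose presence barely shifts the indexation rate (like reviews in Gargula’s example) drop out of the story; signals with a large gap become the attributes worth acting on.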
E: Extract Your Data
Extraction is the first stage of the ETL (Extract, Transform, Load) process from data science. Not all data sources are equal. Gargula’s tier list:
Top tier: BigQuery, CMS databases, log files, APIs (GSC, GA4).
Solid: SERPs, survey APIs, web scrapers, Ahrefs, SEMrush. These are great for research but less reliable for first-party traffic analysis.
Never use: the UIs of GA4 or GSC for data export. “If you are, you’ve got to stop right now.”
For large sites, you don’t need to analyze everything. Use random or stratified sampling to get a representative distribution without processing millions of pages.
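A minimal sketch of both sampling approaches in pandas, assuming a hypothetical URL inventory with a template label per page:

```python
import pandas as pd

# Hypothetical URL inventory with a template/section label per page
urls = pd.read_csv("url_inventory.csv")  # columns: url, template

# Simple random sample of the whole site
random_sample = urls.sample(n=10_000, random_state=42)

# Stratified sample: 5% of each template, so small but important
# sections are represented rather than drowned out by the largest one
stratified = (
    urls.groupby("template", group_keys=False)
        .apply(lambda g: g.sample(frac=0.05, random_state=42))
)
```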
C: Clean and Transform
Handle duplicates, nulls, and filtering. Enrich the data with categorization and labeling, and join multiple sources by URL, keyword, or date.
Gargula showed several visualization techniques for understanding transformed data: distributions (to identify outliers and understand skew), Lorenz curves (cumulative distributions for prioritized sampling—the 70th percentile of pages might account for 85% of traffic), percentiles (natural breakpoints), quartile bins (grouping by performance segment), and violin/box plots (visualizing spread and variance).
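To make the Lorenz-curve idea concrete, here is one way to compute the cumulative distribution behind it, assuming a hypothetical page-level clicks export:

```python
import numpy as np
import pandas as pd

# Hypothetical per-page clicks, sorted best-first
clicks = (
    pd.read_csv("page_clicks.csv")["clicks"]
      .sort_values(ascending=False)
      .reset_index(drop=True)
)

cum_traffic = clicks.cumsum() / clicks.sum()             # share of clicks
cum_pages = np.arange(1, len(clicks) + 1) / len(clicks)  # share of pages

# e.g. what share of traffic the top 30% of pages capture
cutoff = int(0.3 * len(clicks))
print(f"Top 30% of pages drive {cum_traffic.iloc[cutoff - 1]:.0%} of clicks")
```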
Data labeling is key: assign meaningful categories to unstructured data. Performance labels (click bands) can reveal patterns—like discovering that high-click pages were losing indexation while low-click pages were gaining it.
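A small sketch of click-band labeling with pandas; the bin edges and column names here are arbitrary examples, not the bands from the talk:

```python
import pandas as pd

# Hypothetical GSC export joined to indexation status per URL
df = pd.read_csv("gsc_pages.csv")  # columns: url, clicks, indexed

# Label pages into click bands (bin edges are illustrative)
df["click_band"] = pd.cut(
    df["clicks"],
    bins=[-1, 0, 10, 100, 1000, float("inf")],
    labels=["zero", "low", "mid", "high", "very high"],
)

# Indexation rate per band surfaces patterns like the one described above
print(df.groupby("click_band", observed=True)["indexed"].mean())
```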
On correlations, he advised caution: “Correlations are helpful for understanding relationships, but be skeptical. Just because there’s a strong correlation doesn’t equal causation.”
I: Integrate Multiple Sources
“You might be solving the wrong problem perfectly,” Gargula warned. Single data sources give incomplete pictures:
Search Console alone: “High impressions. Is it a priority?” GA4 alone: “This page has engagement. Is it valuable?” Revenue data alone: “It sells well. How is SEO involved?”
Integrated view: Combining GSC, GA4, and conversion data reveals actual priorities. One example showed roughly 70% of revenue coming from non-SEO sources, raising the question: could SEO play a larger role?
Priority scoring combines impact with business context. You can create weighted averages using variables like urgency, development capacity, and business value. Gargula has an open-source tool on his GitHub that takes a Screaming Frog crawl with GA4 and GSC data and generates impact scores.
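Gargula’s tool presumably works differently, but a weighted priority score can be sketched in a few lines of pandas. The variables and weights below are illustrative assumptions, not his scoring model:

```python
import pandas as pd

# Hypothetical joined dataset (crawl + GSC + GA4 + revenue), one row per URL
df = pd.read_csv("joined_pages.csv")

# Illustrative weights; dev effort is negative so costly work is penalized
weights = {"clicks": 0.3, "revenue": 0.4, "urgency": 0.2, "dev_effort": -0.1}

# Min-max normalize each variable to 0-1 so the weights are comparable
for col in weights:
    rng = df[col].max() - df[col].min()
    df[col + "_norm"] = (df[col] - df[col].min()) / rng if rng else 0.0

df["priority_score"] = sum(w * df[c + "_norm"] for c, w in weights.items())
print(df.sort_values("priority_score", ascending=False).head(10))
```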
D: Distribute Your Data
“Stakeholders are more likely to misinterpret your data,” Gargula said. Convert raw data into actual insights through post-processing.
He showed condition-based summaries that programmatically label patterns. Instead of showing complex data, summarize what’s happening: “76% of this issue is locales canonicalizing to non-locale pages.” Filter to show only the relevant subset: pages with negative click and position slopes become your content refresh priority list.
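A minimal sketch of a condition-based summary in pandas, assuming per-URL click and position slopes have already been computed and that a negative slope on either metric means performance is declining (column names are hypothetical):

```python
import pandas as pd

# Hypothetical trend table with precomputed slopes per URL
df = pd.read_csv("url_trends.csv")  # columns: url, click_slope, position_slope

def summarize(row):
    # Per the talk: negative click and position slopes flag refresh candidates
    if row["click_slope"] < 0 and row["position_slope"] < 0:
        return "Declining clicks and positions: content refresh candidate"
    if row["click_slope"] > 0:
        return "Clicks trending up"
    return "Stable"

df["summary"] = df.apply(summarize, axis=1)
refresh_list = df[df["summary"].str.startswith("Declining")]
```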
If you’re stuck using spreadsheets, always include a summary column. “Help those unfortunate souls.”
Effective communication requires: data as foundation, analytics know-how, storytelling ability, and domain expertise about the client’s business.
E: Execute
“If you don’t have next steps, what’s the point?” Your analysis needs to lead to something—sharing an opportunity, updating content, clarifying a concern, or validating a hypothesis.
Execution means: establish a benchmark, implement, annotate and measure for at least three months, then report on outcomes noting external factors (algorithm updates, market changes). Keep a running log of everything that happens on the site.
Practical Examples
Gargula showed several real analyses using this framework:
Traffic erosion analysis: 408,000 keywords grouped into 20-30 intent categories, then decay analysis run on each. Some intents were clearly more volatile than others—actionable insight for content prioritization.
Crawl budget optimization: Bing was crawling aggressively relative to Googlebot, but Google indexation was more valuable for this client. Applying a crawl delay shifted resources to Googlebot, improving indexation and brand exposure.
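For context, crawl delays are set in robots.txt, and the directive is asymmetric: Bingbot honors Crawl-delay while Googlebot ignores it, which is what makes this kind of rebalancing possible. A sketch, with an illustrative value:

```
User-agent: bingbot
Crawl-delay: 10
```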
Nav bar risk analysis: Before removing nav items, they analyzed engagement, link equity, and revenue attribution to identify high-risk vs. low-risk items. Clear visualization of which categories to protect and which could safely change.
Content decay: Period-over-period comparisons miss context. Gargula advocated for metric slopes, a truer representation of upward or downward trends than just two points in time.
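A minimal sketch of the difference, assuming a hypothetical weekly clicks series: a least-squares slope uses every point in the window, while a period-over-period comparison only sees the endpoints:

```python
import numpy as np
import pandas as pd

# Hypothetical weekly clicks for one page
weekly = pd.read_csv("weekly_clicks.csv")  # columns: week_index, clicks

# Least-squares trend across the whole window
slope, intercept = np.polyfit(weekly["week_index"], weekly["clicks"], deg=1)
print(f"Trend: {slope:+.1f} clicks/week")

# Period-over-period would only compare the two endpoints
pop_change = weekly["clicks"].iloc[-1] - weekly["clicks"].iloc[0]
```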
My Takeaways
Gargula’s talk was the most process-oriented of the conference. While others focused on what to optimize for AI search, he focused on how to make decisions about any SEO challenge.
What I’m implementing:
1. Define goals before touching data. What does success look like? Is it significant, business-connected, clear, and actionable? This prevents wasted analysis.
2. Acknowledge and avoid bias. Confirmation bias, selection bias, outcome bias, recency bias—we’re all susceptible. Be honest with all available data.
3. Respect data source quality. BigQuery and log files beat SEO tool exports. Never use GA4 or GSC UIs for serious analysis.
4. Integrate multiple sources. Single-source analysis often solves the wrong problem. GSC + GA4 + revenue data gives actual priorities.
5. Use metric slopes, not period comparisons. Month-over-month or year-over-year misses the context between those points. Slopes show true trajectory.
6. Every analysis needs next steps. If it doesn’t lead to a decision or action, what was the point?
The DECIDE framework isn’t glamorous, but it’s the difference between “I think this will work” and “the data shows this is our best option.” In an industry where so much is uncertain—especially now with AI changing everything—having a rigorous decision process matters more than ever.