I spent 15 years building incrementality measurement systems for enterprise advertisers.
I worked on TV attribution, consulted with ad platforms on measurement studies, and led analytics teams where Marketing Mix Modeling was table stakes — not a premium add-on.
Then I became a 7-figure Amazon seller.
And I expected the same measurement rigor I'd spent my career building.
I found a gap.
Not because anyone did anything wrong. But because the economics of tool-building and the reality of operator needs had diverged.
Here's what I discovered: The channel that represented 40% of my total revenue — my single biggest growth driver — had fundamentally different measurement infrastructure than the traditional advertising channels I'd spent my career working on.
SADDL was born from understanding why that gap exists — and who it hurts most.
1. The Standards I Took for Granted
In my previous world, these weren't controversial statements — they were simply how measurement worked:
Correlation ≠ Causation
If sales went up after a TV campaign, we didn't just assume the campaign caused it. We ran incrementality studies with geo-based holdout groups to measure true lift.
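Here's a minimal sketch of that idea in Python, with invented numbers: compare sales growth in geos that saw the campaign against matched holdout geos that didn't, and credit the campaign only for the difference.

```python
# Hypothetical weekly sales during the campaign, by geo (illustration only).
# "Exposed" geos saw the TV campaign; "holdout" geos were withheld from it.
exposed = {"atlanta": 1180, "denver": 1315, "phoenix": 1240, "tampa": 1125}
holdout = {"austin": 1020, "portland": 1090, "raleigh": 985, "omaha": 1060}

# Pre-campaign sales for the same geos, to normalize for market size.
baseline = {"atlanta": 1000, "denver": 1100, "phoenix": 1050, "tampa": 980,
            "austin": 1010, "portland": 1080, "raleigh": 990, "omaha": 1045}

def avg_growth(geos):
    """Average growth vs. each geo's own pre-campaign baseline."""
    return sum(geos[g] / baseline[g] - 1 for g in geos) / len(geos)

lift = avg_growth(exposed) - avg_growth(holdout)
print(f"Exposed growth:   {avg_growth(exposed):+.1%}")
print(f"Holdout growth:   {avg_growth(holdout):+.1%}")  # the market's baseline
print(f"Incremental lift: {lift:+.1%}")  # only this part is the campaign's
```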
Counterfactual Analysis is Standard Practice
Before claiming 'our optimization worked,' you had to answer: 'What would have happened if we did nothing?' That's not philosophy — that's basic scientific method.
Multi-Touch Attribution Requires Multi-Horizon Windows
We didn't just look at last-click. We measured 14-day, 30-day, 60-day windows to understand how different touchpoints contributed over time.
Marketing Mix Modeling Separates Signal from Noise
MMM decomposed results into: baseline trends, seasonality, competitive effects, external shocks, and YOUR actual impact. You couldn't claim credit for market forces.
These were the basics. The price of admission for serious advertising measurement.
Clients spending $5M on TV got this level of rigor automatically. So did clients spending $500K on display.
So when I started running Amazon PPC at scale — spending $250K+ annually, driving over $1M in revenue, representing 40% of my business — I naturally looked for the same standards.
What I found was something different. And once I understood why, it made perfect sense.
2. The Gap I Found — And Why It Existed
Here's what I found in the Amazon PPC tool landscape:
- ✅ Bid optimization algorithms (excellent)
- ✅ Keyword research tools (sophisticated)
- ✅ Campaign automation (powerful)
- ✅ Workflow efficiency (genuinely helpful)
- ❌ Incrementality testing (largely absent)
- ❌ Counterfactual analysis (not common)
- ❌ Multi-horizon attribution (7-day last-click is typical)
- ❌ Statistical significance testing at decision level (rare)
The tools weren't designed to answer causal questions. And honestly? That makes sense given how the industry evolved.
The Economics Problem:
When I was building measurement systems for TV and display advertising:
- Single clients spending $5M-$50M annually
- Agency teams of 10-20 managing each account
- Clients paying $500K+ for measurement platforms
- Clear ROI: 1% improvement on $20M = $200K value
Amazon PPC is fundamentally different:
- 100,000 sellers spending $50K-$500K each (vs. 1,000 brands spending $10M)
- Individual operators or 2-3 person teams
- Tool budgets of $50-$500/month
- Need operational efficiency first
From a tool builder's perspective: Investing $500K+ in sophisticated incrementality infrastructure for customers paying $200/month doesn't work — especially when their burning problem is "How do I manage 500 keywords efficiently?"
So the market evolved logically:
- Enterprise ($5M+ PPC) → White-glove managed services with custom measurement
- Small sellers (<$50K) → Efficient, affordable automation
- Mid-market operators ($250K-$1M PPC) → Fall into the gap
Here's where it gets personal:
For a tool builder, a seller spending $250K annually on PPC might not justify custom measurement infrastructure.
But for an operator spending $250K on PPC?
That typically represents:
- $1M - $1.25M in PPC-attributed revenue (based on a 4-5x ROAS, common for established sellers)
- Often 40-50% of total business revenue for mid-market operators
- Their single largest controllable growth lever and expense
When I was at the agency, we'd deploy full MMM for brands spending $2M on TV — representing maybe 5-10% of their revenue.
As an Amazon operator, I was spending $250K on PPC — representing 40% of my business — with significantly less measurement rigor.
Not because of ignorance. Because of market structure.
Here's what that means in practice:
You make an optimization: restructure campaigns, adjust 200 keywords, reallocate budget.
Two months later:
- Revenue up 8%
- ACoS down 3 points
- Organic rank improved
Your tool says: "Great job!"
What you actually need to know:
- How much was your optimization vs. external factors?
- Did a competitor stock out?
- Was there seasonal lift?
- Did Amazon's algorithm change?
- Would revenue have grown 6% anyway?
Why it matters:
Without answers, you might:
- Scale spending on false positives (expensive)
- Abandon strategies that worked (missed opportunity)
- Make million-dollar decisions on correlation, not causation
3. Why This Matters Now More Than Ever
The Amazon PPC landscape is hitting an inflection point.
The competitive reality has fundamentally shifted.
Amazon's seller base grows daily. What was a land grab five years ago is now trench warfare. More sellers means:
- Relentless price compression — someone's always willing to go lower
- Shrinking margins — the race to the bottom is real
- Increasing PPC dependency — organic visibility gets harder as competition multiplies
Here's what most sellers miss: PPC isn't just a growth strategy anymore. It's a defense strategy.
You're not just using PPC to acquire new customers. You're using it to:
- Defend your market share against new entrants bidding on your keywords
- Protect your margins by maintaining visibility without having to drop prices
- Hold your volume when competitors undercut you on price
In this environment, PPC measurement isn't about maximizing growth. It's about not bleeding out.
When someone's undercutting your price by 15%, your PPC efficiency is what determines whether you maintain volume or spiral downward. When five new competitors launch in your category this month, your ability to optimize effectively is the difference between holding position and losing ground.
You can't afford to waste 30% of your PPC budget on false positives. That's not leaving money on the table — that's actively funding your decline.
Plus, the operational stakes just got higher:
- CPCs rising 15-30% YoY in many categories
- Margins compressing from competition and fees
- Attribution and effect windows lengthening (Amazon's auction learning algorithms can take 14-30 days to settle)
- Auction complexity increasing with algorithm updates
The cost of being wrong:
For a seller spending $250K on PPC:
- A single wrong strategic decision (scaling something that didn't work) can cost $50K-$100K
- Confusing market lift with optimization impact leads to over-investment
- Missing what actually works means leaving money on the table
"According to a 2025 Teikametrics report, only 18% of sellers accurately track full-funnel attribution."
The other 82% are making high-stakes decisions based on correlation, hoping it's causation.
4. What Bringing Measurement Rigor to Mid-Market PPC Actually Looks Like
SADDL isn't revolutionary. It's applied marketing science — the same principles I spent 15 years building for TV, display, and brand campaigns.
The difference: We're bringing these standards to the mid-market Amazon PPC segment that's been underserved.
Here's what that means:
Counterfactual Analysis
For every optimization, we model 'What would have happened if you did nothing?' using time-series forecasting and statistical controls. This separates your impact from baseline trends.
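As a toy illustration of the principle (invented weekly numbers, and far simpler than a production forecaster): fit the pre-change trend, project it forward, and count only the gap between actuals and that projection as your impact.

```python
pre  = [100, 103, 101, 106, 108, 107, 111, 113]  # weekly sales before the change
post = [118, 121, 125, 124]                      # weekly sales after the change

# Ordinary least squares for a linear trend y = a + b*t on the pre-period.
n = len(pre)
t_mean = (n - 1) / 2
y_mean = sum(pre) / n
b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(pre)) \
    / sum((t - t_mean) ** 2 for t in range(n))
a = y_mean - b * t_mean

# Counterfactual: what the pre-change trend alone predicts for the post weeks.
counterfactual = [a + b * (n + k) for k in range(len(post))]
incremental = [actual - cf for actual, cf in zip(post, counterfactual)]

print("If you'd done nothing:", [round(x, 1) for x in counterfactual])
print("Incremental per week: ", [round(x, 1) for x in incremental])
```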
Multi-Horizon Measurement
We track impact across 14-day, 30-day, and 60-day windows to account for Amazon's auction learning algorithms and delayed attribution.
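In sketch form, with fabricated daily estimates, that just means accumulating incremental sales at each horizon; a 14-day read alone often understates an effect that is still building.

```python
from datetime import date, timedelta

# Hypothetical daily incremental sales (actual minus modeled baseline) after an
# optimization on day 0. The effect grows as Amazon's auctions re-learn.
start = date(2025, 3, 1)
daily_incremental = {start + timedelta(days=d): 10 + 0.5 * d for d in range(60)}

def window_lift(days):
    """Cumulative incremental sales within `days` of the optimization."""
    cutoff = start + timedelta(days=days)
    return sum(v for day, v in daily_incremental.items() if day < cutoff)

for horizon in (14, 30, 60):
    print(f"{horizon:>2}-day window: {window_lift(horizon):,.1f} incremental units")
```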
Market Decomposition
We separate your optimization effects from: baseline trends, seasonality, competitive dynamics, external shocks, and Amazon algorithm changes.
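A stripped-down illustration of the idea, on synthetic data (real decomposition models carry many more factors): regress revenue on trend, seasonality, and an indicator for your change, and read your impact off its coefficient.

```python
import numpy as np

# Synthetic weekly revenue = baseline + trend + seasonality + your change + noise.
rng = np.random.default_rng(0)
weeks = np.arange(52)
change = (weeks >= 40).astype(float)              # optimization shipped week 40
revenue = (1000 + 0.8 * weeks                     # baseline and trend
           + 30 * np.sin(2 * np.pi * weeks / 52)  # seasonality
           + 25 * change                          # the true impact to recover
           + rng.normal(0, 8, 52))                # noise

# Design matrix: intercept, trend, seasonality, intervention indicator.
X = np.column_stack([np.ones(52), weeks, np.sin(2 * np.pi * weeks / 52), change])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)

print(f"Estimated weekly impact of the change: {coef[3]:.1f} (true value: 25)")
```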
Decision-Level Statistical Significance
We don't claim 'wins' based on random variation. Every major decision gets a significance test. Sometimes the best insight is "not enough data yet — wait another week."
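One simple way to run that check is a permutation test on daily sales before and after a change (illustrative numbers; any standard significance test serves the same purpose):

```python
import random

# Did daily sales truly shift after the change, or is the gap within normal
# day-to-day noise? All numbers are made up for illustration.
before = [98, 104, 101, 99, 107, 103, 100, 105, 102, 99, 106, 101, 103, 100]
after  = [108, 103, 111, 106, 109, 104, 112]

observed = sum(after) / len(after) - sum(before) / len(before)

# Shuffle the before/after labels many times and count how often chance alone
# produces a gap at least as large as the one we observed.
pooled = before + after
random.seed(42)
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    fake_after, fake_before = pooled[:len(after)], pooled[len(after):]
    diff = sum(fake_after) / len(fake_after) - sum(fake_before) / len(fake_before)
    if diff >= observed:
        extreme += 1

print(f"Observed lift: {observed:.1f} units/day, p = {extreme / trials:.3f}")
# A large p-value means "not enough data yet": wait before scaling the change.
```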
Revenue Protected
We measure impact even when revenue drops — because preventing a bigger decline is often the real win during market downturns.
This is table-stakes measurement in traditional advertising. It should be accessible to mid-market Amazon operators too.
Let me be concrete about ROI:
Scenario: You're spending $20K/month on PPC ($240K annually)
Without incrementality measurement:
- You make 50 optimization decisions per quarter
- You assume ~70% were wins based on dashboard metrics
- You scale spending on perceived winners
- Reality: Maybe 40% actually drove incremental value
- The other 30% were false positives
Cost of false positives: $50K-$100K/year
With incrementality measurement:
- Same 50 optimization decisions
- You know which 20 actually drove incremental lift
- You scale only on proven winners
ROI on measurement: 20-40x the tool cost
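Running those numbers through a quick back-of-envelope check (every input is an assumption from the scenario above, and the tool cost is a placeholder, not a price):

```python
# Back-of-envelope on the scenario above; every number is an assumption.
decisions_per_quarter = 50
perceived_win_rate = 0.70     # what dashboard metrics suggest
true_win_rate = 0.40          # what incrementality testing reveals

false_positives = decisions_per_quarter * (perceived_win_rate - true_win_rate)
print(f"False positives per quarter: {false_positives:.0f} of {decisions_per_quarter}")

wasted_low, wasted_high = 50_000, 100_000  # the $50K-$100K/year range above
tool_cost = 2_500                          # hypothetical annual tool cost
print(f"Measurement ROI: {wasted_low / tool_cost:.0f}x to {wasted_high / tool_cost:.0f}x")
```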
For an operator spending $250K annually on PPC, sophisticated measurement isn't a luxury. It's one of the highest-ROI investments you can make.
5. The Mission: Solving for the Underserved Middle
SADDL exists because I lived both worlds.
I spent 15 years building incrementality measurement for enterprise advertisers with massive budgets.
Then I became a mid-market Amazon operator and discovered: the segment that needs this most has the least access to it.
Not because of ignorance. Because of market economics.
But market economics are changing:
The mid-market segment is now:
- Large enough to matter (tens of thousands of sellers in the $250K-$1M PPC range)
- Sophisticated enough to value it (operators asking causal questions)
- Competitive enough to need it (margins don't allow for 30% waste)
That's the gap we're closing.
We're not inventing new science. We're making existing measurement standards accessible and affordable for the operators who've been stuck in the middle.
We're bringing the rigor that $5M TV campaigns get to the $250K PPC budgets that often matter more to business outcomes.
Because if you're spending $20K/month on PPC — representing 40% of your business, defending against daily price wars — you deserve better than 'trust the dashboard and hope for the best.'
You deserve decision intelligence, not just automation.
You deserve to know which of your optimizations actually moved the needle.
You deserve to separate your impact from market noise.
You deserve the same measurement standards that enterprise brands take for granted.
That's what SADDL delivers.
Closing: An Invitation
If you're a mid-market operator spending $250K+ annually on PPC:
You're in the gap. Your budget is too large for basic automation to be enough, but the market hasn't prioritized building the measurement layer you need.
You're making decisions worth hundreds of thousands of dollars based on correlation, not causation.
You're probably wondering: "Did my optimization actually work, or did something else change?"
You're asking the right question. Now there's finally a tool built to answer it.
Want to see what real incrementality measurement looks like for your PPC?
We'll walk you through the Impact Dashboard and show you exactly which of your past optimizations actually moved the needle vs. which rode external trends.
Book a Demo