A customer sees your Instagram ad on Monday. Clicks a Google Search ad on Wednesday. Opens your email on Friday. Buys on Saturday through a direct site visit. Which channel gets credit for the sale? Your Instagram team says awareness drove it. Your search team says the click proved intent. Your email team says they closed it. And your analytics shows a direct conversion. They're all partially right. They're all partially wrong. This is the attribution problem, and every model you choose to solve it is, at best, an informed compromise.
Attribution modeling is the system you use to assign conversion credit across the touchpoints in a customer's journey. The model you pick changes which channels look profitable, which look wasteful, and where your next dollar gets allocated. Get it wrong, and you'll over-invest in channels that take credit for demand they didn't create while under-investing in channels that actually drive growth.
Attribution Models Compared
| Model | How Credit Is Assigned | Best For | Biggest Weakness |
|---|---|---|---|
| Last-Click | 100% credit to the final touchpoint before conversion | Simple reporting, direct response campaigns, short purchase cycles | Ignores everything that happened before the final click. Over-credits brand search and retargeting. |
| First-Click | 100% credit to the first touchpoint that introduced the customer | Understanding which channels drive new customer discovery | Ignores all nurturing and closing touchpoints. Over-credits awareness channels. |
| Linear | Equal credit split across all touchpoints (4 touches = 25% each) | Getting a balanced view when you don't know which touchpoints matter most | Treats a random display impression the same as a high-intent search click. Not all touches are equal. |
| Time-Decay | More credit to touchpoints closer to conversion, less to earlier ones | Longer sales cycles where recent interactions are more influential | Under-values the awareness touchpoints that started the journey. Biases toward lower-funnel channels. |
| Position-Based (U-Shaped) | 40% to first touch, 40% to last touch, 20% split across middle touches | Valuing both discovery and closing while acknowledging the middle | The 40/40/20 split is arbitrary. Why not 30/30/40? The ratios don't reflect your actual customer journey. |
| Data-Driven (Algorithmic) | Machine learning analyzes all conversion paths and assigns credit based on statistical impact | Accounts with enough conversion volume (300+ per month) for the algorithm to learn | Requires significant data volume. Still operates within the walled garden of whichever platform runs it. Google's DDA favors Google touchpoints. |
| Marketing Mix Modeling (MMM) | Statistical regression across all marketing spend and external factors (seasonality, economy, weather) to estimate incremental impact | Enterprise brands with $1M+/month in marketing spend across many channels | Requires 2-3 years of historical data. Slow to update. Can't optimize in real-time. Expensive to build and maintain. |
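The rule-based models in the table are simple enough to express directly. Here is a minimal sketch of how each one splits credit across a single journey; the touchpoint names, the example journey, and the seven-day half-life for time-decay are illustrative assumptions, not values any platform prescribes.

```python
def assign_credit(touchpoints, model, half_life_days=7.0):
    """Return {channel: fraction of conversion credit}.

    touchpoints: list of (channel, days_before_conversion) tuples,
    ordered first touch -> last touch.
    """
    n = len(touchpoints)
    if model == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "first_click":
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "linear":
        weights = [1.0 / n] * n
    elif model == "time_decay":
        # Credit halves for every `half_life_days` further from conversion,
        # then normalizes so the weights sum to 1.
        raw = [0.5 ** (days / half_life_days) for _, days in touchpoints]
        total = sum(raw)
        weights = [w / total for w in raw]
    elif model == "position_based":
        # The 40/40/20 U-shape; arbitrary split, as noted in the table.
        if n == 1:
            weights = [1.0]
        elif n == 2:
            weights = [0.5, 0.5]
        else:
            middle = 0.2 / (n - 2)
            weights = [0.4] + [middle] * (n - 2) + [0.4]
    else:
        raise ValueError(f"unknown model: {model}")
    credit = {}
    for (channel, _), w in zip(touchpoints, weights):
        credit[channel] = credit.get(channel, 0.0) + w
    return credit

# The Monday-to-Saturday journey from the intro:
journey = [("instagram", 5), ("google_search", 3), ("email", 1), ("direct", 0)]
for model in ("last_click", "first_click", "linear", "position_based"):
    print(model, assign_credit(journey, model))
```

Running all five models over the same journey makes the table's tradeoffs concrete: last-click gives the direct visit everything, first-click gives Instagram everything, and position-based leaves the Google Search click with just 10% of the credit.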
Why Attribution Got Harder After 2021
Three forces have eroded the data that attribution models depend on:
Apple's ATT (App Tracking Transparency)
When Apple launched ATT in April 2021, the vast majority of iOS users opted out of cross-app tracking. Flurry Analytics reported opt-out rates around 85% in the months following launch. This broke Meta's pixel-based attribution, reduced the data available for cross-platform tracking, and forced platforms to rely on modeled conversions instead of observed ones. While platforms have invested heavily in statistical modeling to fill the data gap, significant blind spots remain, especially for mobile-heavy audiences.
Third-party cookie deprecation
While Google has delayed full cookie deprecation in Chrome multiple times, Safari and Firefox already block third-party cookies by default. This affects cross-site tracking that multi-touch attribution depends on. When a user moves from a display ad on a news site to your website, the cookie that would have connected those two visits is increasingly unavailable.
Walled garden fragmentation
Each platform (Google, Meta, Amazon, TikTok) runs its own attribution model within its own data. Google's data-driven attribution only sees Google touchpoints. Meta's attribution only sees Meta touchpoints. Neither sees the full customer journey. When both platforms claim credit for the same conversion, your spreadsheet says you sold twice as much as you actually did.
Choosing the Right Attribution Approach
There's no universally correct model. The right choice depends on your business stage, data volume, and what decisions you're trying to make.
- Under $50K/month ad spend: Platform-reported attribution with manual cross-referencing is practical. Use last-click as your baseline, but track first-click data separately to understand which channels drive discovery. Cross-reference platform-reported conversions with your actual revenue (Shopify, Stripe, CRM) to identify over-reporting.
- $50K-$250K/month ad spend: Invest in a cross-platform attribution tool. Triple Whale, Northbeam, Rockerbox, and Measured all offer multi-touch models that stitch together data across Google, Meta, TikTok, email, and direct. They cost $500-$2,000/month but pay for themselves by identifying misallocated spend.
- $250K+/month ad spend: Layer in incrementality testing and consider marketing mix modeling. Run geo-holdout tests (turn off a channel in specific regions and measure the sales impact). Use platform-reported data for day-to-day optimization and MMM for quarterly budget allocation decisions.
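The cross-referencing step recommended for the first tier can be sketched as a simple reconciliation: compare each platform's self-reported conversions with what your backend (Shopify, Stripe, CRM) attributes to that platform's traffic. All figures and thresholds below are invented for illustration.

```python
def reporting_ratio(platform_reported, backend_attributed):
    """> 1.0 means the platform over-reports; < 1.0 means it under-reports."""
    return platform_reported / backend_attributed

# Hypothetical monthly conversion counts.
reported = {"google": 450, "meta": 100, "amazon": 620}
backend = {"google": 380, "meta": 120, "amazon": 410}

for platform in reported:
    ratio = reporting_ratio(reported[platform], backend[platform])
    if ratio > 1.1:
        flag = "over-reporting"
    elif ratio < 0.9:
        flag = "under-reporting"
    else:
        flag = "roughly in line"
    print(f"{platform}: ratio {ratio:.2f} ({flag})")

# Walled-garden double counting shows up in the totals:
print(f"platforms claim {sum(reported.values())} conversions; "
      f"backend saw {sum(backend.values())}")
```

The totals line is the "sold twice as much as you actually did" problem in miniature: the platform claims sum to 1,170 conversions against 910 in the backend, because each walled garden credits itself for journeys the others also touched.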
Incrementality: The Attribution Upgrade
Attribution tells you which channels touched a conversion. Incrementality tells you which channels actually caused it. The difference matters enormously.
Example: Your branded search campaign shows a 12x ROAS. Impressive, right? But run an incrementality test (pause branded search in a test region and compare sales to a control region) and you might find that 80% of those searchers would have found you anyway through organic results. The true incremental ROAS might be 2.4x, not 12x. That changes how much you should spend on branded search.
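The branded-search arithmetic above reduces to two small calculations. This sketch assumes a well-matched test and control region and uses hypothetical sales figures; it scales reported ROAS by the fraction of attributed sales that actually disappeared when the channel was paused.

```python
def incremental_fraction_from_holdout(control_sales, holdout_sales):
    """Fraction of channel-attributed sales lost when the channel was
    paused in the holdout region (assumes regions are well matched)."""
    return (control_sales - holdout_sales) / control_sales

def incremental_roas(reported_roas, incremental_fraction):
    """Scale platform-reported ROAS down to the share of conversions
    the channel actually caused."""
    return reported_roas * incremental_fraction

# Hypothetical branded-search test: control region did $100K in
# attributed sales; the holdout region, with branded search paused,
# still did $80K, so only 20% of those sales were incremental.
frac = incremental_fraction_from_holdout(control_sales=100_000, holdout_sales=80_000)
print(f"incremental fraction: {frac:.0%}")
print(f"true ROAS: {incremental_roas(12.0, frac):.1f}x, not 12.0x")
```

With these numbers, the 12x reported ROAS collapses to 2.4x incremental, matching the example: the other 80% of buyers would have arrived through organic results anyway.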
The most common incrementality test methods are geo-holdout tests (pause spend in a region and compare), conversion lift studies (Meta and Google both offer these natively), and matched-market tests (compare similar markets with different spend levels). None are perfect, but any incrementality data is better than relying on platform-reported attribution alone.
Example: Fixing Budget Allocation with Better Attribution
A home goods brand was allocating 70% of their $120K/month ad budget to Amazon Sponsored Products because Amazon's last-click attribution showed a 6.2x ROAS — far above Meta's reported 2.1x. When they ran a geo-holdout incrementality test (paused Meta in three markets for six weeks), Amazon sales in those markets dropped 18%. Meta was creating demand that converted on Amazon, but Amazon got all the credit under last-click. They shifted to a position-based model for strategic allocation and moved 15% of their Amazon budget back to Meta. Within a quarter, blended ROAS across all platforms improved by 22%. This pattern is common. The average Google Shopping ROAS ranges from 5.0x to 6.5x (WordStream 2025), but that figure includes branded search that inflates efficiency. Without incrementality data, every platform's self-reported numbers look better than reality.
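"Blended ROAS," the metric the brand used to judge the reallocation, sidesteps the credit question entirely: total revenue over total spend across every platform. The figures below reconstruct only the starting point of the example (70% of $120K to Amazon at a reported 6.2x, the rest to Meta at 2.1x) and are illustrative.

```python
def blended_roas(revenue_by_channel, spend_by_channel):
    """Total revenue divided by total spend, ignoring per-channel attribution."""
    return sum(revenue_by_channel.values()) / sum(spend_by_channel.values())

# Before reallocation: $84K Amazon spend at 6.2x, $36K Meta spend at 2.1x.
spend = {"amazon": 84_000, "meta": 36_000}
revenue = {"amazon": 84_000 * 6.2, "meta": 36_000 * 2.1}
print(f"blended ROAS: {blended_roas(revenue, spend):.2f}x")
```

Because blended ROAS is computed from backend revenue rather than platform claims, it moves only when total sales actually move, which is why it is a safer yardstick for judging a budget shift than either platform's self-reported number.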
Frequently Asked Questions
Which attribution model is the most accurate?
No model is perfectly accurate. Data-driven attribution is the most sophisticated for within-platform decisions, but it still only sees its own platform's data. For cross-platform accuracy, combine a multi-touch tool with periodic incrementality tests. Use platform attribution for daily optimization and incrementality data for strategic budget allocation.
Should I trust Meta's reported conversions?
Meta's Conversions API (CAPI) and modeled conversions have improved significantly since the ATT launch. Most advertisers find that Meta under-reports by 15-30% rather than over-reports (a reversal from pre-ATT days). Cross-reference with your Shopify or CRM data. If Meta reports 100 conversions and your backend shows 120 from Meta-attributed traffic, the modeling is directionally accurate.
Is last-click attribution dead?
Google deprecated last-click as the default in Google Ads in 2023, replacing it with data-driven attribution. But last-click is still widely used as a baseline comparison and is the default in many analytics tools. It's not dead, but it shouldn't be your only model. Use it alongside first-click and time-decay views to get a fuller picture.
What is marketing mix modeling and do I need it?
MMM uses statistical regression to estimate the incremental impact of each marketing channel based on historical spend and revenue data. It accounts for external factors like seasonality and economic conditions. You probably need it if you spend $250K+/month across 5+ channels and need to make quarterly budget allocation decisions. For smaller budgets, cross-platform attribution tools and periodic lift tests are more practical.