When a DTC brand comes to us with a "ROAS problem," the first thing we do is ignore the ROAS. Not because the number is fake. It's real. But ROAS on its own tells you almost nothing about whether you're making money. It tells you revenue per dollar of ad spend. That's it. It won't tell you what that revenue cost to fulfil, whether those customers will ever come back, or whether the conversions Meta is claiming actually happened because of your ads. Optimising a number you don't fully understand is how brands pour money into growth that loses them money.
This isn't some edge case we've seen once or twice. It's everywhere. And it's gotten worse because platforms keep making it easier to report high ROAS numbers. Broader attribution windows. View-through attribution. ML bidding strategies that optimise for whichever conversions the platform can most easily claim. So the real question isn't "what's your ROAS?" It's "does your ROAS actually mean you're profitable?"
Why ROAS became the default (and where it falls apart)
ROAS became the go-to metric in paid media for obvious reasons: it's simple, every platform calculates it, and you can explain it to a CFO in ten seconds. Revenue divided by ad spend. A 4x ROAS means four dollars back for every dollar in. Compared to CTR or CPM, it at least feels connected to actual business results.
But here's what trips people up: ROAS measures revenue, not profit. A brand with 30% gross margins running a 3x ROAS is losing money on every single sale. COGS plus fulfilment plus acquisition cost adds up to more than revenue. Meanwhile, a brand with 80% margins at 2x ROAS might be printing money. Same metric, completely different outcomes. The difference has nothing to do with ROAS. It's the economics underneath.
[Chart: Industry benchmarks — average ROAS by platform, typical range across verticals, 2024 data.]
There's another problem. ROAS only looks backward. It counts conversions that already happened, attributed to the last click or most recent ad interaction within whatever window the platform uses. It tells you nothing about whether those customers will come back. And that matters a lot. A brand acquiring one-and-done buyers is running a completely different business from one acquiring customers who purchase three or four times a year. Their ROAS might look identical. Their actual economics are worlds apart.
Three numbers that make ROAS actually useful
These metrics don't replace ROAS. They give it meaning. Without them, you're flying blind and feeling confident about it, which is worse than knowing you can't see.
1. Contribution Margin (per order and per channel). This is revenue minus every variable cost tied to a sale: COGS, payment processing, shipping, returns. It's what you actually keep before fixed overheads and marketing. Once you know your contribution margin %, you can calculate your breakeven ROAS. That's the minimum ROAS where you stop losing money on ad spend. For most DTC brands it lands somewhere between 1.5x and 3x. If you don't know yours, stop reading this article and go figure it out. Everything else depends on it.
Breakeven ROAS Formula
Breakeven ROAS = 1 ÷ Contribution Margin %
Example: if your contribution margin is 40%, your breakeven ROAS is 2.5x. Below that, you're literally paying to lose money.
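The arithmetic above can be sketched in a few lines. The cost figures here are illustrative only — plug in your own COGS, shipping, payment, and returns numbers:

```python
def contribution_margin_pct(price, cogs, shipping, payment_fees, returns_cost):
    """Contribution margin as a fraction of revenue: what you keep after variable costs."""
    margin = price - (cogs + shipping + payment_fees + returns_cost)
    return margin / price

def breakeven_roas(cm_pct):
    """Minimum ROAS at which ad spend stops losing money: 1 / contribution margin %."""
    return 1 / cm_pct

# Illustrative numbers only: a $50 order carrying $22 of variable costs.
cm = contribution_margin_pct(price=50.0, cogs=15.0, shipping=5.0,
                             payment_fees=1.5, returns_cost=0.5)
print(f"Contribution margin: {cm:.0%}")              # 56%
print(f"Breakeven ROAS: {breakeven_roas(cm):.2f}x")  # 1.79x
```

Any campaign returning less than that breakeven figure is destroying margin, regardless of how healthy the ROAS looks in isolation.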
2. LTV:CAC ratio. CAC is total marketing spend divided by new customers acquired. LTV is how much revenue you expect from a customer over their lifetime. Divide LTV by CAC and you get the number that tells you whether your acquisition economics actually work. 3:1 or better is healthy for most DTC brands. Below 2:1, you're probably spending more to acquire customers than they'll ever return.
This is where ROAS-only thinking does the most damage: businesses with repeat buyers. A 1.8x first-order ROAS looks terrible if that's all you're looking at. But if 40% of those customers come back for three or more purchases in the next year? That channel might be wildly profitable. ROAS says cut the budget. LTV:CAC says double it. They can't both be right, and ROAS is the one that's wrong.
3. Payback period. How long until the contribution margin from a customer covers what you spent to acquire them. Three-month payback means you're whole in 90 days. Twelve-month payback means you're floating that capital for a year. If you're cash-constrained, this number matters more than LTV:CAC. A great 5:1 ratio doesn't help if the payback takes 24 months and you run out of runway first.
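Both ratios fall out of the same three inputs. A minimal sketch, with made-up numbers standing in for your own cohort data:

```python
def cac(marketing_spend, new_customers):
    """Customer acquisition cost: total spend over new customers acquired."""
    return marketing_spend / new_customers

def ltv_cac_ratio(ltv, cac_value):
    """3:1 or better is healthy for most DTC brands; below 2:1 is a warning sign."""
    return ltv / cac_value

def payback_months(cac_value, monthly_contribution_margin):
    """Months until contribution margin per customer covers acquisition cost."""
    return cac_value / monthly_contribution_margin

# Illustrative: $30k of spend acquires 600 customers, each worth $300 over
# their lifetime and contributing $20/month in margin.
c = cac(30_000, 600)                                    # $50
print(f"CAC: ${c:.0f}")
print(f"LTV:CAC: {ltv_cac_ratio(300, c):.1f}:1")        # 6.0:1
print(f"Payback: {payback_months(c, 20):.1f} months")   # 2.5 months
```

Note that payback uses contribution margin, not revenue — counting full revenue makes the payback look faster than your cash flow will.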
How attribution windows inflate your numbers
Meta defaults to 7-day click, 1-day view-through attribution. In plain English: if someone sees your ad without clicking and buys within 24 hours, Meta claims that sale. Doesn't matter if the customer was already on your site, already had the product in their cart, already decided to buy. If they happened to scroll past your ad first, Meta takes credit.
The result is striking. Incrementality studies consistently find that 40-60% of conversions Meta claims credit for would have happened anyway. The customer was already shopping, already knew the brand, already had their wallet out. They just happened to scroll past your ad somewhere in the process, and attribution grabbed the credit. Your ROAS balloons. This isn't Meta being deliberately deceptive. It's just how last-touch and view-through attribution work when customers touch five different channels before buying anything. The system does exactly what it was built to do; it's just that what it was built to do doesn't tell you what you think it does.
Want the real number? Run an incrementality test. Hold back a segment of your audience from seeing ads for a set period, then compare conversion rates between the exposed and holdout groups. The gap is your actual incremental lift. When we run these tests for clients, real incremental conversions typically come in at 35-65% of what the platform reports. That gap between reported ROAS and true incremental ROAS is the number you should actually be making budget decisions on.
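The holdout arithmetic is straightforward once the test has run. A sketch with hypothetical group sizes and conversion counts:

```python
def incremental_lift(exposed_conversions, exposed_size,
                     holdout_conversions, holdout_size):
    """Incremental conversion rate: exposed group rate minus holdout (baseline) rate."""
    exposed_rate = exposed_conversions / exposed_size
    holdout_rate = holdout_conversions / holdout_size
    return exposed_rate - holdout_rate

def true_incremental_share(lift, exposed_size, platform_reported_conversions):
    """Fraction of platform-claimed conversions that were actually incremental."""
    incremental_conversions = lift * exposed_size
    return incremental_conversions / platform_reported_conversions

# Illustrative: 900k users saw ads (3.0% converted), 100k were held out (2.0%).
lift = incremental_lift(27_000, 900_000, 2_000, 100_000)
share = true_incremental_share(lift, 900_000, 18_000)
print(f"Incremental lift: {lift:.1%}")                       # 1.0%
print(f"Share of claimed conversions truly incremental: {share:.0%}")  # 50%
```

In this hypothetical, the platform claimed 18,000 conversions but only 9,000 were incremental — squarely in the 35-65% range the article describes. Scaling reported ROAS by that share gives you the incremental ROAS to budget against.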
"A great ROAS in Ads Manager and a declining bank balance. That's the situation nobody wants to explain to their board. Attribution claims credit. Your P&L tells the truth."
40–60% of the conversions Meta claims credit for were going to happen regardless. View-through attribution and existing demand do a lot of the heavy lifting. Your real incremental ROAS is almost always lower than what the dashboard shows.
Source: Independent incrementality research, Meta attribution studies
[Chart: Attribution Window Effect — same campaign, different reported ROAS under each window: 1-day click, 7-day click, 28-day click, 7-day click + 1-day view. Wider windows = bigger numbers. Same campaign, same results, different story.]
The metric we actually use to judge paid social
The number we care about most internally is Media Efficiency Ratio (MER). It's dead simple: total revenue divided by total ad spend across all channels. Not campaign-level revenue. The whole business's revenue for the period.
Media Efficiency Ratio
MER = Total Revenue ÷ Total Ad Spend (all channels)
MER doesn't try to attribute individual conversions to individual campaigns. It just asks: are we spending more on ads and getting proportionally more revenue?
MER is blunt. It won't tell you which channel to thank or which to blame. But it's honest in a way that platform ROAS can never be, because it sidesteps the whole attribution mess. MER trending up as you scale ad spend? Your paid media is working. MER trending down? You've hit diminishing returns, or the attribution numbers were lying to you and you're only now seeing it in actual revenue.
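Because MER ignores attribution entirely, the calculation is one division — the only discipline required is using whole-business revenue and all-channel spend, not campaign-level figures. A sketch with illustrative weekly numbers:

```python
def mer(total_revenue, total_ad_spend):
    """Media Efficiency Ratio: blended revenue per ad dollar across ALL channels.

    Both inputs must be business-wide totals for the same period —
    never campaign-level or single-platform figures.
    """
    return total_revenue / total_ad_spend

# Illustrative week: $400k total revenue against $100k total ad spend.
print(f"MER: {mer(400_000, 100_000):.1f}x")  # 4.0x
```

Tracked weekly, the trend matters more than the absolute level: rising MER as spend scales means paid media is genuinely working; falling MER means diminishing returns, whatever the platform dashboards claim.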
In practice: track MER weekly at the business level. Use platform ROAS for in-platform decisions like which creative to scale or which audiences to trim. Run incrementality tests quarterly to reality-check how much credit each platform actually deserves. Three different lenses on the same spend. No single number gets to mislead you.
What to actually optimise at each funnel stage
We see this constantly: brands holding every campaign to the same ROAS target regardless of what that campaign is actually doing. A cold prospecting campaign and a cart-abandonment retargeting campaign judged by the same number. That makes no sense. They're doing completely different jobs.
Top of funnel (prospecting, cold audiences): Care about CPM efficiency and creative performance, thumb-stop rate, hook rate, CTR. You're buying attention from people who've never heard of you. Direct ROAS is misleading here because new-to-brand customers take longer to convert. Track assisted conversions, sure, but don't kill your prospecting campaigns because their direct ROAS looks weak. That's how you end up with a retargeting-only strategy and a shrinking audience pool.
Middle of funnel (warm audiences, site visitors, video viewers): Now you care about CTR, landing page conversion rate, cost per add-to-cart. These people know you exist. Creative shifts from "here's who we are" to "here's what you get." ROAS starts mattering more here, but benchmark it against your breakeven number, not some arbitrary figure from a benchmarks report.
Bottom of funnel (cart abandoners, high-intent visitors, past purchasers): This is where ROAS matters most, and where platforms stack most of their reported numbers. These people were already close to buying. Your job is to close them efficiently without overspending on people who'd have converted anyway. Cap frequency. Nobody wants to see the same retargeting ad fifteen times; it's expensive, and it makes your brand look a bit desperate.
Retention and reactivation: Judge these on repeat purchase rate, LTV trajectory, and contribution margin per cohort, not ROAS. Existing customers have completely different economics from cold acquisition. Lumping them together in the same report distorts everything.
Optimization Framework
What to measure at each funnel stage
TOFU (Awareness) — goal: buy attention efficiently
MOFU (Consideration) — goal: drive consideration and intent
BOFU (Conversion) — goal: convert efficiently at scale
A reporting cadence that doesn't waste your time
Most paid social reporting is too granular, too frequent, and too disconnected from the decisions it's supposed to support. Checking ROAS every morning and reacting to daily swings is just adding noise to the signal. Monthly strategy reviews on campaigns that need a quarter of data to evaluate properly will miss what's actually happening.
Here's what we actually do: Daily, check spend pacing, delivery issues, and creative fatigue. If CPM is spiking, your audience is tired of seeing the same ad. That's about all you need to know day-to-day. Weekly, review MER vs. target, platform ROAS by campaign type, rank your creative. Monthly, look at LTV:CAC and payback period by acquisition cohort, contribution margin trends, whether your attribution calibration still holds. Quarterly, run incrementality tests on your biggest channel investments and reallocate budget based on what the tests show, not what the dashboards suggest.
Each cadence maps to a different kind of decision. Daily is operational: fix what's broken, pause what's bleeding. Weekly is tactical: move budget toward what's working, test new creative. Monthly is strategic: are the unit economics improving? Quarterly is about conviction: should you scale this channel, hold, or pull back?
So What Now
ROAS isn't useless. It's a starting point, a rough health check that tells you if a campaign is vaguely in the right neighbourhood. The mistake is treating it as a final answer when it's actually asking the wrong question. "Did we generate revenue relative to spend?" is very different from "Are we profitable, and are we acquiring customers who'll buy again?"
The brands that consistently make money from paid social are set up to answer the second question. They know their contribution margin per order. They track LTV:CAC by channel. They watch payback period by cohort, MER at the business level, and they run quarterly incrementality tests to stay honest with themselves. That's the difference between paid social that builds a real business and paid social that just generates impressive-looking screenshots.
If you're spending real money on paid social and the only number you can quote is your ROAS, you don't have enough information to make smart budget decisions. Fix that first. Everything else follows.