Prism (Fospha × Smartly): Automating Cross-Channel Optimisation with Full-Funnel Measurement
How Fospha's daily MMM (marketing mix model) feeds into Smartly's Predictive Budget Allocation to automate cross-channel spend decisions, and how to design, run, and measure your first pilot test.
1. Overview: What the Integration Is & the Value It Delivers
Prism brings together Fospha’s daily, full-funnel measurement with Smartly.io’s Predictive Budget Allocation (PBA) — automating budget shifts across channels, campaigns, and ad sets to drive incremental impact every day.
It’s the only daily MMM integrated directly with Smartly’s PBA — combining scientific measurement accuracy with operational automation.
Why It Matters
Modern marketing teams face three consistent challenges:
- Too many decisions, not enough hours: high-value optimisation opportunities get missed.
- Siloed or biased data: platform-reported metrics exaggerate performance in their own ecosystem.
- Unstable signals: frequent resets in platform learning create inconsistent outcomes.
Prism solves these by feeding Fospha’s unbiased, model-based data into Smartly’s optimisation engine, so spend shifts automatically toward the campaigns that actually drive new customers and incremental revenue.
In short:
- More growth, less lift: PBA executes the “fifth task on your list” every day.
- Smarter budgets: spend moves toward campaigns proven to drive true business results.
- Daily confidence: MMM-level accuracy refreshed every day for stable, cross-channel guidance.
2. How It Works: The Closed Feedback Loop
Prism operates as a 4-step continuous loop between Fospha and Smartly:
1. Fospha provides the data. Fospha shares 14 days of attributed performance data (conversions, new-customer conversions, and revenue) mapped to campaign and ad IDs through automated matching. Source: a daily S2S feed directly from Fospha (an illustrative payload is sketched after this list).
2. Smartly ingests the data. A secure server-to-server (S2S) API transfers Fospha data into Smartly’s dashboard as a new optimisation signal for PBA.
3. Smartly reallocates budget. Smartly’s Predictive Budget Allocation automatically redistributes spend across campaigns and channels based on Fospha’s MMM-led performance data, optimising toward your chosen KPI (ROAS, revenue, conversions, or new-customer ROAS).
4. Fospha validates the result. Fospha measures the incremental effect of those budget changes and reports Test vs Control outcomes, creating a closed feedback loop that strengthens over time.
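To make step 1 concrete, the sketch below mocks up what one record in the daily S2S feed might look like. The field names and example values are assumptions for illustration, not Fospha's published schema; only the metrics named above (conversions, new-customer conversions, revenue) and the campaign/ad ID mapping come from the integration description.

```python
import json
from datetime import date, timedelta

# Hypothetical record shape for the daily Fospha -> Smartly S2S feed.
# Field names are illustrative; the real schema is defined by the
# integration, not by this sketch.
def build_feed_window(end_date: date, days: int = 14) -> list:
    """Builds the 14-day window of attributed performance described
    in step 1, one record per (date, campaign, ad)."""
    records = []
    for offset in range(days):
        day = end_date - timedelta(days=offset)
        records.append({
            "date": day.isoformat(),
            "campaign_id": "1234567890",    # matched automatically by Fospha
            "ad_id": "9876543210",          # matched automatically by Fospha
            "conversions": 41,              # example values only
            "new_customer_conversions": 18,
            "revenue": 3120.50,
            "currency": "EUR",
        })
    return records

if __name__ == "__main__":
    # Print the two most recent records of the window.
    print(json.dumps(build_feed_window(date.today())[:2], indent=2))
```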
3. Best Practice: Identifying the Right Test Opportunity & Guardrails
To ensure Prism delivers consistent, measurable value, start with well-defined pilot tests that balance volume, stability, and learning potential.
How to Spot a Strong Test Opportunity
- Start with conversion-focused campaigns.
  - Immediate results are most visible in conversion-driven activity (e.g. evergreen prospecting or DPA (Dynamic Product Ads) campaigns).
  - Avoid mixing funnel stages (Awareness, Consideration, Conversion) in one pool; PBA works best when all campaigns share a single optimisation objective.
- Ensure sufficient volume and headroom.
  - Choose campaigns with enough conversions to generate a stable learning signal: a daily pool budget of roughly 5–10× the expected CPA (a worked sizing example follows this list).
  - Use Fospha’s Incremental Forecasting outputs to identify campaigns with untapped growth potential.
- Exclude your very best performers.
  - Don’t risk early downside on top performers; start with campaigns that are stable but not mission-critical.
- Structure pools around markets or clusters.
  - For country tests, keep each market isolated (e.g. Germany as a standalone pool) or use clusters of comparable markets (e.g. Nordics, Benelux) to learn allocation across regions with similar dynamics.
  - Avoid mixing markets with radically different baseline CVR/ROAS, as PBA will over-weight the cheaper ones.
- Explore cross-platform where relevant.
  - Single-market, cross-platform pools (e.g. Germany across Meta, TikTok, and Pinterest) are high-value tests for identifying the optimal channel mix.
  - Start with one platform if you want a controlled pilot, then expand to cross-platform once you’ve validated results.
- Choose the right KPI.
  - Decide upfront which Fospha KPI to optimise toward: Conversions, Revenue, ROAS, or New-Customer ROAS.
  - One KPI per pool keeps optimisation clear; at most two can be combined (e.g. ROAS + Revenue for value-based outcomes).
- Align with org structure and ownership.
  - Pools should reflect budget ownership in the team (e.g. by channel, market, or region).
  - Assign a clear budget owner for each pool to manage alignment with business goals.
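Here is a worked example of the sizing rule above. The CPA and budget figures are invented for illustration only.

```python
# Sizing rule: daily pool budget should be roughly 5-10x the expected CPA,
# so the pool generates enough conversions for a stable learning signal.
expected_daily_cpa = 40.0                      # assumed blended CPA, e.g. EUR 40
min_daily_budget = 5 * expected_daily_cpa      # 200: bare minimum
target_daily_budget = 10 * expected_daily_cpa  # 400: comfortable target

proposed_daily_budget = 350.0
expected_daily_conversions = proposed_daily_budget / expected_daily_cpa
print(f"~{expected_daily_conversions:.1f} conversions/day expected")
print("Budget OK" if proposed_daily_budget >= min_daily_budget
      else "Budget too small for stable learning")
```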
Examples of Strong Test Candidates
- Always-on conversion campaigns that today only get monthly adjustments but could benefit from continuous optimisation.
- Cross-platform spend in a single high-value market (e.g. Germany across Meta/TikTok/Pinterest) to test channel allocation.
- Clusters of mid-performing regions (e.g. Nordics, Benelux) where manual optimisation is minimal and PBA can surface hidden opportunities.
- Underperforming or overlooked regions where teams lack time to actively optimise, but Fospha signals can guide reallocation.
Guardrails to Configure

| Parameter | Recommended Setting | Why It Matters |
| --- | --- | --- |
| Budget Mode | Fixed Daily | Safest first step; stable total spend |
| Min/Max Caps | 10% min / 60% max | Prevents starvation or over-concentration |
| Exploration Budget | 10% | Allows testing new allocations safely |
| Optimisation Cadence | Meta/TikTok: every 2 days; Pinterest/Snap: every 3–4 days | Avoid manual edits between cycles |
| Separate Pools | Prospecting / Retargeting / App Installs | Keeps funnel stages independent |
| KPI per Pool | ROAS, Revenue, Conversions, or New-Customer ROAS | Ensures a clear optimisation goal |
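One way to keep these settings reviewable is to hold them in a small, versioned config on your side. The structure below is a sketch of that idea with hypothetical key names; it is not Smartly's API or configuration format.

```python
# Illustrative, team-side representation of the guardrails table above.
# Keys are invented for this sketch; they mirror the recommended settings
# so they can be reviewed and diffed like code.
PILOT_POOL_GUARDRAILS = {
    "budget_mode": "fixed_daily",       # safest first step; stable total spend
    "min_allocation_share": 0.10,       # no campaign starved below 10%
    "max_allocation_share": 0.60,       # no campaign takes more than 60%
    "exploration_budget_share": 0.10,   # 10% reserved for testing new allocations
    "optimisation_cadence_days": {
        "meta": 2,
        "tiktok": 2,
        "pinterest": 3,
        "snap": 4,
    },
    "kpi": "new_customer_roas",         # exactly one KPI per pool
}
```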
Avoid these pitfalls:
- Editing budgets manually during the first 7 days (resets learning)
- Mixing prospecting & retargeting in one pool
- No caps or inconsistent objectives
- Changing KPI mid-test (start a new 4-week test instead)
4. Designing the Pilot Test
4.1 Picking the Right Campaign to Test
- Start with conversion-focused, always-on campaigns.
- Ensure campaigns share a common KPI (conversions, revenue, or ROAS).
- For country tests, split ad sets so that each ad set maps to a single country.
- For channel tests, group campaigns across Meta/TikTok/Pinterest.
- Keep creative comparable; limit to 2–3 creative themes so PBA is not confounded by asset mix.
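If you maintain a campaign inventory, the selection rules above can be expressed as a simple filter. The sketch below assumes a hypothetical Campaign record; the attribute names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    # Hypothetical campaign record; attribute names are invented.
    name: str
    objective: str        # "conversion", "awareness", "consideration", ...
    always_on: bool
    kpi: str              # "conversions", "revenue", or "roas"
    creative_themes: int  # number of distinct creative themes

def eligible_for_pilot(c: Campaign, pool_kpi: str) -> bool:
    """Applies the 4.1 selection rules: conversion-focused, always-on,
    shared KPI, and a limited creative mix (at most 3 themes)."""
    return (
        c.objective == "conversion"
        and c.always_on
        and c.kpi == pool_kpi
        and c.creative_themes <= 3
    )

# Example: keep only campaigns that fit a ROAS-optimised pool.
inventory = [
    Campaign("DE prospecting evergreen", "conversion", True, "roas", 3),
    Campaign("DE brand awareness", "awareness", True, "roas", 5),
]
pool = [c for c in inventory if eligible_for_pilot(c, "roas")]
print([c.name for c in pool])  # -> ['DE prospecting evergreen']
```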
4.2 Budget Pool Setup Best Practice

| Setting | Recommendation | Why |
| --- | --- | --- |
| Pool budget size | ≥ 5–10× expected daily CPA | Ensures enough signal for the 7-day learning phase |
| Min/max allocation | Min ≥ 10%, max ≤ 60% per channel/campaign | Prevents starvation or over-concentration |
| Exploration budget | Start at 10% | Allows PBA to test new allocations without risk |
| Campaign structure | Disable Meta CBO when pooling ad sets | Gives PBA budget control at the pool level |
| Objective buckets | Separate Prospecting, Retargeting, App Installs | Prevents PBA over-optimising into remarketing |
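This table also lends itself to an automated pre-flight check. The sketch below assumes a pool described as a plain dict with invented key names (extending the guardrails sketch in section 3) and returns warnings rather than enforcing anything.

```python
def validate_pool(pool: dict) -> list:
    """Checks a proposed pool against the 4.2 recommendations.
    Key names are assumptions for this sketch, not a real API."""
    warnings = []
    if pool["daily_budget"] < 5 * pool["expected_daily_cpa"]:
        warnings.append("Budget < 5x expected daily CPA: learning may stall.")
    if pool["min_allocation_share"] < 0.10:
        warnings.append("Min allocation below 10%: risk of starving campaigns.")
    if pool["max_allocation_share"] > 0.60:
        warnings.append("Max allocation above 60%: risk of over-concentration.")
    if pool.get("meta_cbo_enabled", False):
        warnings.append("Disable Meta CBO so PBA controls budget at pool level.")
    if len(set(pool["objectives"])) > 1:
        warnings.append("Mixed objectives: separate prospecting, retargeting, app installs.")
    return warnings

example = {
    "daily_budget": 350.0, "expected_daily_cpa": 40.0,
    "min_allocation_share": 0.10, "max_allocation_share": 0.60,
    "meta_cbo_enabled": True, "objectives": ["prospecting"],
}
print(validate_pool(example))  # -> one warning about Meta CBO
```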
4.3 Timeline
| Phase | Duration | Key Actions |
| --- | --- | --- |
| Warm-up | Days 1–7 | PBA learns variance; avoid manual budget edits |
| Stabilisation | Weeks 2–3 | Monitor Smartly Budget Movement logs and Fospha KPIs |
| Evaluation | Weeks 4–5 | Compare test vs control period-over-period (PoP) in Fospha |
| Scale/Pause | Weeks 6–8 | If results show ≥10% uplift, scale to more markets/channels; if not, refine guardrails or extend the test |
Expect PBA to adjust budgets every ~2 days on Meta/TikTok and every 3–4 days on Pinterest/Snap.
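To turn the phase table into concrete calendar dates for your own pilot, a small helper like the sketch below can be used; the start date is an arbitrary example.

```python
from datetime import date, timedelta

# Phase durations taken from the timeline table above.
PHASES = [("Warm-up", 7), ("Stabilisation", 14),
          ("Evaluation", 14), ("Scale/Pause", 21)]

def schedule(start: date) -> None:
    """Prints the start and end date of each pilot phase."""
    cursor = start
    for name, days in PHASES:
        end = cursor + timedelta(days=days - 1)
        print(f"{name:<13} {cursor.isoformat()} -> {end.isoformat()}")
        cursor = end + timedelta(days=1)

schedule(date(2025, 1, 6))  # example start date
```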
5. Measuring Test Results
- Control design: run a comparable campaign/market outside the pool on platform-default optimisation.
- Compare Test vs Control in Fospha:
  - Use PoP to track CPA, ROAS, and new-customer share.
  - Validate whether budget shifts matched incremental outcomes.
- Directional validation: look for divergence in trend (e.g. falling CPA in the test group vs a flat control).
Success criteria:
- Look for a ≥10% CPA reduction or ≥10% ROAS uplift vs control (a minimal computation is sketched below).
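Here is a minimal sketch of that success check, assuming illustrative test and control figures; in practice the inputs come from Fospha's Test vs Control (PoP) views.

```python
def pct_change(current: float, baseline: float) -> float:
    """Relative change vs baseline (positive = higher than baseline)."""
    return (current - baseline) / baseline

# Illustrative numbers only; read the real figures from Fospha.
test = {"cpa": 36.0, "roas": 3.4}      # pooled (PBA-managed) campaigns
control = {"cpa": 41.5, "roas": 3.1}   # comparable campaigns outside the pool

cpa_reduction = -pct_change(test["cpa"], control["cpa"])  # positive = cheaper
roas_uplift = pct_change(test["roas"], control["roas"])

print(f"CPA reduction vs control: {cpa_reduction:.1%}")   # 13.3%
print(f"ROAS uplift vs control:  {roas_uplift:.1%}")      # 9.7%
if cpa_reduction >= 0.10 or roas_uplift >= 0.10:
    print("Meets the >=10% success criterion")
else:
    print("Below threshold: refine guardrails or extend the test")
```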
6. What To Do Next After the Pilot
Once the pilot completes, the goal is to decide whether to scale, refine, or re-run the test based on performance, learnings, and operational impact.
If Results Are Positive

- Scale gradually. Add more campaigns into the existing pool or replicate the setup in adjacent markets with similar structure and KPIs.
- Expand scope. Move from single-channel to cross-channel pools once stable results are proven.
- Build funnel-based pools. Separate Awareness, Consideration, and Conversion pools to enable more granular control heading into peak seasons.
- Layer in creative and bid optimisation. Once PBA is stable, introduce creative tests or audience variations to compound gains.
- Document uplift. Capture the quantitative impact (≥10% ROAS uplift or ≥10% CPA reduction) and qualitative learnings (e.g. improved channel mix, fewer manual edits).
If Results Are Neutral or Negative

- Extend the learning window. Keep the test running for 6–8 weeks to allow the model to stabilise and gather more signal.
- Re-check setup integrity.
  - Was the KPI consistent across campaigns?
  - Were creatives comparable?
  - Was there enough daily volume for learning?
- Adjust guardrails. Increase the exploration budget (e.g. 10% → 15%), widen min/max constraints, or slow the optimisation cadence to reduce volatility.
- Simplify scope. Split large pools into smaller, clearer tests (e.g. per channel or market) to isolate issues.
- Re-baseline expectations. Performance stabilisation may take longer for smaller budgets or upper-funnel campaigns; focus on directional improvement first.