
By 2025, most brands have already implemented some form of AI-driven personalization. Recommendation engines, dynamic creative, predictive targeting: the tools were in place, the pilots had been run, and “AI-powered” had become table stakes rather than a differentiator.
What remained far rarer were breakthrough results.
Despite widespread adoption, only a small number of brands could point to AI personalization delivering gains large enough to materially outperform existing approaches, or to justify the added cost, complexity, and organizational change required to scale it.
For much of the past decade, personalization had plateaued. Rule-based targeting and incremental creative optimization improved efficiency at the margins but rarely changed outcomes. AI promised more, yet in practice many initiatives stalled at modest uplifts or proof-of-concept success.
The core question heading into 2026, then, is no longer whether AI can personalise content.
It is whether AI can consistently beat today’s best marketing by a wide enough margin to reshape how creative, media, and measurement actually operate.
The CAP Model: What Scalable AI Personalization Looks Like in 2026
The MMA Consortium for AI Personalization (CAP) was built to bring scientific discipline to AI personalization, allowing brands to clearly see how much better AI performs versus what they are already doing. Rather than relying on lab simulations or proxy benchmarks, CAP runs in-market tests with real brands, real budgets, and real KPIs. The intent is straightforward: separate genuine lift from hype.
One example comes from Progressive Insurance, which joined CAP to answer a demanding question: could AI meaningfully outperform its existing audio marketing approach? Not by small optimizations, but by a margin large enough to justify changing how audio creative was produced and optimized. Streaming audio, often treated as a difficult-to-measure awareness channel, offered a particularly rigorous proving ground.
Working with CAP, Progressive used generative AI to create 96 ads, a scale of creative production that would have been nearly impossible to achieve at comparable speed or cost through traditional means. AI and machine-learning models then dynamically matched pre-approved creative elements to audience cohorts using non-PII metacontextual signals such as time of day, day of week, and connection type. The outcome was a +197% performance improvement, with significant lifts in quotes and conversions, achieved using only Phase I, “walk-stage” data.
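The matching step described above can be sketched as a simple per-cohort decisioning loop. The signal buckets, variant names, and epsilon-greedy policy below are illustrative assumptions for a minimal sketch, not a description of Progressive's or CAP's actual models:

```python
import random

# Hypothetical sketch: matching pre-approved creative variants to cohorts
# defined by non-PII contextual signals (time of day, day of week, connection).
VARIANTS = [f"audio_ad_{i:02d}" for i in range(1, 97)]  # 96 pre-approved creatives


def context_key(hour: int, weekday: str, connection: str) -> tuple:
    """Bucket non-PII signals into a cohort key (buckets are assumptions)."""
    daypart = "morning" if hour < 12 else "afternoon" if hour < 18 else "evening"
    return (daypart, weekday, connection)


class EpsilonGreedySelector:
    """Per-cohort epsilon-greedy selection over creative variants."""

    def __init__(self, variants, epsilon=0.1, seed=0):
        self.variants = variants
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.stats = {}  # (cohort, variant) -> [conversions, impressions]

    def choose(self, cohort):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.variants)  # explore a random variant

        # Exploit: pick the variant with the best observed rate for this cohort.
        def rate(v):
            conv, imp = self.stats.get((cohort, v), [0, 0])
            return conv / imp if imp else 0.0

        return max(self.variants, key=rate)

    def record(self, cohort, variant, converted: bool):
        conv, imp = self.stats.setdefault((cohort, variant), [0, 0])
        self.stats[(cohort, variant)] = [conv + int(converted), imp + 1]
```

In production such decisioning would typically use richer models than a lookup table of rates, but the core loop (observe context, serve a variant, record the outcome, re-optimize) is the same.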
What makes the Progressive result especially instructive is not just the magnitude of the lift, but how it was measured. These were live, controlled tests designed to show, with precision, exactly how much better AI performed compared to the current baseline. In doing so, the study illustrates how generative creative, AI-driven decisioning, and rigorous experimentation can operate as a single system, turning AI personalization from a promising idea into provable performance.
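At its simplest, measuring lift against a baseline in a controlled test comes down to comparing conversion rates between a treatment cell and a control cell. A minimal sketch, using hypothetical counts chosen only to illustrate the arithmetic behind a figure like +197% (not CAP's actual data or methodology):

```python
import math


def lift_and_z(conv_t: int, n_t: int, conv_c: int, n_c: int):
    """Percentage lift of treatment over control, plus a two-proportion z-score."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    lift_pct = (p_t - p_c) / p_c * 100

    # Pooled standard error for a two-proportion z-test.
    p_pool = (conv_t + conv_c) / (n_t + n_c)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    return lift_pct, (p_t - p_c) / se


# Hypothetical cells: control converts at 1.00%, treatment at 2.97%.
lift_pct, z = lift_and_z(297, 10_000, 100, 10_000)
print(f"lift = +{lift_pct:.0f}%, z = {z:.1f}")  # a +197% lift, well past z = 1.96
```

A z-score above 1.96 corresponds to significance at the conventional 95% level, which is what separates a provable lift from noise.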
When Personalization Moves Beyond Experiments
As CAP’s portfolio of in-market studies has expanded, a consistent set of patterns has emerged around what actually drives performance when AI personalization is applied at scale.
1. The Upside Is Not Marginal
When AI personalization is deployed as a system, combining creative generation, decisioning, and measurement, the performance gains move well beyond incremental optimization. Examples from our research include:
- Kroger (Display | 72 versions). Key insight: the upside is big. AI-driven personalization delivered a +259% uplift in webpage visits by optimizing creative delivery using contextual signals such as DMA, time, day of week, device OS, and connection type.
- Monday.com (Audio | 16 versions). Key insight: the power of interactions. AI optimization drove a +188% increase in website visits and app installs by aligning audio creative with moment-level contextual signals.
- ADT (Display | 81 versions). Key insight: audience matters. AI personalization delivered a +136% uplift in web form submissions by tailoring creative delivery to contextual audience signals.
Taken together, these results show that when AI personalization is designed for scale from the outset, upside is measured in multiples, not percentages.
2. Audience Context Shapes Outcomes
Performance varies meaningfully by category, format, and audience mix. CAP trials consistently show that context (when, where, and under what conditions a message is delivered) can be as influential as the message itself. This makes robust experimentation essential, particularly as brands move across channels and markets.
3. Creative Diversity Fuels Model Performance
Across studies, greater creative variation leads to stronger AI outcomes. Diversity in versions is not simply about creative choice; it is a core input that enables models to learn faster, optimize more effectively, and sustain performance gains over time.
4. GenAI and Decisioning Work Best as One System
The strongest results emerge when generative AI is paired with AI-driven optimization. GenAI increases the speed and scale of creative production, while decisioning engines ensure that the right variant is served in the right context. Separately, each delivers value; together, they unlock step-change performance.
Conclusion
CAP’s in-market trials point to a clear conclusion: AI personalization delivers step-change value only when it is treated as a system, not a layer. The brands seeing outsized gains are combining generative creative, AI-driven decisioning, and disciplined experimentation to prove lift against their existing baseline, not chasing tools or pilots in isolation. By 2026, the mandate is to industrialize this approach: build creative diversity at scale, embed test-and-learn into everyday marketing workflows, and measure AI by business outcomes, not activity. In that model, CAP functions as the proof infrastructure enabling faster learning, clearer benchmarks, and a repeatable path from AI promise to measurable performance.