This week's edition covers:
- Kevin's Take: Why agencies should sell velocity, not outputs
- The Signal: Salesforce rewrites Slack's playbook with 30 AI features
- On Our Radar: Oracle layoffs, experimental budgets, and operating model stress
- From the Trenches: Context might be the most important thing missing from your AI plans
Kevin's Take
Most agencies sell outputs. We should be selling learning velocity.
I've been running agencies long enough to know the old pitch deck by heart: "We'll build your demand engine. We'll run your campaigns. Trust us, we've done this before." And for years, that worked. You hired specialists because you didn't have them in-house. They delivered the thing. Everyone moved on.
But in 2026, that model is dying. The data makes it hard to argue otherwise. Gartner's 2025 CMO Spend Survey found that 39% of CMOs plan to cut agency budgets this year, and 22% say they've already reduced their reliance on outside agencies for creativity and strategy because of AI. That is not a temporary budget squeeze. That is a structural shift in what clients believe agencies are for.
The real constraint isn't production capacity anymore. It's learning speed. How fast can you figure out what works in your specific context, with your specific buyers, at your specific price point? That's the game now. And it's a game most agencies are not currently set up to play.
So here's the value proposition I think we should all be selling instead: hire us, we run structured experiments, we show our work in real time, and you walk away with compound learning, not just deliverables.
Why this framing works right now
CMOs have a specific anxiety in 2026 that didn't exist three years ago. Gartner surveyed 402 senior marketing leaders in late 2025 and found that 65% expect AI to dramatically change their role within two years — but only 32% believe they personally need significant new skills to meet it. Gartner calls this the "AI blind spot." We call it the real sales conversation.
The anxiety is not "I don't have enough tools." The anxiety is: AI is moving fast and I don't know what to learn or how to learn it fast enough. You can't send your team to a three-day conference and call it handled. You need to be running live tests in your own stack, with your own data, against your own goals. But most teams don't have the bandwidth or the methodology to do that at speed.
That's where the experimentation accelerator model makes sense. You're not selling cheaper execution. You're selling compressed learning cycles the client can't compress alone. You become the answer to "how do I get smarter faster without blowing up my pipeline."
But three things have to be true for this to work
First, you have to actually show your work. Not a polished case study six months later. Real-time transparency — here's what we tried, here's what broke, here's the adjustment we're testing next week. That's uncomfortable for agencies used to looking perfect. But if you're selling learning velocity, you have to make the learning visible.
Second, the experiments have to be replicable. If the client can't absorb the method and eventually run it themselves, you're not an accelerator. You're just a dependency with better branding. The goal is to make them smarter, not more reliant.
Third, you need a structured framework, not just "we'll try stuff and see." Clear hypotheses, defined success metrics, documented results, and a feedback loop that improves each cycle. Otherwise it's just expensive chaos with a learning label on it.
The implication for your hiring and pricing
If you're selling learning velocity instead of outputs, your team composition changes. You need people who can design experiments, not just execute campaigns. You need someone who can translate a failed test into a valuable insight instead of a budget write-off. And you need to price for the knowledge transfer, not just the hours.
This also means your client relationships get more collaborative. You're not the expert they hire to make the problem go away. You're the partner who makes them better at solving the problem themselves. That's a harder sell upfront, but it's a stickier relationship if you do it right.
Outputs are getting commoditized by AI. A 2026 industry analysis found the value of basic content deliverables has dropped 43% in agency markets in the past year. That trend does not reverse. But the ability to structure learning, compress decision cycles, and transfer methodology? That's still a human advantage worth paying for.
The agencies that figure this out in the next 18 months will be the ones still standing in three years.
The Signal
Salesforce turns Slack into an AI collaboration engine with 30 new features — TechCrunch
Salesforce just announced 30 new AI-powered features for Slack, fundamentally changing how the platform supports marketing, sales, and RevOps workflows. The updates include AI agents that can summarize conversations, automate routine tasks, surface key decisions buried in threads, and integrate more deeply with Salesforce CRM data. This isn't just feature bloat — it's a deliberate repositioning of Slack as a collaboration layer for AI-augmented work.
Why it matters: If your team uses Slack daily, your workflow automation options just expanded significantly. The real question is whether your ops team has the bandwidth to configure and test these features, or if they'll just become shelf-ware in your stack. Mid-market companies should be watching this closely — not because every feature is a must-have, but because this is a preview of how work tools are absorbing AI by default. Your team needs a filter for what's worth adopting versus what's just noise.
Oracle layoffs signal operating model stress, not just AI replacement — Forrester
Oracle's recent layoffs are being framed as an "AI replaces jobs" story, but Forrester analyst Laura Cross argues that framing misses the real signal. The cuts reveal something more fundamental: operating model stress. When capital tightens and decisions slow down, companies start questioning whether their go-to-market motion is built around outcomes or just activities. Oracle's restructuring suggests they're forcing that reckoning now.
Why it matters: This isn't about Oracle. It's about the vulnerability in how most B2B organizations structure their marketing, sales, and RevOps functions. If you can't draw a clean line from your team's activities to revenue outcomes, you're exposed in the next budget cycle. CMOs should be auditing their operating models now — not because layoffs are inevitable, but because being able to articulate outcome-based value is the best defense against becoming a cost center.
On Our Radar
Marketers are boosting experimental budgets for AI and emerging channels in 2026, turning the annual budget into a live lab. If you're still locking your spend in January and revisiting in Q4, you're managing with last decade's assumptions.
Marketers are quietly reallocating SEO budgets away from traditional search optimization toward GEO, optimizing for visibility in generative AI outputs instead of clicks. The shift is happening faster than most CMOs realize.
A detailed analysis of the German digital agency market published this month tracked 137 full-service agencies and found combined fee revenue fell 5.2% last year, with insolvencies doubling year over year. The projection: 80% of traditional agency revenue is structurally at risk from AI. Germany moves slower than the US on most of this, which is exactly what makes the numbers worth watching: the stress is already showing up in a lagging market.
From the Trenches
We stopped prompting. The work got better.
The Problem:
We've been running AI-assisted campaign experiments with several clients over the past few months. Early results were inconsistent enough that we started questioning the approach entirely. The agents were producing output, but nothing you could call reliable. Certainly nothing you'd want to hand a CMO.
The Fix:
The fix turned out to be simpler than we expected, and more fundamental than most AI implementation guides will tell you. We had been prompting our way to consistency — feeding instructions into tools and hoping repetition would produce quality. It doesn't. What actually changed the results was context. Specific stack documentation. Defined skill parameters. Brand and audience reference materials built directly into how the agents operate.
The Result:
Once we stopped prompting and started building context, the output quality shifted noticeably. More than that, the agents got creative in ways that were actually useful, not just generative.
The transferable insight: if your team is getting inconsistent results from AI tools right now, the problem is almost certainly not the prompt. It's the absence of structured context around the tool. You can't prompt your way to consistency. You have to build it.
We're going deep on exactly this topic in an upcoming webinar with the CMO of Asymbl. She builds AI agents her team works alongside daily, assigns them defined skills, and manages them with the same intentionality she'd bring to a human hire.
Register for our webinar on April 15th: Onboarding Your Agents Just Like Another Employee.