The Agentic Transition: How SaaS Companies Move From Selling AI Features to Work Done
A practical framework for how SaaS companies transition to agentic AI – shifting from being an AI-powered tool to delivering "work done" while retaining their best customers.
How non-AI companies become agentic systems — without losing the customers who got them here.
There's a version of this story that ends badly.
A well-funded SaaS company sees the agentic wave coming. They bolt on AI features and call it a "copilot." Their power users — the designers who've spent years mastering layout systems and conversion psychology, the developers who know the codebase cold — open the product one day and see a shiny new button: "Generate with AI."
The feature might even be good. But what it communicates is: the hard part of your job isn't that hard. They can't quite articulate why it feels off. They just know it does. Their expertise has been framed as optional.
So they feel vaguely patronized — and they start looking around.
Meanwhile, the dashboards still track the same inputs. The pricing hasn't changed. The company is calling it a transformation. Two years later, a purpose-built agent has eaten their lunch, and their best customers went with it.
(Hypothetical — but it's the failure mode worth engineering against.)
Then there's the version that ends differently. Same company. Same wave. But instead of layering AI on top, they refound the business. They evolve what their best customers do, reprice around what actually creates value, and expand their growth ceiling in the process.
This post is your guide to the second story. I'll use a web design tool as the running example throughout, but the framework applies across any non-AI SaaS company facing the same inflection.
The Core Insight: It's Not Generation That Unlocks Growth – It's Closing the Loop
Before the frameworks, one idea worth establishing early, because it governs everything else.
The dramatic growth potential of agentic AI isn't in generation. Any tool can generate content faster. The unlock is the closed loop.
What changes the ceiling is when your agent can:
- Ship – take action in the real world
- Measure – evaluate what happened
- Detect – identify what's underperforming
- Propose – suggest what to change
- Execute – make the change, with human approval
- Repeat – at a pace no human team can match
This is what makes an agentic product structurally different from a faster tool. It's not doing the same work in less time — it's compounding improvement continuously. For your customers, results keep improving even when they're not actively working. That's a new category of value that justifies new pricing, a new conversation, and a new reason to stay.
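To make the loop concrete, here's a minimal sketch in TypeScript. Every type and function name (`ship`, `measure`, `propose`, `requestApproval`) is an illustrative placeholder, not a real API:

```typescript
// A minimal closed-loop cycle. All names are illustrative placeholders.
type Variant = { id: string; html: string };
type Metrics = { variantId: string; conversionRate: number };

interface AgentTools {
  ship(v: Variant): Promise<void>;                     // Ship: act in the real world
  measure(variantId: string): Promise<Metrics>;        // Measure: evaluate what happened
  propose(underperformer: Metrics): Promise<Variant>;  // Propose: suggest what to change
  requestApproval(v: Variant): Promise<boolean>;       // Execute only with human sign-off
}

async function runClosedLoop(tools: AgentTools, variants: Variant[], threshold: number) {
  for (const v of variants) await tools.ship(v);
  while (true) {
    const metrics = await Promise.all(variants.map((v) => tools.measure(v.id)));
    // Detect: identify what's underperforming
    const losers = metrics.filter((m) => m.conversionRate < threshold);
    if (losers.length === 0) break; // nothing to improve this cycle
    for (const loser of losers) {
      const replacement = await tools.propose(loser);
      if (await tools.requestApproval(replacement)) {
        await tools.ship(replacement); // Execute, then Repeat on the next cycle
        const i = variants.findIndex((v) => v.id === loser.variantId);
        variants[i] = replacement;
      }
    }
  }
}
```

Note what the human-approval gate does here: the agent never executes a change it proposed without sign-off, which is exactly the trust architecture Layer 4 builds on.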
Keep this loop in mind. Every decision below is ultimately about whether it gets you closer to closing it.
The Framework: The Four Layers of Agentic Transition
Agentic AI changes what you're selling. In the old world, you sold capability — a tool that helps humans work faster. In the agentic world, the companies winning most clearly are shifting toward selling work done — a system that does the work, with humans moving from operators to reviewers and decision-makers.
That's not a universal law yet, and the transition will be gradual across most categories. But it's the direction of travel. And every company navigating it has to move across four layers simultaneously:
| Layer | Old State | Agentic State |
|---|---|---|
| User role | Operator of the tool | Reviewer, system designer |
| Value metric | Seats, usage inputs | Tasks completed, work shipped |
| Product surface | UI workflows | Approval systems, automation loops |
| Trust architecture | Feature permissions | Action guardrails, audit trails |
Most teams focus on the product surface layer and under-manage the other three. That's where transitions stall — not because the product failed, but because the user role, metrics, and trust architecture didn't move with it.
The question isn't "how do we add AI features?" The question is: how do we move all four layers together — without destroying trust in the process?
Layer 1: Evolve the User Role – Change Their Job, Not Their Purpose
The fastest way to lose your best customers during an AI transition is to imply they're obsolete.
Your power users aren't obstacles to your agentic future. They're the distribution engine that makes it possible. They're also the humans in the loop your agents need the most.
The strategic move: role evolution, not role erasure.
Take a web design tool that evolves from a UI-driven builder into an agentic system that builds, ships, and optimizes. The designer doesn't become irrelevant in this new user journey — they become more valuable:
- Curator of taste — defines what "good" looks like so the agent can replicate it at scale
- Steward of brand constraints — encodes the rules the agent operates within
- Reviewer and approver — protects the company from the agent's confident but subtly-wrong decisions
- System designer — sets up tests and hypotheses the agent executes against
This is already becoming an established workflow in software development. Engineers using Codex and similar tools are transforming their jobs as we speak. The analogy that lands best with engineering teams: treat agent output like a pull request. The agent proposes. The human reviews. The system ships. Everyone learns.
What this looks like in practice
Let's stick with the web design tool: a product transitioning from UI-driven design toward an agentic system that can build, ship, test, learn, optimize, and re-ship web pages autonomously.
Here's a typical flow the agent could follow:
- The agent generates three landing page variants based on brand guidelines and past performance data.
- The designer receives a diff view showing layout and copy changes, with a "why the agent chose this" trace.
- The designer approves one variant, adjusts the CTA copy, rejects the other two.
- The agent publishes the page and initiates a traffic split test.
- Performance data flows back into the system. The agent flags what's winning.
The designer didn't build a page. They made consequential decisions about one in under ten minutes. That's not demotion — that's leverage.
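For teams building this, a hedged sketch of the review payload behind that diff view. The shape and field names are assumptions, not a real product API:

```typescript
// Hypothetical shape of the review payload a designer sees.
interface ProposedChange {
  variantId: string;
  diff: Array<{ element: string; before: string; after: string }>; // layout and copy changes
  rationale: string; // the "why the agent chose this" trace
}

type Decision =
  | { kind: "approve" }
  | { kind: "approve_with_edits"; edits: Record<string, string> } // e.g. adjusted CTA copy
  | { kind: "reject"; reason: string }; // the reason feeds back into the agent's context

function recordReview(change: ProposedChange, decision: Decision) {
  // Every decision is logged: an audit trail for the company, training signal for the agent
  return { variantId: change.variantId, decision, reviewedAt: new Date().toISOString() };
}
```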
Questions your product team should answer before shipping agents
- What actions can the agent take without human approval?
- What actions require explicit sign-off?
- What does the review interface look like for your power users — and does it make them feel expert, or just busy?
- What signals indicate agent success vs. silent failure?
Three retention mechanics that work
Narrative continuity. Users need to hear, repeatedly, that you're still solving the same underlying problem — just with radically higher leverage. "We help you build high-converting websites" doesn't change. How you deliver it does.
A new status hierarchy. Don't demote users to prompt typists. Promote them to the person who controls quality and risk. Ship product surfaces that make the reviewer feel powerful and expert, not replaced.
Two-speed adoption. Keep the legacy path stable while champions run the new agentic path inside the same account. Don't force overnight migration — it triggers resistance, delay, or churn.
The retention insight most teams miss: this is not a Customer Success problem. It's a product positioning problem. You're not "adding automation." You're redefining what it means to be excellent at the craft.
Layer 2: Move the Value Metric From "Inputs" to "Work Done"
Most SaaS companies price what's easy to measure: seats, bandwidth, API calls, projects. These are all inputs. They measure access, not value delivered.
In an agentic world, you can price something more powerful: work done.
| Pricing model | What it measures |
|---|---|
| Seat-based | Who can access the tool |
| Usage-based | How much the tool is used |
| Work-done | What actually got shipped |
For our web design tool, the old value metric is seats and bandwidth; the agentic value metric is landing pages shipped, A/B tests launched, optimizations implemented, and conversion cycles completed.
The Intercom case study — the clearest live example right now:
Fin, Intercom's AI agent, is priced per successful customer query resolution — not per seat. Intercom still has seat pricing in the broader platform, so this isn't "seats are dead." It's more precise than that: a larger share of value capture is shifting toward delivered work. That's the direction most agentic products will move, at different speeds depending on the category.
| | Before | After |
|---|---|---|
| Product | Seat-based helpdesk software | AI agent (Fin) for customer service |
| Pricing meter | Per seat | Per successful resolution |
| Strategic shift | Selling software access | Selling completed customer service work |
| New internal metrics | Seats, tickets opened | Resolution rate, time-to-resolution, agent success rate |
The nuance: work done vs. business outcomes
Pure outcome pricing — "we charge when your revenue grows" — means inheriting accountability for factors you don't control. That's viable for some companies, but it's high-risk to start there. "Work done" is more practical: you price completed, auditable units of execution that are strongly correlated with outcomes. You capture the upside without absorbing the customer's P&L.
In practice, most companies in transition do best with a hybrid: a predictable base for budgeting comfort, metered agentic workload on top, and clear guardrails against surprise bills.
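A sketch of the hybrid mechanics, with every number illustrative rather than a pricing recommendation:

```typescript
// Hybrid pricing sketch: flat base, metered work units, spend cap.
interface HybridPlan {
  basePerMonth: number;     // predictable base for budgeting comfort
  pricePerWorkUnit: number; // metered agentic workload on top
  monthlyCap: number;       // guardrail against surprise bills
}

function monthlyBill(plan: HybridPlan, workUnitsCompleted: number): number {
  const metered = workUnitsCompleted * plan.pricePerWorkUnit;
  return Math.min(plan.basePerMonth + metered, plan.monthlyCap);
}

// e.g. $99 base + $2 per shipped optimization, capped at $499:
// 120 completed units => min(99 + 240, 499) = $339
console.log(monthlyBill({ basePerMonth: 99, pricePerWorkUnit: 2, monthlyCap: 499 }, 120));
```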
Change your metrics before you change your pricing
If your dashboards still report seats and project counts, your company will still behave like a seat-based business — even if your product is becoming an execution engine. The issue isn't that usage metrics are wrong; it's that the wrong usage metrics mislead your team.
New internal metrics to start tracking now:
- Tasks completed per account per week
- Approvals granted vs. rejections (and why)
- Rollback rate by workflow type
- Cycle time from "brief" to "shipped"
- Cost-to-serve per work unit
You can't price what you don't measure.
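One lightweight way to start: log every completed unit of work as a structured event, and derive the metrics above from the event log. A sketch, with illustrative field names:

```typescript
// A minimal event schema for "work done". Field names are assumptions.
interface WorkUnitEvent {
  accountId: string;
  workflowType: "page_shipped" | "ab_test_launched" | "optimization_applied";
  outcome: "approved" | "rejected" | "rolled_back";
  briefedAt: string;      // ISO timestamp: when the work was requested
  shippedAt?: string;     // ISO timestamp: the gap from briefedAt is your cycle time
  costToServeUsd: number; // model + infra cost attributed to this unit
}

// Tasks completed per account per week, straight from the event log
function tasksPerAccountWeek(events: WorkUnitEvent[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    if (e.outcome !== "approved" || !e.shippedAt) continue;
    const week = Math.floor(Date.parse(e.shippedAt) / (7 * 24 * 3600 * 1000));
    const key = `${e.accountId}:w${week}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```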
Layer 3: Approval Systems Before Autonomy
The biggest execution mistake in agentic transitions: over-investing in autonomy, under-investing in control.
The moment your agent can act (publish a web page, launch a test, change a setting, spend money), trust becomes a first-order growth constraint. OpenAI's agent-building guidance explicitly recommends risk-based controls, pausing for guardrail checks, and escalating high-risk actions to humans. Anthropic's guidance similarly emphasizes that autonomous agents can take actions humans didn't intend, and that formal evaluation systems beat intuition.
Ship controls before you ship autonomy. Every time.
Three non-negotiables for the product surface
Bounded action space. Define what the agent can do technically (sandbox) and when it must ask before acting (approval policies). Keep higher-risk actions behind explicit human sign-off. This is product design as much as security — it's what builds the trust that earns the agent more autonomy over time.
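As a sketch of what a risk-based action policy could look like for the web design example. The tiers and action names are assumptions, not a prescribed taxonomy:

```typescript
// Risk-based action policy sketch.
type Policy = "auto" | "require_approval" | "forbidden";

const actionPolicy: Record<string, Policy> = {
  generate_draft: "auto",             // sandboxed and reversible
  launch_ab_test: "require_approval", // touches real traffic
  publish_page: "require_approval",   // customer-facing change
  change_billing: "forbidden",        // outside the action space entirely
};

function gate(action: string): Policy {
  // Unknown actions default to asking a human, never to acting
  return actionPolicy[action] ?? "require_approval";
}
```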
Evaluation infrastructure, not vibes. "It seemed to work" is not a quality system. Build evaluation frameworks early: define what a good output looks like, measure it, make regressions visible. Your evals capture taste, brand constraints, and quality standards in a way competitors can't easily replicate. This is your infrastructure moat.
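A minimal sketch of that evaluation harness: encode what "good" looks like as explicit checks, score every output, and surface regressions. The checks themselves are illustrative:

```typescript
// Minimal eval harness sketch. Both checks are illustrative stand-ins for
// real brand and quality constraints.
type Check = { name: string; pass: (output: string) => boolean };

const brandChecks: Check[] = [
  { name: "uses approved CTA phrasing", pass: (o) => /start free/i.test(o) },
  { name: "stays under headline length", pass: (o) => o.length <= 80 },
];

function evaluate(output: string, checks: Check[]) {
  const results = checks.map((c) => ({ name: c.name, passed: c.pass(output) }));
  return {
    score: results.filter((r) => r.passed).length / checks.length,
    failures: results.filter((r) => !r.passed).map((r) => r.name),
  };
}

// Run on every agent output; a score drop between releases is a regression
console.log(evaluate("Start free today, no credit card required", brandChecks));
```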
Composable context. Agents are only as good as the context they can access. Start with the data sources you already own, then build explicit paths to the systems where your customers' work actually happens — analytics platforms, CRMs, CMS, billing. For products where agent interoperability matters, protocols like MCP (introduced by Anthropic in late 2024, now supported by tools including Figma) may become first-class product surfaces — though this applies most clearly to tool-rich ecosystems, not every vertical SaaS category.
Layer 4: Build the Trust Architecture with Capability Stages
Rather than planning against a timeline, plan against capability stages. Each stage earns the right to unlock the next.
Stage 1 — Assisted generation. The agent generates options; humans choose and edit. Trust signal: users engage with agent output rather than ignoring it.
Stage 2 — Human-reviewed execution. The agent acts; humans approve before anything ships. Trust signal: approval rate is high, rollback rate is low, and users feel the review interface makes them expert, not just busy.
Stage 3 — Closed-loop optimization. The agent acts, measures, detects underperformance, and proposes changes — humans approve the cycle. Trust signal: customers notice their results improving between sessions, and closed-loop cycles are completing without escalation.
Stage 4 — Autonomous growth systems. The agent runs experiments, connects design to revenue data, and operates the growth loop with humans as the decision authority. Trust signal: customers are expanding spend because agent-driven work is compounding faster than their team could.
Stages 3 and 4 are where most companies currently live in their imaginations. Stage 2 is where most are actually building. The path between them runs through evaluation infrastructure and composable context — not more generation capability.
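How might a product decide which stage an account has earned? A hedged sketch, gating autonomy on measured trust signals rather than on a roadmap date. All thresholds are illustrative assumptions:

```typescript
// Gate autonomy on measured trust signals, not a timeline.
interface TrustSignals {
  approvalRate: number;     // share of agent proposals humans approve
  rollbackRate: number;     // share of shipped changes rolled back
  cleanClosedLoops: number; // closed-loop cycles completed without escalation
}

function earnedStage(s: TrustSignals): 1 | 2 | 3 | 4 {
  if (s.approvalRate < 0.5) return 1;    // output is ignored or rejected: assist only
  if (s.rollbackRate > 0.05) return 2;   // keep a human approving every action
  if (s.cleanClosedLoops < 20) return 3; // the loop works; keep approving each cycle
  return 4;                              // autonomy earned; humans stay the decision authority
}
```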
Refound the Vision
The hardest part of this transition isn't the product work. It's the internal story.
Teams that built something that works have a powerful emotional investment in it. The risk is that the company optimizes for past success — shipping incrementally better versions of the old product, calling it "AI-powered," and wondering why the growth ceiling keeps dropping.
The distinction that matters: Product evolution is shipping AI features inside the existing mental model. Refounding is rebuilding your product and operating model around a new value proposition — one where you're no longer a tool that helps humans work, but a system that does the work, with humans operating at a higher level.
What the companies navigating this well have in common:
Intercom called it what it was — not "AI features in our helpdesk," but a full rebuild around Fin as the core product, with a stated vision of a unified "Customer Agent" handling the full customer journey. The internal clarity preceded the product work.
Airtable explicitly relaunched as an "AI-native app platform." CEO Howie Liu publicly described restructuring the entire organization — from how products are scoped to how teams are formed — around AI cadence. Whether the full execution delivers is still playing out, but the posture is a genuine refounding, not a press release.
Vivun appears to be repositioning from legacy presales and sales-enablement SaaS toward an AI-native sales teammate model, with Ava taking on workflows previously owned by human reps. The pivot is less cleanly documented than Intercom's, but the directional move — from tooling that assists humans to an agent that does the work — fits the pattern.
What each of them did in the same sequence:
- Named the refounding internally before announcing it externally
- Kept the customer promise, then expanded it — same outcome, dramatically higher leverage
- Shipped controls before autonomy
- Changed their internal metrics before changing their pricing
- Treated migration as a growth motion, not a disruption to manage
Where to Start
1. Map your beachhead users' current identity. What do they call themselves? What expertise do they take pride in? Design their future role to elevate that identity, not erase it.
2. Define your unit of work done. What is the thing your product delivers that your customer actually wants to buy? Start measuring it now — even if you're not pricing it yet.
3. Decide which capability stage you're actually building toward. Not which one you aspire to — which one your current trust architecture and evaluation infrastructure can support. Be honest. Ship what's real.
4. Build one closed loop before anything else. A single workflow where the agent acts, the data comes back, and the agent improves is worth more than a dozen AI features. It proves the model internally and earns the trust that unlocks the next stage.
5. Write the two-sentence refounding narrative. Not what you are today — what you're building toward. Say it to your team. Often.
The companies that win the agentic transition won't be the ones that moved fastest. They'll be the ones that kept their promises to the customers who got them here — and then expanded what those promises could mean.
That's the refounding story. And it's yours to write.
Growth Untold covers the strategies, frameworks, and founder stories behind companies that grow differently. If this resonated, share it with someone building through their own transition.