Salesforce’s newest pricing signal is not a new SKU; it’s a new unit of measure.
On February 25, 2026, Salesforce introduced Agentic Work Units (AWUs), defined as “one discrete task accomplished by an AI agent,” explicitly positioning AWUs as a way to move beyond measuring AI success with tokens (how much the AI “talks”) and beyond seats (how many humans have logins). The message is simple: the enterprise AI era will be priced on work, not presence. (Salesforce News)
TL;DR
- AWUs are the start of “work-based” metering in CRMs, a direct bridge to usage-based pricing models for AI CRMs.
- Seats break because agents are not humans, and they do not map cleanly to headcount.
- Tokens break because token volume is only loosely related to business value, and the ratio changes as agents become more tool-driven.
- Buyers should plan for credits, tasks, workflows, and outcomes, not per-user licensing alone.
- Your contract should define the unit, cap it, alert on it, and allow you to audit it.
- The most reliable ROI model ties cost per unit to pipeline outcomes: meetings booked, qualified opportunities created, and cycle time reduced.
- RevOps needs a vendor-agnostic AWU-to-dashboard mapping so usage becomes measurable, attributable, and governable across the funnel.
What Salesforce actually did: AWUs as the “digital labor” meter
Salesforce frames AWUs as a platform-level productivity metric that captures agent work across products like Agentforce and Slack AI, alongside “tokens processed” as an infrastructure-scale metric. They also emphasize a key pricing reality: tokens-to-work is not a fixed ratio, and they expect divergence over time as more “token-lean” deterministic tool calls do more work per token. (Salesforce News)
This is not an academic distinction. It is a pricing roadmap.
Once a vendor normalizes “work units” as the thing you should manage, compare, and optimize, the next step is straightforward: bill you on units of work (or package those units into credits).
You can already see the commercial foundation:
- Salesforce’s Agentforce supports consumption-based pricing, including Flex Credits (pay per action) and Conversations (pay per conversation), plus per-user options. (Salesforce Agentforce Pricing)
- Salesforce previously announced Flex Credits priced at $500 per 100,000 credits, with one action consuming 20 credits (about $0.10 per action), and positioned this as aligning spend to measurable outcomes. (Salesforce press release)
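The Flex Credits arithmetic above is worth checking explicitly, since it is the per-action rate the rest of this article’s cost models hinge on. A quick sanity check of the publicly stated figures:

```python
# Checking the stated Flex Credits math (figures from the
# Salesforce press release cited above).
price_per_credit = 500 / 100_000      # $500 per 100,000 credits
credits_per_action = 20
cost_per_action = price_per_credit * credits_per_action  # about $0.10
```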
AWUs are the narrative and measurement layer that makes this shift feel inevitable, and defensible, inside a CRM budget.
Why “seats” break for AI agents inside CRMs
Seat-based pricing assumes:
- A “user” is a person.
- People have roughly comparable levels of usage.
- Adding users scales cost in a way that loosely tracks value.
Agents violate all three.
Seats fail because agents scale by volume, not headcount
A single AI SDR agent can:
- prospect 24/7,
- enrich hundreds or thousands of leads,
- draft personalized outbound,
- update the pipeline,
- follow up across sequences.
None of that maps neatly to “how many humans need logins.”
Seats fail because AI value is nonlinear
The difference between “agent on” and “agent off” is not incremental productivity. It can be:
- new coverage (accounts you never touched),
- compressed cycle time,
- reduced response time,
- fewer dropped leads.
The value curve is lumpy. Pricing needs a unit that can follow lumpy value.
Seats fail because agents create a governance problem
If your AI agent can do work without a human in the loop, then your cost and risk exposure can rise without any corresponding rise in “user count.”
This is why the market is converging on the core idea behind usage-based pricing for AI CRMs: you need a meter that matches autonomous activity.
Why “tokens” break for CRM buyers (even if they are “fair”)
Token pricing is attractive to engineers because it maps to compute. It is less attractive to RevOps and Finance because it rarely maps to outcomes.
Salesforce says it directly: tokens measure how much an AI talks, not the work completed, and the tokens-to-AWU ratio is “elastic.” (Salesforce News)
From a buyer perspective, token billing breaks in three ways:
1) Tokens are not business legible
Your CFO cannot forecast “output tokens per qualified opportunity” with confidence, especially across changing prompts, models, and context windows.
2) Tokens can rise while value stays flat
Badly scoped agent behavior can generate lots of reasoning and lots of text, and still fail to:
- book a meeting,
- create a clean opportunity,
- move a stage,
- reduce time-to-first-touch.
3) Tokens become less predictive as agents become more tool-first
As vendors optimize toward tool invocation and deterministic steps, you can get more “work” with fewer tokens. That sounds good, until you realize it makes tokens a worse proxy for work over time.
Token billing is not going away at the infrastructure layer (OpenAI still bills model usage by token rates, and tool usage has separate meters), but enterprises are clearly looking for higher-level units that can be governed. (OpenAI API pricing)
The shift in plain terms: from “AI chat” to “AI labor”
In CRM land, we are watching pricing migrate through four stages:
1. Seat add-ons: “Pay $X per user/month for AI features.”
2. Conversation pricing: “Pay $Y per conversation/session.” (Good for chat-like support, messy for workflow automation.)
3. Action or task pricing: “Pay per action completed.” This is closer to labor economics and operational accounting.
4. Outcome pricing (select use cases): “Pay per successful resolution / successful task completion.” Common in service tooling and emerging elsewhere. (AI Multiple research)
AWUs are Salesforce planting a flag in stages 3 and 4: “work done” as the center of gravity.
Modeling costs in a usage-based AI CRM world: credits, tasks, workflows
If you are buying an AI CRM in 2026, do not model AI cost like SaaS. Model it like cloud.
The three “meters” you should expect
- Credits: a wallet-based abstraction (easy to sell, hard to compare cross-vendor).
- Tasks/actions: atomic steps (update record, enrich lead, generate email, create sequence step).
- Workflows: bundles of tasks that represent an operational unit (route inbound lead, run outbound research-to-email, qualify inbound, etc.).
Salesforce’s Agentforce pricing shows two common consumption meters today:
- Flex Credits as a pay-per-action mechanism.
- Conversations as a pay-per-conversation mechanism. (Salesforce Agentforce Pricing)
A practical forecasting template (simple and finance-friendly)
Use a three-layer model:
Layer A: Volume drivers (what scales)
- leads created per month
- inbound demo requests per month
- accounts researched per month
- active sequences per month
- opportunities entering pipeline per month
Layer B: Agent work per driver (units per item)
- enrichments per lead
- scoring runs per lead
- emails generated per account
- follow-ups per sequence
- stage updates per opportunity
Layer C: Cost per unit
- $ per action
- $ per workflow
- $ per 1,000 credits
- expected overage rate (if you exceed commit)
Then calculate:
- Expected monthly units = Sum(volume driver x units per driver)
- Expected monthly cost = Expected monthly units x cost per unit
This is the minimum viable model that avoids token math and keeps the forecast tied to operational levers.
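The three-layer template can be sketched as a small model. All drivers, unit counts, and the per-unit rate below are illustrative assumptions, not vendor pricing; swap in your own volumes and your vendor’s rate card.

```python
# Minimal sketch of the three-layer forecast. All numbers are
# illustrative assumptions, not real vendor rates.

# Layer A: volume drivers (items per month)
volume_drivers = {
    "leads_created": 5_000,
    "inbound_demo_requests": 300,
    "accounts_researched": 800,
}

# Layer B: agent work per driver (units per item)
units_per_item = {
    "leads_created": 3,         # enrich + score + route
    "inbound_demo_requests": 5,
    "accounts_researched": 4,   # research + draft + sequence steps
}

# Layer C: cost per unit (e.g. derived from a credit rate card)
cost_per_unit = 0.10  # $ per action, illustrative

expected_units = sum(
    volume * units_per_item[driver]
    for driver, volume in volume_drivers.items()
)
expected_cost = expected_units * cost_per_unit

print(f"Expected monthly units: {expected_units:,}")
print(f"Expected monthly cost: ${expected_cost:,.2f}")
```

Because every input is an operational lever (leads, demo requests, accounts), Finance can stress-test the forecast by flexing volumes rather than guessing at token counts.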
If you want a deeper framework for forecasting credits-based plans (and how to budget when “AI doesn’t need a seat”), pair this with: Credits-Based AI CRM Pricing: How to Forecast, Budget, and Prove ROI When “AI Doesn’t Need a Seat” (2026).
Common pricing traps buyers should watch (and how to neutralize them)
Usage models are not automatically “fair.” They are fair only when the unit is clear, attributable, and governable.
Trap 1: Silent overages (or “PayGo surprise”)
If your agent loops, retries, or fans out across tools, you can blow past a commit quickly.
What to do
- Require hard caps (monthly and annual).
- Require real-time alerts (not end-of-month invoices).
- Require “auto-pause rules” for runaway usage.
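The alert-and-pause logic above is simple enough to specify precisely. A minimal sketch, assuming illustrative thresholds and caller-supplied `notify`/`pause` hooks (your vendor’s actual controls will differ):

```python
# Sketch of commit tracking with threshold alerts and auto-pause.
# Thresholds and the notify/pause callables are illustrative.

ALERT_THRESHOLDS = (0.50, 0.75, 0.90, 1.00)

def check_usage(units_used: int, monthly_commit: int,
                alerted: set, notify, pause) -> None:
    """Fire one alert per crossed threshold; pause at 100% of commit."""
    ratio = units_used / monthly_commit
    for threshold in ALERT_THRESHOLDS:
        if ratio >= threshold and threshold not in alerted:
            alerted.add(threshold)
            notify(f"Usage at {threshold:.0%} of monthly commit")
    if ratio >= 1.0:
        pause("Auto-pause: monthly commit exhausted")

alerts, pauses, seen = [], [], set()
check_usage(920, 1_000, seen, alerts.append, pauses.append)
# 920/1000 crosses the 50%, 75%, and 90% thresholds; no pause yet.
```

The `alerted` set is the important detail: without it, a looping agent generates a flood of duplicate alerts instead of one actionable signal per threshold.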
Trap 2: Unclear definition of a “unit”
If a vendor says “action,” ask:
- Is an action a tool call, a workflow step, a completed task, or a model response?
- Do retries count?
- Do failed actions count?
- Does sandbox usage count?
- Does evaluation or monitoring count?
Salesforce’s public messaging has already emphasized “actions” and “discrete tasks,” and it is positioning AWUs as discrete tasks accomplished by an agent. That is helpful, but buyers still need contractual precision. (Salesforce News)
Trap 3: Bundled meters that obscure true unit economics
Credits can hide wide variability:
- one vendor’s “credit” might be 1 enrichment call,
- another vendor’s “credit” might be 1,000 tokens,
- another vendor’s “credit” might be 1 action.
What to do
- Demand a published rate card that maps credits to actions.
- Demand a per-workflow “bill of materials” estimate (expected units per workflow run).
Trap 4: Paying for agent thinking instead of agent doing
If vendors meter prompts, reasoning steps, and verbose outputs, costs correlate with “chatter.”
What to do
- Prefer meters tied to task completion and tool invocation.
- Align your internal KPIs to completed workflows and pipeline outcomes, not messages sent.
Trap 5: Governance blind spots
If you cannot attribute spend to:
- a team,
- a workflow,
- a segment,
- a campaign,
- a source,
then you cannot optimize, and Finance will eventually shut it down.
This is where RevOps needs an “answer layer” and governance discipline, not just dashboards. See: AI Governance for RevOps in 2026: What to Automate, What Humans Must Approve, and How to Set Guardrails.
What to demand in contracts for outcomes-based AI units
If your vendor is selling anything resembling AWUs, credits, actions, or “digital labor,” your contract should include these clauses in plain language.
1) Unit definition (with examples)
Require a legal definition of:
- what counts as 1 unit,
- what does not count,
- how partial work is handled,
- how failures/retries are handled.
Ask for a unit definition appendix with:
- at least 10 concrete examples,
- the expected unit count per example,
- test cases for disputes.
2) Caps and rate protections
- monthly cap (hard stop or auto-approval threshold)
- annual cap (no surprise true-ups)
- rate lock (no per-unit increase mid-term)
3) Real-time alerts and auto-pause
- alerts at 50%, 75%, 90%, 100% of commit
- role-based notifications (RevOps, Finance, Admin)
- auto-pause for specific workflows that exceed thresholds
4) Auditability (the non-negotiable)
You need the right to export usage logs that show:
- timestamp
- workflow name
- unit type
- unit count
- user or agent identity
- object touched (lead, account, opportunity)
- outcome metadata (meeting booked, opp created, stage changed)
If a vendor cannot provide auditable logs, they are not ready for enterprise usage-based pricing.
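Concretely, one exported usage-log record should carry all the fields listed above. The field names below are assumptions for illustration; map them to whatever schema your vendor actually exports, and verify records survive a round trip intact.

```python
import json

# One hypothetical usage-log record with the audit fields listed above.
# Field names are assumptions; match them to your vendor's export schema.
record = {
    "timestamp": "2026-03-02T14:07:31Z",
    "workflow_name": "inbound_enrich_score_route",
    "unit_type": "enrichment",
    "unit_count": 1,
    "agent_id": "agent-sdr-01",
    "object_type": "lead",
    "object_id": "00Q5e00000AbCdE",
    "outcome": {"meeting_booked": False, "stage_changed": True},
}

line = json.dumps(record)   # one JSON line per unit of work
parsed = json.loads(line)   # round trip must be lossless for audits
```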
5) Governance and human approval controls
- approval gates for sensitive actions (email send, record deletion, pricing changes)
- environment separation (sandbox vs prod)
- permissions and scopes for what agents can touch
For a buyer-side evaluation framework (security, ROI proof, and governance), use: The 2026 AI Sales Tool Buying Checklist: ROI Proof, Risk, Security, and Governance.
A simple ROI equation tied to pipeline (that Finance will accept)
You do not need a complicated model to justify usage-based AI CRM spend. You need a pipeline-causal one.
The practical ROI equation (vendor-agnostic)
ROI (%) = ((Incremental Gross Profit from Pipeline Impact - AI CRM Usage Cost) / AI CRM Usage Cost) x 100
Where incremental gross profit can be estimated from three levers:
1. Meetings booked lift
- Incremental meetings booked = meetings booked with agents - baseline meetings
- Incremental qualified opps = incremental meetings x baseline qualification rate
- Incremental revenue = incremental qualified opps x win rate x ACV
- Incremental gross profit = incremental revenue x gross margin
2. Qualified opportunities created lift
- Same math, but start from qualified opps created by the agent (or assisted by the agent).
3. Cycle time reduction
- Cash acceleration matters, but keep it simple: if cycle time drops, you can increase throughput per rep, reduce pipeline leakage, or reduce required headcount at the same pipeline target.
If you want a more operational “AI labor” ROI lens, map ROI to work output, not seats. That is the entire logic behind AWUs. For a metricization approach, also see: Agentic Work Units (AWUs): The ROI Metric Sales Teams Will Be Forced to Adopt in 2026 (and How to Implement It).
AWU-to-RevOps dashboard mapping (vendor-agnostic, practical, measurable)
The biggest mistake teams make is treating usage as an engineering metric. RevOps needs to treat “work units” as a funnel instrumentation problem.
Below is a dashboard mapping you can implement in any CRM, warehouse, or BI tool.
Step 1: Define your “Work Unit Taxonomy” (one-time setup)
You want 3 levels:
Level 1: Unit type
- Research unit
- Enrichment unit
- Scoring unit
- Outreach unit
- Routing unit
- Qualification unit
- Pipeline hygiene unit
- Deal acceleration unit
Level 2: Workflow
- “Inbound: form submit -> enrich -> score -> route”
- “Outbound: account signal -> research -> email draft -> sequence enroll”
- “Pipeline: stale opp -> summarize -> next-step task -> follow-up email draft”
Level 3: Atomic actions
- API call to enrichment
- ICP match check
- email draft generated
- sequence step created
- meeting link suggested
- record updated
- stage advanced
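The three levels lock together once the taxonomy is expressed as types rather than a spreadsheet. A minimal sketch, with workflow and action names taken from the examples above (treat them as illustrative, not a standard):

```python
from enum import Enum

# Level 1: unit types, mirroring the list above.
class UnitType(Enum):
    RESEARCH = "research"
    ENRICHMENT = "enrichment"
    SCORING = "scoring"
    OUTREACH = "outreach"
    ROUTING = "routing"
    QUALIFICATION = "qualification"
    PIPELINE_HYGIENE = "pipeline_hygiene"
    DEAL_ACCELERATION = "deal_acceleration"

# Levels 2 and 3: each workflow maps to the atomic actions it runs.
WORKFLOWS = {
    "inbound_enrich_score_route": [
        ("enrichment_api_call", UnitType.ENRICHMENT),
        ("icp_match_check", UnitType.SCORING),
        ("record_updated", UnitType.ROUTING),
    ],
    "outbound_research_to_sequence": [
        ("account_research", UnitType.RESEARCH),
        ("email_draft_generated", UnitType.OUTREACH),
        ("sequence_step_created", UnitType.OUTREACH),
    ],
}
```

Keeping the taxonomy in code (or a governed reference table) is what lets every dashboard later in this section group by the same names.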
Step 2: Create the minimum viable data model
You need two tables (or two event streams):
A) Work Units table
- unit_id
- timestamp
- unit_type
- workflow_name
- action_name
- unit_count (usually 1)
- cost (optional, if vendor provides)
- agent_id
- human_owner_id (owner of the record or workflow)
- object_type (lead/account/contact/opportunity)
- object_id
- campaign_id (if outbound)
- environment (prod/sandbox)
B) Outcome Events table
- timestamp
- event_type (meeting_booked, opp_created, stage_advanced, closed_won, etc.)
- object_type/object_id
- attributed_agent_id (if applicable)
- attributed_workflow_name
- revenue_amount (if applicable)
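The two tables can be sketched as typed records. Field names mirror the lists above; the types are assumptions to adapt to your warehouse conventions:

```python
from dataclasses import dataclass
from typing import Optional

# Minimal typed sketch of the two event streams described above.

@dataclass
class WorkUnit:
    unit_id: str
    timestamp: str            # ISO 8601
    unit_type: str            # e.g. "enrichment", "outreach"
    workflow_name: str
    action_name: str
    agent_id: str
    object_type: str          # lead / account / contact / opportunity
    object_id: str
    environment: str          # prod / sandbox
    unit_count: int = 1
    cost: Optional[float] = None         # if the vendor provides it
    human_owner_id: Optional[str] = None
    campaign_id: Optional[str] = None    # if outbound

@dataclass
class OutcomeEvent:
    timestamp: str
    event_type: str           # meeting_booked, opp_created, ...
    object_type: str
    object_id: str
    attributed_agent_id: Optional[str] = None
    attributed_workflow_name: Optional[str] = None
    revenue_amount: Optional[float] = None
```

The join key is `object_type`/`object_id` (plus workflow name for attribution), which is what makes the unit-to-outcome ratios in Step 4 computable.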
Step 3: Build the RevOps dashboards (the actual boards)
Create four dashboard pages.
Dashboard 1: Unit Economics by Workflow
Core metrics:
- Cost per workflow run
- Units per workflow run
- Outcome rate per workflow run
- Cost per meeting booked
- Cost per qualified opp created
Dashboard 2: Guardrails and Risk
- Units per day with anomaly detection
- Top 10 workflows by unit consumption
- Retry rates and failure rates
- Over-cap forecast (end-of-month projection)
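The over-cap forecast in this dashboard can start as a straight-line run-rate projection. That linearity is an assumption; bursty agent workloads deserve a better model, but this is a defensible first alert:

```python
from datetime import date
import calendar

# Simple end-of-month projection from month-to-date run rate.
# Straight-line run rate is an assumption; bursty agents need more.
def project_month_end(units_mtd: int, today: date) -> float:
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return units_mtd / today.day * days_in_month

projected = project_month_end(units_mtd=620_000, today=date(2026, 3, 10))
print(f"Projected month-end units: {projected:,.0f}")
```

Compare `projected` against the contracted commit to drive the over-cap warning.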
Dashboard 3: Pipeline Impact Attribution
- Meetings booked by workflow
- Qualified opps created by workflow
- Stage acceleration by workflow
- Rep time saved (estimated) by workflow
Dashboard 4: Governance and Audit
- Who changed what, and via which workflow
- Agent permission violations (attempted vs blocked)
- Human-approval queue volume and SLA
Step 4: Use “unit-to-outcome” ratios as your optimization KPI
This is the operational heart of AWUs.
Track:
- AWUs per meeting booked
- AWUs per qualified opportunity
- AWUs per closed-won
- $ per qualified opportunity
- $ per closed-won
Your goal is not “fewer units.” Your goal is better unit-to-outcome conversion.
This approach is also how you defend spend during budget season because it turns a noisy consumption meter into a controllable funnel lever.
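The ratios themselves are trivial once the Work Units and Outcome Events streams are joined; the inputs below are illustrative monthly totals:

```python
# Unit-to-outcome ratios from joined usage and outcome data.
# All numbers are illustrative monthly totals.
awus_consumed = 48_000
meetings_booked = 120
qualified_opps = 45
cost_per_awu = 0.10  # $, illustrative

awus_per_meeting = awus_consumed / meetings_booked
awus_per_qualified_opp = awus_consumed / qualified_opps
cost_per_qualified_opp = awus_per_qualified_opp * cost_per_awu

print(f"AWUs per meeting: {awus_per_meeting:.0f}")
print(f"$ per qualified opp: {cost_per_qualified_opp:.2f}")
```

Trend these weekly by workflow: a rising AWUs-per-meeting ratio flags a workflow that is consuming more work per outcome, which is the signal to tune scope before Finance notices the bill.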
Where Chronic Digital fits: make units measurable, attributable, and governable
The market is moving toward digital labor inside CRMs. AWUs are a clear signal that “work units” will become the language buyers and vendors use to justify AI budgets.
The hard part for B2B sales teams is not adopting the agent. The hard part is answering:
- What did the agent do?
- What did it cost?
- What pipeline outcome did it produce?
- Who owns it?
- What guardrails prevented it from doing the wrong thing?
- How do we forecast next month without guessing?
Chronic Digital is built for that reality:
- AI Lead Scoring + ICP Builder let you define what “qualified” means and meter how much unit spend goes into the right accounts.
- Lead Enrichment turns enrichment runs into measurable units tied to funnel stages.
- AI Email Writer + Campaign Automation make outreach units attributable to meetings and opportunities, not just “emails sent.”
- Sales Pipeline with AI deal predictions connects units to downstream pipeline movement.
- AI Sales Agent makes autonomous work visible, governable, and auditable, so usage is not a black box.
If your organization is also tightening deliverability controls while scaling AI outbound, pair governance with execution discipline using:
- The 2026 Deliverability Stack: A Step-by-Step Setup Before You Send a Single Cold Email
- Deliverability Ops SOP for Agencies: Monitoring, Thresholds, and Auto-Pause Rules
- Best Cold Email Platforms for Enterprise Teams in 2026: Infrastructure, Compliance, and Cost Predictability Compared
Usage-based AI only works when the system is instrumented and governed. Otherwise, it is just surprise cloud billing with a sales logo on top.
FAQ
What is an Agentic Work Unit (AWU)?
An AWU is a metric Salesforce introduced on February 25, 2026 that represents one discrete task accomplished by an AI agent, intended to measure “real work” rather than tokens or seats. (Salesforce News)
How does usage-based pricing for AI CRMs differ from seat-based pricing?
Seat-based pricing charges per human user. Usage-based AI CRM models charge based on measurable AI activity such as actions, tasks, workflows, credits, or outcomes. This is better aligned to autonomous agents that can scale work without adding headcount.
What are the most common usage meters for AI in CRMs today?
The most common meters are:
- Credits (pre-purchased or wallet-based)
- Conversations (per session)
- Actions/tasks (per completed atomic step)
Salesforce, for example, publicly lists consumption options including Flex Credits (pay per action) and Conversations (pay per conversation). (Salesforce Agentforce Pricing)
What contract terms should we demand before buying a usage-based AI CRM?
At minimum, demand:
- a precise unit definition (with examples),
- caps and rate protections,
- real-time alerts and auto-pause controls,
- audit logs export (who, what, when, workflow, object),
- governance and approval gates for sensitive actions.
What is a simple way to prove ROI for agent usage?
Tie usage cost to pipeline outcomes:
- cost per meeting booked,
- cost per qualified opportunity created,
- cycle time reduction (throughput improvement).
Then convert pipeline lift to gross profit and compare to total AI usage cost.
Put AWUs on your budget, not just your vendor’s slide deck
If AWUs are the start of work-based AI pricing in CRMs, then your job is to operationalize them:
- Define your unit taxonomy (tasks, workflows, outcomes).
- Instrument usage and outcomes in one data model.
- Govern usage with caps, alerts, approvals, and audits.
- Report unit-to-outcome ratios weekly, not quarterly.
- Renew based on cost per pipeline outcome, not on “AI adoption.”
Do this well, and usage-based pricing for your AI CRM stops being a risk. It becomes a controllable growth lever.