If you are building a PLG motion in B2B SaaS, your CRM stops being “a place sales logs calls” and becomes your system of record for product intent. That only works if your schema models how PLG actually happens: people sign up, create or join a workspace, collaborate, hit usage limits, and then someone with authority upgrades.
Attio’s push into modeling Users and Workspaces makes that point explicit: the “buyer” is often a workspace, not a single lead, and the strongest sales signals live in user-level events grouped into a workspace-level story. Attio’s standard Users object represents a user of your product and relates to a Workspace. Their docs also call out that Users are grouped into Workspaces, and those relationships are first-class. (Users standard object, Manage standard objects)
TL;DR
- Use a 5-object PLG model: Account, Workspace, User, Subscription, Product Event.
- Make Workspace the hub for PQL scoring and routing, with rollups from Users and Product Events.
- Build a PQL score with event weights, recency decay, role weighting, and negative signals.
- Route outcomes to SDR, lifecycle email, or in-app based on score + ICP fit + buying role coverage.
- Prevent the top PLG schema failures: event spam, identity gaps, and duplicative workspaces.
- An AI CRM should detect anomalies, explain scores, and predict deal likelihood from product signals plus CRM context.
Why “PLG CRM schema users workspaces” is the foundation (not a nice-to-have)
In a classic outbound CRM, you can get away with a simple hierarchy:
- Lead -> Contact -> Account -> Opportunity
In PLG and hybrid sales, that hierarchy breaks because:
- The first person to sign up is rarely the buyer.
- Purchase decisions happen at a workspace level (team adoption, integrations, seats, usage limits).
- Your highest intent signals are in-product events, not form fills.
A PLG CRM schema users workspaces approach fixes this by modeling:
- Who is using the product (Users),
- Where they use it (Workspaces),
- How deeply they use it (Product Events),
- Whether it is monetized (Subscription),
- Who the commercial owner is (Account / Company).
Attio’s docs explicitly frame Workspaces as “accounts using your product” and connect them to Users in a many-to-many relationship. (Manage standard objects, Users standard object)
Define the 5 core objects (with a practical PLG-first data model)
You want a schema that supports:
- PQL scoring
- Routing and handoffs
- Expansion and churn risk signals
- Multi-workspace, multi-domain, and multi-user complexity
Here is the recommended object set.
1) Account (Company)
Represents a commercial entity you can sell to.
When it exists
- Sometimes only after enrichment (you may not know the company on day 1).
- Sometimes inferred from billing domain, SSO, or invoice details.
Key fields (sales-relevant)
- `account_id` (internal)
- `primary_domain`
- `all_known_domains` (array)
- `employee_count`, `employee_range`
- `industry`, `sub_industry`
- `hq_country`, `hq_region`
- `tech_stack` (high-level technographics)
- `icp_fit_tier` (A/B/C), `icp_fit_score` (0-100)
- `current_owner` (AE/CSM)
- `lifecycle_stage` (Prospect, PQL, SQL, Customer, Expansion, Churn Risk)
Where Chronic Digital helps
- Use Lead Enrichment to resolve company details fast from email domain, website, or billing fields, then keep Account records clean.
2) Workspace
Represents the in-product tenant: org, team, instance, or project space.
This is your PLG “account” for product signals. It is where you should:
- compute PQL score,
- define activation,
- roll up usage and seat signals.
Key fields (product + sales)
- Identity
  - `workspace_id` (required, immutable)
  - `workspace_name`
  - `created_at`
  - `workspace_status` (active, deleted, trial ended)
- Ownership and association
  - `account_id` (nullable, mapped when resolved)
  - `workspace_primary_domain` (from invited users, SSO, billing)
  - `billing_contact_email` (if present)
- Activation and usage depth
  - `activation_date`
  - `activation_milestone` (enum: created first project, installed integration, invited teammate, etc.)
  - `time_to_value_minutes` or `time_to_activation_hours`
  - `weekly_active_users` (WAU)
  - `core_feature_adoption_count` (number of key features used in last 14/30 days)
  - `usage_limit_hits_30d` (count)
- Seats and expansion
  - `seat_count_total`
  - `seat_count_active_7d`
  - `seat_growth_30d`
  - `invites_sent_14d`
- Integrations
  - `integrations_installed` (multi-select)
  - `integration_installs_30d`
  - `crm_connected` (bool)
- Commercial
  - `plan_tier` (free, trial, pro, enterprise)
  - `trial_end_date`
  - `paid_status` (free, trial, paid)
  - `mrr`, `arr` (if paid)
- Sales readiness outputs
  - `pql_score` (0-100)
  - `pql_stage` (Not PQL, Warm, PQL, Hot PQL)
  - `routing_destination` (SDR, lifecycle email, in-app, CSM)
  - `next_best_action`
3) User
Represents an end user of your product.
Attio’s Users standard object (for a SaaS product’s users, not Attio login users) includes a primary_email_address, user_id, and a relationship to workspace. (Users standard object)
Key fields
- Identity
  - `user_id` (internal)
  - `primary_email`
  - `person_id` (CRM person/contact record reference)
- Role and buying signals
  - `product_role` (admin, member, viewer)
  - `job_title`, `seniority` (enriched)
  - `department` (enriched)
  - `is_workspace_admin` (product truth)
- Engagement
  - `last_active_at`
  - `sessions_7d`, `sessions_14d`
  - `core_actions_7d` (count)
- Routing helpers
  - `persona` (builder, champion, evaluator, economic buyer)
  - `buying_role_weight` (numeric multiplier used in scoring)
4) Subscription
Represents monetization state.
Why separate it from Workspace
- A workspace can have multiple subscriptions over time.
- In enterprise, billing may be consolidated across multiple workspaces.
Key fields
- `subscription_id`
- `workspace_id`
- `account_id` (optional, if billing is centralized)
- `status` (trialing, active, past_due, canceled)
- `plan`, `billing_period`
- `mrr`, `arr`
- `seats_purchased`, `seats_used`
- `renewal_date`
- `trial_start`, `trial_end`
5) Product Event
Represents atomic product telemetry.
Do not try to store every raw event in your CRM. Instead:
- store a curated event stream (high-signal events only), or
- store aggregates in Workspace/User plus keep raw events in your warehouse.
Recommended event fields
- `event_id`
- `event_name`
- `timestamp`
- `workspace_id`
- `user_id`
- `event_properties` (JSON)
- `source` (web, backend, mobile)
- `is_key_event` (bool)
- `event_weight` (optional, if you precompute)
Object linking: the minimum relationship graph that prevents PLG chaos
You want to be able to answer these questions instantly:
- Which users belong to which workspace?
- Which workspace maps to which account?
- Which “person/contact” in CRM corresponds to which product user?
- What is the subscription status for the workspace?
The canonical relationship map
- Workspace 1-to-many Users
- Workspace 1-to-many Product Events
- Workspace 1-to-many Subscriptions
- Account 1-to-many Workspaces (often many-to-many in reality, but start 1-to-many and support exceptions)
- User 1-to-1 Person/Contact (when resolved)
- Account 1-to-many People/Contacts
Identity resolution rules (do this before you score)
Most PQL systems fail because the same human exists as:
- anonymous device,
- signup email,
- later SSO user,
- later billing admin.
If you use Snowplow or similar pipelines, implement identity stitching so multiple identifiers resolve to one user journey. Snowplow’s docs define identity stitching as combining various user identifiers into a single user identifier for a complete picture of journeys. (Snowplow identity stitching)
Practical identity resolution checklist
- User keys
  - Require `user_id` at signup (internal UUID).
  - Capture `primary_email` only when the user provides it.
- Workspace keys
  - Require `workspace_id` as the stable tenant identifier.
- Contact mapping
  - Map `User.primary_email` -> `Contact.email` (with dedupe).
  - Maintain `user_id` on the Contact record for reversibility.
- Domain mapping
  - Do not map workspaces to accounts by domain alone without guardrails (see failure modes below).
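The contact-mapping rules above can be sketched in a few lines. This is a minimal in-memory illustration, not a real CRM API call; the dict field names (`primary_email`, `contact_id`) are assumptions that mirror the schema in this article:

```python
# Minimal sketch: map a product User to a CRM Contact by normalized email,
# and keep user_id on the Contact so the mapping stays reversible.
# In-memory dicts stand in for your CRM; field names are illustrative.

def map_user_to_contact(user: dict, contacts_by_email: dict) -> dict:
    email = (user.get("primary_email") or "").strip().lower()
    if not email:
        # No email yet: leave the product user unmapped rather than guessing.
        return {"user_id": user["user_id"], "contact_id": None}

    contact = contacts_by_email.get(email)
    if contact is None:
        contact = {"contact_id": f"c_{len(contacts_by_email) + 1}", "email": email}
        contacts_by_email[email] = contact

    # Maintain user_id on the Contact record for reversibility.
    contact["user_id"] = user["user_id"]
    return {"user_id": user["user_id"], "contact_id": contact["contact_id"]}
```

Normalizing the email before lookup is what prevents `Ann@Acme.com` and `ann@acme.com` from becoming two contacts.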
The fields that matter most for sales: a PLG signal dictionary
Sales does not need “all events.” Sales needs a stable set of interpretable signals.
Activation (time-to-value) fields
- `activation_milestone_completed` (boolean + timestamp)
- `time_to_activation_hours` (numeric)
- `activation_path` (enum: invited teammate, integration, created project, published, etc.)
Why: Activation is the earliest leading indicator for conversion in PLG programs.
Usage depth fields
- `wau` and `mau`
- `core_feature_days_used_14d`
- `key_feature_adoption_ratio` = (key features used) / (key features available)
- `automation_runs_7d` or equivalent “value delivered” counter
Why: Frequency plus depth separates casual testers from teams building muscle memory.
Seats and collaboration fields
- `invites_sent_7d`, `invites_accepted_7d`
- `active_seats_7d`
- `seat_growth_30d`
Why: Collaboration is one of the strongest “this is becoming a team tool” signals.
Integrations installed fields
- `integration_installed_count`
- `integration_installs_30d`
- `crm_integration_installed` (boolean)
- `data_warehouse_export_enabled` (boolean)
Why: Integrations indicate switching costs and operational adoption.
Monetization and intent fields
- `trial_days_remaining`
- `upgrade_clicks_7d`
- `pricing_page_views_7d`
- `usage_limit_hits_30d`
- `api_rate_limit_hits_7d` (for dev tools)
Why: “Friction” events (limits, upgrade clicks) are purchase timing signals.
Step-by-step: Build your PLG CRM schema (Users + Workspaces) in 10 steps
1) Write a one-sentence definition for each object
Example:
- Workspace: “A tenant in our product where multiple users collaborate and where monetization occurs.”
This prevents “workspace vs account” arguments later.
2) Define your activation milestone (one primary, two secondary)
Activation must be measurable and tied to real value.
Example for a collaboration SaaS:
- Primary: “Workspace has 2+ active users and completed Key Action X.”
- Secondary: “Integration installed.”
- Secondary: “Usage limit hit.”
If you need inspiration on activation being the moment users experience core product value, PLG frameworks emphasize activation as the critical early milestone. (ProductLed PLG framework)
3) Create a key event taxonomy (10 to 25 events max)
Create categories:
- Activation events
- Depth events
- Expansion events
- Intent events
- Negative events (churn risk)
4) Instrument events with required identifiers
Every key event must include:
- `workspace_id`
- `user_id` (or an anonymous id that stitches later)
- `timestamp`
No identifier, no score.
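A “no identifier, no score” guardrail is easy to enforce at ingestion. A minimal sketch, assuming events arrive as dicts and that a stitchable `anonymous_id` may stand in for `user_id`:

```python
# Sketch: reject key events that are missing the required identifiers.
# An anonymous_id that can be stitched later counts in place of user_id.

def is_scoreable(event: dict) -> bool:
    has_user = bool(event.get("user_id") or event.get("anonymous_id"))
    return bool(event.get("workspace_id")) and bool(event.get("timestamp")) and has_user
```

Run this check in the pipeline before scoring so unattributable events never inflate (or silently vanish from) a workspace score.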
5) Build Workspace rollups in your warehouse (recommended) or in your CRM
Examples of daily rollups:
- `workspace_wau_7d`
- `workspace_invites_14d`
- `workspace_key_feature_days_14d`
- `workspace_integrations_installed`
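These rollups can be computed directly from the curated event stream. A simplified in-process sketch (warehouse SQL is more typical in production); the event shape follows the Product Event fields above:

```python
# Sketch: per-workspace rollups (distinct active users, invites) over a window.
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def workspace_rollups(events, now, days=7):
    cutoff = now - timedelta(days=days)
    active_users = defaultdict(set)   # workspace_id -> distinct user ids
    invites = defaultdict(int)        # workspace_id -> invite count
    for e in events:
        if e["timestamp"] < cutoff:
            continue
        active_users[e["workspace_id"]].add(e["user_id"])
        if e["event_name"] == "invite_sent":
            invites[e["workspace_id"]] += 1
    return {
        ws: {"wau": len(users), "invites": invites[ws]}
        for ws, users in active_users.items()
    }
```

Counting distinct users (a set) rather than raw events is the same discipline that later protects the PQL score from event spam.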
6) Enrich users and map to accounts
Enrich:
- job title, seniority, department
- company name, size, industry
Then:
- map workspace -> account using a multi-signal rule:
- billing domain match OR SSO domain match OR majority user domain match (with minimum user count)
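The multi-signal rule can be ordered: billing domain first, then SSO, then majority user domain with a minimum user count. A sketch with illustrative thresholds (`min_users=3`, 60% majority) and a small free-email blocklist; all names and cutoffs here are assumptions to tune:

```python
# Sketch: resolve a workspace to an account domain using multiple signals,
# never by a single user's email domain alone.
from collections import Counter

FREE_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

def resolve_account_domain(workspace: dict, min_users: int = 3, majority: float = 0.6):
    if workspace.get("billing_domain"):
        return workspace["billing_domain"]
    if workspace.get("sso_domain"):
        return workspace["sso_domain"]
    domains = [
        email.split("@")[-1].lower()
        for email in workspace.get("user_emails", [])
        if "@" in email and email.split("@")[-1].lower() not in FREE_DOMAINS
    ]
    if len(domains) < min_users:
        return None  # not enough signal: leave account_id nullable
    domain, count = Counter(domains).most_common(1)[0]
    return domain if count / len(domains) >= majority else None
```

Returning `None` when the signal is weak is deliberate: `account_id` is nullable in the schema, and a wrong mapping is worse than a delayed one.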
7) Add a PQL scoring table that is explainable
A score that sales cannot understand becomes ignored.
Include:
- `score_total`
- `score_components` (activation, depth, expansion, intent, fit)
- `top_reasons` (human-readable strings)
8) Set thresholds and routing rules
Example:
- Score >= 80 and ICP Fit A/B and Admin present -> SDR
- Score 60-79 -> lifecycle email + in-app prompts
- Score < 60 -> nurture only
- Paid customer and score >= 70 -> CSM expansion play
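The example thresholds translate into a small, deterministic routing function. These rules are the article's examples, not a prescription; note one added assumption, that scores of 80+ without admin coverage fall back to lifecycle nurture:

```python
# Sketch of the example threshold/routing rules above. Tune per segment.

def route(pql_score: int, icp_fit_tier: str, admin_present: bool, is_paid: bool) -> str:
    if is_paid and pql_score >= 70:
        return "csm_expansion"
    if pql_score >= 80 and icp_fit_tier in ("A", "B") and admin_present:
        return "sdr"
    if pql_score >= 60:
        # Assumption: high scores that miss SDR criteria stay in lifecycle.
        return "lifecycle_email"
    return "nurture"
```

Keeping this as one pure function makes the routing auditable: sales can read four `if` statements, which is exactly the explainability the scoring section asks for.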
9) Create “workspace ownership” logic
If you do not define ownership, everyone will assume someone else owns it.
Rules:
- Unassigned PQL workspaces go to SDR pool.
- Assigned accounts follow AE ownership.
- Expansion signals go to CSM/AE based on segment.
10) Validate weekly with closed-won feedback
Every week:
- compare scores to outcomes (demo booked, opp created, closed-won)
- adjust weights, decay, and thresholds
If you want a practical governance pattern, treat lead scoring as a living model with drift and recalibration cycles (Chronic Digital’s playbook topic). A useful internal reference is Lead Scoring Drift: The CRO Playbook.
A PQL scoring recipe you can implement immediately
A PQL is widely defined as a lead that has experienced meaningful value from the product, typically in trial or freemium, and shows behaviors indicating readiness to buy. (TechTarget PQL definition)
Your scoring model should be:
- behavior-based (product events),
- adjusted by persona and role,
- time-aware (recency decay),
- guarded against spam.
Scoring structure (0 to 100)
Use five buckets:
- Activation (0-25)
- Usage depth (0-25)
- Expansion signals (0-20)
- Intent signals (0-20)
- ICP fit overlay (0-10)
Event weights (example)
Activation
- `workspace_created` +2
- `key_action_completed` +15
- `first_value_delivered` +8
Usage depth
- `core_feature_used` +2 each day (cap at +10 per 7d)
- `automation_run` +3 (cap at +12 per 7d)
Expansion
- `invite_sent` +2 (cap at +10 per 14d)
- `invite_accepted` +4 (cap at +12 per 14d)
- `new_active_user_day` +2 (cap at +10 per 14d)
Intent
- `pricing_page_view` +3 (cap at +6 per 7d)
- `upgrade_click` +8
- `usage_limit_hit` +10
- `trial_days_remaining <= 3` +5
Negative
- `workspace_downgraded` -15
- `uninstall_integration` -6
- `0_active_days_last_14` -20
Recency decay (simple and effective)
Decayed points = points * 0.5^(days_since_event / half_life_days) — equivalently, points * exp(-ln(2) * days_since_event / half_life_days) — so an event's points halve once per half-life.
Recommended half-life values:
- Intent events: 7 days
- Activation events: 30 days
- Usage depth events: 14 days
- Expansion events: 21 days
This stops “someone was active once” from staying a PQL forever.
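As a one-line helper, using the exact half-life form (points halve every `half_life_days`):

```python
# Sketch: exponential recency decay with a true half-life.
def decayed_points(points: float, days_since_event: float, half_life_days: float) -> float:
    return points * 0.5 ** (days_since_event / half_life_days)
```

Apply it per event at score-computation time, with the half-life chosen by event category as in the table above.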
Role-based weighting (because not all users are equal)
Multiply event points by a role factor based on product role and enriched job function:
- Workspace admin: 1.3x
- Manager/Director/VP: 1.2x
- IC in target function: 1.0x
- Student, personal email, unknown: 0.6x
Then compute:
`workspace_score = max(admin_user_score, champion_score) + team_adoption_rollups`
This avoids the classic trap: a power user in a non-buyer role inflates the score.
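Role weighting plus the max-of-users rollup can be sketched as follows. The role factors mirror the multipliers above; the 100-point cap and the user dict shape are assumptions:

```python
# Sketch: role-weighted user scores rolled up to a workspace score.
# Each user dict carries raw event points and a role key (illustrative shape).
ROLE_FACTOR = {"admin": 1.3, "manager": 1.2, "ic": 1.0, "unknown": 0.6}

def workspace_score(users: list, team_adoption_points: float) -> float:
    if not users:
        return min(team_adoption_points, 100.0)
    best = max(u["event_points"] * ROLE_FACTOR.get(u["role"], 0.6) for u in users)
    return min(best + team_adoption_points, 100.0)
```

Taking the max of role-weighted user scores (rather than the sum) is what keeps one hyperactive viewer-role user from outranking a moderately active admin.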
Thresholds that route cleanly
Start with three stages:
- Warm (50-64): Nurture + in-app “invite teammates” prompts
- PQL (65-79): Lifecycle email + SDR light touch if ICP fit is strong
- Hot PQL (80+): SDR immediate, AE assist for enterprise segments
Routing logic: SDR vs lifecycle email vs in-app (with examples)
Routing should be deterministic enough to trust, but flexible enough to handle edge cases.
Route to SDR when:
- `pql_score >= 80`, AND
- `icp_fit_tier` in (A, B), AND
- at least one of:
  - admin present,
  - pricing/upgrade intent,
  - seat growth > threshold.
SDR task payload should include
- top 3 score reasons
- last 5 key events
- list of admins and champions
- suggested email opener (based on events)
Chronic Digital angle:
- Use AI Lead Scoring to prioritize these workspaces automatically.
- Use AI Email Writer to generate an SDR email that references the exact product milestones (integration installed, limit hit, seats added).
Route to lifecycle email when:
- score is mid-range, OR
- ICP fit is uncertain, OR
- no buyer role coverage yet (no admin, no manager).
Lifecycle email goal
- drive one missing milestone:
- invite teammate,
- install integration,
- complete activation step.
If you run outbound sequences, enforce safe sending and suppression rules. Internal reference: CRM Throttling: send limits and suppression rules for safe outbound in 2026.
Route to in-app when:
- user is active now, and the next best action is product-driven. Examples:
- “Connect Slack to unlock alerts”
- “Invite 2 teammates to unlock shared workflows”
- “Try feature X to reduce time-to-value”
In-app is often faster than email for the “finish setup” moment.
Common failure modes (and how to design around them)
Failure mode 1: Event spam inflates scores
Symptoms:
- Scores spike from repeated low-value events.
- SDRs chase “hot” workspaces that never convert.
Fixes:
- Cap points per event per time window.
- Prefer “distinct days used” over raw counts.
- Add negative scoring for obvious loops (same action repeated 100 times in 1 hour).
- Use anomaly detection (see AI section below).
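The first two fixes (caps and distinct-day counting) combine into one small scoring helper. A sketch, assuming events arrive as `(event_name, date_string)` pairs and that weights/caps are configured per event name:

```python
# Sketch: anti-spam scoring. Count each event on distinct days only,
# then cap total points per event name within the scoring window.
from collections import defaultdict

def capped_event_points(events, weights: dict, caps: dict) -> float:
    seen_days = defaultdict(set)  # event_name -> distinct days it occurred
    for name, day in events:
        seen_days[name].add(day)
    total = 0.0
    for name, days in seen_days.items():
        pts = weights.get(name, 0) * len(days)
        total += min(pts, caps.get(name, pts))
    return total
```

With this in place, 100 repeats of the same action on the same day score exactly once, which removes the most common source of false “Hot PQLs.”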
Failure mode 2: Missing identity resolution breaks the story
Symptoms:
- One person appears as multiple users.
- Workspaces show “no admin” even though one exists.
- Usage events cannot be mapped to CRM contacts.
Fixes:
- Implement stitching and a stable `user_id` strategy.
- Enforce `workspace_id` on every key event.
- Use a single “contact mapping table” between CRM and product users. Snowplow highlights identity stitching as combining identifiers into one user identifier to better track journeys. (Snowplow identity stitching)
Failure mode 3: Duplicative workspaces fragment adoption
Symptoms:
- Same company has multiple workspaces due to pilots, regions, or sandbox.
- Seat counts look small per workspace, but real adoption is large.
Fixes:
- Add `workspace_type` (prod, sandbox, dev).
- Merge candidates based on:
- same SSO domain,
- same billing entity,
- high overlap of users.
- Score at both levels:
- workspace PQL score,
- account aggregated PLG score.
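The “high overlap of users” criterion can be computed with Jaccard similarity over each pair of workspaces. A sketch; the 0.5 threshold and the `workspace_id -> set of user emails` input shape are illustrative:

```python
# Sketch: suggest duplicate-workspace merges by user overlap (Jaccard).
def merge_candidates(ws_users: dict, min_overlap: float = 0.5) -> list:
    ids = sorted(ws_users)
    pairs = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            union = ws_users[a] | ws_users[b]
            if union and len(ws_users[a] & ws_users[b]) / len(union) >= min_overlap:
                pairs.append((a, b))
    return pairs
```

Treat the output as merge *candidates* for human review, combined with the SSO-domain and billing-entity signals above, not as an automatic merge.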
How an AI CRM should detect anomalies and predict deal likelihood from product signals
Rule-based scoring gets you to “good.” AI gets you to “reliable at scale.”
What to ask your AI CRM to do (concretely)
- Anomaly detection
  - “Alert if a workspace’s event rate is 5x its 30-day baseline.”
  - “Flag bot-like patterns: 100 signups from same IP block, same minute.”
- Score explanation
  - Every score should come with:
    - top reasons,
    - what changed since yesterday,
    - what to do next.
- Deal likelihood prediction
  - Train a model on:
    - product rollups (activation, depth, seats, intent),
    - ICP fit features,
    - sales activity features (response time, touches),
    - outcomes (opp created, closed-won).
  - Output:
    - probability of conversion in 14/30 days,
    - recommended next best action.
- Routing automation
  - Automatic assignment, task creation, and sequence enrollment.
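The “5x the 30-day baseline” rule is simple enough to implement as a rule before any ML. A sketch using a mean baseline with a floor of one event per day, so quiet workspaces don't trigger on their first burst of division-by-near-zero noise:

```python
# Sketch: flag a workspace whose event rate today is `factor`x its baseline.
def is_anomalous(events_today: int, daily_counts_30d: list, factor: float = 5.0) -> bool:
    baseline = max(sum(daily_counts_30d) / max(len(daily_counts_30d), 1), 1.0)
    return events_today >= factor * baseline
```

Flagged workspaces should be excluded from SDR routing until reviewed, which is how anomaly detection and routing automation connect.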
Where Chronic Digital fits:
- Use Sales Pipeline to visualize product-driven deals and apply AI deal predictions.
- Use ICP Builder to codify your PLG ICP, then overlay that fit on top of product intent so SDRs do not waste cycles.
If you are evaluating CRMs that claim “AI,” beware of agent-washing and insist on explainability and workflow execution (internal reference: Best Agentic CRM Platforms in 2026 (And How to Spot Agent-Washing)).
Implementation blueprint: a working example schema (copy/paste level)
Workspace (table or CRM object)
Required
- workspace_id (PK)
- created_at
- workspace_name
- status
Usage rollups
- activation_date
- wau_7d
- key_feature_days_14d
- invites_14d
- integration_installed_count
- usage_limit_hits_30d
Commercial
- paid_status
- trial_end_date
- plan_tier
- mrr
Scoring
- pql_score
- pql_stage
- pql_last_changed_at
- pql_top_reasons (array/text)
Routing
- owner_team (SDR/AE/CSM)
- owner_user_id
- routing_destination
- next_best_action
User
- user_id (PK)
- workspace_id (FK or relationship)
- primary_email
- product_role
- last_active_at
- sessions_14d
- is_workspace_admin
- contact_id (CRM link)
- persona (optional)
Subscription
- subscription_id (PK)
- workspace_id
- status
- plan
- seats_purchased
- mrr
- renewal_date
- trial_end_date
Product event (curated)
- event_id
- timestamp
- workspace_id
- user_id
- event_name
- event_properties
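For teams prototyping outside the CRM, the blueprint above maps naturally to typed records. An illustrative, trimmed Python rendering (your warehouse or CRM object types replace this in practice; only scoring-critical fields are shown):

```python
# Sketch: trimmed in-code version of the Workspace and User objects above.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class User:
    user_id: str
    workspace_id: str
    primary_email: Optional[str] = None
    product_role: str = "member"
    is_workspace_admin: bool = False

@dataclass
class Workspace:
    workspace_id: str
    created_at: datetime
    wau_7d: int = 0
    usage_limit_hits_30d: int = 0
    pql_score: int = 0
    pql_stage: str = "Not PQL"
    users: list = field(default_factory=list)
```

Typed records make the identity rules enforceable: `workspace_id` and `user_id` are required positional fields, while everything resolvable later (email, account mapping) stays optional.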
Competitor context: why “Users + Workspaces” modeling is a differentiator
Modern CRMs are converging on “objects that match reality.” The differentiator is not just having objects, it is:
- whether the objects support product telemetry,
- whether scoring and routing are native,
- whether the AI can reason over the schema.
If you are comparing platforms:
- Chronic Digital vs Attio: Chronic Digital vs Attio
- Chronic Digital vs HubSpot: Chronic Digital vs HubSpot
- Chronic Digital vs Salesforce: Chronic Digital vs Salesforce
- Chronic Digital vs Apollo: Chronic Digital vs Apollo
(Those comparisons matter because PLG teams often stitch together outbound tooling, enrichment, and a CRM, then discover too late that the schema cannot support product-led routing cleanly.)
FAQ
What is the difference between an Account and a Workspace in a PLG CRM schema?
An Account is the commercial entity you sell to (the company). A Workspace is the in-product tenant where usage happens and where PQL signals are generated. In PLG, a single Account can have multiple Workspaces (pilots, regions, sandboxes), and you often score at the Workspace level first, then aggregate to the Account.
What is a PQL, and how is it different from an MQL?
A PQL (product-qualified lead) is a prospect or account that has experienced meaningful product value in a trial or freemium motion and shows behaviors indicating readiness to buy. TechTarget defines a PQL as someone who experienced value from using the product, which makes it more purchase-ready than leads qualified only by marketing engagement. (TechTarget PQL definition)
Should I score PQLs at the user level or workspace level?
Score both, but route on workspace-level scores. Users generate signals, but deals close when a workspace shows team adoption, admin involvement, and intent. A common pattern is: compute user scores, apply role weighting, then roll up into a workspace score with collaboration and monetization signals.
How do I prevent event spam from creating false “Hot PQLs”?
Use four controls:
- Point caps per event per time window
- Distinct-day counting for repetitive behaviors
- Recency decay so old spikes fade
- Anomaly rules that flag bot-like rates and repetitive loops
Then require at least one “high intent” event (upgrade click, limit hit, admin action) before SDR routing.
What are the most common identity resolution mistakes in PLG scoring?
The big three are:
- Missing `workspace_id` on events, so you cannot attribute usage.
- Multiple identifiers for the same person without stitching (anonymous, email, SSO).
- Dedupe collisions when mapping contacts by email alone (aliases, shared inboxes).

If you use Snowplow, follow identity stitching practices to combine identifiers into a single user journey. (Snowplow identity stitching)
When should a PQL route to an SDR vs lifecycle email vs in-app prompts?
A practical rule:
- SDR: High score + strong ICP fit + buyer/admin coverage + recent intent.
- Lifecycle email: Medium score or missing buyer coverage, needs education and nudges.
- In-app: The user is active now and one action unlocks value (integration, invite, key feature). The goal is not “sales everything,” it is “the right channel for the next milestone.”
Build it this week: a 7-day implementation plan your RevOps team can execute
- Day 1: Finalize object definitions and identifiers (workspace_id, user_id).
- Day 2: Choose 10 to 25 key events and define activation milestone.
- Day 3: Implement event payload requirements (workspace_id, user_id, timestamp).
- Day 4: Build rollups (WAU, key feature days, invites, integrations, limits).
- Day 5: Implement the first PQL score (weights + caps + decay + role multipliers).
- Day 6: Set thresholds and routing rules, create SDR and lifecycle playbooks.
- Day 7: QA with real workspaces, verify identity stitching, and launch with weekly recalibration.
If you want the schema to stay useful as your PLG motion evolves, treat scoring and routing as a product: ship v1 fast, measure conversion impact, then iterate based on closed-won reality.