Most lead scoring fails for one reason. It picks “good leads” and does nothing with them. So reps keep calling whoever screams loudest in Slack.
A fit and intent scoring model is the right start. But it still misses the third variable that decides whether pipeline actually moves today: capacity. Your team’s real ability to act now, with speed and quality, on the leads you just ranked.
TL;DR
- Fit = “Should we sell to them?” (ICP match)
- Intent = “Are they in-market right now?” (signals)
- Capacity = “Can we work this lead today, properly?” (bandwidth + SLA)
- The output is not a dashboard. It’s a daily priority queue that triggers sequences, tasks, and booked meetings.
- If scoring doesn’t route action, it’s just dashboard addiction.
Definition: Dual scoring (Fit + Intent) and why it still breaks
Dual scoring means you score leads (or accounts) on two independent axes:
- Fit score (profile match)
- Intent score (behavioral likelihood to buy soon)
Most CRMs and scoring tools basically do this. HubSpot explicitly supports combined scores with separate fit and engagement inputs. That’s table stakes. It’s also where teams stop. (knowledge.hubspot.com)
The failure mode looks like this:
- Marketing builds a scoring spreadsheet.
- RevOps ports it into the CRM.
- A pretty score shows up on records.
- Nothing changes in the rep’s day.
- Reps keep cherry-picking inbound. Or doom-scrolling “hot leads” they never touch.
So the score becomes a vanity metric. Then leadership asks why pipeline is flat. Everyone blames “lead quality.” Cute.
Fit scoring: the definition that actually matters
Fit scoring measures how closely a lead or account matches your ICP.
Fit is mostly stable. It changes slowly. It answers: “Even if they wanted to buy, would this be a good customer?”
Common fit signals (practical, not philosophical)
Use what you can verify:
- Firmographics: industry, employee count, revenue band
- Geography: supported regions, compliance constraints
- Technographics: tools they run, platforms they integrate with
- Role and seniority: buyer, champion, blocker
- Use case alignment: do they have the problem you solve
Fit scoring is why you don’t waste time selling enterprise security workflows to a 12-person agency running Gmail and vibes.
Fit scoring rubric (simple and usable)
Score fit 0-100. Keep it boring.
Fit Score = Industry (0-25) + Size (0-25) + Tech match (0-25) + Role/seniority (0-25)
Example:
- Industry match: exact target vertical = 25, adjacent = 15, not target = 0
- Company size: in ICP band = 25, close = 15, outside = 0
- Tech match: must-have present = 25, unknown = 10, incompatible = 0
- Role: economic buyer = 25, strong influencer = 15, junior = 5
This is the part most teams can do with basic enrichment.
If your data is trash, fix that first. Start with enrichment and standard fields, then score. No data, no score.
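The rubric above is simple enough to express directly. Here is a minimal Python sketch; the category labels ("exact", "in_band", etc.) and the lead dict shape are illustrative assumptions, not a fixed schema.

```python
# Illustrative sketch of the 0-100 fit rubric. Labels and lead shape
# are assumptions; swap in whatever your enrichment provider returns.
FIT_RUBRIC = {
    "industry": {"exact": 25, "adjacent": 15, "other": 0},
    "size": {"in_band": 25, "close": 15, "outside": 0},
    "tech": {"must_have": 25, "unknown": 10, "incompatible": 0},
    "role": {"economic_buyer": 25, "influencer": 15, "junior": 5},
}

def fit_score(lead: dict) -> int:
    """Sum the four fit components; missing or unrecognized fields score 0."""
    total = 0
    for component, options in FIT_RUBRIC.items():
        total += options.get(lead.get(component), 0)
    return total

lead = {"industry": "exact", "size": "in_band", "tech": "unknown", "role": "influencer"}
print(fit_score(lead))  # 25 + 25 + 10 + 15 = 75
```

Note that "unknown" tech scores 10, not 0: missing data should cost points, not disqualify outright.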
Internal: pair fit scoring with automated enrichment so the score fills itself, not your intern’s Monday. Use Lead Enrichment and a real ICP definition workflow like ICP Builder.
Intent scoring: what it is, and what it is not
Intent scoring measures whether someone is showing buying signals now.
Intent is time-sensitive. It decays fast. It answers: “Is this the right timing?”
Forrester’s intent research breaks down intent data types and emphasizes using intent to prioritize and progress opportunities by understanding buying behavior. (forrester.com)
Intent sources (ranked by usefulness)
Not all “intent” is intent. Some is just curiosity.
Tier 1 (high intent, close to revenue)
- Demo request, pricing page visits, product comparisons
- Reply behavior: “send pricing,” “what’s implementation like,” “talk this week”
- Sales engagement signals: email reply, booked call, inbound chat
Tier 2 (mid intent)
- Repeated visits to solution pages
- Webinar attendance with Q&A
- Competitor keyword searches that land on your site
Tier 3 (weak intent)
- One blog post view
- A single LinkedIn like
- “Opened email” (in 2026, still pretending opens are reliable is a choice)
Intent scoring needs decay or it lies
If intent doesn’t decay, your CRM becomes a museum of “hot leads” from 90 days ago.
A basic decay rule:
- High-intent events decay 50% every 7 days
- Low-intent events decay 50% every 3 days
- Any lead with no activity for 30 days drops to near zero intent
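The decay rules above translate to a few lines of Python. The event-tuple shape and the 100-point cap are illustrative assumptions; the half-lives and the 30-day cutoff follow the bullets.

```python
# Sketch of the decay rule: halve high-intent events every 7 days,
# low-intent events every 3 days, zero out after 30 quiet days.
def decayed_points(points: float, days_ago: float, high_intent: bool) -> float:
    """Exponential decay with a 7-day (high) or 3-day (low) half-life."""
    half_life = 7 if high_intent else 3
    return points * 0.5 ** (days_ago / half_life)

def intent_score(events, days_since_last_activity: float) -> float:
    """events: list of (points, days_ago, high_intent) tuples.
    Drops to zero after 30 days of silence; capped at 100."""
    if days_since_last_activity > 30:
        return 0.0
    raw = sum(decayed_points(p, d, hi) for p, d, hi in events)
    return min(raw, 100.0)

# Demo request (+40) a week ago, case study view (+15) three days ago:
print(intent_score([(40, 7, True), (15, 3, False)], 3))  # 20.0 + 7.5 = 27.5
```

Recompute this nightly and the "hot leads from 90 days ago" museum empties itself.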
Gartner frames lead qualification around two categories: profile fit and behavioral fit (which maps closely to intent/engagement). (gartner.com)
The missing variable: Capacity scoring (the part that makes it work)
Capacity scoring measures your team’s ability to act now on a lead with the right speed, channel, and personalization.
Capacity answers: “Even if this lead is perfect and in-market, will we touch it today?”
Most teams ignore this. Then they wonder why speed-to-lead is terrible.
InsideSales’ lead response research shows conversion rates are dramatically higher when teams respond in the first five minutes. (insidesales.com)
This is not motivational poster material. It’s math:
- If your team cannot respond fast, your “hot” intent score is fiction.
- The lead doesn’t wait for your QBR.
Capacity signals (what to score)
Capacity is operational. Not vibes.
Score capacity 0-100 based on:
- SLA coverage: Are reps available in the next X minutes?
- Queue load: How many “must-touch-today” leads already assigned per rep?
- Channel readiness: Do you have deliverability headroom, dialing coverage, LinkedIn capacity?
- Routing readiness: correct owner exists, territory rules work, no duplicates
- Data readiness: email and phone present, persona clear enough to message
Capacity is how you stop routing your best leads into a black hole.
A practical Capacity Score rubric (0-100)
Use four components.
- Coverage (0-30)
- Within SLA window (ex: 0-15 minutes) = 30
- Today but not within SLA = 15
- Not today (weekend, holiday, no coverage) = 0
- Rep Load (0-30)
- Rep has < 15 priority leads in queue = 30
- 15-30 = 15
- > 30 = 0
- Data completeness (0-20)
- Email + phone + role known = 20
- Email only = 10
- Missing email = 0
- Channel health (0-20)
- Sending infrastructure healthy, no throttling issues = 20
- Throttled = 10
- Paused due to reputation problems = 0
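As a sketch, the four-component capacity rubric looks like this in Python. The input labels ("within_sla", "healthy", etc.) are illustrative assumptions; the point values match the rubric above.

```python
# Sketch of the 0-100 capacity rubric: Coverage (0-30) + Rep load (0-30)
# + Data completeness (0-20) + Channel health (0-20). Labels are assumptions.
def capacity_score(coverage: str, rep_load: int,
                   has_email: bool, has_phone: bool, role_known: bool,
                   channel: str) -> int:
    score = {"within_sla": 30, "today": 15, "not_today": 0}[coverage]
    if rep_load < 15:          # priority leads already in the rep's queue
        score += 30
    elif rep_load <= 30:
        score += 15
    if has_email and has_phone and role_known:
        score += 20
    elif has_email:
        score += 10
    score += {"healthy": 20, "throttled": 10, "paused": 0}[channel]
    return score

# Rep is in coverage with a light queue and full contact data, but
# sending is throttled:
print(capacity_score("within_sla", 12, True, True, True, "throttled"))  # 90
```

Unlike fit, every input here changes daily, so this must recalculate daily.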
If you run cold email, capacity is also deliverability. If your domains are cooked, your “sequence” is just spam with extra steps. Run a weekly process. Internal: Deliverability Ops in 2026.
The actual model: Fit + Intent + Capacity (and how to combine it)
Here’s the clean way to define it:
- Fit decides who belongs in your pipeline.
- Intent decides when they should be worked.
- Capacity decides whether your team can act fast enough for it to matter.
The combined score (use multiplication, not just addition)
Most teams do this:
- Fit (0-100) + Intent (0-100) = Combined (0-200)
That inflates junk. High fit with zero intent still bubbles up. Or high intent from a terrible-fit account steals attention.
Use a gating formula:
Priority Score = (Fit × Intent × Capacity) / 10,000
Why:
- Any score near zero in one dimension kills the total.
- That matches reality. If you cannot act, the lead dies. If they don’t fit, don’t chase. If there’s no intent, don’t pretend.
Thresholds (simple routing bands)
Define three bands:
- P1 (Now): Priority Score ≥ 70
- P2 (This week): 40-69
- P3 (Nurture): < 40
Then automate actions. More on that below.
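The multiplicative formula and the three bands fit in a few lines. This is a direct sketch of the formula and thresholds above; assume each input is already on a 0-100 scale.

```python
# Multiplicative gating: a near-zero score in any one dimension
# kills the total, which matches how leads actually die.
def priority_score(fit: float, intent: float, capacity: float) -> float:
    return (fit * intent * capacity) / 10_000

def band(score: float) -> str:
    """P1 = work now, P2 = this week, P3 = nurture."""
    if score >= 70:
        return "P1"
    if score >= 40:
        return "P2"
    return "P3"

# Great fit and intent, but nobody can touch it today:
print(band(priority_score(90, 90, 10)))   # P3
# Strong on all three dimensions:
print(band(priority_score(85, 95, 90)))   # P1
```

The first example is the whole argument: with addition, 90 + 90 + 10 still looks hot; with multiplication, it correctly falls to the bottom.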
Example weights: SMB vs Mid-market (because one size is fake)
If you prefer weighted addition (fine), use weights that match motion.
SMB outbound (speed matters more than perfect fit)
SMB wins with volume plus speed plus “good enough” personalization.
Recommended weights:
- Fit: 35%
- Intent: 40%
- Capacity: 25%
Why:
- SMB buyers move fast.
- Many SMB deals close because you showed up first and didn’t waste their time.
Mid-market outbound (fit matters more, intent still critical)
Mid-market has more stakeholders and longer cycles. Bad fit wastes weeks.
Recommended weights:
- Fit: 45%
- Intent: 35%
- Capacity: 20%
Why:
- Mid-market qualification mistakes are expensive.
- Capacity still matters because speed-to-lead still matters; you just cannot spray garbage.
If you want the math:
Weighted Score = 0.45(Fit) + 0.35(Intent) + 0.20(Capacity)
Then gate it:
- If Fit < 60, cap Weighted Score at 49 (cannot become P1)
- If Capacity < 40, downgrade one band (P1 to P2)
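Combining the mid-market weights with both gates, a sketch (weights and gate thresholds taken from above; everything else is an assumption):

```python
# Weighted addition with gates: the fit gate caps the score below P1
# territory, and the capacity gate downgrades one band.
def weighted_priority(fit: float, intent: float, capacity: float):
    score = 0.45 * fit + 0.35 * intent + 0.20 * capacity
    if fit < 60:
        score = min(score, 49)          # can never reach P1
    band = "P1" if score >= 70 else "P2" if score >= 40 else "P3"
    if capacity < 40 and band == "P1":
        band = "P2"                     # downgrade one band
    return score, band

# High fit and intent, but capacity is low today:
score, band = weighted_priority(90, 90, 30)
print(band)  # P2: the raw score clears 70, but capacity < 40 downgrades it
```

Either formula works; the gates are what keep weighted addition honest.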
A simple scoring rubric you can ship this week
Stop building a 90-signal model that nobody trusts. Ship a v1.
Step 1: Define your ICP in writing (one page)
Include:
- 3 target industries
- 1-2 size bands
- 1 must-have tech signal (or “unknown allowed”)
- 2 target personas
Internal: make this a living asset with ICP Builder.
Step 2: Pick 6 fit rules, 6 intent rules, 4 capacity rules
That’s it. No more.
Fit rules (example)
- Industry match: 0/10/20
- Employee band: 0/10/20
- Geo: 0/10
- Tech match: 0/10/20
- Persona: 0/10
- Existing customer competitor: -10 (yes, negative scoring is real life)
Intent rules (example)
- Demo request: +40
- Pricing page: +25
- Case study view: +15
- 2+ product page visits in 7 days: +20
- Replied to email: +35
- Unsubscribed: -50 (they are not “in-market,” they are “done”)
Capacity rules (example)
- Within SLA coverage window: +30
- Rep load under threshold: +30
- Email + phone present: +20
- No deliverability throttles: +20
Step 3: Add decay
Decay intent weekly. Capacity recalculates daily. Fit recalculates when enrichment updates.
Step 4: Calibrate on closed-won, not opinions
Pull your last 90 days of closed-won and closed-lost.
- What were their Fit/Intent/Capacity at first touch?
- Adjust thresholds.
- Repeat monthly.
If you have no data volume, keep it rules-based until you do. Predictive scoring without enough outcomes is just expensive astrology.
Operational output: the daily priority queue (where scoring becomes pipeline)
Scoring without routing is decoration.
Your model must output a daily priority queue with explicit actions:
- Who gets worked today
- By whom
- On which channel
- With which message angle
- In what order
What the queue looks like (example)
Every morning, each rep gets 25 records:
- P1 Now (top 10): call + email within 15 minutes
- P2 This week (next 10): sequence enrollment + LinkedIn touch
- P3 Nurture (last 5): add to long-cycle nurture, no rep time
Then the system pushes those leads into sequences.
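Assembling that 25-record morning queue is mostly a sort plus per-band caps. A sketch, assuming leads arrive as dicts with a band and a priority score; the cap numbers come from the example above.

```python
# Sketch of the morning queue build: per-band caps (10 P1, 10 P2, 5 P3),
# highest priority first within each band. Overflow is left out and
# re-banded tomorrow. The lead dict shape is an assumption.
def daily_queue(leads, caps=None):
    caps = caps or {"P1": 10, "P2": 10, "P3": 5}
    queue = []
    for b in ("P1", "P2", "P3"):
        in_band = sorted((l for l in leads if l["band"] == b),
                         key=lambda l: l["priority"], reverse=True)
        queue.extend(in_band[:caps[b]])
    return queue

# 12 P1 leads exist, but only the top 10 make today's queue:
leads = [{"band": "P1", "priority": p} for p in range(12)]
print(len(daily_queue(leads)))  # 10
```

The cap is not an implementation detail; it is the capacity model enforcing itself at the queue level.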
Internal: this is the point of a real sales engine. Chronic runs it end-to-end, till the meeting is booked. Start with AI Lead Scoring tied directly into your Sales Pipeline.
Routing rules that stop “hot lead rot”
Add two non-negotiables:
- Auto-route P1 immediately based on territory and persona
- Auto-reassign if untouched inside SLA
This is how you keep “high intent” from dying in someone’s queue while they “circle back” on LinkedIn.
Turn the queue into sequences (fast)
Your queue should trigger:
- A sequence with 4-6 steps
- Personalized first line based on the intent trigger
- A channel switch if no engagement by step 2
Internal: tie channel switching to signals. The Next-Best-Channel Rulebook.
And yes, the copy matters. But copy without prioritization is just more noise. Internal: use an email writer that actually pulls context and writes to the trigger, not generic fluff. AI Email Writer.
Common failure points (so you can avoid them)
1) You score leads but never cap the queue
If every rep has 300 “hot” leads, none are hot. Cap P1 and P2.
Hard caps:
- P1: max 10 per rep per day
- P2: max 10 per rep per day
Overflow becomes P2 tomorrow. That’s literally capacity scoring doing its job.
2) You treat “engagement” as intent
Webinars and ebook downloads can be intent. Or they can be students doing homework.
Make high-intent events scarce. Pricing, demo, reply, competitor compare pages.
3) You ignore response time
Speed-to-lead is not just inbound. It applies to any high-intent spike.
If you wait a day to hit a lead who just signaled active buying, you volunteered to lose.
InsideSales’ research highlights the response-time impact in the first minutes after submission. (insidesales.com)
4) You build the model around your CRM’s limitations
Don’t. Build the operational workflow first, then map fields and automation.
If your CRM fights you, that’s useful information about your CRM.
If you’re comparing stacks:
One line of contrast: Salesforce can cost hundreds per seat and still needs extra tools bolted on. Chronic runs autonomous outbound at $99 with unlimited seats. Different philosophy. Different outcome.
Implementation: build a Fit + Intent + Capacity system in 7 days
Day 1: Define ICP and disqualifiers
- ICP one-pager
- 5 disqualifiers (ex: industry no-go, size too small, region unsupported)
Day 2: Instrument intent events
- Identify your 6 intent events
- Track them consistently (UTMs, page events, replies, form types)
Day 3: Create fit enrichment waterfall
- Firmographics
- Technographics
- Persona classification
Internal: this is literally what Lead Enrichment exists for.
Day 4: Add capacity inputs
- Rep availability windows
- Rep queue limits
- Deliverability status (green/yellow/red)
Internal: if you run outbound email at any scale, read The 2026 Outbound Sending Architecture.
Day 5: Build the score and bands
- Fit 0-100
- Intent 0-100 with decay
- Capacity 0-100 recalculated daily
- Priority formula and thresholds
Day 6: Build the daily queue and automations
- P1 routes to rep + creates tasks + triggers sequence
- P2 enrolls in sequence + schedules tasks
- P3 nurtures, no rep time
Day 7: QA and ship
- Test 20 real leads
- Confirm routing
- Confirm SLA timers
- Confirm sequences fire correctly
Then run it for two weeks. Adjust weights based on meetings booked, not internal debates.
FAQ
What is a fit and intent scoring model?
A fit and intent scoring model ranks leads or accounts using two scores: fit (ICP match) and intent (buying signals). Tools like HubSpot support combined scoring with separate fit and engagement inputs. (knowledge.hubspot.com)
What’s the difference between fit scoring and intent scoring?
Fit scoring measures whether the prospect matches your ICP (industry, size, tech, persona). Intent scoring measures whether they are showing signals that they are actively evaluating or buying (pricing views, demo requests, replies, comparison behavior). Forrester’s intent frameworks focus on using intent signals to prioritize based on buying behavior. (forrester.com)
Why add capacity scoring? Isn’t fit + intent enough?
Because fit + intent doesn’t guarantee action. If reps are overloaded, routing is broken, or your team cannot respond inside your SLA, “hot leads” rot in queues. Response-time research from InsideSales shows conversion is significantly higher when teams respond in the first minutes. (insidesales.com)
How do I choose weights for SMB vs mid-market?
SMB usually needs more emphasis on intent and speed, so weight intent higher. Mid-market punishes bad fit, so weight fit higher. Start with:
- SMB: Fit 35%, Intent 40%, Capacity 25%
- Mid-market: Fit 45%, Intent 35%, Capacity 20%
Then recalibrate monthly using closed-won vs closed-lost outcomes.
What should the scoring system produce in real operations?
A daily priority queue. Not a report. The queue assigns P1/P2/P3 bands, routes ownership, and triggers sequences and tasks automatically. If a score does not trigger an action, it is just a dashboard.
How many signals should my model include?
Start with fewer than you think:
- 6 fit rules
- 6 intent rules (with decay)
- 4 capacity rules
Ship v1, then tune based on meetings booked and conversion, not internal opinions.
Build the queue. Book the meetings.
If your scoring model does not create a daily list that reps actually work, delete it. Seriously.
Ship this instead:
- Score Fit (ICP match).
- Score Intent (time-decayed buying signals).
- Score Capacity (SLA coverage + rep load + data readiness + channel health).
- Combine into a Priority Score.
- Route into a daily priority queue that triggers sequences and meetings.
Pipeline on autopilot is not a slogan. It’s what happens when scoring turns into actions, every day, without begging reps to “follow up.”