Fit + Intent Scoring in 2026: The Practical Taxonomy (and Exactly How to Use It to Book Meetings)

Most teams score the wrong thing. This taxonomy fixes it. Fit answers who. Intent answers now. Run the two-axis loop with weights, decay, and stop rules to book more meetings.

May 14, 2026 · 15 min read

Most teams miss meetings because they score the wrong thing. They chase “hot” accounts with terrible fit, or they obsess over fit and ignore the buyer who is literally shopping today.

Fit + intent scoring fixes that. Not as a dashboard. As an operating system for outbound that decides what to do next, every day.

TL;DR

  • Fit = “Should we sell to them?” (firmographics, technographics, ICP match)
  • Intent = “Should we sell to them now?” (hiring, funding, web activity, review sites, content, email engagement)
  • Use a two-axis model: Fit score (0-100) + Intent score (0-100)
  • Add weights, decay windows, and stop rules so bad signals shut things down fast
  • Run the loop: score -> prioritize -> sequence -> pause -> escalate -> book
  • Chronic executes the loop end-to-end, till the meeting is booked. Not another tab you ignore.

Fit intent scoring in 2026: the only definition that matters

Fit intent scoring is a dual scoring system that predicts two different truths:

  1. Fit score: How closely an account matches your ICP.
  2. Intent score: How likely that account is to engage or buy in a near time window.

If your pipeline depends on outbound, you need both. Otherwise you do what most teams do: send 10,000 emails to prove nobody wants your product.


Fit vs intent scoring (and why single scores fail)

Fit scoring: “Should we sell to them?”

Fit stays mostly stable. It changes when:

  • the company grows or shrinks
  • the stack changes
  • the team structure changes
  • the ICP shifts

Fit comes from:

  • firmographics: industry, size, geo, revenue, growth stage
  • technographics: tools in use, cloud, data warehouse, CRM, MAP
  • structural reality: business model, sales motion, compliance needs

Intent scoring: “Should we sell to them now?”

Intent is perishable. It spikes, then dies.

Intent comes from:

  • hiring
  • funding
  • leadership changes
  • web activity
  • review-site research
  • content consumption
  • email engagement
  • outbound negative signals that scream “stop”

Why single blended scores lie

A single “lead score” hides the reason a lead looks good.

Two classic failures:

  • High intent, low fit: Students, consultants, tiny teams, competitors, job seekers. Lots of clicks. Zero deals.
  • High fit, low intent: Perfect accounts. Bad timing. You burn them with spammy follow-ups and poison the well.

Two-axis scoring avoids both.


The practical taxonomy: fit signals (what to score)

Fit intent scoring taxonomy: firmographics (Fit)

Best for: fast filtering and clean ICP boundaries.

Score these:

  • Industry / vertical
    • Example: you sell to B2B SaaS, not restaurants
  • Company size
    • Use employees as a proxy if revenue is messy
  • Geography
    • Time zones, language, regulatory reality
  • Business model
    • PLG vs sales-led vs services-heavy
  • Sales motion
    • If you sell outbound tooling, a company with zero outbound motion is a “future maybe,” not a “now”

Practical scoring rule

  • Give big points for “must-have” traits.
  • Give zero points for “nice-to-have.”
  • Give negative points for “never.”

Fit intent scoring taxonomy: technographics (Fit)

Technographics are fit signals with teeth because they map to:

  • integrations
  • switching costs
  • pain you can name
  • budget patterns

Score these:

  • CRM (HubSpot, Salesforce, Pipedrive, etc.)
  • Sales engagement (Apollo, Instantly, Outreach)
  • Data tools (Clay, ZoomInfo, Clearbit alternatives, enrichment vendors)
  • Email infrastructure (Google Workspace vs Microsoft 365, sending tools)
  • Analytics / warehouse (if your product depends on it)

How to use technographics without lying to yourself

  • If you integrate with Salesforce, Salesforce presence is Fit+.
  • If you replace Salesforce, Salesforce presence is Intent-ish, but only if you also see switching triggers (new RevOps leader, cost-cutting, CRM migration).

The practical taxonomy: intent signals (what to score)

Intent is where teams get religious and weird. Do not. Treat intent as probabilities with half-lives.

Hiring signals (Intent)

Hiring is a budget leak with a paper trail.

Score:

  • net new roles in the function you sell into (SDRs, RevOps, Demand Gen)
  • a new team pod (first SDR hire, first RevOps hire)
  • “building outbound” language in job posts

When it matters

  • Hiring SDRs while reply rates are dropping means one thing: they will need automation or they will churn humans. Fast.

Decay window

  • 30-60 days. Hiring signals rot quickly.

Funding signals (Intent)

Funding changes risk tolerance and priorities.

Score:

  • Seed to Series A: building the first repeatable pipeline
  • Series B-C: process and tooling consolidation
  • Growth equity: efficiency and cost control

Decay window

  • 45-90 days.

Reality check: Funding is not “buying.” Funding is “can buy.” Still useful.


Leadership changes (Intent)

New leaders make changes because they have to justify their existence.

Score:

  • new CRO / VP Sales / Head of RevOps
  • new Head of Growth
  • new CMO for sales-led orgs

Decay window

  • 60-120 days.

Best practice: Tie leadership change to a reason:

  • “New RevOps leader + messy stack” beats “Congrats on the new job.”

Fit intent scoring taxonomy: web activity (Intent)

First-party web intent is real because it is your property.

Score:

  • visits to pricing page
  • visits to integration pages
  • multiple sessions in a short window
  • visits from target geos during business hours
  • repeated visits to “compare” pages and “case studies”

Caution Web intent without identity is just vibes. You need enrichment and matching, or it is a ghost story.


Fit intent scoring taxonomy: review-site intent (Intent)

Review sites are late-stage behavior. Buyers go there to shortlist. Not to “learn what a CRM is.”

G2’s Buyer Intent signals include actions like viewing product profiles, pricing pages, alternatives, and comparison pages. Those are explicit evaluation behaviors, not inferred “topics.” See G2’s Buyer Intent signal types in their documentation: G2 Buyer Intent documentation.

What to score heavily

  • Compare page views (direct vendor comparison)
  • Pricing page views
  • Alternatives page views in your category
  • Competitive signals (they are researching competitors)

Decay window

  • 7-21 days. Review-site intent is perishable and extremely time-sensitive.

Fit intent scoring taxonomy: content consumption (Intent)

Content intent is tricky because half your “traffic” is:

  • AI scrapers
  • students
  • internal employees
  • competitors
  • agencies doing research

Score:

  • BOFU assets: implementation guides, migration docs, pricing PDFs, ROI calculators
  • webinars with attendance, not just registrations
  • repeat consumption across multiple days

Decay window

  • 14-30 days.

Fit intent scoring taxonomy: email engagement (Intent, but handle with gloves)

Open rates got cooked years ago by privacy features. Treat opens as directional at best. Apollo’s own benchmarking guidance calls out that privacy proxies inflate opens and recommends prioritizing inbox placement, reply rate, and positive reply rate. Apollo outbound benchmarks and deliverability preflight.

Score:

  • human replies (good)
  • positive replies (better)
  • forwarded / introduced signals (best)
  • link clicks and opens as light signals only

Also score deliverability health. Validity highlights that spam complaint rates directly hurt sender reputation, and notes the <0.1% band as best practice while bulk sender requirements reference 0.3% thresholds. Validity 2025 Email Deliverability Benchmark Report (PDF).

Decay window

  • Replies: 30-90 days (relationship memory lasts)
  • Clicks: 7-14 days
  • Opens: 3-7 days, and only if your tracking is clean

Fit intent scoring taxonomy: outbound negative signals (Stop rules)

Negative signals are not “intent.” They are “stop bothering people.”

Score these as hard brakes:

  • hard bounce
  • spam complaint
  • unsubscribe
  • domain block
  • repeated “not interested”
  • “remove me” replies
  • role mismatch (student, recruiter, vendor, competitor)

Validity calls spam complaint rates a key driver of sender reputation degradation. Treat complaints as an existential risk, not a metric. Validity 2025 benchmark report (PDF).

No decay window needed. These are terminal events.


The scoring model: simple, weighted, decayed, operational

You want a model your team can run weekly without a PhD.

Step 1: Build two 0-100 scores

  • Fit Score (0-100): stable traits
  • Intent Score (0-100): time-bound signals

Do not blend them yet.


Fit intent scoring model: example Fit weights (0-100)

Here’s a practical Fit model for a B2B SaaS selling outbound + CRM automation.

Firmographics (60 points)

  • Industry match (0-20)
  • Employee size band match (0-15)
  • Geo match (0-10)
  • Sales motion match (0-15)

Technographics (40 points)

  • Uses a CRM you support (0-10)
  • Uses outbound tool(s) you complement or replace (0-10)
  • Uses data tooling that indicates outbound maturity (0-10)
  • Stack complexity signal (too many tools) (0-10)

Fit stop rules

  • If “never” industry: Fit = 0, stop.
  • If employee count below minimum viable: Fit capped at 30.
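The Fit model above can be sketched as a small scoring function. This is a minimal illustration of the weights and stop rules listed, assuming a hypothetical account dictionary; the field names, industry values, and the 50-employee floor are assumptions, not a fixed schema:

```python
# Illustrative sketch of the example Fit model above.
# Field names, bands, and the "never" list are assumptions.

FIT_NEVER_INDUSTRIES = {"restaurants", "gambling"}  # hypothetical "never" industries
MIN_VIABLE_EMPLOYEES = 50                           # assumed minimum viable size

def fit_score(account: dict) -> int:
    # Fit stop rule: "never" industry means Fit = 0, stop.
    if account.get("industry") in FIT_NEVER_INDUSTRIES:
        return 0
    score = 0
    # Firmographics (60 points)
    if account.get("industry") == "b2b_saas":
        score += 20                                  # Industry match (0-20)
    if 50 <= account.get("employees", 0) <= 500:
        score += 15                                  # Employee size band (0-15)
    if account.get("geo") in {"na", "uk"}:
        score += 10                                  # Geo match (0-10)
    if account.get("sales_motion") == "outbound":
        score += 15                                  # Sales motion match (0-15)
    # Technographics (40 points)
    if account.get("crm") in {"hubspot", "salesforce"}:
        score += 10                                  # Supported CRM (0-10)
    if account.get("outbound_tools"):
        score += 10                                  # Complement/replace target (0-10)
    if account.get("data_tools"):
        score += 10                                  # Outbound maturity (0-10)
    if len(account.get("stack", [])) > 8:
        score += 10                                  # Stack complexity signal (0-10)
    # Fit stop rule: below minimum viable size, cap at 30.
    if account.get("employees", 0) < MIN_VIABLE_EMPLOYEES:
        score = min(score, 30)
    return score
```

The point is that stop rules run before and after the additive scoring, so a “never” account can never sneak into a queue on technographic points alone.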

Fit intent scoring model: example Intent weights (0-100)

High-intent signals (60 points)

  • Review-site compare/pricing activity (0-35)
  • Pricing/integrations page visits (0-15)
  • “reply with interest” (0-10)

Mid-intent signals (30 points)

  • Hiring in function (0-15)
  • Leadership change in function (0-10)
  • Funding event (0-5)

Light signals (10 points)

  • Content consumption (0-5)
  • Email click (0-5)

Intent stop rules

  • spam complaint: Intent = 0, suppress contact and domain
  • unsubscribe: suppress contact
  • hard bounce: suppress address, re-enrich if account is strategic
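The Intent side mirrors the same shape: a weight table plus terminal stop rules that zero everything out. A minimal sketch, assuming signals arrive as a set of string keys (the key names are illustrative):

```python
# Illustrative sketch of the example Intent model above.
# Signal keys are assumptions; weights mirror the bands listed.

INTENT_STOP_SIGNALS = {"spam_complaint", "unsubscribe", "hard_bounce"}

INTENT_WEIGHTS = {
    # High-intent signals (60 points)
    "review_compare_pricing": 35,
    "web_pricing_integrations": 15,
    "reply_with_interest": 10,
    # Mid-intent signals (30 points)
    "hiring_in_function": 15,
    "leadership_change": 10,
    "funding_event": 5,
    # Light signals (10 points)
    "content_consumption": 5,
    "email_click": 5,
}

def intent_score(signals: set[str]) -> int:
    # Intent stop rules: any terminal negative zeroes the score outright.
    if signals & INTENT_STOP_SIGNALS:
        return 0
    return min(100, sum(INTENT_WEIGHTS.get(s, 0) for s in signals))
```

Note the ordering: one spam complaint wipes out a 35-point compare-page signal, which is exactly the behavior you want.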

Add decay windows so old “intent” stops pretending

Intent signals die. Your model should reflect that.

Use simple exponential decay or stepped decay. Stepped decay is easier to run in ops.

Suggested decay windows by signal type

  • Review-site intent: full value 0-7 days, half value 8-21 days, zero after 21
  • Web pricing/integrations visits: full 0-7, half 8-14, zero after 30
  • Hiring: full 0-30, half 31-60, zero after 90
  • Leadership change: full 0-60, half 61-120, zero after 180
  • Funding: full 0-45, half 46-90, zero after 180
  • Email click: full 0-7, half 8-14, zero after 21

This keeps your “hot list” honest.
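Stepped decay is a lookup table plus an age check. A sketch of the windows above, assuming a quarter-value tail for the band between the half window and the zero cutoff (the article leaves that band unspecified, so that tail is one reasonable choice, not a rule):

```python
from datetime import date

# Stepped decay per signal type: (full_until, half_until, zero_after) in days.
# Windows copied from the list above; the quarter-value tail is an assumption.
DECAY = {
    "review_site": (7, 21, 21),
    "web_pricing": (7, 14, 30),
    "hiring":      (30, 60, 90),
    "leadership":  (60, 120, 180),
    "funding":     (45, 90, 180),
    "email_click": (7, 14, 21),
}

def decayed_points(signal_type: str, base_points: float,
                   observed: date, today: date) -> float:
    full, half, zero = DECAY[signal_type]
    age = (today - observed).days
    if age <= full:
        return float(base_points)     # full value
    if age <= half:
        return base_points / 2        # half value
    if age <= zero:
        return base_points / 4        # tail value (assumption, see above)
    return 0.0                        # the signal has rotted
```

Run this daily over every open signal and the “hot list” stops lying by construction.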


Fit intent scoring thresholds: what to do at each band

Make it operational. Your score only matters if it triggers actions.

Recommended bands

Fit

  • 80-100: core ICP
  • 60-79: adjacent ICP
  • 40-59: long tail, test only
  • <40: suppress

Intent

  • 80-100: active buying window
  • 60-79: warming
  • 40-59: watchlist
  • <40: low priority

The decision matrix (simple)

  1. Fit 80+ and Intent 60+: aggressive sequence now
  2. Fit 80+ and Intent <60: light sequence, then pause and monitor
  3. Fit 60-79 and Intent 80+: qualify fast, do not over-automate
  4. Fit <60: only engage if intent is extreme and deal size justifies it
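The matrix above fits in one function. This sketch fills the two combinations the matrix leaves implicit (adjacent fit with modest intent goes to a watchlist; the “extreme intent” threshold for low-fit accounts is pegged at 90) — both choices are assumptions, flagged inline:

```python
def next_action(fit: int, intent: int) -> str:
    """Map Fit and Intent scores to the decision matrix above."""
    if fit >= 80 and intent >= 60:
        return "aggressive_sequence"            # rule 1
    if fit >= 80:
        return "light_sequence_then_monitor"    # rule 2
    if fit >= 60 and intent >= 80:
        return "qualify_fast"                   # rule 3
    if fit >= 60:
        return "watchlist"                      # gap in the matrix; assumption
    if intent >= 90:
        return "manual_review"                  # rule 4; "extreme" = 90 is an assumption
    return "suppress"
```

Whatever thresholds you pick, the key property is that every (fit, intent) pair resolves to exactly one action, so nobody debates the call sheet.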

Stop rules: protect deliverability and your brand

You do not get bonus points for persistence. You get blocks.

Hard stop rules (non-negotiable)

  • Spam complaint: suppress contact + consider domain suppression
  • Unsubscribe: suppress contact permanently
  • Hard bounce: suppress address permanently
  • Role mismatch confirmed: suppress contact, route account for different persona if needed
  • 3 negative replies across same domain in 14 days: pause domain, review targeting

Validity’s report calls out spam complaint rates as a top driver of sender reputation damage. Treat this as pipeline insurance. Validity 2025 benchmark report (PDF).
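The domain-level rule (three negative replies across one domain in 14 days) is a sliding-window count. A minimal sketch, assuming you log the date of each negative reply per domain:

```python
from datetime import date, timedelta

def should_pause_domain(negative_reply_dates: list[date], today: date,
                        limit: int = 3, window_days: int = 14) -> bool:
    """Hard stop rule from above: N negative replies from the same
    domain inside the window pauses the whole domain for review."""
    cutoff = today - timedelta(days=window_days)
    recent = [d for d in negative_reply_dates if d >= cutoff]
    return len(recent) >= limit
```

Check it on every negative reply, not on a nightly batch; the point of a hard brake is that it fires before the fourth email goes out.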


The operational loop: score -> prioritize -> sequence -> pause -> escalate -> book

This is the part most teams never build. They score. They stare. They do nothing. Incredible.

1) Score (daily refresh)

  • refresh Fit weekly or monthly
  • refresh Intent daily
  • apply decay daily
  • apply stop rules in real time

If your scoring does not refresh daily, it is not intent scoring. It is historical trivia.

2) Prioritize (today’s call sheet)

Create three queues:

  • Now queue: Fit 80+ and Intent 60+
  • Watch queue: Fit 80+ and Intent 40-59
  • Trash queue: Fit <60 or stop-rule triggered

This becomes your outbound engine.
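Routing into the three queues is the same band logic with stop rules checked first. One gap in the bands above (high fit, intent under 40) is parked in Watch here; that placement is an assumption:

```python
def route_to_queue(fit: int, intent: int, stop_rule_hit: bool) -> str:
    """Assign an account to today's Now / Watch / Trash queue per the bands above."""
    if stop_rule_hit or fit < 60:
        return "trash"
    if fit >= 80 and intent >= 60:
        return "now"
    # Fit 60-79, and high-fit accounts below Intent 40, aren't routed
    # explicitly above; parking them in Watch is an assumption.
    return "watch"
```

Rebuild the queues from scratch each morning after the decay pass, rather than mutating yesterday's lists, and stale intent falls out automatically.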

3) Sequence (match message to the signal)

Signal-based outbound beats generic blasting. Even benchmark discussions in outbound land keep landing on the same truth: triggered outreach outperforms list blasts because relevance goes up.

Tie the opener to the signal category:

  • review-site intent: “saw you comparing X vs Y” (do not pretend you saw the person, keep it account-level)
  • hiring: “saw you’re hiring SDRs, outbound volume is about to jump”
  • leadership change: “new RevOps leader, stack audit season”
  • web pricing: “pricing question, here’s the 2-line answer”

Then run multi-step sequences.

Need compliance and deliverability discipline? Start with Chronic’s playbook: Cold Email Compliance Ops in 2026: the SOP agencies use and Cold Email Deliverability in 2026: new failure modes.

4) Pause (when intent drops or negatives appear)

Pause rules:

  • intent falls below 40 after decay
  • no engagement after N steps
  • domain shows multiple negative replies
  • deliverability metrics degrade

Pausing is not “giving up.” Pausing protects future inbox placement.

5) Escalate to human (when the score says “do the high-touch thing”)

Escalate when:

  • Fit 90+ and Intent 80+
  • review-site compare signal hits twice in 7 days
  • a real reply asks a real question
  • buyer asks for security, pricing, timeline

Human escalation tasks:

  • 2-minute account research
  • custom teardown
  • short Loom
  • call + voicemail, if the persona matches

6) Book (and feed the loop)

When a meeting books:

  • record what signals preceded it
  • raise weights on signals that correlate with meetings
  • lower weights on signals that correlate with noise

This is how scoring compounds.
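The weekly weight review can be sketched as a lift calculation: for each signal, the meeting rate among accounts that showed it, divided by the baseline meeting rate. This is a toy diagnostic under the assumption that you log which signals preceded each meeting, not a causal analysis:

```python
from collections import Counter

def signal_lift(records: list[dict]) -> dict[str, float]:
    """Lift per signal: meeting rate given the signal / baseline meeting rate.
    Each record is {"signals": set[str], "booked": bool} (assumed log format).
    Lift > 1 suggests raising that signal's weight; lift < 1 suggests lowering it."""
    base = sum(r["booked"] for r in records) / len(records)
    if base == 0:
        return {}  # no meetings yet: nothing to learn from this window
    seen, booked = Counter(), Counter()
    for r in records:
        for s in r["signals"]:
            seen[s] += 1
            booked[s] += r["booked"]
    return {s: (booked[s] / seen[s]) / base for s in seen}
```

With small weekly samples the numbers will be noisy; nudge weights rather than rewriting them, and judge against meetings booked, not clicks.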


Exactly how Chronic runs this loop (without pretending it is “just insights”)

Most tools do one slice:

  • Clay builds data flows, powerful and complex.
  • Instantly sends email, that’s it.
  • Salesforce stores objects and invoices you $300/seat for the privilege.

Chronic runs end-to-end, till the meeting is booked.

Result: scoring triggers action. Not alerts that die in Slack.

If you want the broader stack view, see: Stop Buying 5 Tools: the 2026 outbound stack that produces booked meetings.


Implementation: set this up in 7 steps (no buzzwords, no drama)

1) Write your ICP as rules, not adjectives

Bad ICP: “mid-market tech companies”
Good ICP:

  • 50-500 employees
  • B2B SaaS
  • North America + UK
  • outbound motion exists
  • uses HubSpot or Salesforce or is actively switching

2) Pick 10-15 fit fields and 10-20 intent events

If you pick 80, you will never ship.

3) Define point values and stop rules first

Weights are easy later. Stop rules prevent damage now.

4) Define decay windows per intent type

Start with the windows above. Adjust based on meetings booked.

5) Create the three queues

Now, Watch, Trash. Simplicity wins.

6) Build 3-5 signal-specific sequences

One sequence per major trigger:

  • review-site evaluation
  • hiring
  • leadership change
  • web pricing visit
  • generic ICP fit with light intent

7) Run weekly weight reviews tied to booked meetings

Not clicks. Not opens. Meetings.


Common mistakes that kill fit intent scoring

  1. Blending fit and intent into one number
    • You lose the reason. You lose the action.
  2. Scoring only positive intent
    • Negative signals matter more. They prevent domain death.
  3. No decay
    • You chase last month’s “hot lead” who already bought.
  4. No stop rules
    • You keep sending into bounces and complaints. Deliverability collapses.
  5. No operational loop
    • You built analytics. You did not build pipeline.

FAQ

What is fit intent scoring?

Fit intent scoring is a dual scoring model that ranks prospects on two axes: fit (ICP match) and intent (time-sensitive buying signals). Fit answers “should we sell to them?” Intent answers “should we sell now?”

Which matters more, fit or intent?

Fit matters more for long-term efficiency. Intent matters more for short-term meetings. The highest conversion comes from high fit plus high intent. If you must choose, prioritize fit for deliverability and brand safety, then use intent to time the push.

What intent signals are strongest in 2026?

Late-stage evaluation signals tend to be strongest: review-site comparisons and pricing-page behavior. G2’s Buyer Intent includes signals like profile, pricing, alternatives, and compare page views. G2 Buyer Intent documentation.

Should I use email open rates as an intent signal?

Only lightly. Open rates are distorted by privacy proxies and bot opens. Treat them as directional inside your own program, not a primary indicator. Prioritize replies and positive replies. Apollo on outbound benchmarks and open rate reliability.

What are the non-negotiable stop rules for outbound?

At minimum: spam complaint, unsubscribe, hard bounce. Complaints damage sender reputation and trigger deliverability problems. Validity’s benchmarking emphasizes spam complaint rates as a key driver of reputation and deliverability. Validity 2025 benchmark report (PDF).

How often should we refresh intent scores?

Daily. Intent decays fast. Review-site and pricing signals can be useless after 2-3 weeks. If your model updates weekly, you will show up after the buyer already chose someone else.


Build the loop, then let it run

Stop arguing about scoring theory. Ship a taxonomy. Add weights. Add decay. Add stop rules.

Then run the loop every day:

  1. Score
  2. Prioritize
  3. Sequence
  4. Pause
  5. Escalate
  6. Book

Want the system that executes this instead of reporting on it? Run the loop inside Chronic. Pipeline on autopilot. End-to-end, till the meeting is booked.