Fit vs Intent Scoring: A 7-Day Model That Stops You From Emailing the Wrong Companies

Stop emailing the wrong companies. Fit vs intent scoring splits “should we sell” from “should we sell now.” Build a dual score in 7 days, add a hard send gate, and route every account by score.

April 24, 2026 · 15 min read

Sending outbound to the wrong companies is not a messaging problem. It is a scoring problem.

Fit vs intent scoring fixes it in a week. No data science. No “we’ll get to it next quarter.” Just a simple model that decides, with receipts, who gets an email today and who never should.

TL;DR

  • Fit = “Should we sell to them?” (ICP match)
  • Intent = “Should we sell to them now?” (in-market signals)
  • Build a dual score in 7 days: define fields, pick signals, set weights, set a hard send gate, and route every account by score band.
  • Default rules:
    • High fit + high intent = immediate sequence + fast SLA
    • High fit + low intent = nurture or light-touch
    • Low fit + high intent = disqualify unless strategic
  • Put it in a spreadsheet first. Automate later.
  • Then run it on autopilot with Chronic’s dual scoring and prioritization: AI Lead Scoring

Fit vs intent scoring: definitions you can actually use

What is “fit” scoring?

Fit scoring measures how closely a company matches your ICP.

Common fit fields:

  • Industry
  • Headcount band
  • Geo
  • Tech stack
  • Business model (PLG vs sales-led, enterprise vs SMB)
  • Target personas present (RevOps, VP Sales, IT, Security)

Fit answers: “Should this account ever be in our pipeline?”

What is “intent” scoring?

Intent scoring measures buying activity. Not vibes. Signals.

Common intent signals:

  • Topic research (third-party intent providers, review sites)
  • Hiring (role types that correlate with initiatives)
  • Tech stack change (install/uninstall)
  • Competitor usage or comparison behavior
  • First-party site behavior (if you have it): pricing page, integration docs, demo page

Intent answers: “Should this account get outreach now, and how aggressive?”

6sense frames intent data as signals that indicate in-market research behavior, often from third-party sources like publisher networks and review communities. It is a useful baseline definition if your team needs alignment. (6sense intent data explainer)

Why you need both (and why single-score systems fail)

A single blended score turns two different questions into one muddy number:

  • A perfect-fit account can look “cold” and get ignored.
  • A terrible-fit account with noisy intent can hijack your SDR day.

Dual scoring keeps the logic clean:

  • Fit decides eligibility.
  • Intent decides urgency.

The 7-day model: build it fast, ship it, refine weekly

This is a day-by-day build designed for a RevOps lead, a sales leader, and one person who can edit a spreadsheet without fear.

Day 1 - Lock your ICP fit fields (no more “we sell to everyone”)

You need 5-7 fit fields. Not 30. If you cannot measure it, it does not exist.

Pick fit fields from this list:

  1. Industry (your top 3-5 verticals)
  2. Company size (headcount band, sometimes revenue)
  3. Geo (countries, regions, time zones)
  4. Tech stack (must-have, nice-to-have, no-go)
  5. Business model (B2B SaaS, agency, ecommerce, services)
  6. Trigger role coverage (does the buyer persona exist?)
  7. Compliance constraints (HIPAA, SOC2 requirements, etc.)

Rule: if a field does not change your outreach decision, delete it.

Example ICP (simple, aggressive)

  • Industry: B2B SaaS, MarTech, DevTools
  • Headcount: 20-500
  • Geo: US, Canada, UK, Western Europe
  • Stack: uses HubSpot or Salesforce OR a modern data stack (Segment, Snowflake)
  • Buyer role: VP Sales, Head of Growth, RevOps present

If you want this to run automatically end-to-end, Chronic’s ICP Builder covers the exact “turn ICP into fields” step.


Day 2 - Define intent signals you can collect this month (not “someday”)

Intent signals split into three buckets:

A) Third-party intent (easy to start)

  • Review site activity (G2 category visits, comparisons)
  • Publisher network content consumption (topic surges)

B) First-party intent (highest quality, sometimes scarce)

  • Website visits by account (if you have reverse IP / account identification)
  • High-value page views: pricing, security, integrations, docs
  • Demo/contact forms

C) Public signals (free, underrated)

  • Hiring signals (especially role type and seniority)
  • Funding, expansion, new product lines
  • Tech install/uninstall (via technographic providers)
  • Competitor stack present (or switching away)

Intent signal rules (keep it clean):

  • Only include signals that map to a real buying motion.
  • Define the lookback window (usually 7, 14, or 30 days).
  • Define the unit of measure (binary, count, or intensity band).
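Those three rules can live in a small signal config so nobody argues later about what each signal means. This is a sketch; the signal names, windows, and units below are examples, not a fixed schema:

```python
# Each signal declares its lookback window and unit of measure up front.
SIGNALS = {
    "review_activity":    {"lookback_days": 14, "unit": "intensity"},  # None / Medium / High
    "hiring":             {"lookback_days": 30, "unit": "binary"},
    "tech_change":        {"lookback_days": 30, "unit": "binary"},
    "first_party_visits": {"lookback_days": 7,  "unit": "count"},
}

def is_fresh(signal_name: str, days_ago: int) -> bool:
    """A signal only counts inside its declared lookback window."""
    return days_ago <= SIGNALS[signal_name]["lookback_days"]
```

The point of the config is that the lookback window is a property of the signal, not a judgment call made per account.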

If your team is serious about signal-driven outbound, keep a list of plays per signal. Chronic has a good reference list here: 18 High-Intent Buying Signals for Outbound (And the Exact Play to Run on Each)


Day 3 - Build score bands (before you touch weights)

Most teams start with weights and end up in a religious war.

Start with bands, because bands map to actions.

Use 0-100 for both Fit and Intent.

Recommended bands

  • Fit
    • 80-100: Tier 1 ICP
    • 60-79: Tier 2 ICP
    • 40-59: Edge cases
    • 0-39: Not ICP
  • Intent
    • 80-100: Active evaluation
    • 60-79: In-market research
    • 40-59: Early signals
    • 0-39: No reliable activity

Now define routing decisions by band combination.

The routing matrix (the part that actually stops bad outreach)

Fit \ Intent      | 0-39 Low intent       | 40-59 Early    | 60-79 In-market                     | 80-100 Active
80-100 High fit   | Nurture / light-touch | Light sequence | Full sequence                       | Immediate sequence + fast SLA
60-79 Mid fit     | Nurture               | Light sequence | Full sequence (watch reply quality) | Full sequence (tighter targeting)
40-59 Edge        | No outbound           | Nurture only   | Disqualify unless strategic         | Strategic-only outreach
0-39 Low fit      | No outbound           | No outbound    | No outbound                         | Disqualify (or route to partner)

This is the core: intent does not override fit unless you explicitly approve exceptions.
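The matrix is small enough to encode as data rather than nested ifs. A sketch, with the band labels and action strings taken from the matrix above (the key names are illustrative):

```python
# Routing matrix: (fit_band, intent_band) -> action.
ROUTES = {
    ("high", "low"):      "nurture / light-touch",
    ("high", "early"):    "light sequence",
    ("high", "inmarket"): "full sequence",
    ("high", "active"):   "immediate sequence + fast SLA",
    ("mid",  "low"):      "nurture",
    ("mid",  "early"):    "light sequence",
    ("mid",  "inmarket"): "full sequence (watch reply quality)",
    ("mid",  "active"):   "full sequence (tighter targeting)",
    ("edge", "low"):      "no outbound",
    ("edge", "early"):    "nurture only",
    ("edge", "inmarket"): "disqualify unless strategic",
    ("edge", "active"):   "strategic-only outreach",
    ("low",  "low"):      "no outbound",
    ("low",  "early"):    "no outbound",
    ("low",  "inmarket"): "no outbound",
    ("low",  "active"):   "disqualify (or route to partner)",
}

def fit_band(score: int) -> str:
    return "high" if score >= 80 else "mid" if score >= 60 else "edge" if score >= 40 else "low"

def intent_band(score: int) -> str:
    return "active" if score >= 80 else "inmarket" if score >= 60 else "early" if score >= 40 else "low"

def route(fit: int, intent: int) -> str:
    """Every (fit, intent) pair maps to exactly one action. No exceptions in code."""
    return ROUTES[(fit_band(fit), intent_band(intent))]
```

Because every cell is explicit, there is no path where high intent silently overrides low fit; an exception has to be a deliberate edit to the table.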


Day 4 - Assign weights (simple math, no drama)

You now weight fields inside each score.

Fit scoring weights (example)

Total = 100 points.

  • Industry match (0-25)
  • Headcount band (0-20)
  • Geo (0-10)
  • Tech stack must-have present (0-25)
  • Buyer persona present (0-10)
  • “No-go” tech/compliance conflict (-30 penalty)
  • “Strategic logo” override (+10, capped at 100)

Why penalties matter: “No-go tech” should not “average out.” It should kill the score.
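The example weights above, as a sketch. Field names are hypothetical; the one design decision worth noting is that the no-go penalty is subtracted after the additive fields, so it drags the score below any send gate instead of averaging out (a hard zero would be an even stricter policy choice):

```python
def fit_score(acct: dict) -> int:
    """Example fit score: weighted fields, a kill-level penalty, capped at 0-100."""
    score = 0
    score += 25 if acct.get("industry_match") else 0
    score += 20 if acct.get("headcount_match") else 0
    score += 10 if acct.get("geo_match") else 0
    score += 25 if acct.get("must_have_stack") else 0
    score += 10 if acct.get("buyer_persona_present") else 0
    if acct.get("no_go_flag"):
        score -= 30  # penalty: no-go tech/compliance should not "average out"
    if acct.get("strategic_logo"):
        score += 10  # override, capped at 100 below
    return max(0, min(score, 100))
```

A perfect-fit account scores 90 (100 with the strategic override); flip the no-go flag and the same account drops to 60, below a 70 send gate.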

Intent scoring weights (example)

Total = 100 points.

  • Review/comparison activity (0-30)
  • Hiring for relevant roles (0-15)
  • Tech stack change event (0-20)
  • Competitor usage + switching hint (0-15)
  • First-party site behavior (0-30, if available)

Simple intent intensity rule (fast to implement):

  • 1 signal type in last 30 days = half points
  • 2+ signal types in last 14 days = full points
  • Any signal in last 7 days = +10 “recency boost” (cap at 100)
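The intensity rule above in code. A sketch: `base_points` is whatever the weighted signals sum to, and the count/recency arguments are hypothetical field names you would compute from your signal log:

```python
def intent_score(base_points: int, signal_types_30d: int,
                 signal_types_14d: int, days_since_last: int) -> int:
    """Intensity rule: half points for 1 signal type in 30d,
    full points for 2+ types in 14d, +10 recency boost inside 7 days."""
    if signal_types_14d >= 2:
        score = base_points           # 2+ signal types in last 14 days = full points
    elif signal_types_30d >= 1:
        score = base_points // 2      # 1 signal type in last 30 days = half points
    else:
        score = 0
    if days_since_last <= 7 and score > 0:
        score += 10                   # recency boost
    return min(score, 100)            # cap at 100
```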

If you want an operating system where marketing and sales stop fighting about what matters, map scoring to shared attribution and playbooks. This is the grown-up version: The Marketing-Led BDR Operating System: Shared Scoring, Shared Playbooks, Shared Attribution


Day 5 - Set a hard “send gate” (this is the whole point)

A model without a gate is just a dashboard. Nobody got pipeline from a dashboard.

Send gate rule (recommended)

  • Do not enroll an account in outbound unless:
    • Fit ≥ 70, and
    • Intent ≥ 50
  • Exception path:
    • Strategic accounts can bypass intent, but only with an owner and a reason.

This stops “low fit + high intent” from eating your team alive.
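The gate itself is a few lines. The thresholds come straight from the rule above; per that rule, the strategic exception bypasses intent only, and only with an owner and a reason on record:

```python
def send_gate(fit: int, intent: int, strategic: bool = False,
              owner: str = "", reason: str = "") -> bool:
    """Hard gate: Fit >= 70 AND Intent >= 50, or a documented strategic exception."""
    if fit >= 70 and intent >= 50:
        return True
    if strategic and fit >= 70 and owner and reason:
        return True  # bypasses intent, not fit: owner + reason required
    return False
```

Note that a low-fit account with screaming intent still returns False, which is the entire point of the gate.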

Also, speed matters. If you wait, you lose the moment. Multiple studies and benchmarks make the same point: fast response correlates with better qualification outcomes, and most companies respond far too slowly. Workato, for example, cites the Harvard Business Review lead response analysis that comes up in most of these discussions, which reports major drop-offs as response time increases. (Workato lead response time study)

So yes, gate hard. Then move fast on the ones that pass.


Day 6 - Create routing rules and SLAs by score band

Now translate scores into actions inside your CRM and sequencing tool.

Routing rules (copy/paste policy)

  1. Fit 80-100 + Intent 80-100
    • Action: enroll in “Hot ICP” sequence today
    • SLA: first touch within 1 hour during business time
    • Owner: SDR or AE (depends on ACV)
  2. Fit 80-100 + Intent 50-79
    • Action: enroll in standard sequence
    • SLA: same day
  3. Fit 80-100 + Intent 0-49
    • Action: nurture list or quarterly light-touch
    • No sequencing blast. You are early.
  4. Fit 50-79 + Intent 80-100
    • Action: “Strategic review” queue
    • If approved, run a tighter sequence (more qualification, less volume)
  5. Fit < 50
    • Action: disqualify from outbound
    • Optional: route to partner channel, community, or self-serve content

What “nurture or light-touch” actually means (so it does not become a graveyard)

Pick one:

  • 1 email every 30-45 days with a specific trigger angle
  • A quarterly “relevance check” email
  • Add to retargeting / newsletter only
  • Invite to webinar only if topic matches their likely problem

Don’t pretend it is nurture if nobody owns it.


Day 7 - Ship the spreadsheet schema, test 50 accounts, then automate

Before you automate anything, score a small batch and sanity-check:

  • 25 accounts you closed in the last 12 months
  • 25 accounts you should never have emailed

If the model scores these backwards, fix the fields. Not the copy.
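The Day-7 sanity check can be scripted once the 50 accounts are scored. A sketch, assuming `fit` and `intent` are already computed per account; won accounts should mostly pass the gate and known-bad accounts should mostly fail:

```python
def gate(fit: int, intent: int) -> bool:
    """The default send gate from Day 5."""
    return fit >= 70 and intent >= 50

def backtest(won: list[dict], bad: list[dict]) -> tuple[float, float]:
    """Fraction of each batch that passes the gate.
    High won_pass + low bad_pass = the model points the right way."""
    won_pass = sum(gate(a["fit"], a["intent"]) for a in won) / len(won)
    bad_pass = sum(gate(a["fit"], a["intent"]) for a in bad) / len(bad)
    return won_pass, bad_pass
```

If `won_pass` is low or `bad_pass` is high, the model scores your history backwards: fix the fields, not the copy.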


Simple spreadsheet schema (ready in 20 minutes)

Use one row per account. Keep it brutally simple.

Tab 1: Accounts

Required columns:

Identifiers

  • Account Name
  • Domain
  • CRM Account ID
  • Owner

Fit fields

  • Industry
  • Headcount
  • Geo
  • Tech Stack
  • Buyer Role Present (Y/N)
  • No-go Flag (Y/N)

Intent fields

  • Intent Topics (last 30d) (list)
  • Review Activity (None / Medium / High)
  • Hiring Signal (None / Weak / Strong)
  • Tech Change (None / Install / Uninstall / Migration)
  • Competitor Present (Y/N)
  • First-party Visits (last 14d) (0, 1-2, 3-5, 6+)
  • Recency (days since last signal)

Scores

  • Fit Score (0-100)
  • Intent Score (0-100)
  • Send Gate Pass (Y/N)
  • Route (Hot / Standard / Light / Nurture / Disqualify / Strategic Review)

Tab 2: Scoring Rules

Columns:

  • Field
  • Condition
  • Points
  • Notes

Example rows:

  • Industry = “B2B SaaS” -> +25
  • Headcount 20-500 -> +20
  • No-go flag = Y -> -30
  • Review activity = High -> +30
  • Recency <= 7 days -> +10
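Tab 2 maps one-to-one onto rules-as-data when you automate. A sketch of how those example rows would evaluate (in the real build you would keep separate fit and intent rule lists, one per score):

```python
# Each Tab 2 row as (field, predicate, points).
RULES = [
    ("industry",        lambda v: v == "B2B SaaS",  25),
    ("headcount",       lambda v: 20 <= v <= 500,   20),
    ("no_go_flag",      lambda v: v is True,       -30),
    ("review_activity", lambda v: v == "High",      30),
    ("recency_days",    lambda v: v <= 7,           10),
]

def score_from_rules(account: dict) -> int:
    """Sum the points of every rule whose field is present and condition holds."""
    total = sum(pts for field, pred, pts in RULES
                if field in account and pred(account[field]))
    return max(0, min(total, 100))
```

Because the rules are rows, changing a weight is editing the spreadsheet, not rewriting code.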

Tab 3: Routing

Columns:

  • Fit Band
  • Intent Band
  • Action
  • SLA
  • Owner

That’s it. Three tabs. No heroics.


The rules that keep your model honest (and stop spreadsheet cosplay)

Rule 1: Fit is mostly static. Intent is a moving target.

  • Fit updates monthly or quarterly.
  • Intent updates daily or weekly.

If your process treats them the same, it breaks.

Rule 2: Intent without fit is noise you pay for

Plenty of intent sources capture research behavior. They do not guarantee you can win the deal or even belong in it.

Default stance:

  • Low fit + high intent = disqualify
  • Exception only when:
    • it is a strategic logo, or
    • it expands into a target vertical you can actually serve, now

Rule 3: Score bands matter more than precision

You do not need “73.2.” You need:

  • “Enroll now”
  • “Nurture”
  • “Disqualify”
  • “Manual review”

Rule 4: Every score must map to a play

If a score change does not trigger a different action, your scoring system is theater.


Fit vs intent scoring: example plays by quadrant (steal these)

High fit + high intent: go hard, go fast

Goal: book a meeting before they pick someone else.

Tactics:

  • Shorter sequences (4-6 touches)
  • More direct CTA
  • Higher personalization density, but only where it matters (trigger + why now)
  • SLA discipline

Buyers do a lot of work before talking to sales. Gartner’s research on the B2B buying journey emphasizes rep-free preferences and heavy digital research behavior. Translation: by the time intent shows up, you are late unless you move. (Gartner B2B buying journey)

High fit + low intent: nurture, don’t spam

Goal: be the obvious option when intent spikes.

Tactics:

  • Light-touch “relevance checks”
  • Educational assets tied to their role
  • Periodic “what changed?” prompts (hiring, new product, new region)

Low fit + high intent: disqualify unless strategic

Goal: protect time and deliverability.

What to do instead:

  • Route to partner if appropriate
  • Offer self-serve resource
  • Keep out of outbound sequences

Low fit + low intent: ignore

Yes. Ignore. This is a how-to guide, not a charity.


How Chronic runs this model without adding another tool to your stack

You can build the model in a spreadsheet. You should. It forces clarity.

Then you automate it so your team stops “checking the sheet” and starts booking meetings.

Chronic’s workflow maps cleanly to this exact dual model.

If you’re comparing stacks:

  • HubSpot has great context, but you still need execution. Chronic’s take is blunt here: Chronic vs HubSpot
  • Salesforce is expensive and still needs extra tools bolted on. Here’s the contrast: Chronic vs Salesforce
  • Apollo is strong for data, but you still stitch scoring, routing, and sequencing together. Contrast: Chronic vs Apollo

One-line difference: Chronic runs end-to-end, until the meeting is booked. No tab juggling.


Common implementation mistakes (so you can avoid them this week)

Mistake 1: Treating “industry” as fit when your GTM is really use-case based

Fix:

  • Keep industry, but add one “use-case proxy” field like:
    • hiring for RevOps
    • presence of sales team size
    • installed CRM type

Mistake 2: Overweighting first-party website visits when you barely have traffic

Fix:

  • Cap first-party to 20-30 points until volume justifies more.

Mistake 3: No penalty fields

Fix:

  • Add explicit penalties for “no-go” conditions.

Mistake 4: No send gate

Fix:

  • Put the gate in writing.
  • Enforce it in tooling.

If you also want to measure whether this system actually creates revenue, not just “activity,” fix your outbound measurement stack. This pairs well with dual scoring: Email ROI Is a Board Problem Now: The Outbound Measurement Stack Most Teams Don’t Have


FAQ

What’s the difference between fit scoring and intent scoring?

Fit scoring measures ICP match. It changes slowly. Intent scoring measures buying activity. It changes fast. Fit answers “should we sell to them,” intent answers “should we sell now.”

What’s a good send gate for fit vs intent scoring?

A clean starting gate is Fit ≥ 70 and Intent ≥ 50. Then add an exception path for strategic accounts with an owner and a reason. If you skip the gate, your team emails random companies again, just with nicer spreadsheets.

Should high intent ever override low fit?

Default: no. Low fit + high intent usually means “they are buying something, just not from you.” Only override when it is a strategic logo or a deliberate vertical expansion with a real plan.

What intent signals are strongest if we don’t have website intent data?

Use what you can get reliably:

  • Review site activity and comparisons
  • Hiring for initiative-related roles
  • Tech stack changes
  • Competitor presence and switching hints

Intent providers define and package these signals differently, but the category-level definition is consistent. (6sense on intent data)

How often should we change weights?

Not daily. Review weekly for the first month, then monthly. Watch:

  • Reply rate by quadrant
  • Meeting rate by quadrant
  • Disqual reasons in high-fit bands

If high fit + high intent is not converting, your ICP or offer is wrong. Not the weights.

Can a small team implement this without RevOps or data science?

Yes. That’s the point. Start with the spreadsheet schema above, score 50 accounts, refine once, then automate. If you want it fully autonomous, Chronic runs the dual scoring and routing directly: AI Lead Scoring


Build it in 7 days, then enforce it like a policy

Day 1: lock fit fields.
Day 2: pick intent signals you can actually collect.
Day 3: define score bands and routing.
Day 4: set weights and penalties.
Day 5: set the hard send gate.
Day 6: write routing rules and SLAs.
Day 7: test 50 accounts, fix what’s obviously wrong, ship.

Then stop emailing the wrong companies. Your deliverability, SDR morale, and pipeline will all improve. Annoying how that works.