How to Score Accounts with Intent Data

April 29, 2026 | Jimit Mehta

Scoring accounts with intent data turns a noisy stream of signals into a ranked queue the team can actually work. The right score blends fit and intent into one number, calibrates against pipeline history, and refreshes on a cadence that matches the buying cycle. Built well, it routes attention where it converts. Built badly, it produces a leaderboard nobody trusts.

Disclosure: Abmatic AI is an account-based marketing platform, so we have a financial interest in B2B teams running structured ABM. The framework below is platform-agnostic and works regardless of whether the team's stack centres on Salesforce, HubSpot, a warehouse, 6sense, Demandbase, ZoomInfo, Clearbit, or another vendor.

See how Abmatic AI operationalises this framework: book a demo.

Step 1: Decide what the score is for

An intent score is a tool, not a deliverable. Before designing the formula, write down what decision the score is supposed to drive: routing, prioritisation, ad bidding, content choice, or all four. The score that drives routing is not the same as the score that drives ad bidding, and treating them as one number is the most common reason intent programmes stall.

  • Routing: the score determines which rep an account lands with and how fast.
  • Prioritisation: the score orders the SDR daily queue.
  • Ad bidding: the score sets the LinkedIn or Google customer-match tier.
  • Content choice: the score selects which personalised experience to serve.

The operational reading: this step is where most teams under-resource the work, because it looks like documentation rather than execution. In practice, the discipline of writing the artifact down is what allows the next step to compound. Skip the writing and the next quarter starts the conversation from zero.

Step 2: Pick the fit-score baseline first

Intent without fit is noise. Build a fit score from firmographic and technographic data before layering intent on top. The fit score is the floor: an account that scores zero on fit will not convert no matter how strong the intent looks, so the intent layer can only re-rank within the fit-qualified universe.

  • Score industry, size band, geography, and tech stack against the ICP.
  • Cap the fit score at 100 and document the weights in writing.
  • Exclude accounts below a published fit threshold from the intent scoring entirely.
  • Audit the fit weights quarterly against actual closed-won data.
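
A minimal sketch of the fit-score floor in Python, assuming placeholder ICP definitions, weights, and a threshold of 40; none of these values are recommendations, and the team's own documented weights should go here instead.

```python
# Illustrative fit score: the ICP sets, weights, and threshold are assumptions, not benchmarks.
ICP_INDUSTRIES = {"software", "fintech"}
ICP_SIZE_BANDS = {"51-200", "201-1000"}
ICP_GEOS = {"NA", "EMEA"}
ICP_TECH = {"salesforce", "segment"}
FIT_THRESHOLD = 40  # accounts below this never enter intent scoring

def fit_score(account: dict) -> int:
    """Score industry, size band, geography, and tech stack against the ICP, capped at 100."""
    score = 0
    if account.get("industry") in ICP_INDUSTRIES:
        score += 35
    if account.get("size_band") in ICP_SIZE_BANDS:
        score += 25
    if account.get("geo") in ICP_GEOS:
        score += 20
    # Partial credit per matching technology, capped so the tech stack alone cannot dominate.
    tech_matches = len(ICP_TECH & set(account.get("tech_stack", [])))
    score += min(tech_matches * 10, 20)
    return min(score, 100)

def fit_qualified(accounts: list[dict]) -> list[dict]:
    """Drop accounts below the published fit threshold before any intent scoring."""
    return [a for a in accounts if fit_score(a) >= FIT_THRESHOLD]
```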

Step 3: Choose the intent sources deliberately

Intent data comes in three flavours: third-party (vendors like Bombora aggregate publisher consumption), first-party (the team's own website, content, and product telemetry), and partner-network (community, partner referrals, ecosystem signals). Pick a small set rather than a wide one. Three trusted sources beat ten noisy ones.

  • Third-party: select five to ten bottom-funnel topics, not the full taxonomy.
  • First-party: deanonymise web traffic and weight by page intent (pricing, comparison, demo).
  • Product or partner: capture trial activity, community posts, partner referrals if available.
  • Document the source contracts and refresh cadence for each.
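
One way to keep the source decision and refresh cadence written down is a small registry that lives next to the scoring code. A sketch, assuming hypothetical vendors, topics, and cadences:

```python
# Hypothetical source registry: vendor names, topics, weights, and cadences are placeholders.
INTENT_SOURCES = {
    "third_party": {
        "vendor": "bombora",  # or 6sense, Demandbase, etc.
        "topics": ["abm platform", "intent data", "account scoring",
                   "website personalisation", "b2b advertising"],  # five to ten bottom-funnel topics
        "refresh": "weekly",
    },
    "first_party": {
        "vendor": "website_deanonymisation",
        "pages": {"pricing": 5, "comparison": 4, "demo": 5, "blog": 1},  # weight by page intent
        "refresh": "daily",
    },
    "product": {
        "vendor": "product_telemetry",
        "events": ["trial_started", "invite_sent"],
        "refresh": "daily",
    },
}
```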

Step 4: Define the signal threshold for each source

Each intent source needs an explicit threshold below which the signal does not count. Without thresholds, the score is dominated by background noise and the team stops trusting it inside a quarter. Per Forrester research on intent data programmes, the strongest predictor of programme success is whether thresholds are written down.

  • Third-party: set the threshold at the vendor's published surge boundary, not at zero.
  • First-party: require at least two qualifying page visits in the last 14 days.
  • Demo or pricing visits: weight at three to five times a content visit.
  • Product trial: weight at five to ten times a content visit, depending on cycle length.
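
A sketch of how the thresholds above can be expressed in code; the surge boundary value, lookback window, and weights are illustrative and should come from the vendor contract and the team's own cycle length.

```python
from datetime import date, timedelta

# Illustrative thresholds; tune to the actual vendor and buying cycle.
SURGE_BOUNDARY = 60          # use the vendor's published surge boundary, not zero
MIN_VISITS = 2               # qualifying first-party visits required in the lookback window
LOOKBACK_DAYS = 14
CONTENT_WEIGHT = 1
PRICING_DEMO_WEIGHT = 4      # three to five times a content visit
TRIAL_WEIGHT = 7             # five to ten times a content visit

def third_party_signal(surge_score: int) -> int:
    """Below the surge boundary the signal does not count at all."""
    return surge_score if surge_score >= SURGE_BOUNDARY else 0

def first_party_signal(visits: list[dict], today: date) -> int:
    """Require at least two qualifying visits in the lookback window, then weight by page intent."""
    cutoff = today - timedelta(days=LOOKBACK_DAYS)
    recent = [v for v in visits if v["date"] >= cutoff]
    if len(recent) < MIN_VISITS:
        return 0
    weights = {"pricing": PRICING_DEMO_WEIGHT, "demo": PRICING_DEMO_WEIGHT,
               "comparison": PRICING_DEMO_WEIGHT, "content": CONTENT_WEIGHT}
    return sum(weights.get(v["page_type"], CONTENT_WEIGHT) for v in recent)

def product_signal(trial_active: bool) -> int:
    """Weight an active trial well above a content visit."""
    return TRIAL_WEIGHT if trial_active else 0

# Example: a pricing visit plus a blog visit inside the window scores 4 + 1 = 5.
print(first_party_signal(
    [{"date": date(2026, 4, 20), "page_type": "pricing"},
     {"date": date(2026, 4, 25), "page_type": "content"}],
    today=date(2026, 4, 29),
))
```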

Step 5: Combine fit and intent into one composite score

The composite score is what the team actually uses day to day. The simplest defensible formula is a weighted sum: 50 percent fit, 30 percent third-party intent, 20 percent first-party. Adjust the weights based on what the historical pipeline tells you, not based on what feels right to the loudest stakeholder.

  • Document the formula in a one-page runbook.
  • Calibrate the weights against the last 12 months of closed-won and closed-lost data.
  • Re-publish the formula whenever the weights change.
  • Store the score on the CRM account object so the rest of the stack can read it.
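
A minimal sketch of the weighted sum, assuming fit is already on a 0-100 scale and the intent signals are normalised onto the same scale before weighting; the normalisation caps are assumptions, and the 50/30/20 split is the starting point described above, to be re-weighted against pipeline.

```python
# 50/30/20 is the starting split from the runbook; the normalisation caps are assumptions.
WEIGHTS = {"fit": 0.5, "third_party": 0.3, "first_party": 0.2}

def normalise(value: float, cap: float) -> float:
    """Clamp a raw signal onto a 0-100 scale so the weights are comparable."""
    return min(value / cap, 1.0) * 100

def composite_score(fit: float, third_party_raw: float, first_party_raw: float) -> float:
    """Weighted sum of fit and intent; fit is already 0-100, intent is normalised first."""
    third_party = normalise(third_party_raw, cap=100)   # e.g. a surge score already on 0-100
    first_party = normalise(first_party_raw, cap=20)    # e.g. weighted visits, capped at 20
    return round(
        WEIGHTS["fit"] * fit
        + WEIGHTS["third_party"] * third_party
        + WEIGHTS["first_party"] * first_party,
        1,
    )

# Example: a strong-fit account with moderate surge and a pricing-page run.
print(composite_score(fit=80, third_party_raw=65, first_party_raw=12))  # -> 71.5
```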

Step 6: Calibrate against historical pipeline

The score is only useful if it predicts conversion better than ICP filters alone. Pull the last 12 months of opportunities, calculate the score retroactively, and check that closed-won opportunities cluster in the high-score band. If they do not, the formula is wrong.

  • Pull all opportunities from the last 12 months, won and lost.
  • Compute the composite score for each on the date the opportunity was created.
  • Bucket into deciles and read the conversion rate per decile.
  • If the top three deciles do not contain a disproportionate share of wins, redesign the weights.
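
A sketch of the decile read in pandas, assuming an opportunities table with the score computed as of the created date and a won/lost flag; the column names and the sample numbers are placeholders.

```python
import pandas as pd

# Assumes an opportunities frame with the score computed as of the created date
# and a boolean `won` column; column names and values are placeholders.
opps = pd.DataFrame({
    "account_id": range(1, 11),
    "score_at_creation": [92, 85, 78, 71, 64, 55, 47, 38, 29, 15],
    "won": [True, True, False, True, False, False, False, False, False, False],
})

# Bucket into deciles (10 = highest scores) and read the conversion rate per decile.
opps["decile"] = pd.qcut(opps["score_at_creation"], 10, labels=range(1, 11)).astype(int)
by_decile = opps.groupby("decile")["won"].agg(opportunities="count", win_rate="mean")
print(by_decile.sort_index(ascending=False))

# Sanity check: the top three deciles should hold a disproportionate share of wins.
top_share = opps[opps["decile"] >= 8]["won"].sum() / opps["won"].sum()
print(f"Share of wins in deciles 8-10: {top_share:.0%}")
```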

Step 7: Wire the score into routing and queueing

A score that does not change rep behaviour is decoration. Wire the composite score into CRM routing rules, the SDR queue, the marketing automation lead-grade, and the ad audiences. The single most common failure mode is a beautifully designed score that nobody reads.

  • Route inbound leads from high-score accounts to the named rep within minutes.
  • Pre-load the SDR queue with high-score accounts before low-score accounts.
  • Sync high-score accounts into LinkedIn and Google ad audiences as the priority tier.
  • Surface the score in the CRM record header so reps see it on every account view.
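
A sketch of how the composite score can map to routing tiers; the thresholds, SLAs, and tier names are assumptions to adapt, not a standard.

```python
# Hypothetical tiers: the thresholds, SLAs, and tier names are assumptions.
ROUTING_TIERS = [
    (80, {"tier": "hot",  "route_to": "named_rep", "sla_minutes": 10,   "ad_audience": "priority"}),
    (60, {"tier": "warm", "route_to": "sdr_queue", "sla_minutes": 240,  "ad_audience": "standard"}),
    (0,  {"tier": "cold", "route_to": "nurture",   "sla_minutes": None, "ad_audience": None}),
]

def routing_action(composite: float) -> dict:
    """Map a composite score to a routing tier; the first matching threshold wins."""
    for threshold, action in ROUTING_TIERS:
        if composite >= threshold:
            return action
    return ROUTING_TIERS[-1][1]

print(routing_action(71.5))  # -> warm: SDR queue within four hours, standard ad tier
```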

Step 8: Build a feedback loop from sales

Reps see things the score does not. Build a one-click feedback loop where reps can mark a high-score account as a false positive and a low-score account as a true positive. Roll the feedback into a monthly retraining of the weights. Without this loop the score drifts away from reality inside two quarters.

  • Add a one-click feedback button on the CRM account record.
  • Roll feedback into a monthly review with marketing and RevOps.
  • Re-tune weights quarterly based on the volume of false-positive flags.
  • Publish the change log so reps see the system reacts to their input.

Step 9: Audit the score for bias and drift

Scores drift. Industries change, products change, and the buying committee changes. Run a quarterly audit that compares the score's predicted conversion to the actual conversion and flags drift early. The audit should also check for bias against under-served segments where the team has limited training data.

  • Quarterly: compare predicted vs actual conversion per decile.
  • Quarterly: read the score distribution by segment and flag segments with thin data.
  • Annually: re-baseline the formula against the last 24 months of pipeline.
  • Annually: retire signals that no longer predict and add new signals carefully.
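
A sketch of the quarterly drift check, assuming the predicted conversion per decile was stored at calibration time; the baseline numbers and the tolerance are placeholders to tune against opportunity volume.

```python
# Hypothetical baseline: predicted conversion per decile captured at calibration time.
predicted = {10: 0.32, 9: 0.25, 8: 0.18, 7: 0.12, 6: 0.08,
             5: 0.05, 4: 0.03, 3: 0.02, 2: 0.01, 1: 0.01}
# Actual conversion per decile observed this quarter (placeholder numbers).
actual = {10: 0.21, 9: 0.24, 8: 0.17, 7: 0.13, 6: 0.09,
          5: 0.05, 4: 0.04, 3: 0.02, 2: 0.01, 1: 0.00}

DRIFT_TOLERANCE = 0.05  # absolute gap before a decile is flagged; tune to opportunity volume

def drift_report(predicted: dict, actual: dict, tolerance: float = DRIFT_TOLERANCE) -> list[str]:
    """Flag deciles where actual conversion has drifted beyond the tolerance from the baseline."""
    flags = []
    for decile in sorted(predicted, reverse=True):
        gap = actual.get(decile, 0.0) - predicted[decile]
        if abs(gap) > tolerance:
            flags.append(f"decile {decile}: predicted {predicted[decile]:.0%}, "
                         f"actual {actual.get(decile, 0.0):.0%}")
    return flags

for line in drift_report(predicted, actual):
    print(line)  # -> decile 10: predicted 32%, actual 21%
```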

Step 10: Communicate the score so reps trust it

A score is only useful if reps trust it. Publish a one-page explainer that shows what goes into the score, what threshold means what action, and how the score has performed against actual deals. Update the explainer when the formula changes. Reps who understand the score use it; reps who do not, ignore it.

  • One-page explainer in the rep enablement library.
  • Quarterly five-minute share-out in the sales meeting.
  • Change log posted in the GTM channel for every formula change.
  • Live examples of high-score wins and low-score losses in the explainer.

Want to see this framework running on the Abmatic AI platform? Book a demo.

Common pitfalls when running this framework

Most teams stall on a small set of recurring failure modes rather than on the framework itself. The list below names the patterns we see across B2B revenue teams in the under-500M ARR band, drawn from public customer reports and from Forrester and Gartner research on B2B operating models.

  • Treating the framework as a slide deck rather than an operating model. The artifacts only matter when they change what the team does on Monday morning.
  • Naming an owner without giving the owner the authority to make decisions. Accountability without authority produces meetings, not outcomes.
  • Running the framework without a forcing function date. Without a deadline, the work expands to fill the quarter and the read at the end is unclear.
  • Skipping the documentation step because the team thinks they will remember. They will not, and the next quarter rebuilds from memory rather than from a runbook.
  • Measuring activity rather than outcome. Coverage, engagement, pipeline, and conversion are the four numbers that matter; everything else is decoration.
  • Tooling outpacing the operating model. Buying a platform before the team has agreed on the list, the definitions, and the cadence guarantees the platform underperforms.

Each pitfall has the same fix: write the artifact, name the owner, set the date, and review on a fixed cadence. The framework above is the canonical reference; the pitfalls list is the recurring trap on the way to using it.

Frequently asked questions

What is the difference between fit and intent scoring?

Fit scores who an account is (firmographics, technographics, ICP match). Intent scores what the account is doing (research, web activity, product engagement). Most defensible programmes combine both into one composite score, with fit acting as the floor and intent acting as the re-ranker.

How many intent sources should we use?

Three is a good starting point: one third-party source (Bombora or similar), first-party deanonymised website behaviour, and either product or partner signal if available. More sources do not produce better scores; they produce more noise unless the team has explicit thresholds.

How often should we re-tune the weights?

Quarterly, against the last 12 months of pipeline. Re-tune more often than that and the formula chases noise; re-tune less often and the score drifts away from reality. The audit and the re-tune are the same activity.

Can we score without third-party intent data?

Yes, if the team has strong first-party signal and a high-traffic website with deanonymisation. First-party intent often outperforms third-party in the bottom funnel because it captures the actual behaviour the rep cares about (pricing, comparison, demo). Per Forrester research on first-party data, the bottom-funnel lift is meaningful.

How does the score connect to the target account list?

The list says who the team works; the score says when. The list is firmographic and strategic; the score is behavioural and tactical. A target account with a low score still gets touched; a non-target account with a high score still does not. The score re-ranks within the list.

Where to start

The shortest path from this page to a working operating model is to pick one section above, name a single owner, and ship the deliverable inside two weeks. Frameworks compound; the first artifact is the one that matters.

If a demo of an account-based marketing platform built around this framework is useful, book one with the Abmatic AI team.

