Prioritize Accounts with Mixed Signals (Three-Axis Score) | Abmatic AI

Written by Jimit Mehta | Apr 29, 2026 12:44:33 AM

Prioritising accounts with mixed signals is the daily reality of any ABM programme that has more than one signal source. A tier-2 account with high intent. A tier-1 account with no recent activity. A churned customer that just appeared on the website. The rep has to pick where to spend the next hour. Per Forrester research, the median B2B sales team uses three to five signal sources concurrently in 2026 and lacks a unified prioritisation rule. This is the framework that turns a soup of signals into a defensible top-of-day account list.

Full disclosure: Abmatic AI ships an account-prioritisation engine that fuses fit, intent, and engagement signals into a single score, so we have a financial interest in the topic. The framework here is platform-agnostic. It works whether you build the score in Snowflake, run it inside Salesforce, or use a vendor's native scoring layer.

The 30-second answer

Prioritise accounts with mixed signals using a three-axis score: fit (firmographic, technographic, ICP match), intent (third-party plus first-party signals), and engagement (recent interactions plus opportunity stage). Combine the three axes into a weighted composite score with weights tuned to your ICP and stage of growth, refresh daily, and present reps with a top-50 daily action list, not a 5000-row spreadsheet. Per public customer reports, three-axis prioritisation lifts meeting-booking rates by 30 to 80 percent over single-axis scoring.

To see a three-axis account prioritisation engine running live, with daily refresh and rep-side dashboards, book a demo.

Why single-axis scoring fails

Most teams start with a single-axis score: either fit-only (the ICP match) or intent-only (the surge data) or engagement-only (recent activity). Each axis breaks at the edges:

  • Fit-only ranks accounts by long-term suitability but ignores whether they are buying right now. The top of the list is full of perfect-fit accounts that have not engaged for two years.
  • Intent-only ranks accounts by topic interest but ignores fit. The top of the list is full of high-intent accounts your product cannot serve, plus your competitors researching you.
  • Engagement-only ranks accounts by recent activity but reproduces the inbound queue. The top of the list is whoever just filled out a form, regardless of fit or intent depth.

Only the combination of all three axes produces an actionable list. Mixed signals require a mixed-signal score.

The three-axis prioritisation framework

| Axis | What it measures | Inputs | Refresh cadence |
| --- | --- | --- | --- |
| Fit | Firmographic and technographic ICP match | CRM enrichment, ICP rules, technographic data | Monthly |
| Intent | Third-party plus first-party in-market signals | Bombora, 6sense, G2 intent, site visits, content engagement | Daily |
| Engagement | Recent interactions, deal stage, prior touches | CRM activity log, marketing automation, opportunity stage | Real-time or near-real-time |

Axis 1: Fit

Fit is the slow-moving axis. It captures the firmographic and technographic features that predict whether an account can buy your product at all. The build:

  • Pull the closed-won data, find the firmographic and technographic patterns. See how to build an ICP.
  • Score each account on a 0 to 100 scale: 80 and above is tier-1 fit, 50 to 79 is tier-2, below 50 is tier-3 or out-of-ICP.
  • Refresh monthly. Firmographic data does not change daily.

The fit score is the gating layer. If fit is below 50, the account does not enter the priority queue regardless of intent or engagement signal strength.
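As a minimal sketch, the tier thresholds and the gate can be expressed in a few lines; the field names and sample accounts below are illustrative, not a prescribed schema:

```python
def fit_tier(fit_score: int) -> str:
    """Map a 0-100 fit score to a tier label (80+ tier-1, 50-79 tier-2)."""
    if fit_score >= 80:
        return "tier-1"
    if fit_score >= 50:
        return "tier-2"
    return "tier-3"

def gate_on_fit(accounts: list[dict], threshold: int = 50) -> list[dict]:
    """Drop out-of-ICP accounts; tag survivors with their tier."""
    return [{**a, "tier": fit_tier(a["fit"])}
            for a in accounts if a["fit"] >= threshold]

# Illustrative data: the third account never enters the priority queue.
queue = gate_on_fit([
    {"name": "Acme", "fit": 85},
    {"name": "Globex", "fit": 62},
    {"name": "Initech", "fit": 31},
])
```

Whether this runs as Python, a Snowflake view, or a Salesforce formula, the point is the same: the gate fires before intent or engagement is ever consulted.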

Axis 2: Intent

Intent captures whether the account is in-market right now. Two sub-axes:

  • Third-party intent. Surge signals from Bombora, 6sense, G2, or Demandbase indicate research activity beyond your owned properties. Per Bombora's published methodology, surge accounts convert at materially higher rates than non-surge accounts.
  • First-party intent. Signals from your owned properties: pricing-page visits, demo-page visits, comparison content reads, returning sessions. See first-party intent data.

Score each sub-axis on 0 to 100, weight the two equally to start, and refresh daily. The fused intent score supplies what the engagement axis lacks: a forward-looking buying probability rather than a backward-looking activity log.
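A sketch of the fusion step, assuming both sub-scores are already normalised to 0-100 (the equal starting weights are the only prescription from the text; everything else is illustrative):

```python
def intent_score(third_party: float, first_party: float,
                 w_third: float = 0.5, w_first: float = 0.5) -> float:
    """Fuse third-party and first-party intent (each 0-100) into one score."""
    return w_third * third_party + w_first * first_party

# A surge-style third-party signal (90) with a quiet website (20) fuses to 55.
fused = intent_score(90, 20)
```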

Axis 3: Engagement

Engagement captures the depth of the existing relationship. Inputs:

  • Recent meetings or calls (last 30 days).
  • Email engagement (replies, not opens).
  • Content engagement (downloads, video completion, gated asset views).
  • Open opportunity stage if one exists.
  • Customer or prior customer status.

Score on 0 to 100. Refresh in real-time where possible, daily at minimum. Engagement is the recency lens; without it, the score over-prioritises cold accounts.
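One way to turn that input list into a 0-100 number is a capped weighted sum. The signal weights below are illustrative assumptions, not recommended values; tune them against your own closed-won data:

```python
# Illustrative signal weights -- assumptions, not a prescribed scheme.
ENGAGEMENT_WEIGHTS = {
    "meeting_last_30d": 40,    # recent meeting or call
    "email_reply": 25,         # replies, not opens
    "content_engagement": 15,  # downloads, video completions, gated views
    "open_opportunity": 30,    # an open opportunity stage exists
    "prior_customer": 10,      # customer or prior-customer status
}

def engagement_score(signals: dict[str, bool]) -> int:
    """Sum the weights of the signals that fired, capped at 100."""
    raw = sum(w for name, w in ENGAGEMENT_WEIGHTS.items() if signals.get(name))
    return min(raw, 100)
```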

The composite score

Three weighted axes produce one number per account. The defensible starting weights:

  • Fit: 40 percent.
  • Intent: 35 percent.
  • Engagement: 25 percent.

Composite score = (fit × 0.4) + (intent × 0.35) + (engagement × 0.25). Range 0 to 100. Reps see the top 50 to 100 accounts daily, sorted by composite score.
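The formula and the rep-facing sort fit in a few lines. The sketch below assumes each account record already carries its three axis scores:

```python
def composite_score(fit: float, intent: float, engagement: float,
                    weights: tuple = (0.40, 0.35, 0.25)) -> float:
    """Weighted composite on a 0-100 range, using the starting weights."""
    w_fit, w_intent, w_eng = weights
    return fit * w_fit + intent * w_intent + engagement * w_eng

def daily_list(accounts: list[dict], top_n: int = 50) -> list[dict]:
    """Sort by composite score, descending, and return the rep-facing slice."""
    return sorted(
        accounts,
        key=lambda a: composite_score(a["fit"], a["intent"], a["engagement"]),
        reverse=True,
    )[:top_n]
```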

Tune the weights based on stage of growth:

  • Series A / early growth: bias to engagement (35 percent) since the funnel is small and inbound matters.
  • Series B / scale-up: bias to intent (40 percent) since outbound to in-market accounts produces highest leverage.
  • Series C plus / enterprise: bias to fit (45 percent) since named-account programmes are the priority.

Re-tune the weights quarterly based on which axis correlated with closed-won.
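One way to keep the stage presets honest is a version-controlled table with a sum-to-one check. The text fixes only the biased axis per stage; the splits for the remaining axes below are assumptions:

```python
# Stage-based weight presets. Only the biased axis per stage comes from
# the guidance above; the other two splits are illustrative assumptions.
STAGE_WEIGHTS = {
    "default":   {"fit": 0.40, "intent": 0.35, "engagement": 0.25},
    "series_a":  {"fit": 0.35, "intent": 0.30, "engagement": 0.35},
    "series_b":  {"fit": 0.35, "intent": 0.40, "engagement": 0.25},
    "series_c+": {"fit": 0.45, "intent": 0.30, "engagement": 0.25},
}

def validate(weights: dict) -> None:
    """Guard against typos when weights are re-tuned each quarter."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"

for preset in STAGE_WEIGHTS.values():
    validate(preset)
```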

Edge cases the score has to handle

The composite score covers most cases but breaks at the edges. The seven edge cases that require explicit rules:

  • Open opportunity: the account moves to a separate priority bucket regardless of score; the AE owns it.
  • Customer account: route to customer success or expansion, not new-business sales, regardless of intent surge.
  • Recently disqualified: apply a 90-day cooldown unless a new high-intent signal fires.
  • Competitor account: drop, do not score.
  • Tier-3 with surge intent: upgrade to tier-2 for 30 days, re-evaluate after.
  • Tier-1 with no engagement: programmatic touch first, manual outreach only after engagement signal fires.
  • Anonymous traffic resolved to ICP: add to scoring with a flag noting de-anonymization source. See website de-anonymization.

Each rule should be written down, version-controlled, and reviewed quarterly.
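One way to make the rules both explicit and version-controllable is an ordered rule table where the first match wins. The account field names below (is_competitor, has_open_opp, and so on) are illustrative assumptions:

```python
# Ordered edge-case rules: first matching predicate wins.
EDGE_RULES = [
    (lambda a: a.get("is_competitor"), "drop"),
    (lambda a: a.get("has_open_opp"), "route_to_ae"),
    (lambda a: a.get("is_customer"), "route_to_cs"),
    (lambda a: a.get("days_since_disqualified", 999) < 90
               and not a.get("new_high_intent"), "cooldown_90d"),
]

def apply_edge_rules(account: dict) -> str:
    """Return the first matching action, or 'score' for the normal path."""
    for predicate, action in EDGE_RULES:
        if predicate(account):
            return action
    return "score"
```

Keeping the predicates as plain code in a repository gives the write-it-down, version-control-it, review-it-quarterly loop for free.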

The framework: gate, score, present, review

  1. Gate on fit score below 50. Out-of-ICP accounts do not enter the queue.
  2. Score each in-ICP account on the three axes, refresh per the cadence rules.
  3. Compose the weighted composite, sort daily.
  4. Apply edge-case rules to handle opportunities, customers, competitors, etc.
  5. Present reps with a top-50 daily list with the score, the inputs, and a recommended first action.
  6. Review weekly: which scored accounts converted, which did not, what does that say about weights or filters.
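The six steps compress into a short end-to-end sketch. Sample data and field names are illustrative, and steps 2, 4, and 6 (per-axis refresh, edge-case routing, weekly review) are elided for brevity:

```python
def prioritise(accounts: list[dict], top_n: int = 50) -> list[dict]:
    in_icp = [a for a in accounts if a["fit"] >= 50]   # step 1: gate on fit
    for a in in_icp:                                   # step 3: compose
        a["score"] = a["fit"] * 0.40 + a["intent"] * 0.35 + a["engagement"] * 0.25
    ranked = sorted(in_icp, key=lambda a: a["score"], reverse=True)
    return ranked[:top_n]                              # step 5: present top-N

daily = prioritise([
    {"name": "Acme",    "fit": 85, "intent": 70, "engagement": 40},
    {"name": "Globex",  "fit": 60, "intent": 95, "engagement": 10},
    {"name": "Initech", "fit": 30, "intent": 99, "engagement": 99},  # gated out
])
```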

How to present this to reps

Even a well-tuned score is wasted if the rep-facing view is wrong. Three principles for the rep view:

  • Top-50 list, not 5000. The rep sees the top 50 accounts for the day. Anything below the cut is one click away but not in front of them.
  • Why-this-account explainer. Each entry shows the three axis scores and the dominant signal (high intent on topic X, engaged with pricing page yesterday).
  • Recommended next action. Each entry includes the recommended first touch, drawn from a template library.

For more on how to package the rep-side context, see closing the loop from intent to rep action.

Common traps

Trap 1: Composite score with no gate

Without the fit gate, intent or engagement can push a non-ICP account to the top of the list. Reps waste cycles disqualifying. Always gate on fit first.

Trap 2: Static weights forever

The right weights change as the company changes. Re-tune quarterly based on which axis correlated with closed-won; static weights drift out of accuracy within 18 months.

Trap 3: Refresh cadence mismatch

Daily fit refresh is wasteful (firmographics do not move daily). Monthly intent refresh is too slow (intent decays inside 7 to 14 days). Match the cadence to the signal.

Trap 4: No edge-case rules

Without explicit rules for customers, competitors, and open opportunities, the score will surface awkward accounts (a customer needing renewal showing up in a new-business queue). Write the rules down.

Trap 5: Showing reps the spreadsheet

A 5000-row sheet is the wrong artifact. The dashboard is the artifact. The rep needs the top-50 with explanation and recommended action, not the raw scoring table.

How this connects to the rest of the stack

Prioritisation sits downstream of identity resolution and upstream of routing. Identity tells you who the account is; prioritisation tells you which to act on; routing tells you which rep gets the signal with what context. The model reuses the same fit score, intent feed, and engagement log used elsewhere; do not build separate scoring engines per workflow.

Related frameworks: how to set up account scoring, lead scoring, account fit score, how to route leads from intent signals.

FAQ

What weights should I start with?

Forty percent fit, 35 percent intent, 25 percent engagement is the defensible default for under-100M-ARR B2B teams. Re-tune after one quarter based on which axis correlated best with closed-won.

Can the score replace the SDR's judgement?

No. The score is a sorting layer; the SDR still picks among the top-50 based on context the score does not capture (a personal connection, a recent news event, a triggering competitor announcement). Treat the score as triage, not autopilot.

What if I only have one signal source?

Build the score with the data you have, and prioritise adding the missing axis. A fit-only score is better than no score; a fit-plus-engagement score is better still. Add intent when you have a third-party feed.

How often should the score be re-tuned?

Quarterly is the right cadence for weight adjustments. Annually for the underlying ICP definition. Daily for intent inputs. Monthly for fit inputs. Real-time for engagement inputs.

How does this interact with predictive scoring vendors?

Predictive vendors (6sense, Demandbase) ship their own composite scores. You can use the vendor score as one input to your composite, weighted alongside your own. The risk of using only the vendor score is you cannot explain or defend it, since the model is opaque. Composite scores with explainable inputs are easier to defend at QBR.

How does this connect to ABM influence?

The score is the prioritisation layer; the influence model is the impact layer. Score predicts where reps should spend time; influence reports whether the spend produced pipeline. See how to prove pipeline influence from ABM.

Prioritising accounts with mixed signals is what turns an intent-data programme from a dashboard exercise into a pipeline driver. The teams that build the three-axis composite, tune weights quarterly, and present reps with top-50 lists outperform the teams that hand reps spreadsheets and hope. Build the score; trust the model; tune it on outcome data.

To see a three-axis account prioritisation engine running live, book a demo.