Build Your Account Scoring Model: A Practical Guide

Jimit Mehta · May 12, 2026

Effective account scoring combines three dimensions: fit scoring (firmographics: company revenue, industry, size), engagement scoring (behavioral: website visits, email opens, demo requests), and intent scoring (third-party signals: job postings, funding news, tool adoption). Weighted together, these predict which 50 accounts out of 10,000 will convert.

Three Types of Account Scoring

Before building, understand the three dimensions.

Fit Scoring (Firmographics)

Does this account match your ICP? Pure fit scoring is static. You run it once or update quarterly. It answers: "Is this a company we should be selling to?"

Fit criteria:

  • Company revenue (target: $10M-$500M)
  • Industry vertical (target: SaaS, fintech, healthcare)
  • Company location (target: US, UK, Canada, Australia)
  • Headcount (target: 50-1000 people)
  • Company stage (target: Series B and later)

Engagement Scoring (Behavioral)

Is this account showing buying signals right now? Engagement scoring is dynamic and updates weekly. It answers: "Is this account actively researching solutions like ours?"

Engagement signals:

  • Website visits (especially pricing, features, and case studies pages)
  • Content downloads (e-books, comparisons, evaluation guides)
  • Email engagement (opens, clicks)
  • Event attendance (webinars, conferences)
  • Demo requests
  • Sales conversations

Intent Scoring (Third-Party Data)

Is this account showing intent to buy in the market? Intent data comes from third-party providers (Bombora, 6sense, ZoomInfo). It answers: "Based on broader market signals, is this account in buying mode?"

Intent signals:

  • Keywords being researched (ABM, account scoring, marketing automation)
  • Job postings (hiring for marketing leadership or GTM roles)
  • News and funding (Series A funding, new product launch, CEO change)
  • Website technology changes (installing new marketing tools, analytics platforms)

Most teams use Fit + Engagement for scoring. Intent is optional but powerful if you have budget.

---

Building Your Fit Scoring Model

Start with firmographics. This is deterministic and easy.

Step 1: Define Your ICP Criteria

List out the company characteristics of your ideal customer. Use your best customers as reference:

  • What revenue range are they in? (example: $50M-$300M)
  • What industries do they operate in? (example: SaaS, fintech, enterprise software)
  • What geographies? (example: US, Western Europe, Australia)
  • What company stage? (example: Series B and later)
  • What company size? (example: 100-2000 employees)

Be specific. "Mid-market" is vague. "Mid-market through enterprise SaaS companies with $20M-$300M revenue in North America and Europe" is specific.

Step 2: Weight Your Criteria

Not all criteria are equally predictive of conversion. Most teams weight like this:

  • Industry fit: 30-40% (most important)
  • Company size/revenue: 20-30%
  • Geography: 10-15%
  • Company stage: 10-15%
  • Other factors (team size, hiring velocity, etc.): 10-20%

Example scoring model:

  • Financial services company: +30 points
  • $50M-$300M revenue: +25 points
  • 100+ employees: +15 points
  • US or UK based: +10 points
  • Series B or later: +10 points
  • Recently hired a new VP of Product/Engineering: +10 points

Total possible: 100 points. Threshold for "fits ICP": 70 points. This should capture roughly 20-30% of your total addressable market.
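The example model above can be sketched as a simple rule table. This is illustrative only: the field names, the "Series B or later" stage set, and the rule conditions are assumptions, not a prescribed schema.

```python
# Hypothetical fit-scoring sketch using the example point values above.
# Field names and rule conditions are assumptions for illustration.
FIT_RULES = [
    (lambda a: a["industry"] == "financial services", 30),
    (lambda a: 50e6 <= a["revenue"] <= 300e6, 25),
    (lambda a: a["employees"] >= 100, 15),
    (lambda a: a["country"] in {"US", "UK"}, 10),
    # "Series B or later" approximated with an explicit set here
    (lambda a: a["stage"] in {"Series B", "Series C", "Series D"}, 10),
    (lambda a: a.get("recent_vp_hire", False), 10),
]

def fit_score(account: dict) -> int:
    """Sum points for every criterion the account satisfies (max 100)."""
    return sum(points for test, points in FIT_RULES if test(account))

def is_icp_fit(account: dict) -> bool:
    return fit_score(account) >= 70  # threshold from the model above
```

An account matching every rule scores 100; one matching only industry and revenue scores 55 and falls below the 70-point ICP threshold.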

Step 3: Implement in Your System

Use a spreadsheet or a scoring tool to calculate fit scores. If you have HubSpot or Salesforce, these tools have built-in scoring capabilities. If you're manually scoring, use a spreadsheet with formulas.

For each account in your database:

  1. Look up company revenue from Crunchbase, ZoomInfo, or LinkedIn
  2. Identify industry from company website
  3. Check geography from HQ location
  4. Check company stage (startup, Series A, B, C, etc.)
  5. Sum up points
  6. Flag accounts scoring 70+ points as "ICP-fit"

This takes 5-10 minutes per account. If you have 5,000 accounts, that's 400-800 hours. Use a tool that automates this or hire a contractor. Data quality is critical.

Adding Engagement Scoring

Engagement scoring layers on top of fit scoring. An account can be a great fit but show no engagement signals. That's fine; they might buy later. But an account showing high engagement is immediately more valuable.

Step 1: Define Engagement Events

What signals tell you this account is actively researching? List them:

  • Website page visits (especially high-intent pages like pricing, comparisons, case studies)
  • Content downloads (evaluation guides, feature comparisons)
  • Email engagement (opens, clicks, replies)
  • Demo requests
  • Sales calls booked
  • LinkedIn profile views
  • Ad clicks

Step 2: Weight by Signal Type

Not all signals are equally indicative of buying intent. A demo request is more valuable than an email open. Weight like this:

  • Demo request: +50 points
  • Sales conversation (call, meeting): +40 points
  • Pricing page visit: +30 points
  • Content download (evaluation guide or comparison): +25 points
  • Email click (not just open): +15 points
  • Website visit (general): +10 points
  • Email open: +5 points

Step 3: Decay Over Time

An engagement signal from 3 months ago is less valuable than one from last week. Apply decay:

  • Signal from this week: 100% weight
  • Signal from 2 weeks ago: 80% weight
  • Signal from 1 month ago: 60% weight
  • Signal from 3 months ago: 20% weight
  • Signal from 6+ months ago: 0% weight (drop it)

This prevents old signals from inflating scores.
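The signal weights and decay schedule above can be combined into one function. A minimal sketch, assuming signals arrive as (type, age-in-days) pairs; the exact day cutoffs between the schedule's buckets (e.g. treating 3-6 months as 20%) are assumptions.

```python
# Engagement scoring sketch: signal weights from Step 2, decay from Step 3.
# Bucket cutoffs in days are assumptions for illustration.
SIGNAL_POINTS = {
    "demo_request": 50,
    "sales_conversation": 40,
    "pricing_page_visit": 30,
    "content_download": 25,
    "email_click": 15,
    "website_visit": 10,
    "email_open": 5,
}

def decay_weight(age_days: int) -> float:
    """Map a signal's age to the decay schedule above."""
    if age_days <= 7:
        return 1.0   # this week
    if age_days <= 14:
        return 0.8   # 2 weeks ago
    if age_days <= 30:
        return 0.6   # 1 month ago
    if age_days <= 180:
        return 0.2   # 3 months ago (3-6 month handling is an assumption)
    return 0.0       # 6+ months: drop it

def engagement_score(events: list[tuple[str, int]]) -> float:
    """events: (signal_type, age_in_days) pairs for one account."""
    return sum(SIGNAL_POINTS.get(sig, 0) * decay_weight(age)
               for sig, age in events)
```

A fresh demo request plus a 100-day-old email open scores 50 × 1.0 + 5 × 0.2 = 51, so the stale signal barely moves the total.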

Step 4: Set Engagement Scoring Threshold

Most teams use:

  • 40+ points in past 30 days: "Actively engaged"
  • 20-39 points in past 30 days: "Moderately engaged"
  • 1-19 points in past 30 days: "Minimally engaged"
  • 0 points in past 30 days: "Not engaged"

Only accounts scoring "Actively engaged" or "Moderately engaged" should be prioritized for active campaigns. "Not engaged" accounts get nurture sequences, not direct outreach.
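The threshold bands above reduce to a small lookup. A sketch, assuming the score is already restricted to the past 30 days:

```python
# Map a 30-day engagement score to the tiers above.
def engagement_tier(score_30d: float) -> str:
    if score_30d >= 40:
        return "Actively engaged"
    if score_30d >= 20:
        return "Moderately engaged"
    if score_30d >= 1:
        return "Minimally engaged"
    return "Not engaged"
```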

Step 5: Update Automatically

Set up tracking so signals feed automatically into your scoring model. If you use HubSpot:

  1. Install tracking pixels on key pages (pricing, features, case studies)
  2. Set up form tracking for content downloads
  3. Create workflows that flag high-engagement accounts
  4. Build a report that shows accounts by engagement score

Update scores weekly or daily. Engagement signals decay quickly; stale scores are useless.

Skip the manual work

Abmatic AI runs targets, sequences, ads, meetings, and attribution autonomously. One platform replaces 9 tools.

See the demo →

Optional: Intent Scoring

If you have the budget and your sales cycle is long (4+ months), consider adding intent data.

Intent providers (Bombora, 6sense, ZoomInfo Intent) sell data on companies researching specific topics. You query their database: "Show me companies researching ABM solutions in the US, $50M+ revenue." They return a list with intent scores.

Intent scoring adds another dimension:

  • High fit + High engagement + High intent: Ideal. Immediate outreach.
  • High fit + No engagement + High intent: Very good. Intent might tip them toward engagement.
  • High fit + High engagement + No intent: Still good. They're researching; intent data might lag reality.
  • High fit + No engagement + No intent: Nurture for later.

Most teams find intent scoring adds 10-15% incremental accuracy to fit + engagement models. Whether that's worth the cost (usually $500-2000/month) depends on your sales cycle and deal value.

---

Building Your Final Scoring Model

Combine fit, engagement, and (optionally) intent into one overall score.

Example model:

  • Fit score (0-100): 40% weight
  • Engagement score (0-100): 40% weight
  • Intent score (0-100): 20% weight

Overall score = (Fit score x 0.4) + (Engagement score x 0.4) + (Intent score x 0.2)

Final score tiers:

  • 80-100: "Red hot." Immediate outreach. Sales calls within 48 hours.
  • 60-79: "Warm." Active campaign (email, ads, content). Sales calls within 2 weeks.
  • 40-59: "Interested." Nurture sequence. Sales calls if they show interest.
  • 20-39: "Suspect." List building. No active outreach.
  • 0-19: "Not a fit." Don't target.
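The 40/40/20 weighting and tier bands above fit in a few lines. A sketch; tier labels follow this article's bands, and all inputs are assumed to be on a 0-100 scale:

```python
# Combined scoring sketch: 40% fit, 40% engagement, 20% intent.
def overall_score(fit: float, engagement: float, intent: float = 0.0) -> float:
    """Weighted blend of the three component scores (each 0-100)."""
    return fit * 0.4 + engagement * 0.4 + intent * 0.2

def tier(score: float) -> str:
    """Map an overall score to the final tier bands above."""
    if score >= 80:
        return "Red hot"
    if score >= 60:
        return "Warm"
    if score >= 40:
        return "Interested"
    if score >= 20:
        return "Suspect"
    return "Not a fit"
```

For example, fit 90, engagement 80, intent 70 blends to 36 + 32 + 14 = 82, landing in the "Red hot" tier.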

Validation & Iteration

Don't trust your model on day one. Validate it against your historical customer data.

Step 1: Score Your Existing Customers

Go back through your customer list (last 2 years of deals). Score them retroactively using your model. How many of your best customers scored 80+? How many scored below 40?

If your model scores your actual customers poorly, it's broken. Adjust criteria and weights.
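This back-test is easy to automate once historical customers are scored. A minimal sketch, assuming you already have a list of scores for known-good customers (however your model produced them):

```python
# Validation sketch: what fraction of known customers land in each band?
def validate(customer_scores: list[float]) -> dict[str, float]:
    """Return the share of past customers scoring 80+ and below 40."""
    n = len(customer_scores)
    return {
        "pct_80_plus": sum(s >= 80 for s in customer_scores) / n,
        "pct_below_40": sum(s < 40 for s in customer_scores) / n,
    }
```

If `pct_below_40` is high, your model is flagging real buyers as poor fits, which is exactly the signal to reweight.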

Step 2: Look for Gaps

If 70% of your customers are in one industry but your fit criteria weights three industries equally, you're not calibrating correctly. Reweight.

Step 3: Test on New Accounts

Over the next 90 days, track accounts by score tier. Which tier converted best? Which generated the most pipeline? Adjust thresholds accordingly.

Example finding: You thought 60-79 was "warm" and would convert. But in practice, only accounts scoring 70+ converted at acceptable rates. Adjust your "warm" threshold to 70.

Step 4: Measure Bias

Make sure your fit criteria aren't accidentally biasing toward one geography, industry, or company stage. Diversity in your TAL often outperforms concentration.

Common Scoring Mistakes

Mistake 1: Too many criteria

47-factor scoring models are noise. Stick to 5-7 fit criteria, 5-7 engagement signals. More complexity doesn't improve accuracy; it just makes it harder to explain.

Mistake 2: Static weighting

One industry might be 10x more valuable than another. If you weight them equally, you're leaving money on the table. Validate weights against actual results and adjust quarterly.

Mistake 3: Ignoring engagement decay

An account that visited your site 6 months ago and hasn't been seen since is not a buying signal. Apply decay. Update scores weekly.

Mistake 4: No feedback loop

Build a report showing "Accounts that scored 80+, what happened?" Some closed. Some went silent. Learn from the pattern. Adjust your model quarterly.

Mistake 5: Garbage in, garbage out

If your company data is wrong (revenue, industry, location wrong), your scores are useless. Validate your data sources. Use tools like ZoomInfo, Apollo, or Clearbit to enrich your data before scoring.

---

Implementation Checklist

  1. Define your ICP (5-7 core criteria)
  2. Assign weights to fit criteria
  3. Calculate fit scores for all accounts
  4. Flag accounts scoring 70+ as ICP-fit
  5. Define engagement events (7-10 signals)
  6. Assign weights to engagement signals
  7. Set up tracking to capture signals automatically
  8. Calculate engagement scores weekly
  9. Set engagement score thresholds
  10. Combine fit + engagement into overall score
  11. Validate model against existing customers
  12. Adjust weights based on validation
  13. Test model on new accounts over 90 days
  14. Review and iterate quarterly

Account scoring is not "set it and forget it." It's a feedback loop that improves over time. Start simple, measure results, iterate based on what converts. In 3-6 months, you'll have a scoring model that predicts buying behavior with 70-80% accuracy. That's good enough to prioritize accounts and run programs that convert.

Run ABM end-to-end on one platform.

Targets, sequences, ads, meeting routing, attribution. Abmatic AI runs all of it under one login. Skip the 9-tool stack.

Book a 30-min demo →
