Account Engagement Score Glossary: 20 Terms for 2026

April 29, 2026 | Jimit Mehta

30-second answer: An account engagement score summarises how actively a target account is interacting across channels, expressed as a rank-orderable number that drives prioritisation and routing. The vocabulary covers inputs, weighting, decay, thresholds, and the most common failure modes. This glossary defines 20 engagement-scoring terms.

To see engagement scoring driving real routing decisions inside Abmatic AI, book a demo.

Input class terms

Site Engagement

Visits to owned web properties, including pages-per-visit, time on site, and high-intent page hits (pricing, demo, comparison).

Content Engagement

Downloads, video views, calculator completions, webinar registration and attendance.

Email Engagement

Opens, clicks, replies, forwards. Reply weight is usually highest among email signals.

Ad Engagement

Clicks, video completions, and post-impression actions on owned and paid surfaces. See account-based advertising glossary.

Social Engagement

Likes, comments, shares, and direct messages on owned social posts at the account level.

In-Product Engagement

For PLG motions, feature usage, workspace creation, and invite events at the account level.

Weighting terms

Action Weight

The numeric weight assigned to each action type (a demo request carries far more weight than a blog visit). Action weights are the highest-leverage tuning lever in a scoring model.

Persona Weight

Multipliers based on the role of the engaging contact (an email open from a CMO counts for more than one from an intern). See buying committee.

Recency Weight

Multipliers reducing contribution as time passes since the action.

Frequency Weight

Boosters for repeated actions across distinct sessions, capturing sustained engagement patterns.
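
In most implementations these weighting terms compose multiplicatively: an action's contribution is its action weight, times the persona multiplier, times a recency decay. A minimal Python sketch, with purely illustrative weights and function names (assumptions for illustration, not defaults from any tool):

```python
# Illustrative weights -- real programs calibrate these against
# historical conversion data.
ACTION_WEIGHTS = {"demo_request": 100, "content_download": 20, "blog_visit": 2}
PERSONA_MULTIPLIERS = {"economic_buyer": 1.4, "influencer": 1.0, "user": 0.7}
HALF_LIFE_DAYS = 30  # recency weight: contribution halves every 30 days

def action_contribution(action: str, persona: str, days_ago: float) -> float:
    """Weighted contribution of a single action to the composite score."""
    recency = 0.5 ** (days_ago / HALF_LIFE_DAYS)
    return ACTION_WEIGHTS[action] * PERSONA_MULTIPLIERS[persona] * recency

# A demo request by an economic buyer 30 days ago: 100 * 1.4 * 0.5 = 70 points.
print(action_contribution("demo_request", "economic_buyer", 30))
```

A frequency booster would layer on top, for example a small multiplier per distinct session beyond the first.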

Aggregation terms

Composite Engagement Score

The aggregate score combining all input classes after weighting.

Per-Persona Subscore

Engagement score broken out by persona class, surfacing whether buyers, influencers, or champions are engaging.

Channel Subscore

Engagement score broken out by channel, surfacing channel mix at the account level.

Trend Score

The change in engagement score over a defined window, useful for surge detection.
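
A trend score can be as simple as the score delta over a trailing window. A hypothetical sketch (the function name and daily-snapshot data shape are assumptions):

```python
def trend_score(daily_scores: list, window_days: int = 7) -> float:
    """Change in composite engagement score over the trailing window."""
    return daily_scores[-1] - daily_scores[-1 - window_days]

# Eight daily composite-score snapshots for one account; a +32 move
# over seven days would flag a surge.
history = [40, 42, 41, 45, 50, 58, 66, 72]
print(trend_score(history, window_days=7))  # 32
```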

Threshold terms

MQA Threshold

The composite score above which an account is marketing-qualified. See marketing qualified account and how to set up account scoring.

Re-Engagement Threshold

The score at which a previously dormant account becomes engaged again.

Cool-Down Threshold

The score below which an account exits active outreach and returns to nurture.
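
The three thresholds above behave like a small routing state machine. A sketch with made-up threshold values (real values are tuned per program):

```python
MQA_THRESHOLD = 80            # account becomes marketing-qualified
RE_ENGAGEMENT_THRESHOLD = 40  # dormant account becomes engaged again
COOL_DOWN_THRESHOLD = 20      # active account returns to nurture

def route(score: float, currently_active: bool) -> str:
    """Map a composite engagement score to a routing state."""
    if score >= MQA_THRESHOLD:
        return "marketing_qualified"
    if currently_active and score < COOL_DOWN_THRESHOLD:
        return "nurture"
    if not currently_active and score >= RE_ENGAGEMENT_THRESHOLD:
        return "re_engaged"
    return "active" if currently_active else "dormant"

print(route(85, currently_active=False))  # marketing_qualified
```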

Operations and anti-pattern terms

Score Decay

Reduction in score contribution over time without further activity.

Score Reset

Clearing accumulated score on major lifecycle events (deal closed, opportunity lost, contact churn).

Score Inflation

The drift of scores upward across the population over time, usually because decay is too gentle or weights are too generous.

Vanity-Action Trap

Treating low-intent actions (career page views, blog visits) with significant weight, polluting the score and misrouting capacity. See account fit scoring glossary.

Examples and scenarios

Worked example: a SaaS vendor weights actions as follows in the composite engagement score: demo request 100, sales-meeting accept 80, pricing-page-with-multi-page-context 60, calculator completion 50, webinar attendance 40, comparison-page visit 30, content download 20, ad click 10, blog visit 2. Persona multipliers apply across all action types: economic buyer 1.4x, decision maker 1.3x, influencer 1.0x, user 0.7x. Recency uses a 30-day half-life. Subscores break out by channel (web, paid, email, in-product) for diagnostics.
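
The worked example translates directly into code. A sketch in Python using the weights above (the event tuples and contact-to-persona mapping are simplified assumptions):

```python
from datetime import date

# Action weights and persona multipliers from the worked example.
ACTION_WEIGHTS = {
    "demo_request": 100, "sales_meeting_accept": 80,
    "pricing_page_multi_page": 60, "calculator_completion": 50,
    "webinar_attendance": 40, "comparison_page_visit": 30,
    "content_download": 20, "ad_click": 10, "blog_visit": 2,
}
PERSONA_MULTIPLIERS = {
    "economic_buyer": 1.4, "decision_maker": 1.3,
    "influencer": 1.0, "user": 0.7,
}
HALF_LIFE_DAYS = 30  # 30-day recency half-life

def composite_score(events, today):
    """Sum of weighted, persona-adjusted, recency-decayed contributions.

    events: iterable of (action, persona, event_date) tuples.
    """
    total = 0.0
    for action, persona, event_date in events:
        days_ago = (today - event_date).days
        decay = 0.5 ** (days_ago / HALF_LIFE_DAYS)
        total += ACTION_WEIGHTS[action] * PERSONA_MULTIPLIERS[persona] * decay
    return total

events = [
    ("demo_request", "economic_buyer", date(2026, 4, 1)),  # 30 days old
    ("blog_visit", "user", date(2026, 4, 28)),             # 3 days old
]
print(round(composite_score(events, date(2026, 5, 1)), 1))  # 71.3
```

Channel and persona subscores would be computed the same way, filtering the event list before summing.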

Counter-example: the same vendor weights all actions at 10 with no persona multipliers and no recency decay. Composite scores correlate weakly with conversion, sales loses trust, and routing reverts to opinion within a quarter, with the scoring system effectively abandoned.

Operating tip: the cheapest analytics improvement is a one-day weight-calibration session against historical conversion data. The lift is usually larger than adding new signal sources, and the calibration habit is the foundation for ongoing model quality.

PLG operating tip: in product-led motions, in-product action weights typically dominate the composite. Workspace creation, invite events, and milestone feature usage often receive multipliers above marketing-site engagement. The clean PLG composite blends in-product activity (weighted heaviest), authenticated marketing-site engagement, and email or ad engagement at the contact level.

Multi-product operating tip: vendors selling across product lines usually run separate engagement scores per product line, with cross-product aggregation reporting at the account level. One-score-fits-all hides product-specific dynamics and misroutes capacity across motions.

Common metrics and benchmarks

Programs running engagement scoring well track score-to-conversion correlation as the master quality metric.

Correlation falls when weights drift, recency decays incorrectly, or new action types fire without weighting.

Other tracked metrics include score distribution by tier, MQA threshold conversion rate, and time from MQA threshold cross to opportunity creation.

Together, these four metrics catch most quality issues before they erode sales trust.

Subscore reporting (per-channel, per-persona) is the diagnostic layer that explains aggregate movement.

When the composite score moves, subscore reports tell the story: was it driven by paid-media engagement, by champion activation, or by an in-product event class?

Programs without subscore reporting cannot diagnose composite-score moves. The ABM metrics glossary captures the broader metric vocabulary.

Related concepts and adjacent disciplines

Engagement scoring sits alongside fit scoring and intent scoring in the composite scoring stack.

The clean architecture keeps the three separate at the model layer and combines at routing time, allowing each model to be tuned independently. How to set up account scoring covers the build pattern. How to route leads from intent signals covers the activation pattern.

PLG motions place special weight on in-product engagement.

The conversion correlation of workspace creation, invite events, and milestone feature usage typically exceeds marketing-site engagement, and engagement scores in PLG programs lean heavily on in-product signals.

Multi-product vendors usually run separate engagement scores per product line and aggregate at the account level, since cross-product engagement dynamics differ.

Implementation patterns and anti-patterns

Engagement scoring programs that compound do four things. They calibrate action weights against historical conversion rather than running vendor defaults. They separate engagement from fit so each can be tuned independently. They report subscores by channel and persona so diagnostics survive aggregate reporting. And they decay scores aggressively so dormant accounts do not retain inflated values. Common anti-patterns are flat action weights (which bury the high-intent actions), combining engagement and fit into one number (which hides why accounts move), and shipping scoring without decay (which guarantees acting on stale state). Avoiding these three anti-patterns produces engagement scoring that consistently drives sharper routing.

To see engagement scoring driving real routing decisions inside Abmatic AI, book a demo.

Frequently asked questions

How is engagement score different from fit score?

Engagement score measures observed activity; fit score measures structural ICP match. The clean design separates them and combines at routing time. See account fit scoring glossary and account fit score.

Should engagement be one number or multiple?

Both. Use a composite for prioritisation and per-channel and per-persona subscores for diagnostics. Reporting only one number hides why scores moved.

What is the right decay half-life?

Most categories use 14- to 60-day half-lives for engagement score components. Pricing-page and demo-request events warrant longer half-lives because intent strength is high.
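
Half-life decay is a one-liner, which makes the trade-off easy to inspect. An illustrative sketch (function name is an assumption):

```python
def recency_weight(days_since_action: float, half_life_days: float) -> float:
    """Exponential decay: an action's contribution halves every half-life."""
    return 0.5 ** (days_since_action / half_life_days)

# A 28-day-old action retains 25% of its weight under a 14-day half-life,
# but roughly 72% under a 60-day half-life.
print(recency_weight(28, 14))  # 0.25
print(recency_weight(28, 60))
```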

How often should weights be retuned?

Quarterly is the modal cadence. Major product, pricing, or ICP changes justify off-cadence retuning.

What is the most common engagement-scoring failure?

Treating all actions equally, which buries the high-leverage demo and pricing engagement under blog visits. The fix is calibrated action weights tied to historical conversion correlation. See how to set up account scoring.

Closing

Engagement scoring captures the temporal dimension of account interest. Used alongside fit scoring and intent merge, it is one of the highest-leverage components of a modern revenue stack. Use this glossary alongside the ABM metrics glossary when designing scoring rules.

Ready to put this glossary into practice? Book a demo of Abmatic AI.

