A merged intent signal is a single account-level score that combines what an account does on properties you own with what an account does across the open web. The merge matters because each signal source on its own is partial: first-party intent is precise but narrow, and third-party intent is broad but noisy. The merged view is what turns a long target account list into a daily prioritization decision.
What the merge needs to produce: a single score per account, a written rule for how each source contributes, and a decay window that keeps stale signals from polluting the present. Anything richer is decoration; anything simpler stops being useful inside a week.
Want the merge schema the Abmatic AI team uses with revenue teams? Book a demo and we will share it.
Per Forrester research on intent data adoption, B2B teams using only one source of intent see lift on a fraction of the target list, not on the whole list. The reason is structural. First-party signals only fire on accounts that visit your properties; third-party signals only fire when an account crosses a category threshold on a partner network. Each source covers a different slice. The merged score covers both slices and makes the team prioritization decision defensible.
The merge also fixes the credibility problem. According to Gartner research on B2B sales technology adoption, sellers stop trusting intent data when scores move without an observable reason. A merged score with named contributors makes every move readable: a rep can hover over the score, see the inputs, and decide whether to act. Trust is the multiplier, and the merge is what produces trust.
The score below is the structure we recommend for a first-pass merge. Keep the inputs small and observable.
| Input | Source | Weight | Decay |
|---|---|---|---|
| Visit on a high-intent page | First-party analytics | High | Seven days |
| Multi-role engagement | First-party reverse IP | High | Fourteen days |
| Topic surge from a curated category | Third-party intent provider | Medium | Twenty-one days |
| Competitor research signal | Third-party intent provider | Medium | Twenty-one days |
| Funded round or hiring spike | Public firmographic provider | Low | Forty-five days |
Five inputs is the right number for a first-pass merge. Adding a sixth before you trust the first five is a common reason teams abandon scoring within a quarter. The minimal version reuses the team's first-party intent work and the predictive intent reference.
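The table above can be turned into a small scoring function. The sketch below is illustrative, not a production model: the numeric weights (3.0/2.0/1.0 standing in for High/Medium/Low) and the input names are assumptions, and each decay window is treated as a half-life, so a signal at the end of its window counts half of a fresh one. Returning the contributors alongside the score keeps every move readable, which is the trust requirement named earlier.

```python
from datetime import date

# Hypothetical numeric weights standing in for High/Medium/Low,
# with the table's decay windows treated as half-lives in days.
INPUTS = {
    "high_intent_page_visit":  (3.0, 7),    # first-party analytics
    "multi_role_engagement":   (3.0, 14),   # first-party reverse IP
    "topic_surge":             (2.0, 21),   # third-party intent provider
    "competitor_research":     (2.0, 21),   # third-party intent provider
    "funding_or_hiring_spike": (1.0, 45),   # public firmographic provider
}

def merged_score(signals, today):
    """signals: list of (input_name, date_observed) for one account.
    Each signal contributes weight * 0.5 ** (age / half_life), so a
    seven-day-old high-intent visit counts half of a fresh one.
    Returns (score, contributors) so a rep can see why the score moved."""
    contributors = []
    for name, observed in signals:
        weight, half_life = INPUTS[name]
        age_days = (today - observed).days
        contributors.append((name, weight * 0.5 ** (age_days / half_life)))
    contributors.sort(key=lambda c: -c[1])
    return round(sum(value for _, value in contributors), 2), contributors
```

An account with a week-old high-intent visit and a three-week-old topic surge would score 1.5 + 1.0 = 2.5, with the first-party visit listed as the top contributor.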
Weights are the team's opinion about which signal best predicts pipeline. Per Bombora research on B2B intent data calibration, first-party signals predict short-term action and third-party signals predict medium-term interest. The merge needs to respect that asymmetry.
Weights are written down in the playbook and reviewed quarterly against closed-won data. Per Forrester research on revenue analytics, teams that review weights against outcomes converge on a stable set of inputs faster than teams that adjust weights monthly on instinct.
Decay is the half-life of a signal inside the score. Without decay, the score becomes a cumulative count and stops reflecting present-tense buying intent. With decay, the score reflects the last few weeks of behavior and produces an actionable list every morning.
The decay schedule is written into the score model and reviewed once a quarter. Mid-quarter changes to decay produce score whiplash that destroys rep trust faster than any other intervention.
Conflicts are common. An account shows a third-party surge with no first-party engagement; another account shows heavy first-party engagement with no third-party surge. The merge has to express each case differently.
The conflict rules are written as policy, not as an algorithm. Sellers and demand owners read the rules and act on them. According to McKinsey research on B2B sales productivity, written policies that fit on a single page outperform black-box scoring in adoption and in attributable pipeline.
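A one-page policy like this can be mirrored almost line for line in code, which makes the rules testable without turning them into a black box. The four combinations and the action strings below are hypothetical placeholders, not a recommended policy:

```python
def conflict_rule(first_party_active: bool, third_party_surge: bool) -> str:
    """Map the four first-party / third-party combinations to a written
    action. Action wording is an illustrative placeholder."""
    if first_party_active and third_party_surge:
        return "priority: rep outreach today, both sources agree"
    if first_party_active:
        return "act on first-party: the account is on your property now"
    if third_party_surge:
        return "warm the account: run air cover, do not call yet"
    return "hold: keep on the monitored list"
```

Because the function reads like the policy page, a seller can audit it by reading it, which is the adoption property the McKinsey finding points at.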
The merged score is only useful when it lands inside the rep workflow. The playbook below names the surfaces, the cadences, and the owners.
The cadence is the merge in action. The score lives in the rep workflow, not in a marketing dashboard. Without that, the merge is decoration.
Source selection is the leverage decision. The playbook should pick one third-party provider, one first-party telemetry path, and one firmographic reference. Per IDC research on B2B data spend, teams that consolidate to three sources spend less and get more usable signal than teams that buy from five.
The selection question is covered in detail in the intent data platforms guide and the source selection framework. Both walk through the trade-offs by tier.
Validation is the discipline that prevents the score from drifting. The team picks a fixed validation window, runs the score against closed-won and closed-lost outcomes, and adjusts weights only when the data justifies it.
The validation cadence keeps the merge honest. Teams that skip the validation step end up with a score that ranks accounts plausibly but does not predict pipeline.
The merge is also the source of truth for the marketing-to-sales handoff. The playbook reuses the team handoff scoring approach and the rep-action framework. The handoff happens when the merged score crosses a threshold and at least two committee roles are engaged.
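That two-part handoff condition can be sketched as a single function. The threshold value of 5.0 and the role names are assumptions for illustration; the real values belong in the written playbook:

```python
# Assumed values for illustration; the playbook owns the real ones.
HANDOFF_THRESHOLD = 5.0
COMMITTEE_ROLES = {"economic buyer", "champion", "technical evaluator", "end user"}

def ready_for_handoff(merged_score: float, engaged_roles: set) -> bool:
    """Handoff fires only when the merged score crosses the written
    threshold AND at least two distinct committee roles are engaged."""
    committee_engaged = COMMITTEE_ROLES.intersection(engaged_roles)
    return merged_score >= HANDOFF_THRESHOLD and len(committee_engaged) >= 2
```

Requiring both conditions keeps a single anonymous surge, or one enthusiastic individual, from triggering a premature handoff.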
Most teams stall on a small set of recurring failure modes rather than on the framework itself. The list below names the patterns Forrester and Gartner research call out, plus the patterns we see most often in mid-market B2B revenue teams.
Each pitfall has the same fix: write the artifact, name the owner, set the date, and review on a fixed cadence.
Five inputs across first-party, third-party, and firmographic sources. Adding a sixth before validating the first five usually breaks the model.
First-party visits over seven days, multi-role engagement over fourteen days, third-party surges over twenty-one days, firmographic context over forty-five days. Decay schedules are written into the model.
First-party engagement carries the highest weight because it is observable on your property. Third-party surges enter at medium weight with a faster decay. The asymmetry is per Bombora calibration research and matches typical B2B buying patterns.
On the CRM account record with the top contributors visible to the rep. The score also lives in the morning prioritization view filtered above a written threshold.
Pull two quarters of closed deals, look up the score at opportunity creation, and confirm the closed-won median sits at least one quartile above the closed-lost median. If the gap is smaller, adjust weights at the next quarterly review.
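One reading of that check, sketched with Python's statistics module: "at least one quartile above" is interpreted here as the closed-won median landing at or above the 75th percentile of closed-lost scores. That interpretation is an assumption; adjust it to match how your team defines the gap.

```python
from statistics import median, quantiles

def score_separates(won_scores, lost_scores):
    """Validation check: the closed-won median score at opportunity
    creation must sit at or above the 75th percentile of closed-lost
    scores. Interpretation of 'one quartile above' is an assumption."""
    lost_q3 = quantiles(lost_scores, n=4)[2]  # 75th percentile of losses
    return median(won_scores) >= lost_q3
```

If the function returns False over a two-quarter window, that is the data-justified trigger for adjusting weights at the next quarterly review, not before.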
The article above sits inside a wider editorial library. The links below cover adjacent topics most B2B revenue teams reach for next.