Most B2B teams know the answer in the abstract: in-market accounts are companies actively researching or buying your category right now, and you find them by combining first-party signals (web and product behavior on your own properties) with third-party intent signals (content consumption across the web), scoring the blend by recency, relevance, and fit, and acting on the top tier within hours. The gap is never the definition. The gap is the workflow that turns the definition into pipeline. This guide is the workflow — vendor-agnostic for the first four steps, honest about where Abmatic earns its keep in the fifth, and structured so you can take it to a Monday meeting without rewriting it.
Full disclosure: Abmatic builds an agentic ABM platform, so we have an obvious bias toward the agentic activation layer in Step 5. We've tried to make the first four steps work whether you ever talk to us or not — the manual sheet-and-discipline path is real, the hybrid Zapier path is real, and we've shipped both ourselves before building the agent layer on top. If you only read one section, read Step 3; bad scoring is what kills most intent programs, and bad scoring is vendor-agnostic.
An in-market account is one whose buying-group behavior in the last few weeks suggests an active evaluation of your category. The unit is the account, not the lead — B2B purchases are committee decisions, and a single MQL is a poor predictor of the committee's state. Signals are behavioral, not declarative: someone ticking "interested in learning more" on a webinar form is rarely in-market; an account whose CFO loaded your pricing page twice this week probably is.
For this guide, an in-market account satisfies three conditions: recent behavioral evidence of category research, basic firmographic fit, and engagement from a coordinated buying group rather than a single curious intern. Strip any of those out and you hit the failure modes every demand-gen lead already knows — chasing one analyst, courting a perfect-fit account that hasn't actually engaged, or following clicks from a company that will never buy. For the underlying signal vocabulary, our guide to using intent data covers it.
This is the section most playbooks skip and the one most teams need first. Without these in place, the rest of the guide produces confidently wrong answers at scale.
Teams that get the most out of intent data spend a quiet first month fixing these. Teams that buy a platform first and patch the prerequisites afterward spend the year explaining to their CMO why pipeline didn't move.
Before instrumenting anything, decide which signals you'll trust. There are four practical categories, and the right blend depends on your motion.
**First-party behavioral.** Visits to high-intent pages, content downloads, demo requests, chat conversations, on-site search. The highest-fidelity signals you'll ever get — they're yours, they're fresh, and nobody else has them. Most teams under-weight first-party because it feels obvious. Don't.

**Product usage.** If you have a free tier, trial, or PLG motion, product activation events (invited a teammate, completed a workflow, connected an integration) are the strongest intent signal that exists. The user has committed effort. Treat product signals as a tier above any third-party data.

**Third-party intent.** Content consumption across a publisher co-op (Bombora's model), review-site behavior (G2 Buyer Intent, TrustRadius), and aggregated multi-source intent graphs (6sense, Demandbase). Useful for catching accounts before they hit your site. Less useful as an "act today" signal — most sources run a few days behind.

**Engagement and fit.** Engagement with your ads, social, and sales history; firmographic match to ICP. Not intent on its own — these qualify and contextualize the signals above.
The mistake at this stage is chasing coverage breadth. You don't need every signal type to start; you need two reliable ones — first-party behavioral plus one other, blended sensibly. Coverage is something to grow into.
Your own site is the highest-signal real estate you'll ever own. Most teams skip past it on the way to a third-party purchase — a little like buying weather data for a city you don't live in while ignoring the window in your own room.
A pixel that resolves anonymous traffic to companies — and, with the right consent flow, individuals — is the highest-leverage tool you can deploy. Vendors in this space include Abmatic, Warmly, RB2B, and the legacy Clearbit Reveal (now folded into HubSpot Breeze, per HubSpot's public announcement). Our reverse IP lookup explainer walks the trade-offs and how the underlying tech actually works.
One non-negotiable: have privacy or legal review the deployment for GDPR and ePrivacy compliance before shipping into the EU. Most reputable providers handle consent flow well, but the burden of compliance is yours.
**High-intent pages.** Tag every page where intent isn't ambiguous: pricing, demo, comparison, integrations, "vs competitor" landing pages. Each should fire a distinct event so you can score them differently — a pricing-page hit is not the same signal as a blog read.

**Content engagement.** Whitepapers, gated assets, webinars, video views above a meaningful watch threshold (60 seconds-plus is a useful floor). Softer signals than a pricing-page hit, but useful for staging accounts into nurture before they're outbound-ready.

**Product events.** If you run a free tier or trial, instrument the activation events that correlate with conversion: invited a teammate, connected an integration, hit a usage threshold. The highest-fidelity data on the list. Most PLG teams under-use product signals because they live in the product database, not the marketing stack.
The output of this step is a tagged, named, accountable stream of first-party events your scoring model can read. If that stream lives in seven tools and nobody can join the data, you have an instrumentation problem, not a scoring problem.
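To make "tagged and named" concrete, here's a minimal sketch of an event vocabulary. Everything in it is illustrative — the event names, weights, and path patterns are assumptions, and `track` stands in for whatever analytics client you already run:

```typescript
// A sketch of a first-party event vocabulary. Names, weights, and paths are
// placeholders to calibrate in Step 3, not a recommended taxonomy.

type IntentEvent = {
  name: string;   // a distinct event per page class, so scoring can weight them
  weight: number; // relative signal strength — a pricing hit outweighs a blog read
};

const EVENT_VOCABULARY: Record<string, IntentEvent> = {
  "/pricing":      { name: "viewed_pricing",     weight: 1.0 },
  "/demo":         { name: "requested_demo",     weight: 1.0 },
  "/compare":      { name: "viewed_comparison",  weight: 0.8 },
  "/integrations": { name: "viewed_integrations", weight: 0.6 },
  "/blog":         { name: "read_blog",          weight: 0.2 },
};

// Resolve the current path to a named event and hand it to your analytics
// pipeline; untagged pages stay out of the scoring stream entirely.
function tagPageView(path: string, track: (name: string, props: object) => void) {
  const match = Object.keys(EVENT_VOCABULARY).find((p) => path.startsWith(p));
  if (!match) return;
  const event = EVENT_VOCABULARY[match];
  track(event.name, { path, weight: event.weight, ts: Date.now() });
}
```

The point of the map isn't the specific weights — it's that every event has one name, one owner, and one place the scoring model can read it from.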
Third-party intent fills in what your own site can't see — the accounts researching the category somewhere else (competitor pages, review sites, industry publications) before they ever land on your domain. The major sources are the ones from Step 1 — Bombora's publisher co-op, review-site intent from G2 Buyer Intent or TrustRadius, and the multi-source graphs from 6sense and Demandbase — each with the trade-offs noted there. What matters more than the vendor list is how you vet it.
A useful test for any third-party vendor before signing: show me the last ten accounts that scored "in-market" on your platform that we ended up closing, in our category, in our region. If they can't or won't run that query in the demo, the data isn't precise enough to drive action. Also ask about latency — some third-party sources are 24 hours behind, some a week. Your act window has to be longer than the data's latency or the signal is dead on arrival.
This is the step where most teams quietly give up and let the vendor's default scoring run. That's fine for week one. By month two you'll want your own formula, because the vendor doesn't know which signals matter for your sales motion and won't tune for the segment you're trying to crack.
A starter scoring model that's served plenty of B2B teams well:
Score = (Recency × Relevance × Fit) − Decay
Each component, broken down:

- **Recency** — how fresh the latest signal is. Weight the last seven days heavily; high-intent signals have roughly a one-to-two-week shelf life.
- **Relevance** — what the signal actually was. A pricing-page hit outweighs a blog read; a demo request outweighs both.
- **Fit** — firmographic match to your ICP: size, industry, region, and whatever else defines your target account.
- **Decay** — a penalty that erodes the score as signals age, so a 60-day-old surge never masquerades as fresh intent.
A worked example to make the formula concrete: a Series B SaaS company hits your pricing page yesterday, runs three competitor searches on G2 this week, and downloads an integration whitepaper a few days ago. Strong recency, strong relevance, decent fit assuming they match your ICP — that account might land somewhere in the 80s on a 0–100 scale. The exact number doesn't matter; the shape does. You want one number per account that gets you to a defensible "should we act today, or not."
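Here's a minimal sketch of the formula in code, assuming each component is normalized to 0–1 and using a placeholder seven-day half-life and decay penalty you'd calibrate against your own closed-won data:

```typescript
// A sketch only: components assumed normalized to 0–1, weights illustrative.

type AccountSignals = {
  daysSinceLastSignal: number; // age of the most recent qualifying event
  relevance: number;           // 0–1: pricing page ≈ 1.0, blog read ≈ 0.2
  fit: number;                 // 0–1: firmographic match to your ICP
};

const HALF_LIFE_DAYS = 7; // high-intent signals: roughly a 1–2 week shelf life

function scoreAccount(a: AccountSignals): number {
  // Recency decays exponentially: today ≈ 1.0, a week old ≈ 0.5, a month old ≈ 0.05
  const recency = Math.pow(0.5, a.daysSinceLastSignal / HALF_LIFE_DAYS);
  // Decay term: an extra penalty once the last touch is more than 30 days old
  const decay = Math.max(0, (a.daysSinceLastSignal - 30) / 30) * 0.2;
  const raw = recency * a.relevance * a.fit - decay;
  return Math.round(Math.max(0, Math.min(1, raw)) * 100); // 0–100 scale
}

// The worked example above: pricing page yesterday, competitor research, good fit
console.log(scoreAccount({ daysSinceLastSignal: 1, relevance: 1.0, fit: 0.95 })); // ≈ 86
```

The exponential half-life is one reasonable choice for recency; a step function (full weight for a week, half for a month, zero after) works too and is easier to explain in a Monday meeting.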
Most teams iterate the formula every month for the first quarter and then settle in. There's no perfect score; there's a score that's good enough to drive action, and that's the goal.
This is where most intent-data programs quietly die.
The manual workflow — a weekly export, a meeting where someone reads out the hot accounts, a Slack list, a few BDRs trying to remember to follow up — works fine under a hundred priority accounts. Above that it falls over. The signal-to-action latency stretches from hours to days, decay eats the score by the time anyone calls, and the BDRs lose trust in the list because half of it is already stale by Wednesday.
There are three rough levels of automation, each with a clear use case.
**Level 1 — Manual.** Sheets plus reminders plus discipline. A weekly intent review meeting where the demand-gen lead and an SDR walk through the top fifty accounts and assign owners. Cheapest setup, no platforms to buy, fine if you're a five- or ten-person team running a focused list. Breaks above ~100 priority accounts a week and degrades fast as soon as anyone takes vacation.

**Level 2 — Hybrid.** Intent score crosses a threshold, a Zap or n8n workflow fires, the account gets tagged in your CRM, a Slack notification goes to the owning AE, an email sequence kicks off automatically. This works well up to a few hundred high-priority accounts a month and is a reasonable home for most mid-market teams. The fragility is in the glue — Zaps fail silently, CRM fields drift, copy goes stale, and you'll need someone owning the plumbing about half a day a week. (The glue itself is simple; there's a sketch of it after this section.)

**Level 3 — Agentic.** An agent watches the signal stream continuously, applies the scoring model in real time, picks the right activation per tier, and runs the next action — fire a personalized website experience, queue an outbound message for human review, push a retargeting audience, log the touch in the CRM. The human stays in the loop for approvals on outreach copy, but the busywork (triage, audience building, list updates, copy variants) gets handled automatically.
The argument for the agentic model isn't "agents are cool." It's that the latency between signal and action is the entire game with intent data, and a workflow with a human in the middle of every triage decision can't get under a few hours. An agent can. This is the wedge Abmatic was built for, and it's also the part of this playbook that's hardest to assemble out of point tools — the signal layer, scoring, activation, and CRM sync all need to share state.
Picking your level is a function of two things: how many priority accounts you can act on per week, and how tight your act window has to be. Under 100 accounts and a 72-hour act window, manual is fine. Above 500 accounts or a 24-hour window, agentic is the only option that works without doubling your headcount. Hybrid covers most teams in between.
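If you land in that hybrid middle, the glue is small enough to sketch end to end. Every endpoint below is a hypothetical placeholder — the CRM, sequence, and Slack URLs stand in for whatever your Zap or n8n workflow actually calls — but the shape (threshold check, CRM tag, owner notification, sequence enrollment) is the whole pattern:

```typescript
// A sketch of hybrid glue with placeholder endpoints; no vendor's real API.

const TIER_A_THRESHOLD = 80;

async function onScoreChange(account: {
  id: string;
  name: string;
  score: number;
  ownerSlackId: string;
}) {
  if (account.score < TIER_A_THRESHOLD) return;

  // 1. Tag the account in the CRM (hypothetical endpoint)
  await fetch(`https://crm.example.com/accounts/${account.id}/tags`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ tag: "intent-tier-a" }),
  });

  // 2. Notify the owning AE (Slack incoming-webhook-style payload)
  await fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text: `<@${account.ownerSlackId}> ${account.name} crossed tier A (score ${account.score}). 24h SLA starts now.`,
    }),
  });

  // 3. Enroll in the outbound sequence (hypothetical endpoint)
  await fetch("https://outreach.example.com/sequences/tier-a/enroll", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ accountId: account.id }),
  });
}
```

Note what the sketch doesn't have: retries, failure alerts, or dedup. That's the half-day-a-week of plumbing ownership in practice.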
A score is useless without a rule for what to do with it. You need a tier definition that maps score bands to actions, and the actions need to be specific enough that anyone on the team can execute them without asking.
| Tier | Score band | What it means | Action |
|---|---|---|---|
| A — Hot | 80–100 | In-market, fits ICP, acting now | Personalized site experience plus BDR outbound within 24 hours |
| B — Warming | 50–79 | Researching the category, fits ICP | Retargeting plus tailored nurture sequence |
| C — Cold-but-fit | 30–49 | Fits ICP, no recent intent | Awareness ads plus content offers; review monthly |
| D — Noise | 0–29 | Doesn't fit, or signal is stale | Ignore. Watch for re-emergence. |
The numbers are illustrative; calibrate the bands to your own data. The important part is the action discipline. Tier A accounts get the same treatment every time, ideally automatically, with the same SLA. If your tier-A definition lives in a doc but the actual outreach takes a week, you don't have a tier-A workflow — you have a tier-A spreadsheet.
The other thing worth saying out loud: tiers aren't permanent. An account moves between bands as signals accumulate or decay. The system needs to handle that movement automatically — re-scoring, re-tiering, and re-routing without a human reading a sheet on Monday. Most failures at this step come from a static tier assignment that ages badly.
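In code, that discipline is a pure function from score to tier plus a fixed action map — the bands below mirror the illustrative table above, and re-running the routing on every re-score is what keeps tier assignments from aging badly:

```typescript
// A sketch of score-band routing using the illustrative bands from the table.
// Action names are placeholders your automation layer would dispatch on.

type Tier = "A" | "B" | "C" | "D";

function tierFor(score: number): Tier {
  if (score >= 80) return "A";
  if (score >= 50) return "B";
  if (score >= 30) return "C";
  return "D";
}

const ACTIONS: Record<Tier, string[]> = {
  A: ["personalize_site", "bdr_outbound_24h"],
  B: ["retargeting", "nurture_sequence"],
  C: ["awareness_ads", "monthly_review"],
  D: [], // ignore, but keep scoring so re-emergence gets caught
};

// Re-tier on every score update so decayed accounts drop and warming ones rise.
function retier(account: { id: string; tier: Tier; score: number }): string[] {
  const next = tierFor(account.score);
  if (next !== account.tier) {
    account.tier = next; // the new tier's actions apply from now on
  }
  return ACTIONS[next];
}
```

The point is that tier assignment is derived, never stored as a hand-set field — that's what makes the movement between bands automatic instead of a Monday spreadsheet chore.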
Intent-data programs get killed not because they don't work, but because the wrong metrics get reported up. Pick leading and lagging indicators that map to pipeline: leading, things like tier-A account coverage and signal-to-action latency; lagging, pipeline, win rate, ACV, and cycle length on tier-A accounts versus everything else.
What to stop measuring: form fills and MQLs attributed to intent data. Intent is account-level; lead-shaped metrics will mislead you, making the program look better or worse than it is depending on which way the lead funnel happened to bounce that month.
The interesting story in intent data right now isn't another vendor with another data source. It's that the activation layer — the part where signal becomes action — is being absorbed by AI agents. Forrester and Gartner have both flagged the agentic shift in their 2025 ABM and martech coverage; the practitioner reality is that the gap between signal and action is finally closing.
The traditional loop looked like this: signal lands, a human reviews it, the human decides what to do, the human triggers the action through a tool. Each handoff added latency and dropped fidelity. The agentic loop collapses signal-to-action into a single continuous process: an agent monitors the signal stream, applies the scoring model, picks the activation per tier, runs it, logs the result, and surfaces only the decisions that need a human (mostly outbound copy and high-stakes meetings).
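The control flow is worth seeing in miniature. This is not any vendor's implementation — just the loop the paragraph describes, with every dependency (`score`, `actionsFor`, `needsHumanReview`, and the rest) passed in as an assumed stand-in:

```typescript
// A sketch of the agentic loop's shape: continuous signal consumption,
// re-scoring, tier-based activation, and a human gate on sensitive actions.

async function agentLoop(
  signals: AsyncIterable<{ accountId: string; event: string }>,
  deps: {
    score: (accountId: string) => Promise<number>;
    actionsFor: (score: number) => string[];
    run: (accountId: string, action: string) => Promise<void>;
    needsHumanReview: (action: string) => boolean;
    queueForApproval: (accountId: string, action: string) => Promise<void>;
    log: (accountId: string, action: string) => Promise<void>;
  }
) {
  for await (const signal of signals) {
    const score = await deps.score(signal.accountId); // re-score on every signal
    for (const action of deps.actionsFor(score)) {
      if (deps.needsHumanReview(action)) {
        // Outbound copy and high-stakes touches wait for a human.
        await deps.queueForApproval(signal.accountId, action);
      } else {
        // Triage, audiences, list updates, retargeting run automatically.
        await deps.run(signal.accountId, action);
      }
      await deps.log(signal.accountId, action);
    }
  }
}
```

The architectural point from earlier holds here: scoring, activation, and logging share state inside one loop, which is exactly what's hard to reproduce by chaining point tools.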
What changes for the team is the ratio of busywork to judgment: triage, audience building, list maintenance, and retargeting setup stop eating the week, and the humans spend their time on outbound copy, high-stakes conversations, and calibrating the model. Our 2026 ABM playbook walks the broader operating model around this shift. The short version: the unlock isn't magic, it's that the busywork between signal and action gets automated, which lets the team focus on the parts that actually need human judgment.
**Can we run this without buying a platform?** Yes — and most teams should start that way. A first-party visitor pixel, a tagged set of high-intent pages, a simple score in a sheet, and a weekly review meeting will get you a long way under 100 priority accounts. Add platforms when the manual workflow breaks under volume or when your act window tightens past what humans can hit.

**How fast do we need to act on a hot account?** Aim for under 24 hours on tier-A accounts, with 72 hours as the outer limit. Beyond that, decay eats most of the score's value. The act window is the single biggest determinant of whether intent data turns into pipeline — every hour of latency shows up as a measurable drop in conversion.

**How long does an intent signal stay fresh?** Roughly: one to two weeks for high-intent behavioral signals (pricing-page hits, demo views, competitor comparisons), two to four weeks for content consumption, longer for slow-moving research signals like topic-cluster surges. Always apply decay; never treat a 60-day-old signal as if it's fresh.

**What's the minimum viable stack?** A first-party visitor-identification pixel, your CRM, a tagged set of high-intent pages, and a manual review cadence. You can run this for the cost of one tool subscription. Add a third-party source — Bombora coverage, G2 Buyer Intent, or similar — once the first-party motion is shipping reliably. Starting with third-party data and no first-party instrumentation is the most common expensive mistake in this space.

**How do we know whether the scoring model works?** Look at win rate, ACV, and sales-cycle length on tier-A pipeline versus non-tier-A pipeline. If tier-A doesn't win more, sell larger, or close faster, your scoring is off — or your activation isn't tight enough. Adjust the model before adjusting the data source. And run the score backward against last quarter's closed-won deals as a sanity check; if it wouldn't have flagged the deals you actually closed, the model is wrong.

**What can an agent reliably take over today?** Triage, audience building, list maintenance, retargeting setup, and personalization — yes, with high reliability. Outbound message-writing — yes, with human review on the final copy. Closing deals — no. The agent's job is to compress signal-to-action latency and remove the busywork, not to replace the seller.

**What if the data and the AE disagree about an account?** Track both for a quarter. Compare win rates on accounts the data flagged versus accounts the AE flagged. Usually the data is right more often on volume, and the AE is right more often on the top few accounts where they have real context — a known relationship, a recent conversation, a board-level connection. The answer is a system that respects both: data drives the queue, the AE has override on the top tier.
If you've made it this far, you're past the "what does in-market mean" question and into "how do I run this." Three useful next reads: the guide to using intent data for the signal vocabulary, the reverse IP lookup explainer for how visitor identification actually works, and the 2026 ABM playbook for the operating model around the agentic shift.
And when you're ready to see what agentic in-market identification looks like running on your own accounts, with your own signals, in your own tier definitions: book a 30-minute Abmatic demo. We'll run it against real data, not a sandbox, and you'll leave with an honest read on which tier of automation actually fits your team — even if the answer is "you don't need us yet."