Workflow Automation · Cross-functional

Traction Engine

B2B Outbound Validation Engine

Tests business hypotheses by running fully automated outbound prospecting — from ICP identification through email sequencing — so founders learn whether an ICP is real before investing in a sales team.

01 · The Problem

B2B founders validating new ICPs spend weeks manually building prospect lists, enriching contact data, writing email sequences, and loading everything into a CRM — work that could be automated but usually isn't because the tooling is fragmented. Worse, the feedback loop between "send first email" and "learn whether the ICP responds" is so slow that many hypotheses go unvalidated far longer than they should.

02 · What the AI Does

Takes a hypothesis (problem statement + ICP definition + offer + differentiator), then runs the full outbound pipeline: discovery via Extruct lookalikes and Apollo title search → enrichment via Airscale email verification → CRM load via HubSpot → email sequence generation via Claude → sequence deployment via SalesHandy. Monitors reply classification and flags when signal is clear (kill or validate). Runs on cron with human approval gates at hypothesis entry and before first send. Pipeline tools: Extruct, Apollo, Airscale, Netrows, HubSpot, SalesHandy, Claude Code.
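The stage order and the two approval gates described above can be sketched as follows. This is an illustrative skeleton, not the actual orchestration code: the stage names, `Hypothesis` fields, and `approve` callback are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Hypothesis:
    problem: str
    icp: str
    offer: str
    differentiator: str

# Stages in pipeline order, paired with the tool that backs each one.
PIPELINE = [
    ("discovery", "Extruct lookalikes + Apollo title search"),
    ("enrichment", "Airscale email verification"),
    ("crm_load", "HubSpot"),
    ("sequence_generation", "Claude"),
    ("deployment", "SalesHandy"),
]

def run_pipeline(hypothesis: Hypothesis, approve: Callable[[str], bool]) -> list[str]:
    """Run stages in order, with human gates at hypothesis entry and before first send."""
    if not approve("hypothesis"):          # gate 1: human reviews the hypothesis
        return []
    completed = []
    for stage, _tool in PIPELINE:
        if stage == "deployment" and not approve("first_send"):  # gate 2: before first send
            break                          # everything is staged but nothing goes out
        completed.append(stage)
    return completed

# A permissive approver walks all five stages; a rejection at either gate stops early.
all_stages = run_pipeline(Hypothesis("p", "icp", "offer", "diff"), lambda gate: True)
```

The gate placement mirrors the "human reviews, AI executes" stance discussed under Design Decisions: rejecting at the first gate costs nothing, and rejecting at the second still leaves the enriched, loaded pipeline intact for a revised sequence.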

03 · Design Decisions

01 · Choice

Discovery before enrichment — free before paid

Why

A bad hypothesis should fail cheaply. Apollo people search is free, so discovery costs nothing; only after a company passes discovery does it move to paid enrichment (Airscale, $0.008/email). This sequencing prevents spending on companies that never matched the ICP.

Constraint

Apollo's free tier is rate-limited, and Extruct tokens are capped (~900/month); both must be used judiciously.
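A back-of-envelope sketch of why the ordering matters. Only the $0.008/email Airscale rate comes from this writeup; the candidate count and pass rate below are invented for illustration.

```python
APOLLO_COST = 0.0       # Apollo people search is free tier (per the writeup)
AIRSCALE_COST = 0.008   # paid email verification, per address (per the writeup)

def gated_cost(candidates: int, pass_rate: float) -> float:
    """Spend when paid enrichment runs only on companies that pass free discovery."""
    passed = int(candidates * pass_rate)
    return candidates * APOLLO_COST + passed * AIRSCALE_COST

def ungated_cost(candidates: int) -> float:
    """Spend if every raw candidate were enriched up front."""
    return candidates * AIRSCALE_COST

# Hypothetical batch: 1,000 candidates, 20% survive discovery.
gated = gated_cost(1000, 0.2)      # 200 enrichments
ungated = ungated_cost(1000)       # 1,000 enrichments
```

Under these made-up numbers the gated ordering spends $1.60 instead of $8.00 per batch; the ratio scales directly with the discovery pass rate.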

02 · Choice

HubSpot as pipeline database, not CSV

Why

CSV-based state management breaks down when prospects are in multiple sequences, when status changes across tools, and when the pipeline scales. HubSpot's native bi-directional SalesHandy sync eliminates the CSV handoff entirely.

Constraint

HubSpot free CRM has API rate limits. Custom properties had to be carefully named because HubSpot doesn't allow property type changes after creation — must delete and recreate.
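A minimal sketch of the kind of custom-property payload HubSpot's CRM v3 properties API accepts, showing why the naming constraint bites: `type` is fixed at creation. The property name, label, and group below are hypothetical, and the actual endpoint call and auth are omitted.

```python
import json

def property_payload(name: str, label: str, ptype: str, field_type: str) -> dict:
    """Build a CRM v3 custom-property body (sketch; POST endpoint and token omitted)."""
    return {
        "name": name,         # internal name; pick carefully, it is hard to change
        "label": label,
        "type": ptype,        # immutable after creation: delete + recreate to change
        "fieldType": field_type,
        "groupName": "contactinformation",  # assumed default contact group
    }

payload = property_payload("icp_hypothesis", "ICP Hypothesis", "string", "text")
body = json.dumps(payload)  # what would be sent to the properties endpoint
```

Because a type change means delete-and-recreate (losing stored values), it pays to decide up front whether a field is a string, enumeration, or date.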

03 · Choice

Netrows for personalization, not discovery

Why

Netrows returns garbage for generic title queries ("VP Sales") but excels at enriching specific profiles (career history, skills, tenure). Using it for discovery caused false negatives. Using it for personalization — where it returns 16 position descriptors and 26 skill tags per person — produces high-quality outreach customization.

Constraint

Must use URN-style LinkedIn usernames from profile URLs, not vanity usernames. Apollo is the discovery tool; Netrows is the enrichment layer.
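A hedged sketch of the personalization step, assuming Netrows-style enrichment output (position descriptors and skill tags, as described above). The helper names and the URL-parsing heuristic are illustrative assumptions, not Netrows' actual API.

```python
from urllib.parse import urlparse

def profile_username(profile_url: str) -> str:
    """Pull the trailing identifier segment from a LinkedIn profile URL.
    (Illustrative: the writeup notes Netrows wants URN-style usernames, not vanity ones.)"""
    path = urlparse(profile_url).path.rstrip("/")
    return path.split("/")[-1]

def personalization_line(positions: list[str], skills: list[str]) -> str:
    """Collapse enrichment output into a single outreach hook (wording is a placeholder)."""
    latest = positions[0] if positions else "your current role"
    top_skills = ", ".join(skills[:2]) if skills else "your background"
    return f"Saw your work as {latest}; your focus on {top_skills} stood out."

username = profile_username("https://www.linkedin.com/in/jane-doe-123abc/")
```

The division of labor stays as the decision states: Apollo finds the person, Netrows describes them, and the describer's output feeds the sequence generator.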

04 · Choice

Vertical diversity in seed selection, not geographic

Why

Testing showed 8 vertical seeds (manufacturing, logistics, SaaS, etc.) generated 48% more unique domains than geographic diversity. City filters added minimal uniqueness. The ICP definition matters more than the geography.

Constraint

ICP is defined by firmographics and problem framing, not by geography. Outbound is location-agnostic once the right firm type is identified.
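The uniqueness comparison behind this decision can be sketched with toy data. The domains and seed names below are invented; only the vertical-beats-geographic pattern reflects the test result reported above.

```python
def unique_domains(seed_results: dict[str, set[str]]) -> set[str]:
    """Union the company domains returned across every seed query."""
    out: set[str] = set()
    for domains in seed_results.values():
        out |= domains
    return out

# Hypothetical results: vertical seeds overlap little, city seeds overlap heavily.
vertical = {
    "manufacturing": {"a.com", "b.com"},
    "logistics": {"c.com", "d.com"},
    "saas": {"e.com", "f.com"},
}
geographic = {
    "austin": {"a.com", "b.com"},
    "denver": {"a.com", "c.com"},
}

lift = len(unique_domains(vertical)) / len(unique_domains(geographic))
```

In this toy case the vertical seeds yield twice as many unique domains for the same number of queries, which is the shape of the 48% result: diversity in firm type widens the funnel more than diversity in location.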

05 · Choice

Human approval gates at hypothesis + before send

Why

Fully autonomous outbound without approval gates risks a bad hypothesis burning through budget and reputation (bad emails going out under Brett's name). The architecture is "human reviews, AI executes" — not "AI runs unsupervised."

Constraint

Phase 5 (approval-gated autonomy) is not yet implemented. Currently: full human approval required at every step. The autonomous roadmap exists but execution is staged.

04 · Tradeoffs & Limits

- **Apollo data freshness varies by title and geography.** Title-based search is free, but result quality varies: "Revenue Operations Manager" returns solid results; exotic titles may return nothing.
- **Email decay is not real-time.** Addresses verified today may bounce tomorrow. Airscale's 81% verified rate means ~19% of "verified" emails may still bounce, and re-verification on long-running campaigns isn't yet implemented.
- **Reply classification is manual.** The system flags incoming replies but doesn't yet classify sentiment, route positive replies to Brett, or handle negative replies with unsubscribes. Phase 3 of the autonomous roadmap addresses this.
- **H1 shakedown batch still not sent.** The first signal is pending. Everything is built; the gate is Brett actually sending the first emails.
- **ICPs requiring event triggers or news mentions are not yet supported.** Playbook D (trigger-based discovery) exists in docs but hasn't been wired into the orchestration layer.
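Since reply classification is still manual, a deliberately crude keyword-triage stub shows the shape of the Phase 3 step. The keyword lists are illustrative assumptions; a real classifier would use an LLM or sentiment model rather than substring matching.

```python
def classify_reply(text: str) -> str:
    """Triage an inbound reply into positive / negative / needs_review.
    Placeholder logic only; keywords here are invented for the sketch."""
    lowered = text.lower()
    if any(k in lowered for k in ("unsubscribe", "remove me", "not interested")):
        return "negative"      # would trigger unsubscribe handling
    if any(k in lowered for k in ("interested", "call", "demo", "tell me more")):
        return "positive"      # would route to a human for follow-up
    return "needs_review"      # ambiguous: stays in the manual queue
```

Even this stub makes the routing contract explicit: negatives get unsubscribed, positives get escalated, and everything ambiguous stays human-reviewed, which is consistent with the approval-gate philosophy above.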

05 · Key Insight

The most dangerous phase of B2B validation isn't building the pipeline — it's the gap between "pipeline built" and "first email sent." Every day that passes without a send is a day of uncorrected assumptions. The machine should be the execution layer; the human should be the judgment layer.