OA-6 · Outbound Automation · 125 XP · ~20 min
The Metrics That Matter
| Metric | What It Measures | Target (Cold Outbound) |
|---|---|---|
| Deliverability rate | % of emails reaching inbox (not spam) | > 95% |
| Open rate | % of delivered emails opened | > 40% for targeted, > 25% for broad |
| Reply rate | % of delivered emails getting any reply | > 5% for cold, > 10% for warm |
| Positive reply rate | % of replies that are interested or curious | > 3% of sends |
| Meeting booked rate | % of sends that convert to a scheduled meeting | > 1% |
| Pipeline generated | Total $ value of pipeline sourced | Depends on ACV |
| Cost per meeting | Total cost (time + tools) ÷ meetings booked | < 25% of ACV |
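As a quick sanity check, the rates in the table can be computed from raw campaign counts. A minimal sketch; the function name and fields are illustrative, not any particular sequencer's export schema:

```python
# Sketch: computing the core outbound metrics from raw counts.
# Note which base each rate uses: opens/replies are % of delivered,
# positive replies and meetings are % of total sends.

def campaign_metrics(sent, delivered, opened, replied,
                     positive_replies, meetings):
    """Return the funnel metrics as percentages of the correct base."""
    return {
        "deliverability_rate": 100 * delivered / sent,
        "open_rate":           100 * opened / delivered,       # of delivered
        "reply_rate":          100 * replied / delivered,      # of delivered
        "positive_reply_rate": 100 * positive_replies / sent,  # of sends
        "meeting_rate":        100 * meetings / sent,          # of sends
    }

m = campaign_metrics(sent=1000, delivered=965, opened=410,
                     replied=58, positive_replies=31, meetings=12)
print({k: round(v, 1) for k, v in m.items()})
```

Mixing up the bases (e.g. computing open rate over sends instead of delivered) will make a deliverability problem look like a subject-line problem.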
The Campaign Diagnostic Framework
When a campaign underperforms, the failure is almost always in one of four layers. Work top-down through the funnel: deliverability, then opens, then replies, then conversion to meetings.
Running Campaign Analysis in Bitscale
Bitscale is your analysis layer. After a campaign runs, pull the results back into a grid and build analysis columns.
Step 1: Import campaign results. Export from your sequencer: email address, send date, opens (yes/no), replies (yes/no), reply text, outcome (positive/negative/neutral/no reply).
Step 2: Build analysis columns. Start with reply sentiment analysis: classify each reply as positive, negative, or neutral from its text.
A/B Testing in Outbound
Good campaign analysis enables systematic testing. Here’s the testing hierarchy — run tests in this order:
1. Subject line test (highest leverage, cheapest to run)
- Split: 50% get subject line A, 50% get B
- Metric: open rate
- Sample size: minimum 200 per variant
2. Opening line test (second-highest leverage)
- Split: 50% get SVC opening, 50% get PAS
- Metric: reply rate
- Sample size: minimum 300 per variant
3. CTA test (often underrated)
- “Worth a 15-minute call?” vs. “Want me to send the breakdown?”
- Metric: positive reply rate
- Sample size: minimum 200 per variant
4. Send time test (smaller impact, worth running once)
- Tuesday–Thursday, 7–9am local vs. standard business hours
- Metric: open rate
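The minimum sample sizes above exist so that a winning variant is statistically distinguishable from noise. A hedged sketch of a two-proportion z-test using only the standard library; the counts are invented for illustration:

```python
# Sketch: is the difference between variant A and B real or noise?
# Two-proportion z-test, stdlib only (no scipy required).
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z statistic, two-sided p-value) for the rate difference."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # erfc(|z|/sqrt(2)) equals the two-sided tail probability
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Illustrative subject line test: 200 sends per variant, opens 90 vs 62
z, p = two_proportion_z(90, 200, 62, 200)
print(f"z={z:.2f}, p={p:.4f}")
```

If p is above 0.05, treat the test as inconclusive and keep sending rather than declaring a winner early.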
Reading Negative Replies
Negative replies are not failures — they’re free market research. A “not interested” with context tells you more than 100 non-replies.
| Negative Reply Pattern | What It Means | Action |
|---|---|---|
| "We already use [Competitor]" | ICP is right, timing is wrong | Add to competitor displacement list; nurture in 6 months |
| "Budget is frozen" | ICP is right, economic timing is wrong | Tag with "Q[next quarter] revisit" |
| "Not the decision maker" | Wrong persona — too junior or too senior | Ask for referral: "Who would be the right person?" |
| "We're not using outbound" | ICP mismatch on GTM model | Remove from sequence; update ICP definition |
| "Too expensive" | Value prop not landing, or genuinely wrong fit | Send case study; if persists, remove from segment |
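The routing in the table can be sketched as a simple keyword classifier. In Bitscale this would typically be an AI column over the reply text; the keywords, labels, and actions below are illustrative assumptions, not a production taxonomy:

```python
# Sketch: route negative replies to the actions from the table above.
# Keyword matching is a crude stand-in for an AI classification column;
# all patterns here are illustrative.

RULES = [
    (("already use", "we use", "switched to"), "competitor",
     "Add to displacement list; nurture in 6 months"),
    (("budget", "frozen", "no spend"), "economic_timing",
     "Tag for next-quarter revisit"),
    (("not the decision", "not my call"), "wrong_persona",
     "Ask for a referral to the right person"),
    (("not using outbound",), "icp_mismatch",
     "Remove from sequence; update ICP definition"),
    (("too expensive", "pricing"), "price_objection",
     "Send case study; if it persists, remove from segment"),
]

def route_negative_reply(text):
    """Return (label, action) for a negative reply, or a manual-review fallback."""
    lowered = text.lower()
    for keywords, label, action in RULES:
        if any(k in lowered for k in keywords):
            return label, action
    return "unclassified", "Review manually"

print(route_negative_reply("Thanks, but we already use another tool."))
```

Rule order matters: the first matching pattern wins, so put the most specific objections first if patterns overlap.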
Campaign Scorecard
Before launching any new sequence, set your benchmarks. After 2 weeks, run the scorecard:
| Metric | Benchmark | Actual | Status |
|---|---|---|---|
| Deliverability | > 95% | ? | 🟡 |
| Open rate | > 35% | ? | 🟡 |
| Reply rate | > 5% | ? | 🟡 |
| Positive reply rate | > 2% | ? | 🟡 |
| Meeting rate | > 0.8% | ? | 🟡 |
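The Status column can be filled in mechanically. A sketch under an assumed convention: 🟢 means at or above benchmark, 🟡 within 90% of it, 🔴 below that; the benchmarks mirror the table, the actuals are sample numbers:

```python
# Sketch: auto-filling the scorecard's Status column.
# The 90% "close enough" margin is an illustrative assumption.

BENCHMARKS = {  # metric -> minimum acceptable %
    "deliverability": 95.0,
    "open_rate": 35.0,
    "reply_rate": 5.0,
    "positive_reply_rate": 2.0,
    "meeting_rate": 0.8,
}

def scorecard(actuals, margin=0.9):
    """Return a status emoji per metric: green/yellow/red vs benchmark."""
    out = {}
    for metric, benchmark in BENCHMARKS.items():
        actual = actuals[metric]
        if actual >= benchmark:
            out[metric] = "🟢"
        elif actual >= benchmark * margin:
            out[metric] = "🟡"
        else:
            out[metric] = "🔴"
    return out

print(scorecard({"deliverability": 96.5, "open_rate": 33.0,
                 "reply_rate": 4.1, "positive_reply_rate": 2.4,
                 "meeting_rate": 0.9}))
```

Any 🔴 points you at the failing funnel layer: a red deliverability number makes every downstream metric unreliable, so fix it before reading the rest.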
OA-6 Challenge: Diagnose a Campaign (+125 XP)
Take a real or simulated campaign dataset (minimum 100 rows) and run the full diagnostic in Bitscale.
Requirements:
- Reply sentiment classification column
- Objection extraction column
- A written diagnosis identifying which layer is failing
- 2 specific A/B tests you would run next, with hypothesis, variant descriptions, and success metric
- Campaign scorecard filled in with your data
Submit OA-6 Challenge →
Share your grid link + written diagnosis + test hypotheses. +125 XP on approval.
Next: OA-7 — Outbound Capstone →
You’ve done the work. OA-7 is where you build and ship a complete outbound system from scratch — and earn the Outbound Specialist certification.