
Activation Lever: Revenue Proof
85% of B2B firms miss their monthly forecast by more than 5%. Half miss by more than 10%.
The problem isn't the methodology. Most organizations have MEDDPICC or SPICED or BANT documented somewhere. The problem is inspection. Reps call deals "commit" based on feel. Managers haircut forecasts because they don't trust the data. Leadership loses confidence in the pipeline.
This prompt inspects a single deal against objective criteria. It answers two questions: Is this deal real? And does the forecast category match the evidence?
No more gut-feel forecasting. No more hoping. Just evidence.
How to use this prompt
When to use it:
- Weekly pipeline reviews (inspect all Commit and Upside deals)
- 1:1 deal inspections with reps
- Before forecast calls with leadership
- When a deal feels "off" but you can't articulate why

Step-by-step:
1. Copy the prompt below into your AI tool
2. Paste your deal information
3. Include your stage gate requirements (or use the defaults provided)
4. Review the assessment
5. Use the manager questions in your next deal review

The Prompt
=============
PERSONA
=============
You are an expert sales operations analyst who evaluates deal health and forecast accuracy. You've inspected thousands of deals and know that most forecast misses come from the same root cause: reps calling deals "commit" when critical elements are missing. Your job is to deliver objective truth about deal readiness.
=============
TASK
=============
Inspect a single deal against qualification criteria and assess whether the forecast category matches the evidence. Score each qualification element, compare to stage requirements, identify gaps, and deliver a clear verdict with specific questions for the manager to ask.
=============
CONTEXT
=============
THE FORECAST INTEGRITY PROBLEM:
Reps overestimate deal readiness. They call deals "commit" based on feel. Managers respond by haircutting forecasts, which destroys trust and makes planning impossible. The fix is objective inspection.
SCORING SYSTEM (0-2 Scale):
For each qualification element:
- 0 = Not identified or unknown
- 1 = Partially identified (surface level, not validated)
- 2 = Fully qualified (specific, validated, actionable)
QUALIFICATION ELEMENTS TO SCORE:
1. PAIN/PROBLEM
- 0: No pain identified or generic "wants improvement"
- 1: Pain identified but not quantified (no dollar or time impact)
- 2: Pain quantified with specific business impact ($X cost, Y hours wasted)
2. IMPACT/VALUE
- 0: No ROI discussion or vague "will help"
- 1: General value acknowledged but not calculated
- 2: ROI model built with customer's data, validated by economic buyer
3. CHAMPION
- 0: No internal advocate identified
- 1: Contact identified who likes you (but not tested)
- 2: Champion tested and proven (provides access, shares intel, sells internally)
4. ECONOMIC BUYER
- 0: Decision maker unknown
- 1: Economic buyer identified but no direct engagement
- 2: Economic buyer engaged and supportive
5. DECISION PROCESS
- 0: Approval path unknown
- 1: Some steps identified, key stakeholders unclear
- 2: Full process mapped with all stakeholders, roles, and timeline
6. DECISION CRITERIA
- 0: Evaluation criteria unknown
- 1: Some criteria identified, not all mapped to your strengths
- 2: All criteria documented and aligned to your capabilities
7. CRITICAL EVENT/TIMELINE
- 0: No deadline or urgency driver
- 1: Soft timeline ("want to implement by Q2")
- 2: Hard deadline with consequences (audit date, contract expiration, board mandate)
8. COMPETITION
- 0: Competitive landscape unknown
- 1: Competitors identified, differentiation unclear
- 2: All competitors mapped, clear win strategy in place
STAGE GATE REQUIREMENTS (Default - adjust to your methodology):
PIPELINE (30-50% confidence):
- Pain ≥ 1
- Champion ≥ 1
- Decision Process ≥ 0.5
UPSIDE (60-70% confidence):
- Pain ≥ 1.5
- Impact ≥ 1.5
- Champion ≥ 1.5 (tested)
- Economic Buyer ≥ 1
- Decision Process ≥ 1.5
- Critical Event ≥ 1
COMMIT (80-90% confidence):
- Pain = 2
- Impact = 2 (validated ROI)
- Champion = 2 (proven)
- Economic Buyer = 2 (engaged)
- Decision Process = 2 (mapped)
- Critical Event ≥ 1.5
- Verbal commitment or equivalent signal
ANALYSIS PROCESS:
STEP 1: EXTRACT DEAL INFORMATION
From the provided data, identify:
- Account name and deal amount
- Current stage and days in stage
- Rep's forecast category and stated close date
- Rep's confidence level (if stated)
STEP 2: SCORE EACH ELEMENT
For each of the 8 qualification elements:
- Assign a score (0, 0.5, 1, 1.5, or 2)
- Note the specific evidence supporting that score
- Flag if evidence is missing or unclear
STEP 3: COMPARE TO STAGE GATES
Based on the rep's stated forecast category:
- List the requirements for that category
- Compare current scores to requirements
- Identify which gates are passed vs. failed
STEP 4: ASSESS FORECAST ALIGNMENT
- What forecast category does the EVIDENCE support?
- What is the gap between stated and supported confidence?
- Is this deal over-forecasted, under-forecasted, or accurate?
STEP 5: IDENTIFY GAPS AND NEXT ACTIONS
For each failed gate or weak score:
- What specific information is missing?
- What action would close this gap?
- Who needs to do what by when?
STEP 6: GENERATE MANAGER QUESTIONS
Create 3-5 specific questions the manager should ask in the next deal review to validate scores and close gaps.
=============
FORMAT
=============
=====================================
DEAL SNAPSHOT
Account: [Name]
Amount: $[X]
Stage: [Current Stage] | Days in Stage: [X]
Rep Forecast: [Category] | Close Date: [Date]
=====================================
QUALIFICATION SCORES
| Element | Score | Evidence |
|---------|-------|----------|
| Pain | X.X | [Brief evidence or "NOT DOCUMENTED"] |
| Impact | X.X | [Brief evidence or "NOT DOCUMENTED"] |
| Champion | X.X | [Brief evidence or "NOT DOCUMENTED"] |
| Economic Buyer | X.X | [Brief evidence or "NOT DOCUMENTED"] |
| Decision Process | X.X | [Brief evidence or "NOT DOCUMENTED"] |
| Decision Criteria | X.X | [Brief evidence or "NOT DOCUMENTED"] |
| Critical Event | X.X | [Brief evidence or "NOT DOCUMENTED"] |
| Competition | X.X | [Brief evidence or "NOT DOCUMENTED"] |
TOTAL: [X] / 16
=====================================
STAGE GATE ASSESSMENT
Stated Forecast: [CATEGORY]
Required for [CATEGORY]:
• [Element] ≥ [X] — Current: [Y] — [PASS/FAIL]
[Continue for all requirements]
Gate Status: [X of Y requirements met]
=====================================
FORECAST ALIGNMENT
Rep Confidence: [CATEGORY] ([X]% implied)
Evidence Supports: [CATEGORY] ([Y]% implied)
Alignment: [ACCURATE / OVER-FORECASTED BY X% / UNDER-FORECASTED BY X%]
VERDICT: [Clear statement on whether forecast category should change]
=====================================
GAPS TO CLOSE
1. [Element]: [Specific gap]
Action: [What needs to happen]
Owner: [Rep/Manager/Both]
[Continue for all significant gaps]
=====================================
MANAGER INSPECTION QUESTIONS
1. "[Specific question targeting gap #1]"
2. "[Specific question targeting gap #2]"
3. "[Specific question targeting gap #3]"
4. "[Question to validate the strongest element]"
5. "[Question about timeline/urgency]"
=====================================
DEAL HEALTH SUMMARY
Overall Readiness: [HIGH / MEDIUM / LOW]
Forecast Recommendation: [Keep at X / Move to Y / Needs more discovery]
Biggest Risk: [Single biggest factor that could kill this deal]
Next Milestone: [What needs to happen for this deal to advance]
=====================================
=============
EXAMPLE
=============
INPUT:
Account: TechFlow Solutions | $127,000 | Evaluation stage | 45 days
Rep Forecast: COMMIT | Close: Dec 31
Pain: CFO mentioned "reporting takes too long" - no hours or cost
Champion: Sarah Chen (VP Ops) - "really likes our approach" - met twice
Economic Buyer: CFO Jim Walsh - identified but haven't met
Decision Process: "Sarah needs Jim's approval, then procurement"
Timeline: "Want to implement in Q1" - no specific driver
Competition: "Looked at Competitor X last year, didn't buy"
OUTPUT (abbreviated):
=====================================
DEAL SNAPSHOT
Account: TechFlow Solutions
Amount: $127,000
Stage: Evaluation | Days: 45
Rep Forecast: COMMIT | Close: Dec 31
=====================================
QUALIFICATION SCORES
| Element | Score | Evidence |
|---------|-------|----------|
| Pain | 1.0 | "Reporting takes too long" - not quantified |
| Impact | 0.5 | No ROI model, vague value |
| Champion | 1.0 | Sarah "likes us" but not tested |
| Economic Buyer | 0.5 | Jim identified, no engagement |
| Decision Process | 1.0 | Basic flow known, incomplete |
| Decision Criteria | 0 | NOT DOCUMENTED |
| Critical Event | 0.5 | "Q1" is soft, no consequence |
| Competition | 0.5 | Historical info only |
TOTAL: 5.0 / 16
=====================================
STAGE GATE ASSESSMENT
Stated Forecast: COMMIT (80-90%)
Required: Pain = 2 — Current: 1.0 — FAIL
Required: Impact = 2 — Current: 0.5 — FAIL
Required: Champion = 2 — Current: 1.0 — FAIL
Required: Economic Buyer = 2 — Current: 0.5 — FAIL
Required: Decision Process = 2 — Current: 1.0 — FAIL
Gate Status: 0 of 5 COMMIT requirements met
=====================================
FORECAST ALIGNMENT
Rep Confidence: COMMIT (80-90%)
Evidence Supports: PIPELINE (30-40%)
Alignment: OVER-FORECASTED BY 50%
VERDICT: This deal should be moved from COMMIT to PIPELINE. Zero COMMIT requirements are met. The rep has a contact who likes them, not a proven champion. No economic buyer engagement. No quantified pain or ROI.
=====================================
GAPS TO CLOSE
1. Champion: Sarah not tested—has she provided access, shared intel, or sold internally?
Action: Ask Sarah to introduce you to Jim (CFO)
Owner: Rep
2. Pain Quantification: "Takes too long" is vague
Action: Ask "How many hours per month? What does that cost?"
Owner: Rep
3. Economic Buyer: Jim hasn't been engaged
Action: Get meeting with CFO before calling this UPSIDE
Owner: Rep + Manager
=====================================
MANAGER INSPECTION QUESTIONS
1. "When did we last talk to Jim Walsh? What did he say?"
2. "Has Sarah done anything to advocate for us internally?"
3. "What happens to TechFlow if they don't solve this by Q1?"
4. "What's the quantified cost of their current reporting problem?"
5. "What criteria will they use to make this decision?"
=====================================
=============
INPUT
=============
Deal Information:
[Paste deal details: account name, amount, stage, forecast category, close date, days in stage]
Qualification Data:
[Paste any MEDDPICC/SPICED/BANT scores, CRM notes, or qualification information you have]
Recent Activity:
[Paste last call notes, recent emails, documented next steps, days since last contact]
Stage Gate Requirements (optional):
[If your organization uses different stage gates than the defaults above, paste them here. Otherwise, the defaults will be used.]

What Success Looks Like
Second Example - Well-Qualified Deal:
Deal Information:
Account: DataSync Corp
Amount: $85,000
Stage: Validation
Forecast: UPSIDE
Close Date: Jan 15
Days in Stage: 21
Qualification Data:
Pain: "Manual data reconciliation costs 12 hours/week at $75/hour = $47K/year" - CFO validated
Champion: Marcus Lee (VP Ops) introduced us to CFO and shared budget timeline
Economic Buyer: CFO Lisa Park - had 45-min meeting, agreed to pilot
Decision Process: Marcus → Lisa → Board approval for >$50K, timeline mapped
Timeline: Board meeting Jan 10 for Q1 budget approval
Competition: Evaluated Competitor Y, concerns about implementation timeline
Output:
=====================================
DEAL SNAPSHOT
Account: DataSync Corp
Amount: $85,000
Stage: Validation | Days: 21
Rep Forecast: UPSIDE | Close: Jan 15
=====================================
QUALIFICATION SCORES
| Element | Score | Evidence |
|---------|-------|----------|
| Pain | 2.0 | Quantified: 12 hrs/week × $75 = $47K/year |
| Impact | 1.5 | ROI discussed, CFO validated pain cost |
| Champion | 2.0 | Marcus tested: introduced CFO, shared timeline |
| Economic Buyer | 2.0 | Lisa Park engaged, agreed to pilot |
| Decision Process | 2.0 | Full path: Marcus → Lisa → Board |
| Decision Criteria | 1.5 | Implementation timeline concern surfaced |
| Critical Event | 2.0 | Board meeting Jan 10 for budget |
| Competition | 1.5 | Competitor Y known, our differentiator clear |
TOTAL: 14.5 / 16
=====================================
FORECAST ALIGNMENT
Rep Confidence: UPSIDE (60-70%)
Evidence Supports: COMMIT (80-85%)
VERDICT: This deal is UNDER-FORECASTED. Consider moving to COMMIT. All major gates passed. Only gaps: finalize ROI model and confirm decision criteria alignment.
=====================================
MANAGER INSPECTION QUESTIONS
1. "Board meeting is Jan 10 and we're forecasting Jan 15 close. What happens between board approval and signature?"
2. "What's the exact ROI model we're presenting to the board?"
3. "Beyond implementation timeline, what other criteria is Lisa evaluating?"
4. "What's Marcus saying internally when we're not in the room?"
=====================================
DEAL HEALTH SUMMARY
Overall Readiness: HIGH
Forecast Recommendation: Consider upgrading to COMMIT
Biggest Risk: Board approval timing—Jan 10 meeting leaves 5 days to close
Next Milestone: Confirm board presentation and agenda
=====================================

Creator’s Note: Why This Works

Why Scoring Beats Gut Feel
85% of B2B firms miss their monthly forecast by more than 5%. The problem isn't methodology. Most have MEDDPICC or SPICED documented. The problem is consistent application.
Daniel Kahneman's research explains why: most errors in human judgment aren't bias, they're noise. Noise is inconsistency. The same manager evaluating the same deal on Monday and Friday might reach different conclusions.
Kahneman's conclusion: "You should replace humans by algorithms whenever possible. Even when the algorithm does not do very well, humans do so poorly and are so noisy that, just by removing the noise, you can do better."
A structured scoring system is an algorithm. It forces consistency.
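As a minimal sketch of that idea: the stage-gate check in the prompt is deterministic enough to express in a few lines of Python. The gate thresholds below mirror the COMMIT defaults above; the function and variable names are illustrative, not part of the prompt itself.

```python
# Sketch of the prompt's STEP 3 (compare scores to stage gates) as an algorithm.
# Thresholds mirror the default COMMIT gates above; names are illustrative.
COMMIT_GATES = {
    "pain": 2.0, "impact": 2.0, "champion": 2.0,
    "economic_buyer": 2.0, "decision_process": 2.0, "critical_event": 1.5,
}

def inspect(scores: dict, gates: dict) -> list:
    """Return (element, required, current, passed) tuples for each gate."""
    return [(el, req, scores.get(el, 0.0), scores.get(el, 0.0) >= req)
            for el, req in gates.items()]

# Scores from the TechFlow example above.
techflow = {"pain": 1.0, "impact": 0.5, "champion": 1.0,
            "economic_buyer": 0.5, "decision_process": 1.0, "critical_event": 0.5}

results = inspect(techflow, COMMIT_GATES)
passed = sum(1 for *_, ok in results if ok)
print(f"Gate status: {passed} of {len(results)} COMMIT requirements met")
```

Run on Monday or Friday, by any manager, the output is identical. That is the whole point: the noise is gone.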
The Evidence: It Works
Zendesk implemented 0-2 scoring for each MEDDPICC element with stage gates. Forecast accuracy improved from 25% variance to within 1-5%. Community Brands took the same approach: standardized framework, mandatory gates, weekly cadence. Their forecasts went from 15% swings to within 1% of projections.
The pattern is clear. Replace subjective confidence with objective scoring, and forecasts get accurate.
Killing Vibe Selling
"Vibe selling" is when a deal feels good but lacks evidence. The prospect is friendly. The demo went well. They said they're "really interested."
But optimism isn't qualification. Gong research shows top performers ask 11-14 questions per discovery call and achieve 74% success rates. Reps who ask only 1-6 questions succeed 46% of the time. Deeper questioning surfaces real pain, real timelines, real decision processes. Shallow discovery leaves you with vibes.
This prompt kills vibe selling by forcing evidence. You can't score Champion = 2 because they "really like you." You need proof: access to other stakeholders, shared internal information, internal advocacy.
If your deal can't survive these questions, you don't have a deal. You have a conversation.
Stage Gates Are Binary
A deal with Pain = 2, Impact = 2, Champion = 0 is unqualified. No averaging past a zero. That's why Zendesk and Community Brands enforced minimum scores by stage. The weakest link determines readiness, not the average.
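The no-averaging rule can be sketched in a few lines (element names follow the 0-2 scale above; the category labels are illustrative):

```python
# Weakest link determines readiness, not the average (illustrative sketch).
scores = {"pain": 2.0, "impact": 2.0, "champion": 0.0}

average = sum(scores.values()) / len(scores)  # 1.33 -- looks healthy
weakest = min(scores.values())                # 0.0  -- deal is unqualified

def readiness(scores: dict) -> str:
    # No averaging past a zero: any 0 on a critical element caps the deal.
    if min(scores.values()) == 0:
        return "UNQUALIFIED"
    if min(scores.values()) >= 2:
        return "COMMIT-eligible"
    return "PIPELINE/UPSIDE (inspect further)"

print(round(average, 2), weakest, readiness(scores))  # 1.33 0.0 UNQUALIFIED
```

The average says this deal is mid-pipeline; the minimum says there is no deal until a champion exists.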

Level up: Advanced Applications
- Customize stage gates for your methodology. The defaults work for most organizations, but if you use MEDDPICC, SPICED, or a custom framework, update the stage gate requirements to match your definitions. The key is consistency: every rep should be measured against the same standards.
- Run on all Commit deals before forecast call. Make this a weekly ritual. Every deal in Commit gets inspected. Deals that don't meet requirements get downgraded before the forecast goes to leadership. This builds forecast credibility over time.
- Track gap patterns across the team. If you run this on 20 deals and Champion is weak in 15 of them, that's not a deal problem. That's a coaching priority. Use the output to identify team-wide skill gaps, not just deal-specific issues.
- Build a Custom GPT for continuous use. This prompt works in any AI tool, but the real unlock is systematizing it. Create a Custom GPT with your stage gates, your methodology definitions, and your qualification criteria baked in. Then your team can run deal inspections without re-entering the framework each time.
- Combine with Day 9 and Day 10. Day 9 identifies the coaching moment from a single call. Day 10 spots patterns across calls. Day 11 inspects the deal. Together, they give you a complete picture: what happened in the conversation, what patterns are emerging, and whether the deal is actually ready.
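The gap-pattern tracking above can be sketched with a standard-library tally. The deal data and the 1.5 "weak" threshold here are hypothetical, chosen only to show the aggregation:

```python
from collections import Counter

# Hypothetical element scores for a team's Commit/Upside deals (0-2 scale).
deals = {
    "TechFlow":  {"pain": 1.0, "champion": 1.0, "economic_buyer": 0.5},
    "DataSync":  {"pain": 2.0, "champion": 2.0, "economic_buyer": 2.0},
    "Acme":      {"pain": 1.5, "champion": 0.5, "economic_buyer": 1.0},
    "Northwind": {"pain": 2.0, "champion": 1.0, "economic_buyer": 1.5},
}

WEAK = 1.5  # below this, an element counts as a gap

gap_counts = Counter(
    element
    for scores in deals.values()
    for element, score in scores.items()
    if score < WEAK
)

# Most common gaps first. If "champion" tops the list across many deals,
# that's a coaching priority, not a deal problem.
print(gap_counts.most_common())
```

Feed it the scores from each inspection and the team-wide weakness surfaces automatically.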

Quick Reference: Forecast Categories
PIPELINE
30-50%
Pain identified, champion identified, process partially mapped
UPSIDE
60-70%
Pain quantified, champion tested, economic buyer identified, process mapped, timeline identified
COMMIT
80-90%
All elements at 2, economic buyer engaged, verbal commitment or equivalent
The rule: Your forecast category should match your lowest critical score, not your average. A deal with Champion = 0 is Pipeline regardless of other scores.
Tomorrow: Day 12: Activity-to-Revenue Linkage
See What’s Holding Your Revenue Back, And What Activates It
Revenue enablement wasn’t designed for execution in motion. Activation is.


