diff --git a/docs/README.skills.md b/docs/README.skills.md index 322e7e85..26182de0 100644 --- a/docs/README.skills.md +++ b/docs/README.skills.md @@ -136,6 +136,17 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to | [github-copilot-starter](../skills/github-copilot-starter/SKILL.md) | Set up complete GitHub Copilot configuration for a new project based on technology stack | None | | [github-issues](../skills/github-issues/SKILL.md) | Create, update, and manage GitHub issues using MCP tools. Use this skill when users want to create bug reports, feature requests, or task issues, update existing issues, add labels/assignees/milestones, set issue fields (dates, priority, custom fields), set issue types, manage issue workflows, link issues, add dependencies, or track blocked-by/blocking relationships. Triggers on requests like "create an issue", "file a bug", "request a feature", "update issue X", "set the priority", "set the start date", "link issues", "add dependency", "blocked by", "blocking", or any GitHub issue management task. | `references/dependencies.md`
`references/images.md`
`references/issue-fields.md`
`references/issue-types.md`
`references/projects.md`
`references/search.md`
`references/sub-issues.md`
`references/templates.md` | | [go-mcp-server-generator](../skills/go-mcp-server-generator/SKILL.md) | Generate a complete Go MCP server project with proper structure, dependencies, and implementation using the official github.com/modelcontextprotocol/go-sdk. | None | +| [gtm-0-to-1-launch](../skills/gtm-0-to-1-launch/SKILL.md) | Launch new products from idea to first customers. Use when launching products, finding early adopters, building launch week playbooks, diagnosing why adoption stalls, or learning that press coverage does not equal growth. Includes the three-layer diagnosis, the 2-week experiment cycle, and the launch that got 50K impressions and 12 signups. | None | +| [gtm-ai-gtm](../skills/gtm-ai-gtm/SKILL.md) | Go-to-market strategy for AI products. Use when positioning AI products, handling "who is responsible when it breaks" objections, pricing variable-cost AI, choosing between copilot/agent/teammate framing, or selling autonomous tools into enterprises. | None | +| [gtm-board-and-investor-communication](../skills/gtm-board-and-investor-communication/SKILL.md) | Board meeting preparation, investor updates, and executive communication. Use when preparing board decks, writing investor updates, handling bad news with the board, structuring QBRs, or building board-level metric discipline. Includes the "Three Things" narrative model, the 4-tier metric hierarchy, and the pre-brief pattern that prevents board surprises. | None | +| [gtm-developer-ecosystem](../skills/gtm-developer-ecosystem/SKILL.md) | Build and scale developer-led adoption through ecosystem programs. Use when deciding open vs curated ecosystems, building developer programs, scaling platform adoption, or designing student program pipelines. | None | +| [gtm-enterprise-account-planning](../skills/gtm-enterprise-account-planning/SKILL.md) | Strategic account planning and execution for enterprise deals. 
Use when planning complex sales cycles, managing multiple stakeholders, applying MEDDICC qualification, tracking deal health, or building mutual action plans. Includes the "stale MAP equals dead deal" pattern. | None | +| [gtm-enterprise-onboarding](../skills/gtm-enterprise-onboarding/SKILL.md) | Four-phase framework for onboarding enterprise customers from contract to value realization. Use when implementing new enterprise customers, preventing churn during onboarding, or solving the adoption cliff that kills deals post-go-live. Includes the Week 4 ghosting pattern. | None | +| [gtm-operating-cadence](../skills/gtm-operating-cadence/SKILL.md) | Design meeting rhythms, metric reporting, quarterly planning, and decision-making velocity for scaling companies. Use when decisions are slow, planning is broken, the company is growing but alignment is worse, or leadership meetings consume all time without producing decisions. | None | +| [gtm-partnership-architecture](../skills/gtm-partnership-architecture/SKILL.md) | Build and scale partner ecosystems that drive revenue and platform adoption. Use when building partner programs from scratch, tiering partnerships, managing co-marketing, making build-vs-partner decisions, or structuring crawl-walk-run partner deployment. | None | +| [gtm-positioning-strategy](../skills/gtm-positioning-strategy/SKILL.md) | Find and own a defensible market position. Use when messaging sounds like competitors, conversion is weak despite awareness, repositioning a product, or testing positioning claims. Includes Crawl-Walk-Run rollout methodology and the word change that improved enterprise deal progression. | None | +| [gtm-product-led-growth](../skills/gtm-product-led-growth/SKILL.md) | Build self-serve acquisition and expansion motions. Use when deciding PLG vs sales-led, optimizing activation, driving freemium conversion, building growth equations, or recognizing when product complexity demands human touch. 
Includes the parallel test where sales-led won 10x on revenue. | None | +| [gtm-technical-product-pricing](../skills/gtm-technical-product-pricing/SKILL.md) | Pricing strategy for technical products. Use when choosing usage-based vs seat-based, designing freemium thresholds, structuring enterprise pricing conversations, deciding when to raise prices, or using price as a positioning signal. | None | | [image-manipulation-image-magick](../skills/image-manipulation-image-magick/SKILL.md) | Process and manipulate images using ImageMagick. Supports resizing, format conversion, batch processing, and retrieving image metadata. Use when working with images, creating thumbnails, resizing wallpapers, or performing batch image operations. | None | | [import-infrastructure-as-code](../skills/import-infrastructure-as-code/SKILL.md) | Import existing Azure resources into Terraform using Azure CLI discovery and Azure Verified Modules (AVM). Use when asked to reverse-engineer live Azure infrastructure, generate Infrastructure as Code from existing subscriptions/resource groups/resource IDs, map dependencies, derive exact import addresses from downloaded module source, prevent configuration drift, and produce AVM-based Terraform files ready for validation and planning across any Azure resource type. | None | | [issue-fields-migration](../skills/issue-fields-migration/SKILL.md) | Bulk-migrate metadata to GitHub issue fields from two sources: repo labels (e.g. priority labels to a Priority field) and Project V2 fields. Use when users say "migrate my labels to issue fields", "migrate project fields to issue fields", "convert labels to issue fields", "copy project field values to issue fields", or ask about adopting issue fields. Issue fields are org-level typed metadata (single select, text, number, date) that replace label-based workarounds with structured, searchable, cross-repo fields. | `references/issue-fields-api.md`
`references/labels-api.md`
`references/projects-api.md` | diff --git a/skills/gtm-0-to-1-launch/SKILL.md b/skills/gtm-0-to-1-launch/SKILL.md new file mode 100644 index 00000000..a06c700c --- /dev/null +++ b/skills/gtm-0-to-1-launch/SKILL.md @@ -0,0 +1,318 @@ +--- +name: gtm-0-to-1-launch +description: Launch new products from idea to first customers. Use when launching products, finding early adopters, building launch week playbooks, diagnosing why adoption stalls, or learning that press coverage does not equal growth. Includes the three-layer diagnosis, the 2-week experiment cycle, and the launch that got 50K impressions and 12 signups. +license: MIT +--- + +# 0-to-1 Launch + +Launch new products from idea to first customers. The goal isn't headlines — it's finding 10 customers who can't live without you. + +## When to Use + +**Triggers:** +- "How do we launch this product?" +- "First customer acquisition strategy" +- "We launched but nobody's using it" +- "Product Hunt vs direct outreach?" +- "We have awareness but no conversion" +- "How do I know if this is working?" + +**Context:** +- New product launches +- Feature launches that feel like new products +- Finding first 10-50 customers +- Validating product-market fit +- Diagnosing why early traction stalls + +--- + +## Core Frameworks + +### 1. Press ≠ Growth (The Launch That Got 12 Signups) + +**The Pattern:** + +Coordinated a feature launch with full press tour. TechCrunch, VentureBeat, product blogs. Big announcement day. + +**Result:** +- 50K impressions +- 12 signups +- 2 conversions + +**Why It Failed:** + +Optimized for media buzz, not user value. The feature wasn't ready for self-serve. It needed education, context, hand-holding. Press gives you eyeballs. But eyeballs without activation = vanity. + +**What Works Better:** + +Email 50 target customers directly. "We built [feature] because teams like yours struggle with [problem]. Want early access?" Walk them through setup personally. Get feedback, iterate. 
+ +**Result:** 50 emails → 15 replies (30% reply rate) → 8 trials → 4 conversions (50% trial-to-paid). + +**The Lesson:** + +Early customers come from direct outreach, not press coverage. Press matters later (Series A announcement, major milestone). For 0-to-1, it's a distraction. + +--- + +### 2. The Three-Layer Diagnosis (Why Launches Stall) + +**The Pattern:** + +You launched. You have some awareness. But conversion is weak. The problem lives in one of three layers, and each requires a different intervention. + +**Layer 1: Positioning Problem** + +Symptoms: +- Messaging sounds like competitors +- Differentiation requires explaining complex technical details +- Buyers see you as interchangeable with alternatives +- Sales conversations get derailed by comparison questions + +Diagnosis: You're "fighting an asymmetric war on the wrong front" — competing on features against better-funded companies. Map where competitors claim unique value. Find the position they can't easily copy. + +Fix: Stake a claim you can own structurally (not just through product features). Test with outbound messaging before committing product resources. + +**Layer 2: Experience Problem** + +Symptoms: +- Strong awareness but weak activation +- Users sign up but don't complete the first workflow +- Multiple entry points creating decision paralysis +- Documentation is feature-centric, not outcome-centric + +Diagnosis: Flexibility without opinionated defaults is a liability, not a feature. Users face the "paradox of choice" — too many options, not enough guidance to the aha moment. + +Fix: Identify 2-3 "undeniable use cases" that deliver immediate value. Restrict onboarding to those specific use cases. Gate advanced features behind a mastery path. Rewrite help content around jobs-to-be-done, not feature lists.
+ +**Layer 3: Alignment Problem** + +Symptoms: +- Team reports being "out of bandwidth" for customers +- Different functions optimize for different metrics +- Every idea has equal weight (no tiebreaker) +- No clear north star connecting activities to outcomes + +Diagnosis: "Exploratory mode" — where every initiative has equal priority — becomes destructive when resources are constrained. + +Fix: Define a single shared north star. Use it as tiebreaker for every decision: "Does this help us win a customer?" Cut activities that don't ladder up. Make progress visible weekly, not quarterly. + +**How to Use This:** + +When a launch stalls, diagnose which layer is broken before throwing resources at it. Fixing experience when the problem is positioning wastes engineering time. Fixing positioning when the problem is internal alignment wastes marketing spend. + +--- + +### 3. The First 10 Customers Framework + +**Principle:** First 10 customers are not for revenue. They're for learning. + +**What You're Learning:** +1. Does the product actually solve the problem? +2. What's the activation flow? (How do they get value?) +3. What objections come up? (Price, features, integrations?) +4. Who's the real buyer? (Title, role, budget authority?) +5. What's the sales cycle? (Days, weeks, months?) + +**How to Find Them:** + +**Channel 1: Personal Network (first 2-3)** +- "I'm building [X], can I get your feedback?" +- Convert to paying customers (don't give away for free — free users give different feedback than paying ones) + +**Channel 2: Direct Outreach (customers 3-20)** +- Build list of 100 target accounts +- Personalize to their specific pain +- Test messaging variants — which angle gets replies? 
+ +**Channel 3: Ceiling Moment Targeting (highest-intent)** +- The highest-intent prospects are people who've already adopted a comparable solution and hit its limits +- They've invested in learning a tool, hit its ceiling, and have low switching costs +- Craft outreach around the limitation: "We see teams that outgrow [incumbent] when they need [capability]. That's what we built." +- These convert 3-5x better than cold outreach because they already understand the problem + +**Channel 4: Community (developer products)** +- "Built [X] to solve [problem], looking for early users" +- Offer white-glove onboarding +- Best for products where users congregate in Slack/Discord/forums + +--- + +### 4. The 2-Week Experiment Cycle + +**The Pattern:** + +Speed in early stages matters more than perfection. The constraint isn't whether you're right — it's how quickly you can test assumptions and iterate. + +**How to Execute:** + +- Frame every test with clear success criteria before starting +- Test one variable per experiment (messaging, channel, pricing, feature) +- Run for 2 weeks maximum — if it's not showing signal by then, it won't +- If it works, allocate 3x resources within a week +- If it doesn't, kill it and move to the next test +- Document what you learned regardless of outcome + +**The Playbook Rule:** + +Every successful experiment must become a playbook before scaling. Structure: Goal → Steps → Expected output → Metrics → Risks. If someone unfamiliar can't execute the playbook, it's not documented well enough. + +**Why This Matters:** + +One-off wins don't compound. Systematized experiments do. The goal isn't a single launch — it's building a repeatable machine for testing assumptions at speed. + +**Common Mistake:** + +Over-planning before testing. Waiting for "perfect" conditions before launching. Staying with failing experiments too long because you've invested emotional energy. Make decisions with 70% of the information. + +--- + +### 5. 
Partner-Led Market Entry (When You Don't Have Distribution) + +**The Pattern:** + +Rather than entering new markets through direct sales alone, use partnerships with established players to accelerate entry. + +**How to Execute:** + +1. Identify market leaders in your target segment +2. Approach with the customer problem, not a partnership pitch — "What if your users could access [capability]?" shifts from your need to their need +3. Start small: Help them solve one specific problem (narrow integration, not full partnership) +4. Prove value with a 3-6 month pilot before asking for broader commitment +5. Build reference customers together — reduces their risk +6. Leverage their GTM: once integrated, they market to their base + +**The Supernode Pattern:** + +Position yourself as the integration hub that other tools naturally connect through. You own critical data or workflows that other platforms need. This compounds — each new partner makes you more valuable to the next. + +**Category Sequencing:** + +Don't pursue partnerships everywhere. Dominate 2-3 categories per quarter: +1. Lead with genuine use cases: "Our users ask for [partner] integration 50x per month" +2. Once you partner with a top player, competitors feel urgency to work with you too +3. After 2-3 successful partnerships in a category, create joint customer stories + +**Common Mistake:** + +Launching partnerships without clear integration pathways. Expecting partners to drive awareness without support. Treating partnerships as a sales channel rather than platform expansion. + +--- + +### 6. PMF Validation Checklist + +**Product-market fit is when customers pull you forward, not when you push them.** + +**Retention:** +- [ ] 40%+ of Week 1 users return in Week 4 +- [ ] Usage increasing over time +- [ ] Customers renewing without sales push + +**Organic Growth:** +- [ ] Word-of-mouth referrals happening +- [ ] Customers asking "can I add my team?" 
+- [ ] Inbound interest without paid marketing + +**Sales Velocity:** +- [ ] Sales cycles shortening +- [ ] Win rates >30% of trials +- [ ] Customers saying "we need this now" + +**Qualitative:** +- [ ] >40% very disappointed if product went away (Sean Ellis test) +- [ ] Customers can articulate what it's for (clear use case) +- [ ] Customers advocating publicly + +**If you don't have these, you don't have PMF yet. Don't scale marketing/sales.** + +--- + +## Decision Trees + +### Why Is Our Launch Stalling? + +``` +Do prospects understand what you are? +├─ No → Layer 1: Positioning problem +│ Fix: Test new messaging before changing product +└─ Yes → Continue... + │ + Do users activate after signing up? + ├─ No → Layer 2: Experience problem + │ Fix: Restrict onboarding to 2-3 use cases, guide to aha moment + └─ Yes → Continue... + │ + Is the team aligned on what matters? + ├─ No → Layer 3: Alignment problem + │ Fix: Single north star, weekly visibility, cut non-essential + └─ Yes → Keep iterating, you're on the right track +``` + +### Press Launch or Direct Outreach? + +``` +Self-serve ready? (Users get value in <10 min) +├─ No → Direct outreach only (press won't convert) +└─ Yes → Do you have >$1M funding to announce? + ├─ Yes → Both (press for awareness, outreach for conversion) + └─ No → Direct outreach first, press later +``` + +--- + +## Common Mistakes + +**1. Optimizing for headlines instead of activation** +50K impressions and 12 signups. Press ≠ growth. + +**2. No target customer list before launch** +Spray-and-pray doesn't work at 0-to-1. Build the list of 100 accounts first. + +**3. Flexibility without defaults** +Giving users every option paralyzes them. Pick 2-3 undeniable use cases and guide hard. + +**4. Giving product away for free** +Free users give polite feedback. Paying users give honest feedback. + +**5. Scaling before learning** +First 10 customers are for learning, not revenue. Document everything. + +**6. 
Over-planning, under-testing** +2-week experiments with clear kill criteria. Move fast, document learnings. + +**7. Diagnosing the wrong layer** +Positioning fix when the problem is experience = wasted marketing. Experience fix when the problem is positioning = wasted engineering. + +--- + +## Quick Reference + +**Three-layer diagnosis:** +Layer 1: Positioning (messaging sounds like competitors) → Test new messaging +Layer 2: Experience (awareness but no activation) → Guide to aha moment +Layer 3: Alignment (team scattered) → Single north star, weekly visibility + +**First 10 customers:** +Personal network (2-3) → Direct outreach (3-20) → Ceiling moment targeting (highest intent) → Community (developer products) + +**2-week experiment cycle:** +Hypothesis → Success criteria → Test (2 weeks max) → Kill or 3x → Document playbook + +**PMF signals:** +40%+ Week 1→4 retention + word-of-mouth + shortening sales cycles + >40% very disappointed + +**Partner-led entry:** +Customer problem first → Narrow pilot → Reference customers together → Leverage their GTM + +--- + +## Related Skills + +- **product-led-growth**: Scaling after initial traction +- **positioning-strategy**: Positioning for launch +- **partnership-architecture**: Partner-led market entry + +--- + +*Based on launching features that optimized for press and got 12 signups from 50K impressions, diagnosing launch stalls across three companies using the three-layer model, and building the 2-week experiment cycle that turned ad hoc testing into a repeatable machine. Also draws on partner-led market entry across multiple geographies and segments. Not theory — lessons from mistaking vanity metrics for growth and learning to diagnose the actual problem.* diff --git a/skills/gtm-ai-gtm/SKILL.md b/skills/gtm-ai-gtm/SKILL.md new file mode 100644 index 00000000..e0e97577 --- /dev/null +++ b/skills/gtm-ai-gtm/SKILL.md @@ -0,0 +1,566 @@ +--- +name: gtm-ai-gtm +description: Go-to-market strategy for AI products. 
Use when positioning AI products, handling "who is responsible when it breaks" objections, pricing variable-cost AI, choosing between copilot/agent/teammate framing, or selling autonomous tools into enterprises. +license: MIT +--- + +# AI Product GTM + +Go-to-market strategy for AI products. These aren't generic AI principles — they're patterns from selling autonomous AI agents into enterprises where "autonomous" scared buyers and "teammate" converted them. + +## When to Use + +**Triggers:** +- "How do we position this AI product?" +- "Buyers say they're worried about AI breaking production" +- "Should we call it autonomous or copilot?" +- "How do we price AI when usage varies 10x by customer?" +- "Enterprise security passed but ops rejected us — why?" + +**Context:** +- AI agent platforms (coding, support, ops) +- LLM-based applications +- Autonomous tools that *do* things (not just suggest) +- AI infrastructure +- Anything where the AI makes decisions + +--- + +## Core Frameworks + +### 1. The Real Enterprise AI Objection (It's Not What You Think) + +**What I Learned Selling Autonomous AI Agents:** + +Three months in, enterprise security reviews were passing fast. Good sign, right? Then the pattern emerged: security approved, but **operations rejected us**. + +The objection wasn't "will the AI break production?" — they *assumed* it would break production eventually. The real question was: + +**"Who's responsible when the agent does something wrong?"** + +Not "do we trust the agent?" — "do we trust our *team* to handle this?" + +**Why This Matters:** + +Autonomous agents create a new operational burden. You're not selling AI capability, you're selling organizational readiness. When your agent halts production at 2am, who gets paged? Who fixes it? Who explains it to the VP? + +**Framework: The Accountability Cascade** + +Before deploying AI agents, enterprises need clear answers: + +1. **L1 Response**: Who monitors the agent? (24/7 ops team, or dev team on-call?) 
+2. **L2 Escalation**: When agent action fails, who debugs? (Agent team, or product team?) +3. **L3 Ownership**: When something breaks badly, who owns customer communication? + +If you can't answer all three, **they won't buy**. Doesn't matter how good your AI is. + +**How This Changes Your Sales Process:** + +**Old approach:** +- Demo the AI +- Show accuracy metrics +- Talk about ROI + +**New approach:** +- Demo the AI +- Show the *failure modes* explicitly +- Ask: "Who on your team would handle this scenario?" +- Walk through their incident response process +- Map AI failures to their existing runbooks + +**The Qualification Question:** + +"Walk me through what happens when the agent takes an action that breaks a workflow. Who gets alerted? Who investigates? Who decides whether to roll back or fix forward?" + +If they can't answer, they're not ready. Pause the deal and help them build the process first. + +**Common Mistake:** + +Treating this as a *product* objection ("we'll make the AI more accurate"). It's an *organizational* objection. More accuracy doesn't solve "who owns this at 2am?" + +**Pattern I've Seen Work:** + +Companies that succeed with AI agents already have: +- On-call rotations for production systems +- Incident response playbooks +- Blameless postmortem culture +- Clear escalation paths + +Companies that struggle: +- Manual deployment processes +- Hero culture ("Steve fixes everything") +- No formal incident response +- Blame-focused culture + +**Decision Criteria:** + +Before demoing autonomous AI to enterprises, ask yourself: "If this breaks their production, who on *their* team owns the fix?" If you can't answer, they can't buy. + +--- + +### 2. Copilot vs Agent vs Teammate (Three Different GTM Motions) + +**The Positioning Trap:** + +In early enterprise conversations, we positioned ourselves as an "autonomous AI agent." Buyers flinched. One word change — "autonomous" → "AI teammate" — and deal progression improved measurably. + +Why? 
**Word choice shapes buyer psychology.** + +**The Three Framings:** + +**1. Copilot (Safest, Lowest Value)** +- **What it means**: AI suggests, human decides every time +- **Buyer psychology**: Feels safe, non-threatening +- **GTM motion**: Developer adoption, bottoms-up +- **Use case**: Code completion, writing assistance, search +- **Objection**: "Is this worth paying for?" (low perceived value) + +**2. Agent (Scariest, Highest Value)** +- **What it means**: AI acts autonomously, human reviews periodically +- **Buyer psychology**: Scary, implies replacing humans +- **GTM motion**: Enterprise sales, top-down +- **Use case**: Batch processing, automated workflows, ops +- **Objection**: "What if it breaks production?" (accountability fear) + +**3. Teammate (Sweet Spot)** +- **What it means**: AI and human collaborate, split the work +- **Buyer psychology**: Partnership, not replacement +- **GTM motion**: Hybrid (dev adoption + manager approval) +- **Use case**: Most AI agent platforms +- **Objection**: "How do we integrate this into our workflow?" (process question) + +**The Positioning Shift:** + +**Before:** "Autonomous AI agent that handles complex workflows end-to-end" +- Developers: "Cool, but scary" +- Managers: "Will this replace our team?" +- Deal progression: Slow + +**After:** "AI teammate that pairs with your engineers on complex tasks" +- Developers: "This helps me" +- Managers: "This makes my team more productive" +- Deal progression: Three enterprise deals that had stalled 4+ months closed within 8 weeks of the shift + +**Specific Language Choices That Mattered:** + +❌ **Don't say:** +- "Autonomous" (scary) +- "Replaces" (threatening) +- "Fully automated" (no control) +- "AI-first" (what does that even mean?) 
+ +✅ **Do say:** +- "Teammate" (collaborative) +- "Augments" or "Accelerates" (helping, not replacing) +- "You stay in control" (reassuring) +- "Handles the repetitive work" (specific value) + +**How to Choose Your Framing:** + +``` +Does your AI make decisions without human approval? +├─ Yes → Are you selling to developers or enterprises? +│ ├─ Developers → "Agent" framing (they want autonomous) +│ └─ Enterprises → "Teammate" framing (they want control) +└─ No → "Copilot" framing (augmentation, not automation) +``` + +**The Hard Truth:** + +You can build an agent but position it as a copilot. You can't build a copilot and position it as an agent. **Product capabilities set a ceiling; positioning chooses where you land below it.** + +**Common Mistake:** + +Using "autonomous" because it sounds impressive. Impressive ≠ trusted. If buyers flinch at your positioning, you've lost them. + +--- + +### 3. The AI Pricing Problem (When Usage Varies 10x) + +**The Pattern:** + +Every AI company I've worked with faces this: Customer A uses 1,000 API calls/month. Customer B uses 10,000. Do you charge Customer B 10x more? If yes, they churn. If no, your margins collapse. + +**The Three Models:** + +**1. Seat-Based ($X per user/month)** +- **When it works**: AI augments human work predictably +- **Example**: Code completion, writing assistant +- **Problem**: Doesn't capture AI value scaling +- **Real risk**: High-usage customers are your best customers, but low-usage ones subsidize them + +**2. Usage-Based ($X per API call / prediction / hour)** +- **When it works**: AI does variable work, customers understand the unit +- **Example**: Image generation, transcription, batch ML +- **Problem**: Sticker shock for high-usage customers +- **Real risk**: Customers optimize to use *less* of your product + +**3. 
Outcome-Based ($X per outcome achieved)** +- **When it works**: You can measure outcomes reliably +- **Example**: "Pay per bug fixed" or "Pay per support ticket resolved" +- **Problem**: Hard to measure, easy to game +- **Real risk**: You bear all the risk if AI doesn't perform + +**What Actually Works (Hybrid):** + +Base fee (covers fixed costs) + variable fee (scales with value). + +**Example structure:** +- Base: $X/month per team (access to platform) +- Variable: $Y per successful action/outcome +- Why this works: + - Base covers infra/support costs + - Variable aligns with customer value + - High-usage customers aren't punished (they're getting more value) + +**The Pricing Conversation I Wish I'd Had Earlier:** + +When pricing usage-based AI: + +**Ask the customer:** +"How much would it cost you to do this manually?" + +If it's $0.10 per API call but saves them $2 in labor, you're underpriced. If it costs $0.50 per call but saves them $0.40, they won't use it enough to matter. + +**Pricing Rule:** + +Your variable price should be **20-30% of the customer's alternative cost**. High enough to capture value, low enough that they'll use it liberally. + +**Common Mistake:** + +Copying OpenAI's pricing ($0.01 per 1K tokens) because "that's what everyone does." Your cost structure isn't OpenAI's cost structure. Your value isn't OpenAI's value. Price for *your* business. + +--- + +### 4. The AI Trust Ladder (From Someone Who Climbed It) + +**The Pattern:** + +You can't sell AI by saying "trust us, it works." You build trust in stages. + +**First: Transparency (Before First Demo)** + +Send these three docs before they ask: +- Model card (what model, trained on what, accuracy on what benchmarks) +- Security whitepaper (where data goes, how it's processed) +- Explainability doc (how to interpret AI decisions) + +**Why this works:** +Buyers expect to do diligence. If you send docs *before they ask*, you look confident and credible. 
+ +**Second: Control (In the Demo)** + +Show them the safety mechanisms: +- How users approve/reject AI suggestions +- Kill switches and rollback mechanisms +- Confidence scores and when AI says "I'm not sure" + +**Why this works:** +Fear of "runaway AI" is real. Showing control mechanisms proves you thought about failure modes. + +**Third: Performance (Week 4-8)** + +Prove it works: +- Benchmarks vs baseline (human or previous tool) +- Case study from similar company +- Live demo on their data (if possible) + +**Why this works:** +Proof beats promises. One customer saying "we saved X hours/week" is worth 100 marketing claims. + +**Fourth: Scale (When They're Serious)** + +Show enterprise readiness: +- Enterprise deployment examples +- Performance at scale (latency, throughput, error rates) +- Compliance docs (SOC 2, GDPR, etc.) + +**Why this works:** +Enterprises don't deploy MVPs. They need proof you won't fall over at 1000 users. + +**The Mistake I Made:** + +Trying to prove performance before explaining how the AI worked. Buyers didn't trust the benchmarks because they didn't understand the system. **Order matters.** + +**Decision Criteria:** + +If buyers ask "how does this work?" before you've demoed, you skipped transparency. Back up and send the docs. + +--- + +### 5. The Enterprise AI Demo (Show Failure, Not Just Success) + +**What Doesn't Work:** + +Canned demo where AI magically solves everything. Buyers think "this won't work on our messy data." + +**What Works:** + +Show the AI making a mistake and recovering. Seriously. + +**Demo Structure That Works:** + +**1. The Problem (30 seconds)** +"Your engineers spend hours on [specific task]. Here's what that looks like." +- Show: Current manual workflow +- Quantify: Time × Engineers × Weeks = Total cost + +**2. The AI Attempt (60 seconds)** +"Here's the AI handling the same task." 
+- Show: AI analyzing, taking action +- **Key move**: Have AI encounter an error or uncertainty +- Show: AI re-analyzing, recovering, or asking for help +- **Narrate**: "Notice it didn't get it perfect first time. It handles uncertainty like a human would." + +**3. The Human Review (30 seconds)** +"Here's where the engineer reviews and approves." +- Show: Engineer examining AI's work +- **Key move**: Show the engineer overriding or adjusting something +- **Narrate**: "Human stays in control. AI handles repetitive work, human handles judgment calls." + +**4. The Outcome (30 seconds)** +"[X hours] → [Y minutes]. Engineer still owns the outcome, AI accelerates execution." +- Quantify: Time reduction, cost savings, capacity freed + +**Why This Works:** + +- Showing failure → Builds trust (you're not hiding anything) +- Showing recovery → Proves AI is robust +- Showing human override → Gives them control +- Quantifying savings → Makes ROI concrete + +**The Pattern I've Seen:** + +Demos with perfect AI → Buyers skeptical +Demos with imperfect AI that recovers → Buyers engaged + +**Common Mistake:** + +Cherry-picking examples where AI is 100% accurate. Buyers know real-world data is messy. If you don't show messiness, they assume you're hiding it. + +--- + +### 6. The "Who Owns This?" Objection Handler + +**The Objection:** + +"This looks great, but what happens when the AI does something wrong?" + +**Bad Answer:** +"Our AI is 95% accurate, and we're improving it every week." +(Translation: "It will break production 5% of the time, good luck with that") + +**Good Answer:** +"Great question. Let's walk through a failure scenario together." + +**Then Ask:** + +1. "When the AI takes an action that causes an error, who on your team investigates?" +2. "Do you have an incident response process for tooling failures?" +3. "Who owns rollback decisions — the engineer who approved it, or the ops team?" + +**What This Does:** + +- Shifts from "will it fail?" 
(yes, it will) to "how do we handle failures?" +- Makes them think through operational readiness +- Reveals whether they're ready for AI agents + +**The Follow-Up:** + +"Here's what we recommend: Start with low-risk environments. Let the AI handle non-critical workflows for 2-4 weeks. See how your team handles its mistakes. Then expand scope when you're confident in the process." + +**Why This Works:** + +You're not selling perfection. You're selling a tool that requires operational maturity. **Filtering for mature buyers is better than convincing immature ones.** + +**The Pattern:** + +Mature buyers say: "We already have runbooks for tool failures, we'll add AI to them." +Immature buyers say: "Can you make it never fail?" + +**Decision Criteria:** + +If a buyer demands 100% accuracy, walk away. They're not ready. Come back when they have incident response processes. + +--- + +### 7. The AI Positioning Trap (Fighting Asymmetric Wars) + +**The Pattern:** + +You're competing in the AI agent space. Every competitor's homepage says the same thing: "Automate [workflow] with AI." Your differentiation requires explaining complex technical benchmarks that buyers don't understand. + +This is the positioning trap: **competing on features against better-funded companies on their battlefield.** + +**How to Diagnose It:** + +1. Collect homepage messaging from 5-7 direct competitors +2. Identify shared claims (these are commoditized — you can't win here) +3. Map where you have structural advantage (not just product features) +4. 
Find the position competitors can't easily copy + +**Structural advantages that work for AI positioning:** +- Unique data or workflow ownership (you control something competitors can't replicate) +- Deployment flexibility (on-prem, private cloud, customer-controlled infrastructure) +- Pricing model innovation (outcome-based, usage-based when competitors are seat-based) +- Community or ecosystem (network effects that compound over time) + +**Feature advantages that don't last:** +- "Better accuracy" (competitors catch up in one sprint) +- "Faster inference" (infrastructure commoditizes) +- "More integrations" (easy to copy) + +**The Test:** + +For every positioning claim, ask: Can a competitor copy this with a single product sprint? If yes, it's not defensible. Don't build your GTM on it. + +**Common Mistake:** + +Claiming you're "better" at what everyone does. In AI, benchmarks change monthly. Position on what's structurally different about your approach, not what's temporarily better about your model. + +--- + +### 8. Ceiling Moment Qualification (Finding High-Intent AI Buyers) + +**The Pattern:** + +The highest-intent enterprise buyers for AI agents are people who've already adopted a comparable tool and hit its limits. They've invested in learning, they understand the problem space, and they have a clear business case for the upgrade. + +**How to Identify Ceiling Moments:** + +The prospect has: +- Used a copilot/assistant tool for 6+ months +- Hit its limitations (can't handle complex tasks, doesn't work with their stack, not autonomous enough) +- Low switching costs (the mental model transfers) +- Clear business case ("we're spending X hours on this manually even with the current tool") + +**How to Target Them:** + +1. Identify the tool(s) your AI product complements or displaces +2. Build target lists of companies known to use those tools +3. 
Craft outreach around the limitation, not your features: + - "Teams using [incumbent] often hit a ceiling when they need [capability your product provides]" + - Acknowledge the incumbent has value (don't trash-talk) + - Position as "next level," not replacement + +**Why This Converts Better:** + +Ceiling-moment conversations convert 3-5x vs cold outreach because: +- Prospect already understands the problem +- They've already invested in the category +- They have internal budget allocated +- They can articulate what's missing + +**The Qualification Question:** + +"What's the most complex task you've tried to automate with your current tool, and where did it break down?" + +If they have a specific answer with specific pain, they're a ceiling-moment buyer. If they say "it works fine," they're not ready. + +**Common Mistake:** + +Trying to convince tool-naive prospects to adopt AI agents. Bad conversion rates, long education cycles, and they'll compare you to "doing nothing" instead of "doing it better." Target buyers who already believe in the category. + +--- + +## Decision Trees + +### Which Positioning Should I Use? + +``` +Does your AI act autonomously (no approval per action)? +├─ Yes → Who are you selling to? +│ ├─ Developers → "Agent" framing +│ └─ Enterprises → "Teammate" framing +└─ No → "Copilot" framing +``` + +### Which Pricing Model Should I Use? + +``` +Can you measure customer outcomes reliably? +├─ Yes → Outcome-based (or hybrid with outcome component) +└─ No → Continue... + │ + Does usage vary 5x+ by customer? + ├─ Yes → Hybrid (base + usage) + └─ No → Seat-based +``` + +### Is This Buyer Ready for AI Agents? + +``` +Do they have incident response processes for tool failures? +├─ Yes → Continue... +│ │ +│ Do they have on-call rotations for production systems? +│ ├─ Yes → Qualified buyer +│ └─ No → Help them build it first +└─ No → Not ready (come back in 6 months) +``` + +--- + +## Common Mistakes + +**1. 
Using "autonomous" because it sounds impressive** + - I've watched this slow deals. "Autonomous" scares enterprises. "Teammate" progresses faster. + +**2. Hiding AI failure modes** + - Buyers know real-world data is messy. If you don't show failures, they assume you're hiding them. + +**3. Treating "will it break production?" as the objection** + - Real objection: "who's responsible when it does?" Organizational readiness, not accuracy. + +**4. Pricing usage-based AI like OpenAI** + - Your cost structure isn't theirs. Price for 20-30% of customer's alternative cost. + +**5. Skipping transparency docs before demo** + - Order matters. Transparency → Control → Performance → Scale. Don't skip steps. + +**6. Demoing perfect AI** + - Show mistakes + recovery. Builds more trust than fake perfection. + +**7. Selling to buyers who demand 100% accuracy** + - They're not ready. Filter for mature buyers with incident response processes. + +--- + +## Quick Reference + +**Enterprise objection checklist:** +- [ ] "Who gets paged when AI breaks production?" → Map to their on-call rotation +- [ ] "Who debugs AI failures?" → Map to their incident response +- [ ] "Who owns customer communication?" → Map to their escalation path + +**Positioning word choices:** +- ✅ Teammate, augments, accelerates, you stay in control +- ❌ Autonomous, replaces, fully automated, AI-first + +**Demo structure:** +1. Problem with quantified cost (30s) +2. AI attempt including failure/uncertainty (60s) +3. Human review and override (30s) +4. Outcome with ROI (30s) + +**Trust ladder:** +1. Transparency (model card, security, explainability) +2. Control (approval workflows, kill switches, confidence scores) +3. Performance (benchmarks, case studies, live demo) +4. 
Scale (enterprise deployments, compliance, SLAs) + +**Pricing hybrid formula:** +- Base: $X/month (covers fixed costs) +- Variable: $Y per unit (20-30% of customer's alternative cost) + +--- + +## Related Skills + +- **positioning-strategy**: General positioning frameworks and testing +- **technical-product-pricing**: Pricing models including AI-specific patterns +- **enterprise-account-planning**: Enterprise AI deal management + +--- + +*Based on enterprise AI agent GTM across developer tools and infrastructure. Patterns drawn from working enterprise deal cycles selling autonomous AI products — some carried directly, others supported alongside sales leadership — including the positioning trap diagnosis that shifted from feature competition to structural differentiation, the ceiling-moment qualification that improved outbound conversion significantly, and frameworks tested across security, operations, and engineering buyer personas. Not theory — lessons from deals where "autonomous" killed conversations and "teammate" converted.* diff --git a/skills/gtm-board-and-investor-communication/SKILL.md b/skills/gtm-board-and-investor-communication/SKILL.md new file mode 100644 index 00000000..2cb46cae --- /dev/null +++ b/skills/gtm-board-and-investor-communication/SKILL.md @@ -0,0 +1,453 @@ +--- +name: gtm-board-and-investor-communication +description: Board meeting preparation, investor updates, and executive communication. Use when preparing board decks, writing investor updates, handling bad news with the board, structuring QBRs, or building board-level metric discipline. Includes the "Three Things" narrative model, the 4-tier metric hierarchy, and the pre-brief pattern that prevents board surprises. +license: MIT +--- + +# Board and Investor Communication + +Structure board meetings, investor updates, and executive communication that builds trust and drives decisions — not slide decks that nobody reads. 
+ +## When to Use + +**Triggers:** +- "How do I prepare for our board meeting?" +- "What should go in our investor update?" +- "We missed our numbers, how do we communicate this?" +- "Board deck structure" +- "How often should we update investors?" +- "Our board meetings aren't productive" + +**Context:** +- Seed through growth-stage companies +- Board meeting preparation and follow-up +- Monthly/quarterly investor updates +- Handling misses and bad news +- Cascading strategy from board to organization + +--- + +## Core Frameworks + +### 1. Tell the Story, Then Show the Data + +**The Pattern:** + +Most board meetings start with a data dump. Slide after slide of metrics, then 10 minutes of Q&A where board members try to figure out what it all means. + +Flip it. The narrative should lead; data should confirm. + +**How It Works:** + +Open with where you are in the journey. Not "here's our ARR" but "here's what we believed coming into the quarter, what we learned, and where that puts us now." Then show the data that validates the narrative. + +**Board Meeting Structure:** + +**Pre-Read (Sent 48-72 Hours Before)** +- Financial dashboard (ARR, burn, runway, pipeline) +- Metric scorecard with health indicators (green/yellow/red) +- 2-3 page narrative summary covering market context and strategic update +- Rule: If it can be read, don't present it + +**The Meeting (90-120 Minutes)** + +1. **Market context and questions on pre-read** (15 min) + - What's changed in the market since last meeting? + - Board asks about anything unclear from pre-read + - No re-presenting the data + +2. **CEO strategic update — The "Three Things"** (15 min) + - **What's working** (2-3 areas showing momentum) + - **What's not** (1-2 specific gaps, not vague) + - **What we're doing about it** (specific changes, not promises) + - This is narrative, not numbers + +3. 
**Decision items** (30-45 min) + - 1-3 topics where the board's input or approval is needed + - Come prepared: "We're deciding between X and Y, board thoughts?" + - Leave with a decision, not "let's revisit next quarter" + +4. **Deep dive** (20-30 min) + - One topic explored in depth (rotating each meeting) + - Bring the functional leader who owns it + - Examples: pricing strategy, competitive landscape, org design, market expansion + +5. **Closing** (10 min) + - Summarize decisions made and action owners + - Closed session (board without management) + - CEO and board chair debrief + +**Common Mistakes:** + +- Opening with data before narrative (board gets lost in numbers without context) +- Multiple competing narratives (board can't synthesize — pick one arc) +- Trying to cover too many topics deeply (results in rushed decisions on everything) +- No deep dives (board becomes a rubber stamp) +- Filling the meeting with good news (boards don't trust CEOs who only share wins) + +--- + +### 2. The "Three Things" Narrative Model + +**The Pattern:** + +Businesses are too complex to summarize in one headline. Three distinct points — what's working, what's not, what's changing — let the board grasp status and trajectory in minutes. + +**What's Working (2-3 areas):** +Be specific. Not "sales are going well" but "closed first six-figure enterprise deal, validating the commercial motion." Quantify momentum: +- New product launches gaining adoption +- Sales motion maturing (first big customer, repeatable motion emerging) +- Community/ecosystem milestones + +**What's Not Working (1-2 areas):** +Be equally specific. Not "we had some challenges" but "self-serve conversion is weak — strong awareness but 0.1% conversion rate." Board needs to understand the actual problem, not a euphemism. + +**What We're Doing About It (for each "not working"):** +Articulate specific changes. What product changes? What org changes? What's the timeline? What resources are committed? 
+ +**Why This Works:** + +Board members read dozens of updates. The Three Things model gives them a mental filing system: momentum (feel good), risk (pay attention), agency (this team handles problems). Without it, boards either over-index on one bad metric or miss the real issue buried in a 40-slide deck. + +--- + +### 3. Progress Is Directional, Not Absolute + +**The Pattern:** + +Hitting 100K users means one thing if you were aiming for 50K and something else entirely if the target was 200K. Context is everything. Show progress toward your goal, not just the number. + +**How to Frame Progress:** + +- **On track:** "Target 100K active developers. Currently 70K. All initiatives performing as planned. Confidence: High." +- **At risk:** "Target 100K active developers. Currently 50K. Acquisition cost higher than expected. Testing new channels. Adjusted target: 85K. Confidence: Medium." +- **Ahead:** "Target 100K active developers. Currently 95K. Exceeding acquisition forecasts. New partnerships accelerating growth. Confidence: Very high." + +**The Rules:** +1. Set the goal upfront (every metric needs a target) +2. Show progress as % toward goal, not just absolute number +3. Acknowledge misses; explain pivots +4. State confidence level based on trajectory + +**Common Mistake:** + +Changing goals when you miss them. Boards track this. If Q1 target was 100K and you hit 60K, don't make Q2 target 75K and call it "revised guidance." State the miss, explain why, show the recovery plan. Credibility compounds faster than metrics. + +--- + +### 4. The Four-Tier Metric Hierarchy + +**The Pattern:** + +Too many metrics confuse the board. Too few miss important signals. Structure your metrics in four tiers, and never change them quarter to quarter. + +**Tier 1: North Star (1 metric)** +The single metric that best represents company health. 
+- Examples: ARR, active users, tasks completed +- Show trend line, not just current state +- This is the one metric the board should remember if they forget everything else + +**Tier 2: Driver Metrics (3-5 metrics)** +Leading indicators that predict your north star. +- User acquisition rate +- Retention curves (how many active last month remain active this month?) +- Feature adoption (% of users on key feature) +- DAU/MAU ratio +- Pipeline coverage +- Why track these: early warning before revenue impact + +**Tier 3: Health Metrics (3-5 metrics)** +Signals of team and operational health. +- Burn rate and runway +- Headcount vs plan +- Engineering velocity / shipping pace +- NPS or customer satisfaction +- Why track these: capacity constraints on growth + +**Tier 4: Red Flags (2-3 thresholds)** +Metrics that, if they cross a threshold, require immediate action. +- Churn rate above X% +- CAC above $Y +- Retention decline greater than Z% +- Why track these: triggers for strategic conversation, not just monitoring + +**The Discipline Rules:** + +1. **Same metrics every quarter.** Build a 4-6 quarter track record. OK to add new metrics, but never drop old ones. +2. **One context sentence per metric.** What does this metric tell us? Is it good or bad? What's the implication? +3. **Actuals vs plan, always.** Never show a metric without showing what you planned. The delta is the conversation. +4. **Disaggregate when needed.** Total usage hides segments. Break down by customer type, segment, product line — wherever averages mask reality. +5. **Use health indicators.** Green/yellow/red status for quick board assessment. Lets board scan the dashboard and zoom in on yellows and reds. + +**Common Mistake:** + +Changing which metrics you show based on which ones look good this quarter. Boards notice immediately. It's the fastest way to lose credibility on your numbers. Show the same dashboard every time — when a metric looks bad, that's the conversation. + +--- + +### 5. 
Deliver Bad News Before the Board Asks + +**The Pattern:** + +Board confidence erodes when bad news is delayed or sugar-coated. They can handle bad news. They can't handle surprises. And they hate slow decision-making. + +**The Pre-Brief Pattern:** + +Before bad news hits the full board, pre-brief your lead director 48-72 hours before the meeting. + +- **Who:** Board chair or most experienced director +- **What:** The issue, your assessment, your plan +- **Why:** Director can help frame for the full board, avoid surprise shock, and may have seen this pattern before + +Then send a short email to the full board: + +"Wanted to flag something before our next meeting. [Specific problem]. Here's what we know so far, what we don't know yet, and what we're doing about it. Will have a full update at the board meeting." + +**The Bad News Framework:** + +For every piece of bad news, four elements: + +1. **What happened** (specific, one sentence — not defensive) + - "We missed Q1 target of 10 new enterprise customers; achieved 7" + +2. **Why it happened** (root cause, not excuse) + - "Customer evaluation timelines slipped due to product gaps we didn't diagnose early enough" + - Own it: "We misread the demand signal" not "the market shifted" + +3. **What's the impact** (honest about severity) + - "Affects Q1 ARR by ~$50K, pushing us below original guidance" + +4. **What we're doing about it** (specific actions, owners, timeline) + - "Changed sales motion to qualify on product readiness upfront; revised Q2 target to 15 (including Q1 backlog of 7)" + +**How NOT to Communicate a Miss:** +- "We didn't hit Q1 goals, but the pipeline is strong" +- "External factors slowed deals, but we're confident about Q2" +- "We had some implementation challenges, but they're resolved now" + +All vague. All defensive. None explain root cause or plan. + +**Follow-Through Pattern:** + +Every subsequent board update should reference previously raised issues: "Last quarter I flagged [issue]. 
Here's the update: [progress]. Status: [resolved / in progress / escalating]." + +This builds a track record of follow-through. Boards remember who tracks their own problems. + +**Common Mistake:** + +Sandwiching bad news between good news, hoping nobody notices. Board members see this immediately. It damages trust more than the bad news itself. + +--- + +### 6. The Monthly Investor Update + +**The Pattern:** + +Most founders send investor updates quarterly at best. The best founders send monthly — even when things are bad. Especially when things are bad. + +**Why Monthly:** +- Keeps investors engaged without scheduling calls +- When you need help (intros, advice, bridge), investors already have context +- Consistency signals operational discipline; irregular updates signal chaos + +**The Format:** + +**Subject line:** [Company Name] — [Month] Update + +**Opening (1 paragraph):** +One headline. Brief context. + +**TL;DR (3 bullets):** +- One number (the metric that matters most right now) +- One win (specific — customer name, deal size, milestone) +- One challenge (with what you're doing about it) + +**Key Metrics (same table every month):** + +| Metric | This Month | Last Month | 3-Mo Avg | vs Plan | +|--------|-----------|------------|----------|---------| +| ARR | | | | | +| MRR Growth | | | | | +| Burn Rate | | | | | +| Runway (months) | | | | | +| Pipeline | | | | | +| Customers | | | | | + +Keep the table identical every month; save the commentary for the metrics that actually moved. + +**Key Updates (2-3 bullets):** +- Major hires, product launches, partnership/customer wins +- Strategic pivots or course corrections +- Specific enough to matter + +**Asks (1-2 bullets):** +- What help do you need? Intros, hiring leads, strategic advice +- Be specific: "Looking for intro to VP Eng at [company type]" not "Looking for engineering talent" + +**The Cadence Rules:** +- Same day every month. 
First Monday, last Friday — pick one and never miss it +- Send within a week of major news (don't hold it for the next scheduled update) +- During active fundraising: weekly updates to warm investors, monthly to existing + +**Common Mistake:** + +Only sending updates when things are good. Six months of good news followed by silence tells investors exactly what you're not telling them. + +--- + +### 7. Burn Is About Discipline; Revenue Is About Traction + +**The Pattern:** + +Financial communication to the board has two separate stories. Don't conflate them. + +**Burn Communication:** +- Monthly rate, runway in months, YoY improvement +- Efficiency gains (what did you cut, what did you optimize?) +- Capital plan: what's secured, what's targeted, what's the timeline + +The burn story is: "We are disciplined operators who make capital go further." + +**Revenue Communication:** +- MoM growth, CAC, LTV, payback period +- Customer segmentation (self-serve vs enterprise, what's the mix?) +- Monetization plan with timeline + +The revenue story is: "We have traction and know how to grow it." + +**Reconcile Both:** +The best financial communication shows: "We're burning less while growing faster." That's evidence of operating leverage, and it's the single most compelling thing a board can hear. + +**Common Mistake:** + +Conflating burn with growth. A board that hears "we're growing fast" while burn rate climbs thinks you're buying growth. A board that hears "we cut burn by 40%" while growth stalls thinks you're in survival mode. Tell both stories, separately, and then connect them. + +--- + +### 8. Cascading Strategy from Board to Organization + +**The Pattern:** + +Strategy decided in a board room doesn't matter if the team doesn't know it. Create a systematic communication cascade. 
+ +**The Four Levels:** + +**Level 1: Executive Team (Weekly)** +- Strategic priorities and emerging risks +- Key decisions needing alignment + +**Level 2: Extended Leadership (Bi-weekly)** +- How strategy impacts each function +- Key priorities and how success will be measured + +**Level 3: All-Hands (Monthly, 60 min)** +- Leadership update (15 min): where we are, what's changed +- Wins celebration (10 min): customers, launches, people +- Deep dive (20 min): one strategic initiative in detail +- Q&A (15 min): real questions, real answers + +**Level 4: Team-Specific (Ongoing)** +- How team goals connect to company strategy +- What success looks like for this team specifically + +**The Test:** + +Pick a random employee. Can they explain the company strategy? If not, the cascade is broken. Strategy that exists only in executive heads is not strategy — it's a secret. + +**Connect OKRs to Cascade:** +- Company OKRs from strategy +- Function OKRs aligned to company +- Team OKRs aligned to function +- Individual understands how they contribute + +**Common Mistake:** + +Inconsistent messaging. If different leaders tell different stories about where the company is headed, the organization optimizes for conflicting goals. Write the strategy down. One page. Make everyone use the same words. + +--- + +## Decision Trees + +### Board Meeting vs Investor Update + +``` +Is this a full board meeting? +├─ Yes → Full structure: pre-read + Three Things + decisions + deep dive +└─ No → Continue... + │ + Is this a major event (fundraise close, big miss, key hire)? + ├─ Yes → Ad-hoc update immediately (don't wait for schedule) + └─ No → Monthly investor update format +``` + +### How to Frame a Metric + +``` +Did you hit the target? +├─ Yes → State it, move on. Don't over-celebrate. +└─ No → Continue... + │ + Do you understand why? 
+ ├─ Yes → Root cause + plan + revised timeline + └─ No → Say "we're diagnosing" + what you're doing to find out + │ + Is the miss material (affects runway, strategy, or board confidence)? + ├─ Yes → Pre-brief lead director before board meeting + └─ No → Include in Three Things narrative, "what's not working" +``` + +--- + +## Common Mistakes + +**1. Board meetings as status reports** +Board members can read. Send the pre-read. Use the meeting for decisions and discussion, not data walkthroughs. + +**2. Changing metrics every quarter** +Consistency in metrics builds trust. The moment you swap out a metric that looked bad for one that looks good, you've told the board your numbers can't be trusted. + +**3. No clear ask** +Every investor update, every board meeting should have an ask. Intros, advice, approval, patience. Investors want to help — if you never ask, they disengage. + +**4. Defending instead of describing** +When you're explaining away a miss for 10 minutes, the board hears defensiveness. State the miss, state the cause, state the plan. Move on. + +**5. Strategy stays in the board room** +If your team can't explain the strategy, you haven't communicated it. Cascade it or it doesn't exist. + +**6. Surprises in the board meeting** +Bad news should never land for the first time in a board meeting. Pre-brief the lead director. Send the flag email. Let the board process before the meeting. 
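
The metric discipline described above (fixed tiers, actuals vs plan, green/yellow/red health indicators) is simple enough to sketch in code. A minimal illustration, assuming hypothetical metric names, plan values, and status cutoffs (90% of plan for green, 75% for yellow):

```python
# Illustrative board-metric scorecard: a fixed set of metrics shown as
# actuals vs plan with a green/yellow/red health status.
# Metric names, plan values, and cutoffs are hypothetical examples.

def health(actual: float, plan: float, warn: float = 0.90, alert: float = 0.75) -> str:
    """Assumed cutoffs: green at >=90% of plan, yellow at >=75%, red below."""
    ratio = actual / plan
    if ratio >= warn:
        return "green"
    if ratio >= alert:
        return "yellow"
    return "red"

# The metric list never changes quarter to quarter; only the actuals do.
scorecard = [
    # (tier, metric, actual, plan)
    ("north star", "ARR ($K)",           820.0, 1000.0),
    ("driver",     "Pipeline coverage",    3.2,    3.0),
    ("driver",     "Net retention (%)",   96.0,  105.0),
    ("health",     "Runway (months)",     14.0,   18.0),
]

for tier, name, actual, plan in scorecard:
    print(f"{tier:10s} {name:20s} actual={actual:8.1f} plan={plan:8.1f} [{health(actual, plan)}]")
```

The point of the sketch is the discipline, not the numbers: the rows and thresholds stay fixed, the delta to plan is always visible, and the board can scan for yellows and reds.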
+ +--- + +## Quick Reference + +**Board meeting flow:** +Pre-read (48-72 hrs before) → Questions (15 min) → Three Things narrative (15 min) → Decision items (30-45 min) → Deep dive (20-30 min) → Closed session (10 min) + +**Three Things model:** +What's working + What's not + What we're doing about it + +**Metric hierarchy:** +North star (1) → Drivers (3-5) → Health (3-5) → Red flags (2-3 thresholds) + +**Bad news protocol:** +Pre-brief lead director (48-72 hrs) → Flag email to full board → What happened → Why → Impact → Plan → Follow up every subsequent meeting + +**Monthly investor update:** +TL;DR (3 bullets) → Same metrics table → Key updates (2-3) → Specific asks (1-2) + +**Communication cascade:** +Exec (weekly) → Leadership (bi-weekly) → All-hands (monthly) → Team (ongoing) + +--- + +## Related Skills + +- **operating-cadence**: Internal meeting rhythms that feed board preparation +- **enterprise-account-planning**: Pipeline and deal metrics for board reporting +- **technical-product-pricing**: Pricing decisions that require board input + +--- + +*Based on preparing and supporting board communications across multiple companies from Series A through post-IPO — building the deck and sitting in the room at some, reporting updates to the board at others. Includes the investor update cadence that maintained trust through missed quarters, the "Three Things" narrative model used across multiple companies, and the metric discipline framework that survived years of hypergrowth. Not theory — patterns from being close enough to the board to see what lands and what doesn't.* diff --git a/skills/gtm-developer-ecosystem/SKILL.md b/skills/gtm-developer-ecosystem/SKILL.md new file mode 100644 index 00000000..2d0406ac --- /dev/null +++ b/skills/gtm-developer-ecosystem/SKILL.md @@ -0,0 +1,307 @@ +--- +name: gtm-developer-ecosystem +description: Build and scale developer-led adoption through ecosystem programs. 
Use when deciding open vs curated ecosystems, building developer programs, scaling platform adoption, or designing student program pipelines. +license: MIT +--- + +# Developer Ecosystem + +Build and scale developer-led adoption through ecosystem programs, community, and partnerships. Focus on what actually drives adoption, not vanity metrics. + +## When to Use + +**Triggers:** +- "How do we build a developer ecosystem?" +- "Should we curate quality or go open?" +- "Developer community isn't growing" +- "Nobody's building on our API" +- "How do we compete with larger platforms?" + +**Context:** +- API platforms and developer tools +- Products with extensibility (plugins, integrations) +- Developer-first GTM motion +- Platform business models + +--- + +## Core Frameworks + +### 1. Open vs Curated Ecosystem (The Marketplace Decision) + +**The Pattern:** + +Running the ecosystem at a developer platform. Leadership debate: Open the marketplace to anyone, or curate for quality? + +**Quality control camp:** "We need gatekeeping. Otherwise we'll get SEO spam, low-quality integrations, brand damage." + +**Open camp:** "Developers route around gatekeepers. Network effects matter more than quality control." + +**The decision:** Went open. Quality concerns were real, but we made a bet: control comes from discovery and trust layers, not submission gatekeeping. + +**What We Built Instead of Gatekeeping:** + +1. **Search and discovery** — Surface high-quality integrations through algorithms, not human curation +2. **Trust signals** — Verified badges, usage stats, health scores +3. **Community curation** — User ratings, collections, recommendations +4. **Moderation** — Remove spam after publication, not block before + +**Result:** Network effects won. Thousands of integrations published. Quality surfaced through usage, not through us deciding upfront. 
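
The "discovery and trust layers" bet above can be sketched as a ranking heuristic. A minimal illustration, assuming hypothetical field names (`weekly_installs`, `avg_rating`, `uptime_30d`, `verified`) and made-up weights; a real marketplace would tune these against engagement data:

```python
# Illustrative sketch: rank marketplace integrations by trust signals
# (usage, ratings, health, verification) instead of gatekeeping submissions.
# Field names and weights are hypothetical, not a production formula.
import math

def trust_score(integration: dict) -> float:
    usage = math.log1p(integration["weekly_installs"])   # dampen raw usage counts
    rating = integration["avg_rating"] / 5.0             # normalize 0-5 stars to 0-1
    health = integration["uptime_30d"]                   # 0-1 API health score
    verified = 0.5 if integration["verified"] else 0.0   # publisher badge bonus
    return 2.0 * usage + 3.0 * rating + 2.0 * health + verified

catalog = [
    {"name": "popular-but-flaky", "weekly_installs": 5000, "avg_rating": 3.1,
     "uptime_30d": 0.80, "verified": False},
    {"name": "niche-but-solid",   "weekly_installs": 120,  "avg_rating": 4.8,
     "uptime_30d": 0.99, "verified": True},
]

for item in sorted(catalog, key=trust_score, reverse=True):
    print(f"{item['name']}: {trust_score(item):.2f}")
```

Note the design choice this encodes: nothing blocks publication. Low-quality integrations simply rank low in discovery, and moderation removes outright spam after the fact.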
+ +**Decision Framework:** +- **Curated** works when: Brand risk high, dozens of partners, can scale human review +- **Open** works when: Hundreds/thousands of potential partners, network effects matter more than quality control + +**Common Mistake:** + +Defaulting to curated because "we need quality control." This works when you have 10 partners. At 100+, you become the bottleneck. Build discovery and trust systems instead. + +--- + +### 2. The Three-Year Student Program Arc + +**The Pattern:** + +Most developer programs optimize for quick wins. Better approach: Build long-term talent pipeline. + +**Year 1: University Partnerships** +- Partner with CS departments +- Curriculum integration (hackathons, coursework) +- Student licenses (free or heavily discounted) +- Metrics: # universities, # students activated + +**Year 2: Student Community & Certification** +- Student expert certification program +- Student-led workshops and events +- Campus ambassadors +- Metrics: # certified, # student-led events + +**Year 3: Career Bridge** +- Job board connecting students → companies +- Enterprise partnerships (hire certified students) +- Alumni network +- Metrics: # hired, company partnerships + +**Why This Works:** + +Students become enterprise buyers 5-10 years later. You're building brand loyalty before they have purchasing power. + +**Common Mistake:** + +Treating students as immediate revenue. They're not. They're future enterprise decision-makers. + +--- + +### 3. Developer Journey (Awareness → Integration → Advocacy) + +**Stage 1: Awareness** +- How do they discover you? 
+- Content, search, word-of-mouth, events + +**Stage 2: Onboarding** +- First API call in <10 minutes +- Quick-start guides +- Sample code in popular languages + +**Stage 3: Integration** +- Building real use cases +- Integration guides +- Support when stuck + +**Stage 4: Production** +- Deployed and generating value +- Monitoring usage +- Enterprise upgrade path + +**Stage 5: Advocacy** +- Sharing publicly +- Recommending to others +- Contributing back (docs, code, community) + +**Metrics That Matter:** +- Time to first API call (onboarding) +- % reaching production (integration success) +- Monthly active developers (engagement) +- Developer NPS (advocacy) + +**Common Mistake:** + +Measuring vanity metrics (sign-ups, downloads) instead of real engagement (API calls, production deployments). + +--- + +### 4. Documentation Hierarchy + +**Tier 1: Quick Starts (Get to Value Fast)** +- "Hello World" in 5 minutes +- Common use case examples +- Copy-paste code that works + +**Tier 2: Guides (Solve Real Problems)** +- Use case-specific tutorials +- Integration patterns +- Best practices + +**Tier 3: Reference (Complete API Docs)** +- Every endpoint documented +- Request/response examples +- Error codes and handling + +**Tier 4: Conceptual (Understand the System)** +- Architecture overviews +- Design philosophy +- Advanced patterns + +**Most developers need:** Tier 1 first, then Tier 2. Very few read Tier 4. + +**Common Mistake:** + +Starting with Tier 3 (comprehensive API reference). Developers want quick wins first. + +--- + +### 5. 
Community vs Support (When to Use Which) + +**Community (Async, Scalable):** +- Slack/Discord for real-time help +- Forum for searchable Q&A +- GitHub discussions for feature requests +- Best for: Common questions, peer-to-peer help + +**Support (Sync, Expensive):** +- Email support for enterprise +- Dedicated Slack channels for partners +- Video calls for complex integrations +- Best for: Paying customers, strategic partners + +**How to Route:** + +**Community first:** +- Developer asks question +- Community member answers +- You validate and upvote +- Searchable for future developers + +**Escalate to support when:** +- No community answer in 24 hours +- Enterprise/paying customer +- Security or compliance issue +- Complex integration requiring custom work + +**Common Mistake:** + +Providing white-glove support to everyone. Doesn't scale. Build community that helps itself. + +--- + +### 6. Partner Tiering for Developer Ecosystems + +**Tier 1: Integration Partners (Self-Serve)** +- Build with public API +- You provide: docs, Slack channel, office hours +- They drive their own marketing +- Best for: Ambitious partners with resources + +**Tier 2: Strategic Partners (Co-Development)** +- Co-developed integration +- You provide: dedicated channel, co-marketing +- Joint case studies +- Best for: High-impact integrations + +**Don't over-tier.** 2 tiers is enough. More creates confusion. + +--- + +## Decision Trees + +### Open or Curated Ecosystem? + +``` +Is brand damage risk high if low-quality partners join? +├─ Yes (regulated, security) → Curated +└─ No → Continue... + │ + Can you scale human review? + ├─ No (hundreds/thousands) → Open + discovery systems + └─ Yes (dozens) → Curated +``` + +### Community or Support? + +``` +Is this a common question? +├─ Yes → Community (forum, Slack, docs) +└─ No → Continue... + │ + Is requester paying customer? + ├─ Yes → Support (email, dedicated) + └─ No → Community (with escalation path) +``` + +--- + +## Common Mistakes + +**1. 
Building ecosystem before product-market fit** + - Fix core product first, then build ecosystem + +**2. No developer success team** + - Developers need help to succeed beyond docs + +**3. Poor documentation** + - Foundation of ecosystem, non-negotiable + +**4. Treating all developers equally** + - Tier support by strategic value (paying > free, partners > hobbyists) + +**5. No integration quality standards** + - Low-quality integrations hurt your brand + +**6. Measuring only vanity metrics** + - Track activation and production usage, not just sign-ups + +**7. Developer advocates with no technical depth** + - Hire developers who can code and teach + +--- + +## Quick Reference + +**Open ecosystem checklist:** +- [ ] Search and discovery (surface quality algorithmically) +- [ ] Trust signals (verified badges, usage stats, ratings) +- [ ] Community curation (user recommendations, collections) +- [ ] Moderation (remove spam after publication) + +**Developer journey metrics:** +- Awareness: Traffic, sign-ups +- Onboarding: Time to first API call (<10 min target) +- Integration: % reaching production deployment +- Advocacy: Developer NPS, public sharing + +**Documentation hierarchy:** +1. Quick starts (5-min "Hello World") +2. Use case guides (solve real problems) +3. API reference (complete documentation) +4. 
Conceptual (architecture, philosophy) + +**Partner tiers:** +- Tier 1: Self-serve (public API, docs, community) +- Tier 2: Strategic (co-development, co-marketing) + +**Student program timeline:** +- Year 1: University partnerships, activation +- Year 2: Certification, student community +- Year 3: Job board, enterprise hiring bridge + +--- + +## Related Skills + +- **partnership-architecture**: Partner deal structures and co-marketing +- **product-led-growth**: Self-serve activation funnels for developer products +- **0-to-1-launch**: Launching developer products + +--- + +*Based on building developer ecosystems at multiple platform companies, including the open vs curated marketplace decision, student program development (3-year arc building talent pipeline), and partner ecosystem growth. Not theory — patterns from building developer ecosystems that actually drove platform adoption and multi-year brand loyalty.* diff --git a/skills/gtm-enterprise-account-planning/SKILL.md b/skills/gtm-enterprise-account-planning/SKILL.md new file mode 100644 index 00000000..e22b47c5 --- /dev/null +++ b/skills/gtm-enterprise-account-planning/SKILL.md @@ -0,0 +1,426 @@ +--- +name: gtm-enterprise-account-planning +description: Strategic account planning and execution for enterprise deals. Use when planning complex sales cycles, managing multiple stakeholders, applying MEDDICC qualification, tracking deal health, or building mutual action plans. Includes the "stale MAP equals dead deal" pattern. +license: MIT +--- + +# Enterprise Account Planning + +Strategic account planning and execution for enterprise deals. Turn complex sales cycles into systematic wins — or at least know when they're dying before you waste months. + +## When to Use + +**Triggers:** +- "How do I plan this enterprise deal?" +- "This deal has been in motion 3 months, why isn't it closing?" +- "Should I create a full account plan or simplified version?" +- "How do I know if this deal is actually moving?" 
+- "MEDDICC qualification" +- "Building a mutual action plan" + +**Context:** +- Strategic deals above your average ACV +- Multiple stakeholders involved +- Sales cycle exceeds 60 days +- Complex buying process (legal, procurement, security) +- Enterprise or mid-market accounts + +--- + +## Core Frameworks + +### 1. If Your MAP Hasn't Been Updated in 3 Weeks, That Deal Is Dead + +**The Pattern I've Seen:** + +The Mutual Action Plan (MAP) is the single best indicator of deal health. Not pipeline stage. Not verbal commitments. Not "they love the product." + +**The MAP tells you everything:** + +**Healthy deal:** +- MAP updated weekly +- Customer adding their own action items +- Both sides completing tasks on schedule +- New stakeholders appearing in MAP +- Dates moving up (not pushed out) + +**Dying deal:** +- MAP last updated 3+ weeks ago +- Only your side has action items +- Customer tasks marked "pending" for weeks +- No new stakeholders engaged +- All dates in the past + +**Why This Happens:** + +When a deal is real, the customer wants it to happen. They're doing work. They're involving stakeholders. They're moving through their process. + +When a deal is dying, you're doing all the work. They're "too busy." They'll "get back to you next week." The economic buyer is "traveling." + +**The 3-Week Rule:** + +If your MAP hasn't been updated in 3 weeks, the deal is dead — you just don't know it yet. **I've never seen a deal close with a stale MAP. Not once in 11 years.** + +**What to Do:** + +**Week 1 of silence:** Send MAP update: "Here's what we've completed. What's your status on [specific customer action]?" + +**Week 2 of silence:** Escalate to champion: "Haven't heard back on MAP. Are we still on track for [date]? If priorities shifted, let me know." + +**Week 3 of silence:** Qualify out or reset: "It seems like timing might not be right. Should we pause and reconnect in [timeframe], or is there a blocker I can help with?" 
+ +**Common Mistake:** + +Keeping deals in pipeline because "they said they want it." Verbal interest ≠ action. If they're not doing work, they're not buying. + +--- + +### 2. The EB Discovery Problem (And Why Deals Die at Week 8) + +**The Pattern:** + +You're 8 weeks into a deal. POC went great. Champion loves you. Technical validation complete. You send the proposal. + +Then: radio silence. + +**What happened?** You never met the Economic Buyer. + +**The Economic Buyer (EB) is the person who:** +- Controls budget allocation +- Makes final purchase decision +- Signs the contract + +**Not:** +- Your champion (they influence, don't decide) +- The technical lead (they validate, don't buy) +- The VP who attended one demo (they advise, don't sign) + +**Why Deals Die Without EB Access:** + +You built the business case with your champion's assumptions. But the EB has different priorities: +- Champion cares about: solving their team's pain +- EB cares about: ROI, risk mitigation, strategic alignment + +When you send proposal to EB through the champion, EB sees: +- Price tag with no context +- Solution to a problem they didn't articulate +- Risk they haven't evaluated + +**Result:** Deal stalls or dies. + +**The Framework: EB Validation Checklist** + +Before sending proposal, validate: + +- [ ] Have you identified the EB? (Name, title, confirmed by champion) +- [ ] Have you met the EB? (Video call minimum, in-person ideal) +- [ ] Does EB agree on the problem? (In their words, not yours) +- [ ] Does EB agree on success metrics? (How they'll measure ROI) +- [ ] Does EB know the price range? (Ballpark discussed, not surprised) +- [ ] Does EB understand timeline? (Implementation, go-live, value realization) + +**If you answered "no" to any, don't send the proposal yet.** + +**How to Get EB Access:** + +**Ask your champion:** +"Before we finalize pricing, I'd love 15 minutes with [EB name] to make sure we're aligned on outcomes and timeline. Can you intro us?" 
+ +**If champion blocks:** "I can handle that, you don't need to talk to them" +→ This is a red flag. Either champion doesn't have access (not a real champion) or they're afraid EB will kill the deal (which means deal is weak). + +**Push back:** +"I totally understand. At the same time, I want to make sure [EB] sees the full value before seeing the price. In my experience, when economic buyers aren't involved early, deals get delayed in procurement. Can we do a quick alignment call?" + +**Common Mistake:** + +Treating EB meeting as "nice to have." It's mandatory for any deal >$50K. No EB access = no deal. + +--- + +### 3. Personal Win Mapping (People Buy for Themselves) + +**The Pattern:** + +Enterprise software purchases are made by committees. But committees don't buy. **People buy.** + +And people buy for personal reasons: +- Career advancement +- Looking good to their boss +- Reducing their workload +- Covering their ass (CYA) +- Proving they were right +- Not looking stupid + +**Framework: Personal Win Identification** + +For each stakeholder, map: + +**Professional Win:** +- What do they get credit for if this succeeds? +- What pain goes away for them personally? +- How does this make them look good? + +**Professional Risk:** +- What happens to them if this fails? +- What's their reputation cost if this goes wrong? +- Who's skeptical of them internally? + +**Personal Motivations:** +- Are they new in role? (Need quick wins) +- Facing budget cuts? (Need to justify spend) +- Up for promotion? (Need visible success) +- Burned by vendors before? 
(Extra risk-averse) + +**Example: VP of Engineering** + +**Professional Win:** +- Reduce on-call burden for team (they'll stop complaining to her) +- Faster incident response (looks good in QBRs) +- Attract better eng talent (modern tooling) + +**Professional Risk:** +- Team rejects new tool (she forced it on them) +- Migration goes badly (downtime, incidents) +- Vendor fails (she picked them) + +**Personal Motivations:** +- New in role (6 months), needs wins +- Under pressure to improve uptime metrics +- Previous monitoring tool she picked failed + +**How This Changes Your Pitch:** + +**Generic pitch:** +"Our platform improves incident response time by 40%." + +**Personal win pitch:** +"You mentioned the on-call burden is burning out your team. We've seen teams reduce on-call pages by 40% in the first month, which helps with retention. And since you're focused on uptime metrics for the board, the improved response time shows up immediately in your QBR dashboards." + +**The Difference:** + +Generic = business case +Personal = career case + +Both matter. But personal wins close deals. + +**Common Mistake:** + +Selling only to the business problem. "This saves money. This improves efficiency." That's necessary but not sufficient. **People need to see what's in it for them personally.** + +--- + +### 4. Enterprise Account Plan Structure (Four Components) + +A complete account plan has four interconnected pieces. Each feeds the others. + +**Component 1: Account Summary** +- Company basics (HQ, size, industry, subsidiaries) +- Technical landscape (infrastructure, tools, platforms) +- Top corporate initiatives (from press, annual reports, LinkedIn) +- Hypothesis: "How can we help?" 
(write this before engaging) +- LinkedIn keyword analysis (quantify their investment in your domain) + +**Component 2: Org Chart** +- Map all relevant contacts: name, title, location, LinkedIn, email, phone, notes +- Notes capture: domain of responsibility, technical specialties, personal win +- Include people across levels: C-suite, directors, architects, leads, specialists +- Don't just map buyer — map influencers, users, potential blockers + +**Component 3: Opportunity Plan (MEDDICC)** +- **M - Metrics**: How will the customer measure success? (Validated with EB) +- **E - Economic Buyer**: Who has budget authority? Have you met them? +- **D - Decision Criteria**: What criteria will they use to decide? (Technical, business, political) +- **D - Decision Process**: What's their buying process? (Procurement, legal, security review) +- **I - Identified Pain**: What specific pain have they articulated? (Their words, not yours) +- **C - Champion**: Who inside the account is actively selling on your behalf? +- **C - Competition**: Who else are they evaluating? What's the competitive dynamic? + +Plus: Issues/Risks table with mitigation plans, help needed, responsible parties + +**Component 4: Mutual Action Plan (MAP)** +- Joint timeline with: Action, Your Owner, Customer Owner, Others Involved, Due Date +- Both sides must have actions — if only your team has actions, it's not a deal, it's a demo +- Track status (complete/in-progress) +- Use MAP as running agenda for check-in calls +- **If MAP isn't updated in 3 weeks, deal is dead** + +**Decision Criteria:** + +Full account plans worth investment for top 10-20% of accounts by potential deal size. For rest, use simplified version (summary + MEDDICC + next steps). 
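The seven MEDDICC elements lend themselves to a simple per-deal record: each element is either customer-validated or still an assumption. A minimal sketch with illustrative field names, surfacing what still needs validation before the deal advances:

```python
MEDDICC = [
    "metrics", "economic_buyer", "decision_criteria",
    "decision_process", "identified_pain", "champion", "competition",
]

def unvalidated(deal: dict) -> list:
    """Elements still resting on assumptions, not customer-validated facts."""
    return [element for element in MEDDICC if not deal.get(element, False)]

# Technical validation done, but the economic buyer was never met
deal = {element: True for element in MEDDICC}
deal["economic_buyer"] = False
print(unvalidated(deal))  # ['economic_buyer']
```

An empty list means every element was validated with the customer; anything left in it is a gap to close before sending the proposal.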
+ +**Common Mistakes:** +- Creating account plan after deal is in motion (build before first engagement) +- Not maintaining MAP weekly (stale MAP = stale deal) +- Filling MEDDICC with assumptions instead of validated info +- Mapping only obvious contacts instead of full org chart +- Not tracking personal win for each stakeholder + +--- + +### 5. LinkedIn Keyword Analysis for Account Intelligence + +Before engaging strategic account, quantify their investment in your domain via LinkedIn. + +**How to Execute:** + +1. Define 8-10 keywords relevant to your space (e.g., category terms, technical roles, workflow keywords) +2. Search LinkedIn for "[company name] + [keyword]" and record count +3. Map concentrations: Which locations? Which departments? +4. Identify outliers (high keyword concentration in specific departments signals maturity) + +**Why This Works:** + +If a company has 50 employees with "SRE" in their profile, they're mature in site reliability. If they have 2, they're not ready for advanced observability tools. + +This tells you: +- Whether to pursue the account (do they have the team?) +- Who to target (where are the concentrations?) +- How to personalize outreach (reference their specific context) + +**Example:** + +Searching "[Company] + DevOps": +- 120 results → Mature DevOps org, good fit +- 5 results → Early, not ready + +Searching "[Company] + SRE": +- 50 results → They care about reliability, pitch uptime/incident reduction +- 0 results → Don't lead with SRE value prop + +**Common Mistakes:** +- Just searching job titles (vary wildly) instead of keywords (consistent) +- Not comparing counts to total employee count +- Not refreshing analysis (hiring trends change quarterly) + +--- + +### 6. The Unified Sales Process (Stage Gates) + +Enterprise sales follows defined stages with clear exit criteria. Don't advance stages without meeting criteria. 
+ +**Stage 0 — Pipeline Generation:** Prospecting → Qualified interest confirmed +**Stage 1 — Discovery:** Environment/pain/requirements → Pain identified, stakeholders mapped +**Stage 2 — Demonstrating:** Product demo, champion building → Champion identified +**Stage 3 — Proving Value:** POC/trial → Technical validation complete +**Stage 4 — Proposal:** Pricing, terms, scope → Proposal delivered, EB aligned +**Stage 5 — Paper Process:** Legal, procurement, security → Approvals secured +**Stage 6 — Closed Won:** Deal signed → Customer success handoff + +**Exit Criteria Matter:** + +Don't move from Stage 2 → Stage 3 until you have a champion. +Don't move from Stage 3 → Stage 4 until POC success criteria are met. +Don't move from Stage 4 → Stage 5 until EB has approved. + +**Common Mistake:** + +Advancing stages based on activity, not criteria. "We demoed, so we're in Stage 3" — but if they haven't agreed to POC, you're still in Stage 2. + +--- + +## Decision Trees + +### Do I Need a Full Account Plan? + +``` +Is deal size above average ACV? +├─ No → Simplified plan (summary + MEDDICC) +└─ Yes → Continue... + │ + Sales cycle >60 days? + ├─ Yes → Full account plan + └─ No → Simplified plan +``` + +### Is This Deal Actually Moving? + +``` +Is MAP being updated weekly? +├─ Yes → Healthy +└─ No → Continue... + │ + Has it been >3 weeks since last MAP update? + ├─ Yes → Dead deal (qualify out or reset) + └─ No → At risk (escalate to champion) +``` + +### Should I Send the Proposal? + +``` +Have you met the Economic Buyer? +├─ No → Don't send yet (get EB access first) +└─ Yes → Continue... + │ + Does EB agree on problem and success metrics? + ├─ Yes → Send proposal + └─ No → Align with EB before sending +``` + +--- + +## Common Mistakes + +**1. Creating account plan too late** + - Build before first engagement, not after deal is in motion + +**2. MEDDICC filled with assumptions** + - Validate each element with customer, don't guess + +**3. 
Stale Mutual Action Plan** + - If MAP isn't updated weekly, deal is stalling. 3+ weeks = dead. + +**4. Mapping only the buyer** + - Need full org chart: influencers, users, blockers + +**5. Ignoring personal wins** + - People buy for career/reputation reasons, not just business ROI + +**6. Not tracking deal health** + - Green/yellow/red indicators catch dying deals early + +**7. Skipping champion validation** + - Without internal champion, you're selling alone + +--- + +## Quick Reference + +**MAP Health Check:** +- Green: Updated weekly, both sides have actions, customer completing tasks +- Yellow: Updated bi-weekly, mostly your actions, customer slow to respond +- Red: 3+ weeks stale, only your actions, customer unresponsive → **Dead deal** + +**MEDDICC Validation:** +- [ ] Metrics: Success criteria agreed with EB +- [ ] Economic Buyer: Met them, validated problem/solution +- [ ] Decision Criteria: Understand their evaluation rubric +- [ ] Decision Process: Know procurement/legal/security steps +- [ ] Identified Pain: In customer's words, not yours +- [ ] Champion: Actively selling internally on your behalf +- [ ] Competition: Know alternatives they're considering + +**Personal Win Questions:** +- "What does success look like for you personally?" +- "What happens to your team if this works? If it doesn't?" +- "What are you being measured on this year?" +- "Who internally is skeptical? Why?" 
+ +**Account Plan Checklist:** +- [ ] Account summary with hypothesis +- [ ] Org chart with personal wins mapped +- [ ] MEDDICC fully validated (not assumed) +- [ ] MAP with customer actions (not just yours) +- [ ] Weekly MAP update cadence scheduled + +--- + +## Related Skills + +- **enterprise-onboarding**: Post-close customer implementation +- **partnership-architecture**: Deals involving partner relationships +- **technical-product-pricing**: Enterprise pricing strategy + +--- + +*Based on enterprise sales at a platform company during hypergrowth, with patterns from closing strategic accounts, navigating complex procurement processes, and learning the hard way that stale MAPs = dead deals. Not theory — lessons from watching deals die because we didn't track health metrics and closing deals because we validated EB alignment early.* diff --git a/skills/gtm-enterprise-onboarding/SKILL.md b/skills/gtm-enterprise-onboarding/SKILL.md new file mode 100644 index 00000000..e72deb61 --- /dev/null +++ b/skills/gtm-enterprise-onboarding/SKILL.md @@ -0,0 +1,454 @@ +--- +name: gtm-enterprise-onboarding +description: Four-phase framework for onboarding enterprise customers from contract to value realization. Use when implementing new enterprise customers, preventing churn during onboarding, or solving the adoption cliff that kills deals post-go-live. Includes the Week 4 ghosting pattern. +license: MIT +--- + +# Enterprise Onboarding + +Four-phase framework for onboarding enterprise customers from contract to value realization. The goal isn't just go-live — it's sustained adoption that doesn't cliff at Week 12. + +## When to Use + +**Triggers:** +- "How do we onboard this enterprise customer?" +- "Customer went live but adoption is weak" +- "We keep losing customers 3 months after go-live" +- "POC to production transition" +- "How do I prevent Week 4 ghosting?" 
+- "Customer success onboarding framework" + +**Context:** +- Enterprise or mid-market deals +- Complex technical requirements +- Multiple stakeholders involved +- 30-90 day implementation timelines +- Risk of churn during first year + +--- + +## Core Frameworks + +### 1. The Week 4 Ghosting Problem (And How to Prevent It) + +**The Pattern:** + +Week 1: Kickoff call goes great. Everyone's excited. +Week 2-3: Technical discovery, requirements gathering. Still good. +Week 4: Customer stops responding. Meetings get cancelled. "Too busy." + +**What Happened?** + +You started customer onboarding before internal alignment on their side. + +**Who Owns This Project Internally?** +- Sales rep? (Already moved to next deal) +- Technical champion? (Day job took over) +- Executive sponsor? (Delegates, doesn't drive) +- Nobody? (**This is why they're ghosting**) + +**The Framework: Internal Owner Validation** + +Before kickoff call, answer: + +**Who on customer side will:** +- Attend weekly project meetings? (Not "invited" — will actually show up) +- Unblock issues with procurement/legal/security? (Has authority) +- Drive adoption with end users? (Has influence) +- Escalate when things stall? (Has executive access) + +**If you can't name a specific person for each, you don't have a project owner. You have a signed contract with nobody driving it.** + +**How to Fix It:** + +**During sales → CS handoff (before customer kickoff):** + +Sales rep must identify: +- Primary project owner (name, not role) +- Their capacity (dedicated or side project?) +- Their authority (can they unblock?) +- Their motivation (what's in it for them?) + +**If there's no clear owner:** + +Don't start onboarding yet. Have sales introduce you to economic buyer: + +"Before we kick off implementation, we want to make sure we have the right project owner on your side. In our experience, implementations succeed when someone owns driving this forward week-to-week. Who on your team should we partner with?" 
+ +**Common Mistake:** + +Assuming someone will own it. Ask explicitly. If they can't name someone, the deal is at risk. + +--- + +### 2. The Adoption Cliff (Week 12 Problem) + +**The Pattern:** + +Go-live happens Week 6. Usage spikes. You celebrate. + +Week 8: Usage plateaus. +Week 10: Usage declining. +Week 12: Usage down 50% from peak. + +**Why This Happens:** + +You treated go-live as the finish line. **Go-live is the starting line.** + +**What Drives Sustained Adoption:** + +**Not:** Feature completeness, technical integration, training sessions + +**Yes:** Ongoing value demonstration, user success stories, expanding use cases + +**Framework: Adoption Stages Beyond Go-Live** + +**Week 1-6 (Implementation):** Get it working +- Measure: % of technical setup complete +- Owner: Technical lead + +**Week 6-12 (Initial Adoption):** Get people using it +- Measure: # active users, frequency of use +- Owner: Enablement / DevRel + +**Week 12-26 (Sustained Adoption):** Prove ongoing value +- Measure: Use case expansion, team spread +- Owner: Customer success + +**Week 26+ (Expansion):** Grow within account +- Measure: New teams, new use cases, upgrade triggers +- Owner: Account executive + CS + +**The Handoff That Most Teams Miss:** + +Week 6 (go-live) → Week 12 (sustained adoption) + +Most CS teams celebrate go-live and move to next customer. **This is when churn seeds get planted.** + +**What to Do Week 6-12:** + +**Week 7:** First value report +"Here's what your team accomplished in the first week: [specific metric]. Here's what good looks like at Week 12: [target]." + +**Week 9:** User success story +"[User name] on [team name] saved [X hours/reduced Y errors] this week. Here's how they're using it." + +**Week 11:** Use case expansion conversation +"You're using us for [primary use case]. Teams like yours also use us for [adjacent use case]. Want to explore?" + +**Common Mistake:** + +Measuring "go-live completion" instead of "sustained active usage." 
Go-live is not success. Week 26 retained adoption is success. + +--- + +### 3. Pre-Onboarding: Success Is Built Before First Customer Call + +**The Pattern:** + +Most onboarding failures trace back to pre-kickoff gaps. + +**What Gets Missed:** + +**Sales didn't brief CS properly:** +- Deal drivers unknown +- Stakeholder dynamics unclear +- Technical requirements assumed + +**No internal project owner identified:** +- CS reaches out, nobody responds +- Meetings get scheduled with wrong people +- Decisions don't stick + +**Customer timeline unrealistic:** +- They want go-live in 2 weeks +- Technical setup takes 6 weeks minimum +- Expectations misaligned from Day 1 + +**Framework: Pre-Kickoff Checklist** + +Before scheduling kickoff call, validate: + +**Account Intelligence:** +- [ ] Sales handoff completed (deal drivers, stakeholders, technical requirements) +- [ ] Past interactions reviewed (demo notes, proposal, emails) +- [ ] Organizational structure mapped (team sizes, reporting lines) +- [ ] Use cases documented (primary + future) + +**Internal Setup:** +- [ ] Internal Slack channel created (#account-[customer-name]) +- [ ] Account plan updated in CRM +- [ ] Project plan template prepared +- [ ] Roles assigned (CSM lead, technical lead, exec sponsor) + +**Customer Readiness:** +- [ ] Project owner identified by name (not just "their DevRel team") +- [ ] Executive sponsor confirmed on both sides +- [ ] Timeline realistic (their goals vs your typical timeline) +- [ ] Known blockers documented (procurement, security, legal) + +**Timeline Validation:** +- [ ] Customer's go-live date is realistic given technical requirements +- [ ] Internal capacity available (not overbooked) +- [ ] Dependencies identified (SSO, integrations, data migration) + +**Decision Criteria:** + +Only schedule kickoff when all four sections validated. If gaps exist, surface to sales or executive sponsor before engaging customer. 
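The decision criterion above can be expressed as a gate over the four checklist sections. A small sketch, with hypothetical section keys:

```python
REQUIRED_SECTIONS = (
    "account_intelligence", "internal_setup",
    "customer_readiness", "timeline_validation",
)

def ready_for_kickoff(validated: dict) -> bool:
    """Schedule kickoff only when every checklist section is fully validated."""
    return all(validated.get(section, False) for section in REQUIRED_SECTIONS)

# One unvalidated section is enough to hold the kickoff
print(ready_for_kickoff({s: True for s in REQUIRED_SECTIONS}))  # True
```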
+ +**Common Mistake:** + +Starting onboarding without internal clarity. This creates confusion, missed deadlines, and erosion of customer confidence. + +--- + +### 4. The Four-Phase Onboarding Flow + +**Phase 1: Kickoff (Week 1)** + +**Goal:** Align on objectives, timeline, success metrics + +**Attendees:** Executive sponsors + project leads + technical leads + +**Agenda:** +1. Introductions and roles (5 min) +2. Executive alignment on strategic objectives (5 min) +3. Success definition: "What does success look like in 3/6/12 months?" (10 min) +4. Timeline and milestones (5 min) +5. Meeting cadence (weekly project team, monthly exec review) (5 min) +6. Next steps (technical discovery call, success plan review) (5 min) + +**Deliverable:** Kickoff recap sent within 24 hours with success metrics, timeline, next meetings + +**Phase 2: Discovery & Planning (Week 2-3)** + +**Goal:** Understand technical landscape, map use cases, plan rollout + +**Three parallel workstreams:** + +**Workstream 1: Technical Discovery** +- Current infrastructure (on-prem, cloud, hybrid) +- Existing tools and integrations +- Security/compliance requirements +- Timeline constraints + +**Workstream 2: Success Planning** +- Use cases prioritized (start with highest-value) +- Success metrics defined (how to measure adoption) +- Training needs identified (who needs what) + +**Workstream 3: Technical Setup** +- SSO/identity configuration +- Integrations required +- Data migration (if applicable) +- Pilot group identified + +**Deliverable:** Customer Success Plan document with use cases, metrics, timeline, milestones + +**Phase 3: Implementation (Week 4-6)** + +**Goal:** Deploy to pilot group, validate use cases, prepare for broader rollout + +**Three parallel tracks:** + +**Track 1: Administration & Setup** +- SSO configuration complete +- Integrations live +- Data migrated (if applicable) + +**Track 2: User Enablement** +- Training sessions for pilot group +- Documentation shared +- Office hours 
scheduled + +**Track 3: Pilot & Feedback** +- Pilot group using product +- Feedback collected weekly +- Issues triaged and resolved + +**Deliverable:** Go-live readiness checklist completed, pilot group validated + +**Phase 4: Go-Live & Ongoing Success (Week 6+)** + +**Goal:** Roll out broadly, sustain adoption, expand use cases + +**Week 6-8 (Rollout):** +- Broader rollout to all teams +- Training sessions scheduled +- Support available (Slack, email, office hours) + +**Week 8-12 (Value Demonstration):** +- First value report (Week 7) +- User success stories shared (Week 9) +- Use case expansion conversation (Week 11) + +**Week 12-26 (Sustained Adoption):** +- Monthly business reviews +- Adoption tracking (active users, frequency, use cases) +- Expansion opportunities identified + +**Common Mistake:** + +Treating go-live as completion. Phase 4 is where retention is won or lost. + +--- + +### 5. The Parallel Tracks Anti-Pattern + +**The Pattern:** + +Most onboarding teams run workstreams **sequentially**: +1. Technical setup (Weeks 1-2) +2. Then training (Weeks 3-4) +3. Then pilot (Weeks 5-6) + +**Total time: 6 weeks** + +**What Works Better: Parallel Tracks** + +Run technical setup, training, and pilot **simultaneously**: +- Week 1: Technical discovery + identify pilot group + schedule training +- Week 2: SSO config + pilot group training + pilot starts +- Week 3: Integrations + broader training + pilot feedback + +**Total time: 3 weeks** + +**Why Parallel Works:** + +1. Shortens time-to-value +2. Keeps customer engaged (something happening every week) +3. Identifies blockers early (pilot group surfaces issues before broad rollout) + +**How to Execute:** + +Assign clear owners to each track: +- Track 1 (Admin): Technical lead +- Track 2 (Enablement): Training/DevRel lead +- Track 3 (Pilot): CSM + pilot group champion + +Weekly sync across tracks to surface dependencies and blockers. 
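The 6-week versus 3-week comparison comes down to summing track durations versus overlapping them. A toy sketch using the illustrative durations from this section (in the parallel plan, each track is spread across the same three calendar weeks):

```python
def total_weeks(track_weeks: dict, parallel: bool) -> int:
    """Calendar length: sequential tracks add up, parallel tracks overlap."""
    return max(track_weeks.values()) if parallel else sum(track_weeks.values())

# Sequential: each track waits on the previous one
print(total_weeks({"setup": 2, "training": 2, "pilot": 2}, parallel=False))  # 6

# Parallel: each track spread across the same three weeks
print(total_weeks({"setup": 3, "training": 3, "pilot": 3}, parallel=True))   # 3
```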
+ +**Common Mistake:** + +Waiting for "perfect technical setup" before starting pilot. Get pilot group using it early, even if setup isn't perfect. Their feedback makes the broad rollout better. + +--- + +## Decision Trees + +### Should I Start Customer Onboarding? + +``` +Has sales identified a project owner by name? +├─ No → Get project owner identified before kickoff +└─ Yes → Continue... + │ + Is their timeline realistic given typical deployment? + ├─ No → Reset expectations before kickoff + └─ Yes → Continue... + │ + Do you have internal capacity? + ├─ No → Delay kickoff or get more resources + └─ Yes → Proceed to kickoff +``` + +### Is This Onboarding At Risk? + +``` +Is customer responding to meeting invites? +├─ No → Week 4 ghosting, escalate to exec sponsor +└─ Yes → Continue... + │ + Are they completing their action items? + ├─ No → No project owner, identify who drives this + └─ Yes → Continue... + │ + Is pilot group using the product? + ├─ No → Pilot group wrong or product not solving pain + └─ Yes → On track +``` + +### Is Adoption Sustained Post-Go-Live? + +``` +Are active users growing Week 6 → Week 12? +├─ Yes → Healthy adoption +└─ No → Continue... + │ + Are active users declining? + ├─ Yes → Adoption cliff, intervene immediately + └─ No (plateau) → At risk, start value demonstration +``` + +--- + +## Common Mistakes + +**1. Starting customer onboarding before internal alignment** + - Wastes first 2-3 weeks, creates confusion, kills credibility + +**2. Not identifying real project owner upfront** + - Discovers it Week 4, has to restart or deal stalls + +**3. Overcommitting on timeline without technical requirements** + - Discovers blockers mid-implementation, misses deadline + +**4. No internal communication hub** + - Decisions don't propagate across teams, rework happens + +**5. Treating go-live as project complete** + - Adoption cliff at Week 12, account at risk + +**6. 
Sequential tracks instead of parallel** + - Implementation takes twice as long, customer loses momentum + +**7. No ongoing metrics post go-live** + - Discovers adoption issues too late to save account + +--- + +## Quick Reference + +**Pre-Kickoff Validation:** +- [ ] Sales handoff complete (deal drivers, stakeholders, requirements) +- [ ] Project owner identified by name on customer side +- [ ] Timeline realistic (their goals vs typical deployment) +- [ ] Internal roles assigned (CSM, technical, exec sponsor) + +**Kickoff Agenda (30-45 min):** +1. Introductions (5 min) +2. Executive alignment (5 min) +3. Success definition (10 min) +4. Timeline and milestones (5 min) +5. Meeting cadence (5 min) +6. Next steps (5 min) + +**Adoption Tracking (Week 6-26):** +- Week 7: First value report +- Week 9: User success story +- Week 11: Use case expansion conversation +- Week 13: First monthly business review +- Week 26: Expansion readiness assessment + +**Four Phases:** +1. Kickoff (Week 1): Align +2. Discovery (Week 2-3): Plan +3. Implementation (Week 4-6): Deploy to pilot +4. Go-Live & Sustained (Week 6+): Rollout, value demonstration, expansion + +**Red Flags:** +- Customer not responding Week 4 → No project owner +- Pilot group not using product Week 5 → Wrong group or wrong use case +- Active users declining Week 8-12 → Adoption cliff forming + +--- + +## Related Skills + +- **enterprise-account-planning**: Pre-sale deal planning and stakeholder mapping +- **operating-cadence**: Onboarding review cadence and health metrics +- **product-led-growth**: Self-serve onboarding patterns + +--- + +*Based on enterprise onboarding across multiple platform companies — designing partner onboarding directly and collaborating closely with CS on customer onboarding. 
Not theory — lessons from seeing Week 4 ghosting happen repeatedly and learning that go-live ≠ success, and understanding the adoption cliff that kills 30% of deals in first year.* diff --git a/skills/gtm-operating-cadence/SKILL.md b/skills/gtm-operating-cadence/SKILL.md new file mode 100644 index 00000000..71a48bff --- /dev/null +++ b/skills/gtm-operating-cadence/SKILL.md @@ -0,0 +1,417 @@ +--- +name: gtm-operating-cadence +description: Design meeting rhythms, metric reporting, quarterly planning, and decision-making velocity for scaling companies. Use when decisions are slow, planning is broken, the company is growing but alignment is worse, or leadership meetings consume all time without producing decisions. +license: MIT +--- + +# Operating Cadence + +The meeting structure that worked at 30 people collapses at 100. What worked at 100 collapses at 300. The failure mode is always the same: too many people in too many meetings making too few decisions. + +## When to Use + +**Triggers:** +- "Our meetings don't produce decisions" +- "We're growing but alignment is getting worse" +- "How often should we meet?" +- "Nobody knows what's happening across functions" +- "Decisions take forever" +- "Leadership is in meetings all day" + +**Context:** +- Companies scaling from 20 to 300+ people +- Post-PMF through growth stage +- Distributed / remote teams +- Any stage where "we need to talk about this" has become the default + +--- + +## Core Frameworks + +### 1. The Five-Level Meeting Architecture + +**The Pattern:** + +Different meetings serve different purposes. Conflating them creates either inefficiency (too much time) or confusion (unclear decisions). Separate meetings by function, frequency, and decision authority. + +**Level 1: Daily Standup (15 min, teams only)** + +- What we finished yesterday, what we're starting today, what's blocking us +- 5-10 people max. 
Whole-company standups are theater +- Anti-pattern: Status reporting (use Slack, not meetings) +- Anti-pattern: Strategic discussion (wrong time, wrong place) +- Success criteria: Finishes in 15 minutes, surfaces 1-2 blockers + +**Level 2: Weekly Functional Reviews (60 min, function leadership)** + +Each function gets its own weekly rhythm: +- Product team Friday 4pm: metrics, user feedback, roadmap blockers +- GTM team Tuesday 4pm: pipeline, customer updates, deal health +- Engineering Wednesday 4pm: velocity, bug backlog, deployment + +Format: Metric recap (10 min) → Wins/blockers (15 min) → One deep-dive (30 min) → Next week priorities (5 min) + +Anti-pattern: Trying to solve every problem in the meeting. Pick 1-2, delegate the rest to follow-ups. + +**Level 3: Weekly All-Hands (60 min, whole company)** + +The single most important alignment mechanism at a scaling company. + +- CEO update (15 min): north star progress, week focus, what's changed +- Metric dashboard (10 min): same format every week (consistency enables pattern recognition) +- Deep dive (20 min): one strategic topic needing team input — not a presentation, a discussion +- Q&A (15 min): real questions, real answers + +Anti-pattern: Defensive tone. All-hands should be straightforward, not spin. +Anti-pattern: Inconsistent metrics. If you change the dashboard, the team can't track progress. + +**Level 4: Bi-Weekly Leadership Alignment (90 min)** + +- North star progress (5 min) +- Functional updates (30 min, 5-7 min each) +- Major decisions needing resolution (30-40 min): resource conflicts, strategic pivots, customer/product decisions +- Next 2 weeks planning (15 min) + +This is where cross-functional blockers get resolved. If functions operate independently, this meeting isn't working. 
+ +**Level 5: Quarterly Strategic Planning (half-day to full-day)** + +- Previous quarter retrospective (90 min): What worked, what didn't, what we'd do differently +- Next quarter planning (120 min): What are we optimizing for? What's the roadmap? +- Function breakouts (90 min): Each function plans their quarter +- Synthesis (60 min): Functions share commitments, resolve conflicts + +Anti-pattern: Too much "fun activity," not enough substance. +Anti-pattern: No clear decisions coming out. + +**Scaling Adjustments:** + +- **<30 people**: Levels 2-3 only. Skip daily standups (you see everything). Skip bi-weekly leadership (you ARE leadership). +- **30-100 people**: Add all 5 levels. Monthly review catches what you no longer see daily. +- **100-300 people**: Add skip-level reviews. You're 2+ layers from execution. +- **300+ people**: Add function-specific sub-cadences. CEO should be in *fewer* meetings than at 50 — not more. + +**The Rule That Makes This Work:** + +Every meeting must produce decisions or be cancelled. Status updates are async. If you're in a meeting and nobody is making a decision, leave. + +--- + +### 2. Weekly Metric Reporting (The Dashboard That Catches Problems Early) + +**The Pattern:** + +Monthly reporting catches problems 30 days late. By then, a bad month is baked. Weekly reporting catches problems in week 2, when you can still save the month. 
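The detection-lag claim can be made concrete with a toy check (all numbers illustrative): a metric that starts slipping in week 2 trips a weekly review immediately, while a monthly review only ever sees the end-of-month number.

```python
# Toy data: a metric sampled at the end of weeks 1-4 of one month.
plan = 100
weekly_values = [100, 92, 85, 78]   # slippage begins in week 2
threshold = 0.95                     # flag anything more than 5% below plan

# Weekly review: the first week the metric breaches the threshold.
flagged_week = next(
    (week for week, value in enumerate(weekly_values, start=1)
     if value < plan * threshold),
    None,
)

# Monthly review: only the final end-of-month number is inspected.
monthly_sees_problem_at = len(weekly_values)  # week 4 — the month is baked

print(flagged_week, monthly_sees_problem_at)  # prints: 2 4
```

Two weeks of reaction time versus zero: that gap is the entire argument for weekly cadence.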
+ +**The Format (Same Structure Every Week):** + +``` +WEEK OF [DATE] + +North Star: [Metric] +This Week: [Value] | Last Week: [Value] | Change: [+/-] [↑↓] +Context: [One sentence — why this trend matters] + +Functional Metrics: + Product: 7-Day Retention: 34% | Last: 33% | +1% ↑ + Feature Adoption: 18% | Last: 16% | +2% ↑ + Context: Onboarding improvements showing impact + + GTM: Pipeline: $8.2M | Last: $7.8M | +$400K ↑ + New POCs: 3 | Last: 2 + Context: Partner pipeline adding deals + + Health: Team Morale: 7.2/10 (down from 7.5) + Context: Org restructure causing uncertainty +``` + +**The Discipline Rules:** + +1. **Same metrics every week.** Consistency enables pattern recognition. OK to add metrics mid-quarter; retire a metric only deliberately at a quarter boundary, never silently. +2. **One context sentence per metric.** Not just the number — why does this matter? Vs plan? Vs last period? +3. **Trend direction for every metric.** Up/down/flat arrow. If it moved significantly: temporary or structural? +4. **Traffic light colors.** GREEN (on track), YELLOW (watch), RED (action needed). Every RED item must have: owner, specific action, deadline. + +**The Escalation Rule:** + +If a metric is RED two weeks in a row with the same action plan, escalate — the action plan isn't working. + +**How Many Metrics:** + +Pick 8-12 total. If a metric doesn't change your behavior when it moves, remove it. Dashboards with 40 metrics are decoration, not decision tools. + +**Common Mistake:** + +Vanity metrics that look good but don't predict business outcomes. Total downloads without adoption context. CEO headlines without supporting metrics. + +--- + +### 3. Quarterly Planning (The Process That Prevents Strategic Drift) + +**The Pattern:** + +Without quarterly planning, companies drift. Each function optimizes locally. Sales chases deals outside ICP. Product builds features for one customer. Marketing runs campaigns that don't connect to pipeline.
+ +**The 3-Week Planning Cycle:** + +**Week 1: Retrospective + Data Gathering** +- Previous quarter results vs plan (leadership prepares) +- Each function writes 1-page retrospective: what worked, what didn't, what we'd do differently +- Finance prepares: revenue actuals, spend actuals, forecast +- Market data: competitive moves, customer feedback themes, win/loss analysis + +**Week 2: Priority Setting (Leadership Half-Day)** +- Review retrospectives (30 min — pre-read, don't present) +- Agree on 3-5 company-level priorities +- For each: owner, success metric, resource requirements +- Identify what you're *not* doing (as important as what you are) +- Resolve cross-functional dependencies +- Use the north star as tiebreaker: "Does this help us hit the goal? Prioritize. Nice-to-have? Defer." + +**Week 3: OKR Cascade + Resource Allocation** +- Each function translates company priorities into team OKRs +- Leadership reviews for alignment +- Resource allocation finalized (headcount, budget, tools) +- Final plan shared company-wide + +**The Quarterly Commitment Format:** + +``` +Q2 2026 Roadmap +North Star: [What we're optimizing for] + +Pillar 1: Product (25% team effort) + Initiative: [Name] + Problem: [What we're solving] + Success: [Specific metric] + Owner: [Name] + Timeline: [When] + +Pillar 2: GTM (50% team effort) + Initiative: [Name] + ... + +Pillar 3: People (10% effort) + Initiative: [Name] + ... + +Pillar 4: Tech Debt (15% effort) + Initiative: [Name] + ... +``` + +**The "Not Doing" List:** + +For every priority you add, identify one thing you're stopping. If you can't name what you're *not* doing, you have too many priorities. + +**Common Mistake:** + +Quarterly planning that produces a 30-page doc nobody reads. The output should be: 3-5 priorities on one page, each with owner and metric. That's it. + +--- + +### 4. Decision Velocity and Authority + +**The Pattern:** + +At 20 people, the CEO makes every decision in real-time. Fast. 
At 100 people, decisions require alignment. Slow. At 300, decisions require alignment, approval, and documentation. Glacial. + +**The fix isn't more meetings. It's clear decision rights.** + +**Decision Authority Matrix:** + +| Decision | Who Decides | Timeline | Escalation | +|----------|------------|----------|------------| +| Company strategy | CEO | 1 week | Board if strategic | +| Feature priority | Product lead | 1 week | CEO if >3 eng weeks | +| Customer support issue | CSM | Immediately | CS lead if escalated | +| Marketing campaign | Marketing lead | 2 weeks | CMO if >$10K budget | +| Hiring | Function leader | 2 weeks | CEO if role not approved | +| New partnership | CEO | 2 weeks | Board if strategic | +| Vendor selection | Function leader | 1 week | CEO if >$50K/year | + +**The Problem:** + +Scaling companies start treating reversible, low-stakes decisions like irreversible, high-stakes ones. Everything needs approval. Everything needs a meeting. Everything needs consensus. + +**The Fix:** + +**Type 1 (Irreversible, high-stakes):** Pricing model, market entry, major partnership → CEO/leadership decides with debate in one meeting. Timeline: 1-2 weeks max. + +**Type 2 (Reversible, low-stakes):** Campaign creative, feature prioritization, single hire → Function owner decides, informs, iterates. Timeline: same day or next day. + +**Make decisions with 70% information, not 100%.** Speed is a competitive advantage at every stage. + +**Common Mistake:** + +Consensus culture masquerading as collaboration. "Let's get everyone aligned" often means "nobody wants to decide." Name the decider. Let them decide. Move on. + +--- + +### 5. Async-First Communication + +**The Pattern:** + +Synchronous meetings don't scale. Default to async, escalate to sync. 
+ +**Async First (No Meeting Needed):** +- Decision documents (even major ones — write up proposal, solicit comments, 48-72 hours for feedback, decide if consensus or no material objections) +- Progress updates (use weekly reporting, not meetings) +- Process changes and SOPs +- Decisions already made (inform, don't discuss) + +**Sync When:** +- Real-time brainstorming needed +- Major disagreement to work through +- Complex topic needing whiteboard +- Team building / relationship + +**Documentation Discipline:** + +Every decision documented: What was decided? Why? Who decided? When does it take effect? Who needs to know? + +Store in searchable format (wiki, shared drive). New hires onboard faster. Past decisions don't get relitigated. + +**Common Mistake:** + +"Quick sync" meetings that grow to consume 10 hours per week. Over-communicating in Slack (ephemeral, noisy) and under-communicating in persistent formats (docs, emails). The important stuff should be searchable 6 months later. + +--- + +### 6. The CEO Weekly Update + +**The Pattern:** + +The single highest-leverage communication tool at a scaling company. 5-10 minutes to write. Everyone reads it. It sets context, celebrates wins, names priorities, and creates shared understanding. + +**Format (Sent Sunday Night or Monday Morning):** + +**1. Week Focus (1 paragraph):** +What's the priority this week? What should the team be focused on? + +**2. North Star Progress (1-2 bullets):** +Where are we on the key metric? Trend up/down/flat? Why does this matter? + +**3. Wins This Week (3-5 bullets):** +What shipped? Customer/partner wins? Big picture implication? + +**4. Blockers Getting Resolved (1-2 bullets):** +What are we unblocking this week? Who needs to know? + +**5. Ask (1 bullet, optional):** +What help does the team need? Referrals, feedback, customer introductions? + +**The Rule:** + +Same day every week. Consistency signals operational discipline. 
If you skip a week, the team notices — and starts wondering what you're not telling them. + +**Common Mistake:** + +Too long (team doesn't read), too detailed (save that for function meetings), only good news (team loses trust), inconsistent (team stops reading). + +--- + +### 7. Role Clarity > Titles + +**The Pattern:** + +The most powerful tool for speed isn't hierarchy — it's explicit role clarity. When someone knows exactly what they own and can't delegate it away, decisions happen faster. + +**How to Execute:** + +- Every initiative gets exactly one owner (with supporting teammates) +- Metrics are tied to that owner +- Success is measured by moving KPIs, not completing tasks +- Eliminate initiatives without clear ownership within 48 hours + +**The Test:** + +Can you name the single person who owns this outcome? Not "the team" — a person. If you can't, the initiative will drift. + +**Common Mistake:** + +Assigning projects to multiple people ("everyone owns it" = nobody owns it). Measuring activity instead of impact. Burn rate going up without clear ROI tracking per initiative. + +--- + +## Decision Trees + +### Which Meeting Levels Do We Need? + +``` +Company size <30? +├─ Yes → Levels 2-3 only (weekly functional + all-hands) +└─ No → Continue... + │ + 30-100 people? + ├─ Yes → All 5 levels + └─ No → All 5 + skip-level reviews + function sub-cadences +``` + +### Is This Meeting Worth Keeping? + +``` +Does it produce decisions? +├─ No → Can it be async? +│ ├─ Yes → Make it async, cancel the meeting +│ └─ No → Redesign with decision agenda +└─ Yes → Are the right people in the room? + ├─ No → Fix attendee list (fewer > more) + └─ Yes → Keep it +``` + +--- + +## Common Mistakes + +**1. Adding meetings as you grow** +Replace them. At 200 people, the CEO should be in fewer meetings than at 50. + +**2. Status update meetings** +If it can be an email, it should be an email. Meetings are for decisions. + +**3. 
Changing metrics every quarter** +Consistency enables trend identification. Same dashboard, every time. + +**4. Consensus culture** +Name the decider. Let them decide. Inform everyone else. + +**5. All information in Slack** +Ephemeral, noisy, unsearchable. Important decisions go in docs. + +**6. Quarterly planning that produces 30-page docs** +3-5 priorities on one page. That's the output. + +--- + +## Quick Reference + +**Meeting architecture:** +Daily standup (15 min) → Weekly functional (60 min) → Weekly all-hands (60 min) → Bi-weekly leadership (90 min) → Quarterly planning (half-day) + +**Weekly metric dashboard:** +8-12 metrics, same format every week, traffic light colors, one context sentence per metric, owner + action + deadline for every RED + +**Quarterly planning cycle:** +Week 1: Retro + data → Week 2: Priority setting (3-5 max) → Week 3: OKR cascade + resources + +**Decision authority:** +Type 1 (irreversible): CEO/leadership, 1-2 weeks → Type 2 (reversible): Function owner, same day + +**CEO weekly update:** +Week focus → North star progress → Wins → Blockers → Ask + +**Information flow:** +Daily: Slack wins/customer-voice → Weekly: CEO email + all-hands + function updates → Monthly: Skip-levels → Quarterly: Planning share + demos + +--- + +## Related Skills + +- **enterprise-account-planning**: Stakeholder management and deal cadence patterns +- **0-to-1-launch**: Launch-specific execution cadence +- **board-and-investor-communication**: Board meeting structure and investor updates + +--- + +*Based on operating cadence design across companies scaling from 20 to 1,000+ employees, including the five-level meeting architecture that survived 3x headcount growth, the weekly reporting format that caught pipeline problems 3 weeks earlier than monthly reviews, and the CEO weekly update format refined across multiple companies.
Not theory — patterns from building operating systems through hypergrowth and teaching them to the next team.* diff --git a/skills/gtm-partnership-architecture/SKILL.md b/skills/gtm-partnership-architecture/SKILL.md new file mode 100644 index 00000000..630438e5 --- /dev/null +++ b/skills/gtm-partnership-architecture/SKILL.md @@ -0,0 +1,467 @@ +--- +name: gtm-partnership-architecture +description: Build and scale partner ecosystems that drive revenue and platform adoption. Use when building partner programs from scratch, tiering partnerships, managing co-marketing, making build-vs-partner decisions, or structuring crawl-walk-run partner deployment. +license: MIT +--- + +# Partnership Architecture + +Build and scale partner ecosystems that drive revenue and platform adoption. These aren't theory — they're patterns from building partner programs that drove 8-figure ARR and observing partnerships with real economic commitment. + +## When to Use + +**Triggers:** +- "How do I structure a partner program?" +- "Should we build this or partner for it?" +- "Partner-led vs direct sales motion" +- "Ecosystem strategy" +- "How to recruit and tier partners" +- "Co-marketing with partners" +- "When does a partnership actually matter?" + +**Context:** +- Building partnership program from scratch (0→1) +- Scaling existing program (1→100) +- Evaluating build vs partner decisions +- Structuring partner deals and economics +- Planning partner GTM motions + +--- + +## Core Frameworks + +### 1. Real Partnerships Require Skin in the Game + +**The Pattern:** + +Most "partnerships" are co-marketing theater. Joint webinars, logo swaps, press releases. No economic commitment. No real skin in the game. 
+ +Real partnerships look different: +- Economic commitment (spend, revenue share, co-investment) +- Product roadmap alignment (features built for the partnership) +- Executive sponsorship (leadership engaged quarterly) +- Mutual risk (both sides can fail if it doesn't work) + +**How to Tell the Difference:** + +Ask: "If this partnership fails, what does each side lose?" + +If the answer is "nothing" — it's not a partnership. It's a handshake. + +The best partnerships I've seen involved uncomfortable commitments on both sides. Multi-year cloud spend commitments. Dedicated engineering teams. Revenue guarantees. The discomfort is the point — it forces both sides to make the partnership work. + +**Framework: Three-Sided Value Proposition** + +Every successful partnership creates clear value for three parties: + +**Your Company:** +- Distribution (access to partner's customers) +- Credibility (association with known brand) +- Revenue (direct or influenced) +- Product leverage (capability you don't build) + +**The Partner:** +- Revenue or margin improvement +- Customer retention/stickiness +- Competitive differentiation +- Reduced support burden + +**Shared Customers:** +- Workflow improvement +- Reduced integration pain +- Single vendor relationship +- Cost efficiency + +**Decision Criteria:** + +Before pursuing any partnership, answer: + +1. What is our economic commitment? (Eng resources, spend, revenue share?) +2. What is partner's economic commitment? (Are they investing too?) +3. What happens if this fails? (Do we both lose something real?) + +If both sides can walk away with zero cost, **it's not a partnership — it's a handshake.** + +**Common Mistake:** + +Treating "partnerships" as marketing announcements. Integration launches, joint webinars, co-branded content. These create buzz, not business. Real partnerships require uncomfortable commitments. + +--- + +### 2. 
Ecosystem Control = Discovery, Not Gatekeeping + +**The Developer Marketplace Decision:** + +Running ecosystem at a platform company during hypergrowth. Leadership debate: Open the network to anyone, or curate for quality? + +**Quality control camp:** "We need gatekeeping. Otherwise we'll get SEO spam, low-quality APIs, brand damage." + +**Open network camp:** "Developers route around gatekeepers. Network effects matter more than quality control." + +**The decision:** Went open. Quality concerns were real, but we made a bet: **Control comes from discovery + trust layers, not submission gatekeeping.** + +**What We Built Instead of Gatekeeping:** + +1. **Search and discovery** - Surface high-quality APIs through algorithms +2. **Trust signals** - Verified badges, usage stats, health scores +3. **Community curation** - User ratings, collections, recommendations +4. **Moderation** - Remove spam after publication, not block before + +**Result:** Network effects won. Thousands of APIs published. Quality surfaced through usage, not through us deciding upfront. + +**The Pattern:** + +**Curated ecosystem (Gatekeeper Model):** +- Pros: High quality, controlled brand +- Cons: Slow growth, partner friction, you become the bottleneck + +**Open ecosystem (Discovery Model):** +- Pros: Network effects, rapid growth, self-service +- Cons: Quality variance, moderation overhead + +**When to Use Which:** + +``` +Is brand damage risk high if low-quality partners join? +├─ Yes (regulated, security-critical) → Curated +└─ No → Continue... + │ + Can you scale human review? + ├─ No (thousands of potential partners) → Open + └─ Yes (dozens of partners) → Curated +``` + +**Common Mistake:** + +Defaulting to curated because "we need quality control." This works when you have 10 partners. At 100+, you become the bottleneck. Build discovery and trust systems instead. + +--- + +### 3. 
Partnership Tactics > Partnership Theater + +**The Certification Wedge:** + +Early in a cloud partnership, looking for channel leverage. Targeting managed service providers (MSPs). + +**The insight:** Buried in the cloud provider's partner program requirements: "Must include [our product category] in certified stack." + +**The play:** Built entire partnership pitch around that one line. MSPs didn't just want our product — they **needed it** to maintain certification. + +**Result:** We became required, not "nice to have." Closed MSP deals 3x faster than generic partnerships. + +**Framework: Partnership Leverage Types** + +**1. Requirement leverage** (Strongest) +- Partner needs you for certification/compliance/partnership status +- Example: Cloud provider certification requiring your category of product +- How to find: Read partner program requirements, marketplace rules + +**2. Economic leverage** (Strong) +- Helps partner make or save money directly +- Example: Reduce partner's support costs by 30% +- How to measure: Calculate partner's ROI in their P&L terms + +**3. Competitive leverage** (Moderate) +- Gives partner differentiation vs competitors +- Example: Exclusive integration for 6 months +- How to validate: Ask "would competitors want this?" + +**4. Customer leverage** (Moderate) +- Partner's customers demand the integration +- Example: 50+ support tickets requesting integration +- How to measure: Partner support ticket volume + +**5. 
Co-marketing leverage** (Weak) +- Joint content, events, logo swaps +- Example: Co-branded webinar +- Reality: Nice to have, rarely closes deals + +**How to Apply:** + +**Before pitching partnership, identify your leverage:** + +High leverage (requirements, economics) → Full partnership investment +Moderate leverage (competitive, customer) → Light partnership, test first +Low leverage (co-marketing only) → Don't do it, you'll waste time + +**The Qualification Question:** + +"If we don't do this partnership, what happens to you?" + +- "We lose cloud provider certification" → High leverage, pursue +- "We might lose some customers" → Moderate, test carefully +- "Nothing really changes" → No leverage, walk away + +**Common Mistake:** + +Pitching partnerships based on your benefit, not theirs. "We want access to your customers" is co-marketing theater. "You'll maintain cloud provider certification" is leverage. + +--- + +### 4. Partner Tiering: Three-Tier Model + +Structure partner programs into clear tiers based on commitment and capability: + +**Tier 1: Integration Partner (Self-Serve)** +- Partner builds with your public API/docs +- You provide: documentation, Slack channel, office hours +- Partner drives their own promotion +- Timeline: 2-6 months +- Best for: Ambitious partners with engineering resources + +**Tier 2: Partnership Partner (Joint Development)** +- Co-developed integration +- You provide: dedicated channel, regular syncs, product input +- Platform provides co-marketing support +- Timeline: 6-12 months +- Best for: Strategic fit partners, accelerating integration quality + +**Tier 3: Strategic Partner (Co-Development)** +- Deep product roadmap integration +- You provide: dedicated partner manager, executive relationship +- Customized co-marketing, revenue objectives +- Timeline: Ongoing +- Best for: Marquee partnerships that shift positioning + +**Decision Criteria:** +- Tier based on strategic fit AND partner capability +- Don't over-tier (creates 
expectations you can't meet) +- Create clear graduation path between tiers + +**Common Mistake:** + +Treating all partners equally. Tier 1 partners want self-serve, Tier 3 want white-glove. Mismatch creates frustration. + +--- + +### 5. Crawl-Walk-Run Partnership Deployment + +De-risk partnerships with phased validation before full commitment. + +**Crawl (4-8 weeks):** +- 1-2 pilot customers using both solutions +- Manual or lightweight integration (not production-grade) +- Measure specific outcomes: time savings, adoption, revenue impact +- Go/no-go: 20%+ improvement on stated metric + +**Walk (8-12 weeks):** +- 5-10 additional customers +- Build formal integration +- Co-marketing: joint announcements, webinars +- Sales enablement: training, playbooks +- Go/no-go: 70%+ adoption rate of invited customers + +**Run (6-12 months ongoing):** +- Full-scale deployment +- Joint enterprise sales, integrated customer success +- APIs/native integrations, marketplace listing +- Quarterly business reviews, executive steering + +**The Pattern:** + +Most partnerships fail in Crawl phase. That's good — you learn fast with minimal investment. + +**Common Mistakes:** +- Skipping Crawl phase (jumping straight to full commitment) +- Running phases in parallel (creates confusion, can't isolate signal) +- Continuing partnerships not delivering value (sunk cost fallacy) +- Moving to next phase without clear go/no-go criteria + +**Go/No-Go Criteria:** + +**After Crawl:** +- Did pilot customers see 20%+ improvement? +- Would they recommend to peers? +- Can we scale this integration? + +**After Walk:** +- Did 70%+ of invited customers adopt? +- Is partner actively promoting? +- Is support burden manageable? + +**Enter Run Only If:** +- Both Crawl and Walk passed criteria +- Both sides committed to next phase +- ROI model validates at scale + +--- + +### 6. Partnership Value Exchange Clarity + +If you can't articulate what each party gets, the partnership will fail. 
+ +**Partnership Charter (Required Before Launch):** + +**Mutual Goals:** +- What does success look like for us? +- What does success look like for partner? +- What does success look like for customers? + +**Value Exchange:** +- What we give (engineering time, co-marketing, revenue share) +- What partner gives (distribution, credibility, co-investment) +- Is this balanced? (Would both sides still do this if other walked?) + +**Timeline:** +- Crawl phase (dates, deliverables, metrics) +- Walk phase (dates, deliverables, metrics) +- Run phase (ongoing cadence, QBRs) + +**Measurement:** +- Specific metrics for success (revenue, customers, retention) +- How we'll track (dashboard, reports, reviews) +- Review cadence (monthly? quarterly?) + +**Governance:** +- Who owns decisions on each side? +- Escalation path for disputes +- Exit criteria (what triggers ending partnership?) + +**The Signature Test:** + +Both sides should sign the charter. If either side won't commit to paper, there's no real partnership. + +**Common Mistake:** + +Verbal agreements without documentation. When things get hard (and they will), you need written alignment. + +--- + +### 7. 
Co-Marketing Execution Checklist + +**Pre-Launch (4-6 weeks before):** +- [ ] Joint value prop finalized (reviewed by both marketing teams) +- [ ] Customer case study identified (ideally 2-3 options) +- [ ] Technical integration validated (no launch-day bugs) +- [ ] Sales enablement ready (one-pager, deck, demo) +- [ ] Support trained (both teams know how to handle tickets) +- [ ] Marketplace listings prepared (if applicable) + +**Launch Week:** +- [ ] Press release (coordinated timing) +- [ ] Blog posts (both companies) +- [ ] Joint webinar scheduled (within 2 weeks of launch) +- [ ] Social media campaign (coordinated hashtags) +- [ ] Sales teams briefed (live training session) +- [ ] Customer comms sent (email to relevant segments) + +**Post-Launch (Weeks 2-8):** +- [ ] Customer adoption tracked (weekly dashboard) +- [ ] Support issues triaged (joint Slack channel) +- [ ] Case study published (quantified results) +- [ ] Pipeline impact measured (influenced deals) +- [ ] Quarterly business review scheduled + +**Common Mistake:** + +Treating launch as finish line. Real work starts after launch — adoption, support, iteration. + +--- + +## Decision Trees + +### Should We Build or Partner? + +``` +Is this capability core to our product differentiation? +├─ Yes → Build it yourself +└─ No → Continue... + │ + Would building this delay our roadmap by >6 months? + ├─ Yes → Partner + └─ No → Continue... + │ + Is there a credible partner who needs us too? + ├─ Yes → Partner + └─ No → Build +``` + +### Which Partner Tier? + +``` +Does partner have engineering resources to self-serve? +├─ Yes → Start at Tier 1, evaluate for Tier 2 after 6 months +└─ No → Continue... + │ + Is this a marquee logo that shifts our positioning? + ├─ Yes → Tier 3 (Strategic) + └─ No → Tier 2 (Joint Development) +``` + +### Should We Continue This Partnership? + +``` +Did Crawl phase meet success criteria? +├─ No → End partnership, learn from failure +└─ Yes → Continue... 
+ │ + Did Walk phase meet success criteria? + ├─ No → End partnership or restart Crawl with changes + └─ Yes → Move to Run phase +``` + +--- + +## Common Mistakes + +1. **Treating partnerships as sales channel, not platform expansion** + - Partnerships should expand what your product can do, not just who buys it + +2. **Launching without clear integration pathways** + - Partners will struggle and fail without step-by-step guides + +3. **Expecting partners to self-promote** + - You must provide co-marketing templates, resources, support + +4. **Creating too many tiers** + - 2-3 is optimal; more causes confusion and expectation mismatch + +5. **Ghosting after launch** + - Relationships need ongoing cultivation; schedule recurring touchpoints + +6. **Pursuing partnerships for vanity** + - Brand name or funding connections don't equal customer value + +7. **No clear exit criteria** + - Define upfront what failure looks like and when to deprioritize + +--- + +## Quick Reference + +**Before starting any partnership:** +- [ ] Three-sided value prop articulated +- [ ] Partner tier identified +- [ ] Crawl phase scope defined +- [ ] Success metrics agreed +- [ ] Partnership charter drafted + +**Before launching any partnership:** +- [ ] Customer ready criteria met +- [ ] Co-marketing checklist complete +- [ ] Sales team briefed +- [ ] Health management cadence scheduled + +**Partnership leverage hierarchy:** +1. Requirement (they need you for cert/compliance) +2. Economic (saves/makes them money) +3. Competitive (differentiates them) +4. Customer (their customers want it) +5. 
Co-marketing (nice to have, rarely decisive) + +**Go/no-go criteria:** +- Crawl: 20%+ customer outcome improvement +- Walk: 70%+ adoption rate +- Run: Both phases passed + ROI validated + +--- + +## Related Skills + +- **developer-ecosystem**: Developer-specific ecosystem programs +- **enterprise-account-planning**: Managing enterprise deals with partners +- **technical-product-pricing**: Pricing partnership deals + +--- + +*Based on partnerships work across multiple platform companies during hypergrowth, including running a developer marketplace ecosystem (open vs curated decision) and leveraging cloud provider certification requirements for channel growth. Not theory — patterns from partnerships that actually drove revenue and platform adoption.* diff --git a/skills/gtm-positioning-strategy/SKILL.md b/skills/gtm-positioning-strategy/SKILL.md new file mode 100644 index 00000000..deaa8d8b --- /dev/null +++ b/skills/gtm-positioning-strategy/SKILL.md @@ -0,0 +1,435 @@ +--- +name: gtm-positioning-strategy +description: Find and own a defensible market position. Use when messaging sounds like competitors, conversion is weak despite awareness, repositioning a product, or testing positioning claims. Includes Crawl-Walk-Run rollout methodology and the word change that improved enterprise deal progression. +license: MIT +--- + +# Positioning Strategy + +Find and own a defensible market position. Turn generic messaging into clear differentiation — or at least test whether your differentiation actually resonates before committing to it. + +## When to Use + +**Triggers:** +- "Our messaging sounds exactly like competitors" +- "Brand awareness is strong but conversion is weak" +- "Sales team can't explain why we're different" +- "Buyers see us as interchangeable" +- "Should we reposition before we rebrand?" +- "How do we test positioning claims?" 
+
+**Context:**
+- Competitive markets with similar offerings
+- Messaging that isn't converting
+- New product launches
+- Repositioning existing products
+- Sales team reports buyer confusion
+
+---
+
+## Core Frameworks
+
+### 1. One Word Can Change Everything (The "Autonomous" Problem)
+
+**The Pattern:**
+
+Early enterprise conversations for an autonomous AI product. Positioned as "autonomous AI agent."
+
+Developers: "Cool, but scary."
+Managers: "Will this replace our team?"
+Deal progression: Slow. Lots of "we'll think about it."
+
+**The Change:**
+
+One word: "autonomous" → "AI teammate"
+
+Same product. Same capabilities. Different framing.
+
+**Result:**
+
+Developers: "This helps me."
+Managers: "This makes my team more productive."
+Deal progression: Measurably faster.
+
+**Why This Matters:**
+
+Positioning isn't only what you say. It's also what you **don't say**.
+
+We could've said "replaces developers" (technically true for some tasks). It would've killed every enterprise deal.
+
+**The Framework: Word Choice Shapes Buyer Psychology**
+
+**Words that scare enterprises:**
+- Autonomous (implies: no control, replacing humans)
+- Replaces (threatens: job security)
+- Fully automated (removes: human judgment)
+- AI-first (signals: buzzword, unclear meaning)
+
+**Words that convert:**
+- Teammate (implies: collaboration, helping)
+- Augments (helps: makes humans better)
+- You stay in control (reassures: human oversight)
+- Handles repetitive work (specific: saves time)
+
+**How to Test Word Choice:**
+
+Don't guess. Test.
+ +**Test 1: Outbound Email A/B** +- Send 100 prospects Version A ("autonomous agent") +- Send 100 prospects Version B ("AI teammate") +- Measure: Reply rate, meeting booked rate +- Signal strength: High (real buyer intent) + +**Test 2: Website Homepage A/B** +- Version A: Current positioning +- Version B: New word choice +- Measure: Click-through rate on key CTAs +- Duration: 1-2 weeks minimum +- Signal strength: Moderate (interest without commitment) + +**Test 3: Sales Call Scripts** +- Half of AEs use positioning A +- Half use positioning B +- Measure: Demo-to-trial conversion +- Signal strength: High (real sales cycle) + +**Common Mistake:** + +Changing positioning based on internal consensus, not customer feedback. Your team isn't the buyer. + +--- + +### 2. Test Before You Commit (Crawl-Walk-Run Positioning Rollout) + +**The Pattern:** + +Positioning changes create risk. Brand confusion. Sales misalignment. Customer churn (if existing customers don't recognize you). + +**De-risk through phased rollout:** + +**Crawl Phase (1-2 weeks): Validation** + +Test messaging without committing product/org resources. + +**Actions:** +- A/B test website headlines (new vs incumbent) +- Run two outbound email sequences (different positioning angles) to cold prospects +- Ask existing customers: "If we described ourselves as [new positioning], would you still recognize us?" + +**Measurement:** +- Track CTR on web variants +- Track reply rates on outbound sequences +- Document qualitative feedback + +**Go/No-Go:** +- At least one positioning variant outperforms incumbent by 20%+ on reply rate +- Existing customers don't say "wait, that's not what you do" +- Proceed if clear winner; if tied, run longer or test new angles + +**Walk Phase (2-3 weeks): Alignment** + +If testing validates, align product and sales to new positioning (but don't rebrand publicly yet). 
+ +**Actions:** +- Rewrite website copy (homepage, enterprise pages, CTAs) +- Create sales enablement: updated pitch deck, call scripts, email templates +- Update documentation to match new narrative +- Build use-case-specific examples + +**Measurement:** +- Sales team feedback on messaging usability +- Website analytics on engagement (not conversion yet) +- Segment analysis (who's responding?) + +**Go/No-Go:** +- Sales team says new messaging is easier to use +- CTR metrics improve vs incumbent +- No major customer confusion +- Proceed to Run + +**Run Phase (2-3 weeks and ongoing): Scale** + +Full commitment. This is the rebrand. + +**Actions:** +- Launch dedicated landing pages per use case +- Outbound campaigns per positioning angle +- Update all customer-facing materials +- Train customer success team on new narrative +- Announce repositioning (if appropriate) + +**Measurement:** +- Pipeline volume by positioning angle +- Win rate by positioning angle +- CAC efficiency by channel +- Customer retention (did we lose anyone?) + +**Common Mistakes:** +- Skipping Crawl (jumping to full rebrand without validation) +- Running phases in parallel (creates confusion if messaging changes mid-rollout) +- Waiting for perfect product before repositioning (product should follow positioning, not precede it) +- Not measuring at each phase (can't determine if test "won") + +--- + +### 3. Positioning Clarity Diagnosis + +**The Pattern:** + +If your messaging closely resembles competitors' messaging, you have a positioning problem, not a product problem. 
+ +**Positioning Failure Manifests As:** +- All competitors describe nearly identical value props +- Differentiation requires explaining complex technical details +- Buyers see offerings as interchangeable +- Marketing metrics (CTR, engagement) weak vs industry benchmarks +- Sales conversations get derailed by comparison questions + +**How to Execute:** + +**Step 1: Competitor Messaging Audit** +- Collect homepage headlines from 5-7 direct competitors +- Identify shared claims (these are commoditized) +- Map where competitors claim unique value + +**Example:** + +If everyone says "fastest," "most reliable," "easiest to use" — these are table stakes, not differentiation. + +**Step 2: Assess Your Actual Strengths** +- What can you do that competitors can't, without massive R&D? +- What do your best customers choose you for? (Ask them) +- What structural advantages do you have? (Deployment model, data ownership, pricing, network effects, etc.) + +**Step 3: Find Under-Served Position** +- Where do existing solutions fail users? +- What problem is everyone ignoring? +- What buyer segment is under-served? + +**Step 4: Stake a Clear Claim** + +Must be: +- Something you can own now (not future roadmap) +- Something competitors can't easily copy (structural advantage) +- Resonant with your best customer segments + +**Common Mistakes:** +- Claiming you're "better" at what everyone does (unbelievable) +- Positioning on features competitors already have +- Multiple positions simultaneously (choose one) +- Waiting for perfect product before positioning shift + +--- + +### 4. Market Positioning Architecture (Three Layers) + +**Layer 1: Market Context** +- What problem is the market experiencing? +- Why is it experiencing this problem now? +- What happens if problem goes unsolved? + +**Example:** +"Infrastructure teams manage increasingly complex deployments across hybrid environments. Organizations adopt microservices and distributed systems. 
This creates operational complexity that traditional monitoring tools can't handle." + +**Layer 2: Positioning Statement (1-2 sentences)** +- **Who we serve**: What customer segment? +- **What problem we solve**: The specific pain +- **How we're different**: Why we matter vs alternatives +- **Proof**: Why should they believe us? + +**Example:** +"We help platform teams ship faster through [core capability] that connects [workflow A], [workflow B], and [business outcome] in real-time." + +**Layer 3: Narrative** + +Expand positioning into story: +- Why the world is changing +- Why existing solutions don't work +- Why our approach is better +- What the future looks like with us + +**How to Execute:** + +Write all three layers before testing. Test Layer 2 (positioning statement) first with Crawl-Walk-Run methodology. If that validates, build out Layer 3. + +--- + +### 5. Headline and Sub-headline Testing + +**Principle:** Clear positioning requires testable structure: headline (what are you?) + sub-headline (for whom? why?). + +**Main Headline Formats:** +- "The [adjective] [category] that [differentiator]" +- "[Product] for [specific use case]" +- "[Product] that [core benefit]" + +**Examples:** +- "The customizable platform for [workflow]" +- "Infrastructure for autonomous teams" +- "The enterprise-grade alternative to [incumbent]" + +**Red Flags:** +- Using competitor name (defensive, not confident) +- Too technical (buyer won't understand) +- Claiming multiple benefits (choose one) +- Vague ("the future of X" — unbelievable) + +**Sub-headline Purpose:** + +Clarifies who, why, how it's different from status quo. + +**Examples:** +- "Deploy anywhere. Scale instantly. Your infrastructure, your rules." +- "For teams drowning in repetitive work. Automation that handles the 80%, humans handle the 20%." +- "Enterprise-grade. No lock-in. Works with your existing stack." 
+ +**How to Test:** + +A/B test headline + sub-headline combinations: +- Variant A: Current messaging +- Variant B: New positioning angle +- Variant C: Different differentiation claim + +Measure CTR, reply rates, conversion. + +Pick winner based on data, not opinion. + +--- + +### 6. Positioning Defensibility Assessment + +**Principle:** A positioning is only valuable if competitors can't easily copy it. + +**Defensibility Hierarchy:** + +**1. Structural Advantage (Strongest)** +- Hard to copy: unique data ownership, deployment flexibility, pricing model, network effects +- Example: "Built for regulated industries with on-prem deployment" (can't copy without rebuilding architecture) + +**2. Market Position (Strong if First)** +- Defensible if you own it first and scale +- Example: "First AI platform for [specific workflow]" (copycats look derivative) + +**3. Product Feature (Weak)** +- Easy to copy: UX, specific capability +- Example: "Faster API calls" (competitor ships speed improvement in 6 weeks) + +**How to Assess:** + +For each positioning claim, ask: + +1. Can competitor copy this with single product sprint? (Yes = not defensible) +2. Do we have structural advantage? (No = temporary positioning) +3. Is this credible given current product? (No = don't claim it yet) +4. Can we own this position before competitors react? (No = too slow) + +**Common Mistake:** + +Positioning on features competitors can easily match. This creates positioning treadmill — you're always defending, never owning. + +--- + +## Decision Trees + +### Should We Reposition? + +``` +Is brand awareness strong but conversion weak? +├─ Yes → Positioning problem, test new angles +└─ No → Continue... + │ + Does our messaging sound like competitors? + ├─ Yes → Positioning problem + └─ No → Not a positioning issue +``` + +### Which Positioning Angle Should We Test? + +``` +Do we have structural advantage competitors can't copy? +├─ Yes → Position on structural advantage +└─ No → Continue... 
+ │ + Are we first in a category? + ├─ Yes → Position on category ownership + └─ No → Find under-served segment/use case +``` + +### Should We Move from Crawl to Walk Phase? + +``` +Did new positioning outperform incumbent by 20%+? +├─ Yes → Move to Walk (alignment phase) +└─ No → Continue... + │ + Did we run test long enough (2+ weeks)? + ├─ No → Run longer + └─ Yes → Try different positioning angle or stay with incumbent +``` + +--- + +## Common Mistakes + +**1. Claiming to be "better" at what everyone does** + - Unbelievable. Find different angle. + +**2. Positioning on easily-copied features** + - Competitors will match. Need structural advantage. + +**3. Waiting for perfect product before positioning shift** + - Product work should follow positioning, not precede + +**4. Testing too many positioning angles simultaneously** + - Can't determine what's working. Test one at a time. + +**5. Skipping validation phase** + - Jumping to full rebrand without testing = risk + +**6. One positioning for all buyer personas** + - Different personas care about different things + +**7. Generic positioning that doesn't differentiate** + - "Best-in-class," "innovative" = meaningless + +--- + +## Quick Reference + +**Crawl-Walk-Run Testing:** +- Crawl (1-2 weeks): A/B test messaging, measure reply rates +- Walk (2-3 weeks): Align sales/product to winning angle +- Run (ongoing): Full repositioning, measure pipeline/conversion + +**Word choice that converts:** +- ✅ Teammate, augments, you stay in control +- ❌ Autonomous, replaces, fully automated + +**Positioning audit steps:** +1. Collect competitor messaging +2. Identify your actual strengths (ask customers) +3. Find under-served position +4. Stake clear, defensible claim + +**Defensibility hierarchy:** +1. Structural advantage (unique data, deployment flexibility, pricing model) +2. Market position (category ownership) +3. Product feature (weakest, easily copied) + +**Testing hierarchy (signal strength):** +1. 
Outbound reply rates (highest) +2. Sales call conversion (high) +3. Website CTR (moderate) + +--- + +## Related Skills + +- **ai-gtm**: AI-specific positioning (copilot vs agent vs teammate) +- **technical-product-pricing**: Price as a positioning signal +- **0-to-1-launch**: Positioning for new product launches + +--- + +*Based on positioning work at AI agent and developer platforms, including navigating the framing spectrum from "autonomous" to "AI companion" and how category framing changes enterprise buyer perception. Also includes Crawl-Walk-Run rollout methodology from repositioning products without breaking existing customer recognition. Not theory — patterns from testing positioning before committing to rebrands.* diff --git a/skills/gtm-product-led-growth/SKILL.md b/skills/gtm-product-led-growth/SKILL.md new file mode 100644 index 00000000..d1814a7e --- /dev/null +++ b/skills/gtm-product-led-growth/SKILL.md @@ -0,0 +1,336 @@ +--- +name: gtm-product-led-growth +description: Build self-serve acquisition and expansion motions. Use when deciding PLG vs sales-led, optimizing activation, driving freemium conversion, building growth equations, or recognizing when product complexity demands human touch. Includes the parallel test where sales-led won 10x on revenue. +license: MIT +--- + +# Product-Led Growth + +Build self-serve acquisition and expansion motions. But first, figure out if PLG is even the right motion for your product. + +## When to Use + +**Triggers:** +- "Should we build PLG or sales-led?" +- "How do we drive self-serve adoption?" +- "Freemium to paid conversion isn't working" +- "Developer-led adoption strategy" +- "Which growth channels should we invest in?" +- "How do I know if PLG will work?" + +**Context:** +- Developer tools and platforms +- B2B SaaS with self-serve potential +- Products where value is obvious without demo +- Bottom-up adoption motions +- Growth channel prioritization + +--- + +## Core Frameworks + +### 1. 
The PLG Reality Check (Test Before You Commit) + +**What I Learned Running Both Motions in Parallel:** + +Classic startup debate. PLG camp: "Developers want self-serve." Sales camp: "Enterprises need hand-holding." Instead of arguing, we tested both for 6 months. Same product, two GTM motions, tracked everything. + +**The Results:** + +PLG: High volume, low ACV (~$5K), fast time-to-revenue, higher churn. Sales-led: Lower volume, high ACV (~$50K), slower time-to-revenue, lower churn. **Sales won 10x on dollars despite 10x less volume.** + +**Why:** Product complexity + buyer seniority = sales-led wins. The product required integration with existing infrastructure, change management across teams, and multi-stakeholder alignment. Developers loved self-serve. But they weren't the economic buyer. + +**PLG works when:** +- Value is obvious in first 5 minutes +- Implementation is trivial +- Individual user gets value without team buy-in +- No procurement/legal hurdles +- Buyer = user + +**Sales-led works when:** +- Product requires integration/setup +- Multiple stakeholders need alignment +- Buyer ≠ user +- Deal size justifies human touch +- Customer needs education to see value + +**Before building PLG, test your motion. Don't assume PLG is better because it's trendy.** PLG is efficient at volume, but sales-led can be more profitable with complexity. + +--- + +### 2. The Growth Equation (Map Inputs to Outputs) + +**The Pattern:** + +Growth compounds when you systematize the relationship between activities and user acquisition. Not "do more marketing" — map specific inputs to measurable outputs. + +**How to Build Your Growth Equation:** + +For each channel, define: Activity (input) → Traffic (output) → Conversions. 
+
+- **Organic Search:** 1 quality blog post → 400 visitors/month → 5% conversion = 20 new users
+- **Paid Ads:** $1K spend → 100K impressions at 8% CTR = 8K clicks → conversions at X%
+- **Community Events:** 1 event → 60 attendees → 35% conversion = 21 users
+- **Referral:** 1 integration partner → N referred users → conversions at Y%
+
+**Why This Matters:**
+
+Once you validate the equation, scaling becomes math. "I need 200 more users next month" → "I need 10 more blog posts" or "I need $5K more ad spend." Without the equation, you're guessing.
+
+**Testing the Equation:**
+
+1. Start with hypothesis: "If I create X, it drives Y conversion"
+2. Test with small sample: 1 blog post, measure actual conversion
+3. Validate: Does reality match hypothesis?
+4. Scale with confidence: If yes, increase input
+5. Kill if not: 4 weeks of data is enough to decide
+
+**Common Mistake:**
+
+Guessing at conversion rates without testing. Assuming all users from the same channel are of equal quality. Scaling before validating the equation.
+
+---
+
+### 3. Channel Economics (Kill Losers, Double Down on Winners)
+
+**The Pattern:**
+
+Every channel has economics. Without tracking them, you over-invest in losers and under-invest in winners.
+
+**Track Per Channel:**
+1. **CAC:** Total spend / new users
+2. **Conversion rate:** Signups → paying
+3. **Retention:** 30-day, 90-day by source
+4. **LTV:** Revenue over customer lifetime, by channel
+5. **Payback period:** How long to recoup CAC
+
+**The Decision Framework:**
+
+- CAC < (LTV × margin) → Scale aggressively
+- CAC ≈ (LTV × margin) → Optimize, don't scale
+- CAC > (LTV × margin) → Kill within 4 weeks
+
+**Monthly channel review:** Which channels are profitable? Which are drains? Quarterly reallocation: 3x budget to winners, kill losers.
+
+**Critical Insight: Channel Quality Varies**
+
+Cheap CAC doesn't mean good CAC. Organic search might deliver users at $0 CAC with 85% 30-day retention.
Paid search might deliver users at $12 CAC with 45% 30-day retention. The "free" channel is 10x more valuable when you factor in retention and LTV. + +**Systematic Testing:** + +Test 2 new channels monthly. Give each 4 weeks of data. Kill decisively if economics don't work. Document learnings regardless of outcome — what didn't work is as valuable as what did. + +**Common Mistake:** + +Tracking CAC without retention. A cheap channel that churns users costs more than an expensive channel that retains them. + +--- + +### 4. Time to First Value (The Only Activation Metric) + +**The Pattern:** + +Users decide product value in the first 5-10 minutes. If they don't reach the aha moment fast, they abandon. + +**The Activation Audit:** + +1. Sign up for your own product as a new user +2. Time how long to first value +3. Count steps to aha moment +4. Where did you get stuck? + +**If TTFV > 10 minutes, you have an activation problem.** + +**Before:** Sign up → confirm email → fill profile → configure settings → read docs → first action + +**After:** Sign up → pre-loaded sample data → first action (immediate aha moment) + +**Specific Fixes:** + +1. **Pre-load sample data.** Users want to see value, not set up. Give them a working example immediately. +2. **Skip non-essential setup.** Email confirmation, profile, settings — all can wait until after the aha moment. +3. **Progressive disclosure.** Don't show all features upfront. Start with one core workflow. Reveal complexity gradually. +4. **Show, don't tell.** Interactive tutorial > video > text docs. Let them click through a workflow. + +**Common Mistake:** + +Assuming users will read documentation. They won't. They'll click around for 5 minutes, and if nothing works, they leave. + +--- + +### 5. The $5K → $50K Inflection (When PLG Breaks) + +**The Pattern:** + +PLG works for $1K-$10K ARR. Between $20K-$50K, the motion breaks because organizational friction kicks in: procurement, legal, security, multi-stakeholder buy-in. 
+ +**The Hybrid Approach:** + +**PLG ($0-$10K):** Self-serve sign-up → free tier → paid tier → credit card checkout → automated onboarding + +**Sales-Assisted ($10K-$50K):** Self-serve discovery → sales engages on usage signals → human-negotiated contract → dedicated onboarding + +**Enterprise ($50K+):** Outbound or inbound lead → demo → POC → proposal → legal/security review → executive sponsor + +**PQL Signals (When to Trigger Sales):** + +- **Usage depth:** Daily active, core features used, approaching limits +- **Expansion signals:** Multiple users from same company, team features, integrations +- **Buying signals:** Requests for SSO/compliance/SLAs, asks about team pricing + +**The Handoff:** + +Bad: "Hey, I saw you signed up." (Cold, generic, kills trust) +Good: "Your team is using [specific feature] across 12 repos. We can help you [specific value]. Want 15 minutes?" (Warm, specific, offers value) + +**Common Mistake:** + +Sales engaging too early on <$5K deals. Kills PLG motion, scares users. Let them self-serve until they need help. + +--- + +### 6. Growth Forecasting (Plan for Uncertainty) + +**The Pattern:** + +Forecasts are always wrong. Plans are still valuable because they force thinking and create accountability. + +**Model Three Scenarios:** + +**Baseline (current trajectory continues):** +- Organic search: 35% growth → 40K new users +- Paid: Flat → 2K new users +- Community: 10% growth → 400 new users +- Total: 42.4K + +**Upside (if all growth initiatives execute):** +- Organic: 50% growth (3x content) → 48K +- Paid: 2x spend, same efficiency → 4K +- New initiative (partnerships): ramp → 3K +- Total: 55K + +**Downside (if key channels fail):** +- Organic: 0% growth → 26K +- Paid: CPA doubles → 1K +- Total: 27K + +**Use This For:** +- Setting baseline targets (baseline scenario) +- Stretch goals (upside scenario) +- Escalation triggers (if you hit downside, something needs to change) +- Resource allocation (what inputs change to hit upside?) 
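The three scenarios above reduce to per-channel sums plus a monthly comparison against the forecast range. A minimal sketch of that model — the channel names and numbers are the illustrative figures from this section, and the escalation wording is a placeholder; swap in your own inputs:

```python
# Minimal scenario model: per-channel new-user forecasts.
# Numbers are illustrative (from the scenarios above), not a template.
scenarios = {
    "baseline": {"organic": 40_000, "paid": 2_000, "community": 400},
    "upside":   {"organic": 48_000, "paid": 4_000, "partnerships": 3_000},
    "downside": {"organic": 26_000, "paid": 1_000},
}

def total(name: str) -> int:
    """Sum the per-channel forecasts for one scenario."""
    return sum(scenarios[name].values())

def review(actual: int) -> str:
    """Monthly check: compare actual new users to the forecast range."""
    if actual < total("downside"):
        return "escalate: below downside — something needs to change"
    if actual >= total("upside"):
        return "stretch goal hit: revisit resource allocation"
    return "on track: within baseline-to-upside range"

for name in scenarios:
    print(f"{name}: {total(name):,} new users")  # baseline: 42,400 etc.
print(review(actual=30_000))
```

The point is not the code — it's that the forecast is a range with explicit triggers, so the monthly update becomes a mechanical comparison instead of a debate.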
+ +**Monthly Update:** Compare forecast to actual. Adjust model. Don't forecast-and-forget. + +**Common Mistake:** + +Overly optimistic forecasts that assume everything works. Not updating monthly. Treating forecast as target (it's a range, not a number). + +--- + +### 7. The Playbook Documentation Habit + +**The Pattern:** + +Knowledge dies with people. The goal isn't one-off wins — it's systematizing what works. + +**After every successful campaign or experiment, write a 1-page playbook:** + +``` +PLAYBOOK: [Channel/Tactic Name] + +Goal: [What outcome] +Steps: [Numbered, specific enough for someone unfamiliar] +Expected Output: [Specific metrics] +Metrics to Track: [How to measure] +Risks & Mitigations: [What could go wrong] +Owner: [Name] +Last Updated: [Date] +``` + +**The Test:** Could someone who wasn't involved execute this playbook? If not, it's too vague. + +**Review quarterly.** Remove playbooks that no longer work. Update ones that have evolved. This becomes your growth operating system. + +**Common Mistake:** + +Running experiments without documenting learnings. Scaling before you understand the mechanism. Having growth knowledge trapped in one person's head. + +--- + +## Decision Trees + +### Should We Build PLG or Sales-Led? + +``` +Can users get value in <10 min without docs? +├─ No → Sales-led required +└─ Yes → Can they self-serve implementation? + ├─ No → Sales-led required + └─ Yes → Is buyer = user? + ├─ No → Hybrid (PLG + sales-assist) + └─ Yes → Pure PLG viable +``` + +### Keep, Scale, or Kill This Channel? + +``` +CAC < (LTV × margin)? +├─ No → Kill within 4 weeks +└─ Yes → 90-day retention > 60%? + ├─ No → Optimize (improve activation/onboarding) + └─ Yes → Scale aggressively (3x budget) +``` + +--- + +## Common Mistakes + +**1. Assuming PLG always works** +Product complexity + buyer seniority = sales-led wins. Test before committing. + +**2. No channel economics** +Every channel has CAC, retention, and LTV. 
Track them or you're flying blind. + +**3. Free tier too generous or too limited** +Too generous: no conversion. Too limited: no activation. Allow 10-20 aha moments. + +**4. No growth equation** +"Do more marketing" isn't a strategy. Map inputs → outputs → conversions per channel. + +**5. Scaling before validating** +4 weeks of data before scaling any channel. Kill decisively if economics don't work. + +**6. Growth knowledge in one person's head** +Document every successful experiment as a playbook. + +--- + +## Quick Reference + +**PLG readiness:** Value in <10 min + self-serve implementation + buyer = user + +**Growth equation:** Activity (input) → Traffic (output) → Conversions, per channel + +**Channel economics:** CAC, conversion, 30/90-day retention, LTV, payback — per channel, monthly review + +**Kill criteria:** CAC > (LTV × margin) → 4 weeks to improve, then kill + +**PQL signals:** Usage depth + expansion (multi-user) + buying (SSO/compliance requests) + +**Sales handoff:** <$10K: PLG → $10K-$50K: Sales-assist → >$50K: Full sales + +**Forecast:** Baseline + Upside + Downside, updated monthly + +--- + +## Related Skills + +- **technical-product-pricing**: Freemium thresholds and pricing gates +- **developer-ecosystem**: Developer-specific adoption programs +- **0-to-1-launch**: Finding first customers before PLG scales + +--- + +*Based on experience across multiple platform companies — leading a growth team building PLG and sales-led motions from scratch, and operating inside successful PLG + sales-led machines at hypergrowth companies. The combination taught both sides: what it takes to establish these motions early (when resources are thin and every bet matters) and what the mature version looks like at scale (growth equations, channel economics systems, freemium pricing gates, and systematic A/B testing that documents every win and loss into executable playbooks). 
Not theory — lessons from building the machine and operating inside ones that worked.* diff --git a/skills/gtm-technical-product-pricing/SKILL.md b/skills/gtm-technical-product-pricing/SKILL.md new file mode 100644 index 00000000..30751c43 --- /dev/null +++ b/skills/gtm-technical-product-pricing/SKILL.md @@ -0,0 +1,350 @@ +--- +name: gtm-technical-product-pricing +description: Pricing strategy for technical products. Use when choosing usage-based vs seat-based, designing freemium thresholds, structuring enterprise pricing conversations, deciding when to raise prices, or using price as a positioning signal. +license: MIT +--- + +# Technical Product Pricing + + +## Initial Assessment + +Before recommending pricing, understand: + +1. **Product type**: API/platform, developer tool, SaaS application, infrastructure? +2. **Current pricing**: What do you charge now? How long has it been this way? +3. **GTM motion**: Self-serve, sales-assisted, enterprise, or hybrid? +4. **Cost structure**: What's your marginal cost per customer/user/unit? +5. **Competitive landscape**: What do alternatives cost? (Including "do nothing") + +--- + +## Core Frameworks + +### 1. The Price Increase Nobody Noticed (You're Probably Underpriced) + +**The Pattern:** + +Platform company, growth stage. Pricing hadn't changed since launch. Enterprise customers paying $15K/year for a product saving them $200K+ in engineering time. + +Leadership debate: "If we raise prices, we'll lose customers." + +**What actually happened:** + +Raised enterprise tier from $15K to $45K/year. Added dedicated support, SSO, audit logs to justify the jump. + +Lost: 0 enterprise customers. Zero. + +Gained: 3x revenue per enterprise account. Plus the customers who stayed started taking the product more seriously — higher adoption, more internal champions, more expansion. + +**Why This Happens:** + +Technical founders anchor pricing to cost ("it costs us $X to serve them, so we charge $2X"). 
Enterprise buyers anchor pricing to value ("this saves us $200K, so $45K is cheap"). + +**The Pricing Sanity Check:** + +For every customer segment, calculate: + +``` +Value Ratio = Customer's alternative cost / Your price + +If Value Ratio > 10x → You're massively underpriced +If Value Ratio > 5x → You're underpriced (most startups are here) +If Value Ratio 3-5x → Healthy pricing +If Value Ratio < 3x → Approaching ceiling +If Value Ratio < 2x → You're expensive (need strong differentiation) +``` + +**How to Calculate Alternative Cost:** + +- Hours spent on manual process × hourly rate × frequency +- Cost of building in-house (engineers × months × loaded cost) +- Cost of existing tool + switching cost + productivity loss during transition +- Cost of *not solving the problem* (incidents, downtime, churn) + +**Common Mistake:** + +Comparing your price to competitors instead of to customer's alternative cost. Competitors anchor you to a race to the bottom. Value anchors you to what the customer actually saves. + +--- + +### 2. 
The Three Pricing Models (And When Each Breaks) + +**Model 1: Seat-Based ($X/user/month)** + +**Works when:** +- Value scales with number of users (collaboration tools, communication) +- Usage is relatively uniform across users +- You want predictable revenue + +**Breaks when:** +- Power users and casual users get the same price (casual users churn) +- Product value doesn't scale with seats (one admin configures for 1,000 users) +- Customers consolidate seats to reduce cost (usage goes up, revenue doesn't) + +**Model 2: Usage-Based ($X/unit)** + +**Works when:** +- Usage varies significantly by customer (API calls, compute, storage) +- Marginal cost is meaningful (you need revenue to track with usage) +- Value directly correlates with usage + +**Breaks when:** +- Customers can't predict bills (sticker shock at month-end) +- Low-usage customers aren't worth supporting +- High-usage customers negotiate volume discounts that compress margins + +**Model 3: Outcome-Based ($X/result)** + +**Works when:** +- You can measure outcomes reliably (leads generated, tickets resolved, code deployed) +- Outcomes directly create customer value +- You have confidence in your product's effectiveness + +**Breaks when:** +- Outcomes depend on factors outside your control +- Measurement is disputed ("that lead wasn't from your tool") +- Customers game the metric + +**The Hybrid That Usually Wins:** + +Platform fee (covers your fixed costs) + usage/outcome variable (scales with value). + +Example: $500/month base + $0.05 per transaction (or API call, task completed, record processed — whatever your unit of value is). + +Why this works: +- Base fee ensures every customer covers cost to serve +- Variable fee aligns price with value +- Customers can predict minimum spend (base) while scaling naturally +- You capture upside when customers grow + +--- + +### 3.
Freemium Threshold Design (Where Free Ends and Paid Begins) + +**The Pattern:** + +The hardest pricing decision for developer tools: where do you draw the line between free and paid? + +**The Framework: Find the Production Boundary** + +Free users who never pay are fine — they create awareness, community, and content. The problem is when *production users* never pay. + +**How to Find the Boundary:** + +Map your usage distribution: + +``` +Usage Level | User Type | Willingness to Pay +───────────────────────────────────────────────────────────── +<100 units/mo | Hobbyist/learner | $0 (never paying) +100-1K units/mo | Side project | $0-20/mo (maybe) +1K-10K units/mo | Production use | $50-200/mo (will pay) +>10K units/mo | Business-critical| $200-2K/mo (must pay) +``` + +**Set your free tier limit just below where production usage starts.** In this example: 1,000 units/month free. + +Why: Hobbyists and learners stay free (they're your marketing engine). Production users hit the limit naturally and convert (they have budget). + +**The Three Types of Free-to-Paid Triggers:** + +**1. Usage limit** (most common for platforms) +- Free: 1,000 units/month (API calls, tasks, records, whatever your value unit is) +- Triggers when: Production usage exceeds the limit +- Conversion signal: User is building something real + +**2. Team/collaboration gate** (best for tools) +- Free: Individual use +- Triggers when: User invites a second person +- Conversion signal: Tool is valuable enough to share + +**3. Enterprise feature gate** (best for platforms) +- Free: Core features +- Triggers when: Needs SSO, RBAC, audit logs, SLAs +- Conversion signal: IT/security involved (real deployment) + +**Common Mistake:** + +Setting the free tier too high ("we want developers to love us"). If production users don't hit the limit, they never convert. Generosity in the free tier should target *learners*, not *production users*. + +--- + +### 4.
Enterprise Pricing (The Conversation, Not the Number) + +**The Pattern:** + +Enterprise pricing isn't a page on your website. It's a conversation. The "Contact Sales" button exists because enterprise deals have unique requirements — and because you should be pricing based on value, not a menu. + +**Enterprise Pricing Variables:** + +**1. Deployment model** (self-serve cloud, dedicated cloud, on-prem, hybrid) +- Each has a different cost to serve → different price floor +- On-prem commands a 2-5x premium over cloud (support complexity) + +**2. Usage scale** (seats, API volume, data volume) +- Volume discounts should never go below cost to serve + 40% margin +- Discount off list price, not off an already-discounted price + +**3. Support level** (community, standard, premium, dedicated) +- Premium support: 1.5-2x base price +- Dedicated CSM: 2-3x base price +- 24/7 support with SLA: 3-5x base price + +**4. Compliance requirements** (SOC 2, HIPAA, FedRAMP, data residency) +- Each compliance standard adds real cost (audits, infrastructure, process) +- Price accordingly: 1.5-2x base per compliance standard + +**The Enterprise Pricing Conversation:** + +When a prospect says "what does it cost?": + +1. **Don't answer with a number.** Answer with a question: "It depends on your deployment and scale requirements. Help me understand what you need." + +2. **Map their requirements:** + - How many users/seats/units? + - Cloud or on-prem? + - Compliance needs? + - Support expectations? + - Integration requirements? + +3. **Anchor to value before presenting price:** + - "Based on what you've described, you're currently spending [X] on this problem. Our solution typically reduces that by [Y%]." + - Then present the price as a fraction of the savings. + +4. **Present 3 options:** + - Good: Meets minimum requirements + - Better: Meets requirements + nice-to-haves (anchor here) + - Best: Everything, including things they didn't ask for + - Most buyers pick Better. That's your real price.
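The deployment, support, and compliance variables above stack multiplicatively on a base price, with the volume-discount floor (cost to serve + 40% margin) as a lower bound. A minimal sketch in Python, assuming midpoint multipliers from the ranges listed and a hypothetical $15K annual list price; the function and tier names are illustrative, not fixed pricing:

```python
# Illustrative quote sketch. Multiplier values are midpoints of the
# ranges in this section; the base price and tier names are hypothetical.

BASE_LIST_PRICE = 15_000  # hypothetical annual list price (USD)

DEPLOYMENT = {"cloud": 1.0, "on_prem": 3.0}  # on-prem: 2-5x premium
SUPPORT = {
    "standard": 1.0,
    "premium": 1.75,       # premium support: 1.5-2x
    "dedicated_csm": 2.5,  # dedicated CSM: 2-3x
    "sla_24_7": 4.0,       # 24/7 support with SLA: 3-5x
}
COMPLIANCE_MULTIPLIER = 1.75  # 1.5-2x per compliance standard


def quote(deployment: str, support: str, compliance_standards: int,
          cost_to_serve: float) -> float:
    """Annual quote: base price times stacked multipliers, floored at
    cost to serve + 40% margin (the volume-discount floor above)."""
    price = (BASE_LIST_PRICE
             * DEPLOYMENT[deployment]
             * SUPPORT[support]
             * COMPLIANCE_MULTIPLIER ** compliance_standards)
    return max(price, cost_to_serve * 1.4)


# Cloud deployment, premium support, one compliance standard (e.g. SOC 2):
print(quote("cloud", "premium", 1, cost_to_serve=8_000))  # → 45937.5
```

The point is the shape, not the numbers: each requirement raises the floor of the conversation before any discounting starts, and no discount should ever take the quote below cost to serve plus margin.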
+ +**Common Mistake:** + +Publishing enterprise pricing on your website. The moment you publish a number, that's the ceiling — the negotiation only goes down from there. Keep enterprise pricing as a conversation. + +--- + +### 5. Pricing as Positioning Signal + +**The Pattern:** + +Your price tells buyers who you're for. This is as much a positioning decision as a revenue decision. + +**Price Signals:** + +**$0 (Open source / free tier):** +- Signal: We're for developers who want to try before they buy +- Attracts: Individual contributors, experimenters +- Risk: Perceived as "not enterprise-ready" + +**$20-100/month:** +- Signal: We're for teams and small businesses +- Attracts: Self-serve buyers, startups +- Risk: Enterprises won't take you seriously (too cheap) + +**$500-2,000/month:** +- Signal: We're for production workloads +- Attracts: Growing companies with real budgets +- Risk: Startups priced out (may need free tier) + +**$5,000-50,000/year:** +- Signal: We're for enterprises +- Attracts: Mid-market and enterprise +- Risk: Need sales team (can't be self-serve at this price) + +**$100K+/year:** +- Signal: We're mission-critical infrastructure +- Attracts: Large enterprises +- Risk: Long sales cycles, heavy support expectations + +**The Positioning Test:** + +If you price at $50/month but want enterprise customers, your price is undermining your positioning. Enterprise buyers associate low price with low value. You may need to *raise* prices to attract the customers you want. + +**Common Mistake:** + +Pricing for the customer you have instead of the customer you want. If your roadmap is enterprise, price for enterprise — even if it means losing some SMB customers today. + +--- + +### 6. 
When and How to Raise Prices + +**Timing Signals:** + +- Value ratio > 5x for most customers +- Win rates above 40% (you're not losing on price) +- No customer pushback on pricing in the last 6 months +- Customers expanding faster than expected +- Competitors raised prices and didn't lose share + +**How to Raise Without Losing Customers:** + +**1. Grandfather existing customers** (12-24 months) +- New pricing for new customers only +- Existing customers get notice + timeline +- Creates urgency for prospects ("price is going up") + +**2. Add value to justify the increase** +- New tier with new features at a higher price +- Move current tier features to the new higher tier +- This is repositioning, not just a price increase + +**3. Annual increase clause in contracts** +- Include a 5-10% annual escalator in enterprise contracts +- Normalizes price increases +- Prevents "we've been paying the same for 4 years" conversations + +**The Communication:** + +"We're investing in [specific improvements]. To continue this level of investment, we're updating our pricing on [date]. Your current plan is locked in at your current rate until [grandfather date]." + +Never apologize for raising prices. Frame it as investment in the product they love. + +--- + +## Decision Trees + +### Which Pricing Model? + +``` +Does usage vary >5x between customers? +├─ Yes → Usage-based (or hybrid with usage component) +└─ No → Continue... + │ + Does value scale with team size? + ├─ Yes → Seat-based + └─ No → Continue... + │ + Can you measure customer outcomes reliably? + ├─ Yes → Outcome-based (or hybrid) + └─ No → Platform fee + feature-based tiers +``` + +### Should We Raise Prices? + +``` +Is value ratio > 5x for most customers? +├─ Yes → Raise prices +└─ No → Continue... + │ + Are win rates > 40%? + ├─ Yes → Price isn't the problem, but consider raising + └─ No → Continue... + │ + Are you losing deals specifically on price? + ├─ Yes → Don't raise. Improve value or segment better.
+ └─ No → Something else is wrong (product, positioning, sales) +``` + +--- + +## Related Skills + +- **ai-gtm**: AI-specific pricing models (variable-cost AI, pricing outputs vs inputs) +- **product-led-growth**: Freemium conversion and PLG pricing gates +- **enterprise-account-planning**: Enterprise deal structuring and negotiation +- **positioning-strategy**: Price as positioning signal + +--- + +*Based on pricing work at developer platforms and enterprise software companies, including enterprise price increases with zero customer loss, freemium threshold design that separated hobbyists from production users, partner pricing models, and pricing conversations across hundreds of enterprise deal cycles. Not theory — patterns from pricing decisions that directly impacted revenue.*