Workflow Automation · Strategy

Apex

Innovation Pipeline

A sequential six-stage AI pipeline, grounded in First Principles, JTBD, and Doblin's 10 Types, transforms unstructured business problems into executive-ready innovation blueprints.

01 · The Problem

Strategic analysis that applies rigorous frameworks — First Principles deconstruction, Jobs-to-be-Done mapping, hypothesis evaluation, and innovation portfolio development — typically requires experienced consultants working across multiple sessions. Without a structured process, teams either skip frameworks entirely (producing shallow analysis) or apply them inconsistently (producing outputs that can't be compared or built upon). The result is strategic work that is slow, expensive, and dependent on individual practitioner expertise rather than repeatable methodology.

02 · What the AI Does

Accepts a problem statement plus optional context fields (subject matter, specific challenges, hypotheses, target audience, business context, constraints, and document attachments) and runs them through six sequential AI stages, each producing structured text that feeds the next:

Stage 1 (Claude Haiku): Ingests and catalogs all inputs; extracts explicit and implicit problems, assumptions, and a hypothesis inventory

Stage 2A (Claude Haiku): Autonomous problem restatement — generates and self-answers 3–5 clarifying questions, producing an evolved problem context without requiring user interaction

Stage 2B (GPT-5 Mini): Infers and validates 5–10 testable, falsifiable hunches from the problem analysis, categorized by type (Market Sizing, Technical Feasibility, User Behavior, Economic Viability, Competitive Landscape)

Stage 3 (GPT-5 Mini): First Principles deconstruction — builds a hierarchical axiom tree across Scientific, Economic, Human/Psychological, and Systemic domains

Stage 4 (GPT-5 Mini): Full JTBD analysis — defines core functional, emotional, and social jobs; builds a complete 9-stage Job Map with desired outcomes, struggles, and unmet needs per stage

Stage 5 (GPT-5 Mini): Hypothesis evaluation scorecard — rates each original hypothesis as Validated, Plausible, or Contradicted against the strategic model, with evidence requirements

Stage 6 (GPT-5 Mini): Innovation portfolio — generates ideas across all 10 Doblin types tied to Job Map struggles, applies creativity triggers, and produces a strategic validation plan

Final Assembly (GPT-5 Mini): Synthesizes all six stages into a single executive-ready Apex Strategy Blueprint

Each stage's system prompt embeds the full reference frameworks (JTBD principles, First Principles methodology, Doblin's 10 Types, 17 Journey Patterns, Innovation Matrix) as inline context rather than relying on model pre-training alone.
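The hand-off pattern above can be sketched in a few lines. This is an illustrative assumption, not the project's actual code: the real pipeline runs on Vellum, and `Stage`, `call_model`, and `run_pipeline` are hypothetical names standing in for that platform's machinery.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    model: str          # e.g. "claude-haiku" or "gpt-5-mini"
    system_prompt: str  # embeds the full reference frameworks inline

def call_model(model: str, system_prompt: str, user_input: str) -> str:
    # Placeholder for the real LLM invocation.
    return f"[{model}:{len(user_input)} chars in] synthesized output"

def run_pipeline(stages: list[Stage], problem_statement: str) -> str:
    text = problem_statement
    for stage in stages:
        # Each stage's primary input is the PRIOR stage's full output,
        # not the original problem statement.
        text = call_model(stage.model, stage.system_prompt, text)
    return text
```

The loop makes the key property explicit: only Stage 1 ever sees the raw inputs; everything downstream works from synthesized text.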

03 · Design Decisions

01 · Choice

Each stage receives the prior stage's full text output as its primary input, not the original problem statement

Why

Ensures each stage builds on synthesized understanding rather than re-interpreting raw inputs independently, preventing analytical drift across stages.

Constraint

Forces coherence — Stage 5 can only evaluate hypotheses against a JTBD model that was itself built on a First Principles foundation

02 · Choice

Claude Haiku for the early ingestion stages; GPT-5 Mini for the analytical, synthesis, and final-assembly stages

Why

Haiku is faster and cheaper for structured extraction tasks (Stage 1, 2A) where reasoning depth matters less than faithful cataloging. GPT-5 Mini's reasoning capability is reserved for stages requiring framework application and synthesis.

Constraint

Cost and latency are managed by not using the most capable model uniformly
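The routing tradeoff can be made concrete with a back-of-envelope cost model. The per-token prices and stage names below are placeholders chosen for illustration, not figures from the project:

```python
# Placeholder prices per 1K input tokens (assumed, not real rates).
PRICE_PER_1K_INPUT_TOKENS = {"claude-haiku": 0.001, "gpt-5-mini": 0.005}

# Hypothetical routing table mirroring the Haiku-for-ingestion,
# GPT-5-Mini-for-analysis split described above.
STAGE_ROUTING = {
    "stage_1_ingest": "claude-haiku",
    "stage_2a_restate": "claude-haiku",
    "stage_2b_hunches": "gpt-5-mini",
    "stage_3_first_principles": "gpt-5-mini",
    "stage_4_jtbd": "gpt-5-mini",
    "stage_5_scorecard": "gpt-5-mini",
    "stage_6_portfolio": "gpt-5-mini",
    "final_assembly": "gpt-5-mini",
}

def pipeline_cost(input_tokens: dict[str, int]) -> float:
    """Sum input-token cost across stages under the routing table."""
    return sum(
        tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS[STAGE_ROUTING[stage]]
        for stage, tokens in input_tokens.items()
    )
```

Even with made-up prices, the shape of the saving is visible: two of eight stages run on the cheaper model, so any uniform-top-model baseline is strictly more expensive.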

03 · Choice

Full text of JTBD principles, First Principles methodology, Doblin's 10 Types, 17 Journey Patterns, and Innovation Matrix are hardcoded into each relevant stage's system prompt

Why

Ensures the model applies the specific version of each framework the creator has validated, rather than relying on model pre-training, which may have absorbed different or conflicting versions of these frameworks.

Constraint

Increases token consumption significantly per stage; trades cost for methodological fidelity
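A minimal sketch of the embedding pattern, assuming abbreviated stand-ins for the framework texts (the real prompts embed the full reference documents, and `build_system_prompt` is a hypothetical helper, not the project's code):

```python
# Abbreviated placeholders; the actual prompts carry the full texts.
FRAMEWORKS = {
    "jtbd": "JTBD principles: customers 'hire' products to make progress...",
    "doblin": "Doblin's 10 Types: Profit Model, Network, Structure, ...",
}

def build_system_prompt(task_instructions: str, framework_keys: list[str]) -> str:
    """Prepend task instructions, then inline each validated framework."""
    sections = "\n\n".join(
        f"### {key.upper()}\n{FRAMEWORKS[key]}" for key in framework_keys
    )
    return (
        f"{task_instructions}\n\n"
        "Apply ONLY the framework definitions below, verbatim, "
        "in preference to any pre-trained version:\n\n"
        f"{sections}"
    )
```

The token cost of this pattern is exactly the constraint noted above: every stage pays for its framework text on every run, in exchange for applying one known version of the methodology.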

04 · Choice

The problem analysis stage generates clarifying questions AND answers them itself; it is explicitly instructed never to ask the user for confirmation

Why

Removes the human-in-the-loop bottleneck that would otherwise require a back-and-forth session before analysis can proceed. The tradeoff is that inferred answers may not match what the user would have said.

Constraint

Quality of downstream analysis depends on the quality of autonomous inference; weak problem statements produce weaker inferred context
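The instruction pattern can be paraphrased as follows. This is not the actual Stage 2A prompt, only a sketch of its shape, with a hypothetical guard function added for illustration:

```python
# Paraphrase of the autonomous-clarification pattern: the model asks AND
# answers its own questions, so the pipeline never blocks on the user.
STAGE_2A_INSTRUCTIONS = """\
1. Generate 3-5 clarifying questions a consultant would ask about this problem.
2. Answer each question yourself, using the Stage 1 catalog and reasonable
   inference from context. NEVER pause to ask the user for confirmation.
3. Rewrite the problem statement as an evolved problem context that
   incorporates your inferred answers.
"""

def is_autonomous(instructions: str) -> bool:
    """Cheap guard: the prompt must forbid waiting on user input."""
    return "NEVER" in instructions and "ask the user" in instructions
```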

05 · Choice

Exactly 1 Market Sizing hunch required; specific category distribution enforced (1–2 Technical Feasibility, 2–3 User Behavior, 1–2 Economic Viability, 0–2 Competitive Landscape); final count must be 5–10; internal self-validation pass required before output

Why

Prevents the common failure mode of LLMs generating vague, non-falsifiable, or redundant hypotheses that look plausible but can't be tested. The rubric PDFs (hunch-inference-rubric.pdf, hunch-scoring-rubric.pdf) are attached as documents to ground evaluation.

Constraint

Rigid structure may occasionally force the model to generate a category-compliant hunch that is weaker than an unconstrained alternative
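The enforced distribution is mechanical enough to check in code. The bounds below are taken directly from the constraint above; the validator itself is an illustrative sketch, not the pipeline's internal self-validation pass:

```python
# (min, max) allowed hunches per category, from the stated constraint.
CATEGORY_BOUNDS = {
    "Market Sizing": (1, 1),           # exactly 1 required
    "Technical Feasibility": (1, 2),
    "User Behavior": (2, 3),
    "Economic Viability": (1, 2),
    "Competitive Landscape": (0, 2),
}

def validate_hunches(hunches: list[dict]) -> list[str]:
    """Return a list of violations; an empty list means the set passes."""
    errors = []
    total = len(hunches)
    if not 5 <= total <= 10:
        errors.append(f"total count {total} outside 5-10")
    for category, (lo, hi) in CATEGORY_BOUNDS.items():
        n = sum(1 for h in hunches if h["category"] == category)
        if not lo <= n <= hi:
            errors.append(f"{category}: {n} outside {lo}-{hi}")
    return errors
```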

06 · Choice

Two PDF rubrics are attached directly to Stage 2B's system prompt as DocumentPromptBlocks

Why

Externalizes the quality standard for hypothesis evaluation into a document the creator controls and can update, rather than encoding it entirely in prompt text.

Constraint

The PDFs are served via signed URLs with expiry dates — a maintenance risk if the workflow is long-lived

07 · Choice

The artifacts input field accepts a VellumDocument (PDF, etc.) that is passed into Stage 1's prompt

Why

Allows users to ground the analysis in existing research, reports, or prior work rather than relying solely on text descriptions.

Constraint

Document handling is only wired into Stage 1; downstream stages receive the cataloged text summary, not the raw document

08 · Choice

The test dataset includes scenarios ranging from a six-word problem statement ("Internal Audit is costly and inefficient") to a 1,000+ word fully-specified platform design brief (an internal client-facing platform)

Why

Tests the pipeline's robustness across the full spectrum of input quality — from minimal to over-specified.

Constraint

Scenarios include real organizational context (firm-specific), meaning the sandbox is not safe for external demonstration without sanitization

05 · Key Insight

Embedding the full text of validated frameworks directly into each stage's prompt — rather than relying on model pre-training — is the difference between a pipeline that applies your methodology and one that applies the internet's average understanding of that methodology.