Workflow Automation · Product

Hyper Innovation Workflow

Interview-to-Innovation Pipeline: Transcript → Prioritized Solutions

Raw interview transcripts pile up unanalyzed → this workflow extracts the job-to-be-done, maps outcomes, scores market urgency, generates 10 innovation concepts, and ranks the top 3 by competitive viability.

01 The Problem

Organizations conducting customer discovery interviews generate rich qualitative data that rarely gets systematically converted into actionable product or service innovation. The gap between "we talked to clients" and "here are the three highest-priority innovation opportunities with market validation" typically requires weeks of analyst work, multiple frameworks applied manually, and significant expertise in jobs-to-be-done methodology, competitive analysis, and business model design. Without a structured pipeline, insights decay, frameworks get applied inconsistently, and the connection between a specific customer pain and a viable business model is never made explicit.

02 What the AI Does

The system runs a raw interview transcript through a sequential, multi-stage analysis pipeline:

- Extract (PMIdentifyProblemJobExecutorLabV1): identifies the job executor, core job-to-be-done, problem statement, desired outcomes, current solution gaps, and emotional/related jobs from the transcript using a deployed prompt.
- Decompose in parallel: simultaneously extracts the job executor, core job, and problem statement as discrete structured string outputs for downstream use.
- Generate persona (PMPersonaGeneratorLabV1): builds a detailed Forrester-style B2B persona (functional, emotive, behavioral, and decisioning attributes) from the job executor.
- Map the job (PMODIJobMapperLabV1): expands the core job into a full 8-step ODI job map with 10 outcome statements per step (80 total), each formatted as "Minimize the time/likelihood that...".
- Prioritize outcomes (PMJobMapOSPrioritizationResearchLabV1): uses deep research (a Perplexity integration) to score each outcome statement on importance (1–5) and urgency (1–5), keeping only those with a cumulative score ≥ 8, with root-cause citations.
- Generate innovations (PMInnovationSolutionGeneratorLabV1): produces 10 innovation concepts mapped to the highest-priority outcomes, each typed to one of 7 business model archetypes (PSS, SaaP, PaaS, MS, SS, PlaaS, eSaaS), with value proposition, feasibility, and viability sections.
- Evaluate each innovation (MAPInnovationIdeaEvalution, a parallel Map Node): for each of the 10 innovations, simultaneously runs a UVP Generator analysis (market-needs fit) and a Blue Ocean analysis (competitive-landscape scoring via web research), then synthesizes both into a combined priority score.
- Rank and filter (PrioritizationAnalysis): reviews all 10 scored innovations and surfaces the top 3 by cumulative score, presenting each with its full analysis in a standardized format.

Models used: deployed prompt nodes (PromptDeploymentNode) handle most stages; these reference versioned, externally managed prompts. The final prioritization step uses gpt-4o-mini inline. The Blue Ocean and UVP analysis nodes use Perplexity-backed research via deployed prompts.

Configuration differentiators: the workflow is not a single prompt. It is a 9-stage directed acyclic graph with parallel execution branches, a Map Node running up to 10 concurrent subworkflow evaluations, and mock-data infrastructure for testing long-running stages without re-running expensive upstream nodes.
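The staged hand-offs can be sketched as a minimal pipeline. Every function body below is an illustrative placeholder, not the deployed prompt it stands in for, and the scores are hard-coded:

```python
def extract_job(transcript: str) -> dict:
    # Stand-in for PMIdentifyProblemJobExecutorLabV1; the real stage is an LLM call.
    return {"job_executor": "ops manager",
            "core_job": "resolve support tickets",
            "problem_statement": "transcripts pile up unanalyzed"}

def map_job(core_job: str) -> list[list[str]]:
    # Stand-in for PMODIJobMapperLabV1: 8 steps x 10 outcome statements.
    return [[f"Minimize the time that step {s}, outcome {o} takes"
             for o in range(10)] for s in range(8)]

def prioritize(outcomes: list[str]) -> list[dict]:
    # Stand-in for the Perplexity-backed scorer: keep importance + urgency >= 8.
    scored = [{"outcome": o, "importance": 4, "urgency": 4} for o in outcomes]
    return [s for s in scored if s["importance"] + s["urgency"] >= 8]

def run_pipeline(transcript: str) -> list[dict]:
    # Each stage's output is the required input of the next stage.
    job = extract_job(transcript)
    steps = map_job(job["core_job"])
    flat = [outcome for step in steps for outcome in step]
    return prioritize(flat)

prioritized = run_pipeline("raw interview text")
```

The point of the sketch is the data dependency: `prioritize` can only run on the flattened 80-outcome list that `map_job` produces, which in turn needs the core job that `extract_job` pulled from the transcript.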

03 Design Decisions

01 · Choice

The pipeline enforces the specific sequence: job executor → persona → job map → outcome prioritization → innovation generation → evaluation → ranking. Each stage's output is the required input for the next.

Why

Jobs-to-be-done methodology (specifically ODI/Outcome-Driven Innovation) requires this sequence — you cannot generate valid outcome statements without first defining the job, and you cannot prioritize without the full map. Skipping steps produces generic outputs. [Creator: add rationale for choosing ODI specifically over other frameworks]

Constraint

The job map must produce exactly 8 steps × 10 outcomes before prioritization runs. The innovation generator receives the full prioritized outcomes list, not a summary.
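A guard for this shape constraint might look like the following hypothetical helper (the actual workflow enforces the shape through the deployed prompt's output contract):

```python
def flatten_job_map(job_map: list[list[str]]) -> list[str]:
    """Flatten an ODI job map, enforcing exactly 8 steps x 10 outcomes."""
    if len(job_map) != 8 or any(len(step) != 10 for step in job_map):
        raise ValueError("job map must be exactly 8 steps x 10 outcomes")
    # Prioritization receives the full flattened list, never a summary.
    return [outcome for step in job_map for outcome in step]

outcomes = flatten_job_map(
    [[f"step {s} outcome {o}" for o in range(10)] for s in range(8)])
```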

02 · Choice

After the first prompt extracts the full JSON (job executor, core job, problem statement together), three separate templating nodes extract each field independently before the merge node.

Why

Downstream nodes need these as discrete string inputs, not nested JSON. The parallel extraction avoids sequential bottlenecks and allows the persona generator and job mapper to start simultaneously. [Creator: add rationale]

Constraint

The merge node uses AWAIT_ALL, ensuring all three extractions complete before the job mapper proceeds.
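The AWAIT_ALL behavior is equivalent to gathering all three branches before proceeding, sketched here with asyncio (the real engine expresses this declaratively; the payload values are invented):

```python
import asyncio

async def extract_field(payload: dict, key: str) -> str:
    # Stand-in for one templating node pulling a single field from the JSON.
    await asyncio.sleep(0)  # the three branches run independently
    return payload[key]

async def merge_await_all(payload: dict) -> list[str]:
    # AWAIT_ALL: the job mapper cannot start until every branch has finished.
    return await asyncio.gather(
        extract_field(payload, "job_executor"),
        extract_field(payload, "core_job"),
        extract_field(payload, "problem_statement"),
    )

fields = asyncio.run(merge_await_all({
    "job_executor": "ops manager",
    "core_job": "resolve support tickets",
    "problem_statement": "tickets pile up unanalyzed",
}))
```

`asyncio.gather` preserves argument order, so downstream nodes can rely on a stable (executor, job, problem) ordering.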

03 · Choice

All 10 innovations are evaluated through a Map Node that fans each one out to its own subworkflow (max_concurrency=1 in the current config, though the architecture supports fully concurrent execution).

Why

Each innovation requires an independent UVP analysis and Blue Ocean analysis; the two do not depend on each other, so parallel execution would reduce total runtime significantly. The max_concurrency=1 setting suggests rate limiting or cost control was a consideration. [Creator: add rationale for concurrency setting]

Constraint

Each subworkflow's output must be named result, and the Map Node's output is an array of priority analyses, one per innovation.
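A Map Node with a concurrency cap can be sketched with a semaphore. This is a hypothetical shape, and the scores are placeholders for the research-backed UVP and Blue Ocean results:

```python
import asyncio

async def evaluate_innovation(idea: str, sem: asyncio.Semaphore) -> dict:
    # Each subworkflow runs its UVP and Blue Ocean analyses independently.
    async with sem:
        await asyncio.sleep(0)
        uvp, blue_ocean = 4, 3  # placeholder scores
        # The subworkflow's output must be named "result".
        return {"result": {"idea": idea, "uvp": uvp,
                           "blue_ocean": blue_ocean,
                           "cumulative": uvp + blue_ocean}}

async def map_node(ideas: list[str], max_concurrency: int = 1) -> list[dict]:
    # max_concurrency=1 serializes the runs; raising it fans them out.
    sem = asyncio.Semaphore(max_concurrency)
    return await asyncio.gather(*(evaluate_innovation(i, sem) for i in ideas))

analyses = asyncio.run(map_node([f"innovation-{n}" for n in range(10)]))
```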

04 · Choice

The sandbox includes disabled mocks for PMInnovationSolutionGeneratorLabV1, PMODIJobMapperLabV1, PMJobMapOSPrioritizationResearchLabV1, and MAPInnovationIdeaEvalution — all the computationally expensive stages.

Why

The pipeline is expensive in both time and cost to run end-to-end. Mocks allow testing of downstream logic (prioritization, ranking) without re-running upstream research stages. The mocks are disabled by default, so they must be explicitly enabled for testing.

Constraint

Mock data is scenario-specific (keyed to specific interview subjects), preventing accidental cross-contamination of test data.
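The mock infrastructure can be pictured as a scenario-keyed registry. The shape and the scenario key below are invented for illustration, not the workflow's actual serialization format:

```python
# Mocks are keyed by (node, scenario) so one interview's test data can
# never leak into another interview's run; they are disabled by default.
MOCKS = {
    ("PMODIJobMapperLabV1", "interview-alpha"): {
        "enabled": False,
        "output": {"steps": 8, "outcomes_per_step": 10},
    },
}

def resolve_node(node: str, scenario: str, run_live):
    mock = MOCKS.get((node, scenario))
    if mock and mock["enabled"]:
        return mock["output"]
    return run_live()  # mock missing or disabled: run the real, costly stage

out = resolve_node("PMODIJobMapperLabV1", "interview-alpha",
                   run_live=lambda: {"steps": 8, "live": True})
```

With the mock disabled (the default), the live stage runs; flipping `enabled` to True would return the canned output instead.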

05 · Choice

The core analytical stages (persona generation, job mapping, outcome prioritization, innovation generation, UVP analysis, Blue Ocean analysis) use PromptDeploymentNode rather than inline prompts.

Why

These prompts are versioned and managed externally, allowing prompt iteration without workflow code changes. This separates prompt engineering from workflow engineering. [Creator: add rationale for which stages were kept inline vs. deployed]

Constraint

Deployed prompts are pinned to the "LATEST" release tag, so prompt updates propagate to the workflow automatically.
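The trade-off of the "LATEST" pin can be shown with a minimal reference type. This dataclass and the pinned tag name are hypothetical; the real PromptDeploymentNode configuration lives in the workflow serialization:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptRef:
    """Reference to an externally managed, versioned prompt deployment."""
    deployment: str
    release_tag: str = "LATEST"

# "LATEST": new prompt releases propagate with no workflow change.
persona = PromptRef("PMPersonaGeneratorLabV1")
# Pinning a specific tag would freeze behavior instead (tag invented here).
pinned = PromptRef("PMPersonaGeneratorLabV1", release_tag="v12")
```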

06 · Choice

The innovation generator is constrained to produce innovations typed to exactly one of 7 archetypes: PSS, SaaP, PaaS, MS, SS, PlaaS, eSaaS.

Why

This forces the output to be commercially actionable rather than a vague "AI-powered solution" description. Each archetype implies a specific revenue model, cost structure, and go-to-market motion. [Creator: add rationale for this specific taxonomy]

Constraint

The final prioritization prompt explicitly references these types when presenting the top 3, ensuring the business model framing is preserved through to the deliverable.
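Enforcing the archetype taxonomy downstream could be as simple as a membership check. This is a sketch; in the workflow itself the constraint is carried by the prompts:

```python
# The 7 allowed business model archetypes from the generator's constraint.
ARCHETYPES = {"PSS", "SaaP", "PaaS", "MS", "SS", "PlaaS", "eSaaS"}

def check_archetype(innovation: dict) -> dict:
    bm_type = innovation["business_model_type"]
    if bm_type not in ARCHETYPES:
        raise ValueError(f"unknown business model archetype: {bm_type}")
    return innovation

idea = check_archetype({"name": "Managed triage service",
                        "business_model_type": "MS"})
```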

07 · Choice

The final prioritization prompt enforces a rigid template: Innovation Name, Business Model Type, Description, Desirability/Feasibility/Viability, UVP Score, Blue Ocean Score, Cumulative Score, Market Needs Analysis, Blue Ocean Analysis — separated by dashes.

Why

A consistent format enables the output to be used directly in client-facing materials, or fed into the downstream tools (BMC creator, prototype prompt, synth panel) present in the unused_graphs section.

Constraint

The prompt explicitly states "EXACT structure" — deviation from format is treated as an error condition.
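Because deviation from the template is an error condition, a downstream consumer could validate the output mechanically. The section names come from the template above; the checker itself is hypothetical:

```python
# The template's section headers, in the order the prompt requires them.
REQUIRED_SECTIONS = [
    "Innovation Name", "Business Model Type", "Description",
    "Desirability/Feasibility/Viability", "UVP Score", "Blue Ocean Score",
    "Cumulative Score", "Market Needs Analysis", "Blue Ocean Analysis",
]

def follows_template(text: str) -> bool:
    """True only if every section header appears, and in order."""
    pos = -1
    for section in REQUIRED_SECTIONS:
        found = text.find(section, pos + 1)
        if found < 0 or found <= pos:
            return False
        pos = found
    return True

sample = "\n-----\n".join(f"{s}: ..." for s in REQUIRED_SECTIONS)
```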

08 · Choice

The workflow includes unused_graphs containing a Synth Panel node, Prototype Prompt node, Competitive Analysis nodes, and BMC Creator node — none connected to the main graph.

Why

These represent the next stages of the innovation pipeline (synthetic customer panels, UI prototyping, business model canvas creation) that are built but not yet activated. They exist as available capabilities that can be connected when needed. [Creator: add rationale for keeping these in unused vs. separate workflow]

Constraint

These nodes are serialized with the workflow but never executed in the main flow.

05 Key Insight

The most valuable design decision in an AI implementation is often sequence enforcement: forcing the AI to complete each analytical stage before the next begins prevents the model from collapsing a multi-step reasoning process into a single plausible-sounding but shallow output.