Data Extraction & Classification · Product

JTBD User Insights Synthesizer


Customer interview transcripts → structured JTBD-style insights drawn only from the designated customer’s statements → clearer problem and outcome signals for founders.

01 · The Problem

Customer interviews often produce lots of raw conversation but little reliable synthesis of what the customer is actually trying to achieve, what is blocking them, and what a product would need to deliver. Without a structured pass over the transcript, teams can over-index on anecdotes, mix together multiple speakers, or jump from quotes to solutions without clearly defining the job, pain points, desired outcomes, and buying criteria.

02 · What the AI Does

This is a custom GPT built on OpenAI’s chat model stack with instruction-level specialization rather than a separate software application or autonomous workflow. It analyzes interview transcripts, isolates the user-designated customer speaker, and generates a structured report that summarizes the customer’s main job to be done, extracts specific problems, identifies root causes, articulates ideal outcomes, and lists product or service requirements the customer would consider “hiring.” It is explicitly configured around Jobs To Be Done, Outcome Driven Innovation, and Ash Maurya’s Running Lean methods, so its output is shaped by those frameworks rather than generic summarization. It does not claim access to usage analytics, business outcomes, or external context beyond its instructions and the transcript the user provides.
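Because the system is instruction-level specialization rather than a separate application, its core is essentially a fixed instruction block plus the user's transcript. The sketch below shows how that shape would look if reproduced through a chat-completions API instead of the ChatGPT custom-GPT interface; the instruction text is illustrative, not the actual configuration, and the commented-out API call names an assumed client and model.

```python
# Illustrative instruction block mirroring the described configuration.
# The real custom GPT's instructions are not reproduced here; this is a sketch
# of the "fixed instructions + user transcript" structure the write-up describes.

JTBD_INSTRUCTIONS = """\
You are a customer interview analyst grounded in Jobs To Be Done,
Outcome Driven Innovation, and Running Lean.
Analyze only the statements made by the speaker the user designates
as "Speaker {#}". From those statements alone, produce:
1. The customer's main job to be done
2. An exhaustive list of specific problems
3. A root cause for each problem
4. Ideal outcomes
5. Product or service requirements the customer would "hire"
Do not invent metrics, scenarios, or context beyond the transcript.
"""

def build_messages(transcript: str, speaker_label: str) -> list[dict]:
    """Assemble the chat payload: fixed instructions plus user-supplied data."""
    return [
        {"role": "system", "content": JTBD_INSTRUCTIONS},
        {"role": "user", "content": f"The customer is {speaker_label}.\n\n{transcript}"},
    ]

# With an API client, this payload would be passed to a chat-completions call,
# e.g. client.chat.completions.create(model=..., messages=build_messages(t, "Speaker 2"))
```

The point of the structure is that everything framework-specific lives in the system instruction, so the per-interview input is nothing more than a speaker designation and a transcript.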

03 · Design Decisions

01 · Choice

Narrowed the GPT’s scope to customer interview synthesis rather than general research, ideation, or strategy support.

Why

A constrained scope increases consistency and makes the output easier to trust and compare across interviews.

Constraint

Prevents the system from drifting into broad brainstorming or unsupported business advice.

02 · Choice

Embedded specific frameworks: Jobs To Be Done, Outcome Driven Innovation, and Running Lean.

Why

These frameworks give the analysis a repeatable lens for interpreting customer language in terms of jobs, pains, outcomes, and solution criteria rather than surface-level notes.

Constraint

Enforces a structured output standard and reduces vague or purely impressionistic summaries.

03 · Choice

Required the user to identify the customer as “Speaker {#}” and instructed the GPT to draw insights only from that speaker.

Why

In interview transcripts, multiple participants can easily blur together; isolating one speaker reduces attribution errors and keeps the analysis centered on the customer rather than the interviewer or other participants.

Constraint

Prevents contamination from non-customer voices and limits unsupported inference.
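Speaker isolation is the one step here that is mechanically checkable before the model ever sees the transcript. A minimal pre-filter might look like the following, assuming the common `Speaker N: text` transcript convention; real transcripts may need a different parser, and this helper is illustrative rather than part of the actual GPT.

```python
import re

def isolate_speaker(transcript: str, speaker_label: str) -> list[str]:
    """Return only the utterances attributed to the designated speaker.

    Assumes each utterance is a line prefixed 'Speaker N: ...'. Lines from
    other speakers (interviewer, observers) are dropped, which mirrors the
    GPT's instruction to draw insights only from the designated customer.
    """
    pattern = re.compile(rf"^{re.escape(speaker_label)}\s*:\s*(.*)$")
    utterances = []
    for line in transcript.splitlines():
        match = pattern.match(line.strip())
        if match:
            utterances.append(match.group(1))
    return utterances

transcript = (
    "Speaker 1: How do you handle reporting?\n"
    "Speaker 2: We export CSVs by hand every Friday.\n"
    "Speaker 1: Why?\n"
    "Speaker 2: The dashboard can't filter by region."
)
# isolate_speaker(transcript, "Speaker 2") keeps only the two Speaker 2 lines.
```

Doing this deterministically, rather than trusting the model to ignore other voices, is the cheapest guard against the attribution errors described above.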

04 · Choice

Mandated an exact output format with defined sections.

Why

A fixed structure makes the output predictable, easier to review, and more useful as an analysis artifact across many interviews.

Constraint

Enforces completeness across job summary, problems, root causes, desired outcomes, and buying requirements.
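A mandated format is also easy to verify after generation. The sketch below checks a report for the five mandated sections; the exact heading strings are assumptions for illustration, since the write-up names the sections conceptually (job summary, problems, root causes, desired outcomes, buying requirements) but not their literal headings.

```python
# Hypothetical heading strings; the actual GPT's template wording may differ.
REQUIRED_SECTIONS = [
    "Job Summary",
    "Problems",
    "Root Causes",
    "Desired Outcomes",
    "Buying Requirements",
]

def missing_sections(report: str) -> list[str]:
    """Return any mandated section headings absent from a generated report."""
    return [section for section in REQUIRED_SECTIONS if section not in report]
```

A check like this turns "the output should be complete" from an instruction the model is asked to follow into a property a reviewer can test across many interviews.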

05 · Choice

Instructed the GPT to produce an exhaustive list of specific problems and include root causes for each.

Why

The design favors diagnostic depth over lightweight summarization, which is more aligned with validation and product discovery work.

Constraint

Pushes the model to move beyond surface complaints and identify causality, though only within what can be reasonably inferred from the transcript.

06 · Choice

Framed products and services as things customers might “hire” to get a job done.

Why

This reflects JTBD language and steers outputs toward purchase criteria and outcome expectations rather than feature wish lists.

Constraint

Keeps the analysis focused on functional progress and selection criteria, not abstract preferences alone.

07 · Choice

Explicitly told the GPT to report what it actually is and does, and not to invent outcomes, scenarios, metrics, or implementation scale.

Why

The creator prioritized credibility over portfolio inflation.

Constraint

Prevents fabricated ROI claims, made-up client contexts, or overstated system complexity.

08 · Choice

Built this as a customized GPT with instruction-level behavior rather than a multi-tool automated pipeline.

Why

For a task bounded to a single transcript at a time, instruction-level specialization delivers the needed behavior while staying cheap to build and easy to revise; a multi-tool pipeline would add operational overhead without improving the core synthesis.

Constraint

Keeps the system lightweight and flexible, but means quality depends heavily on transcript quality and prompt compliance rather than external validation steps.

09 · Choice

Focused the system on extraction and structuring, not independent decision-making or product prioritization.

Why

Extraction and structuring are where the model can stay grounded in transcript evidence; prioritization and strategy require business context that a single interview cannot supply.

Constraint

The output supports downstream judgment but does not replace human interpretation, segmentation, or strategic choice.

04 · Tradeoffs & Limits

This GPT is only as good as the transcript and speaker designation it receives. If the wrong speaker is identified, if the transcript is ambiguous, or if the customer’s statements are sparse, contradictory, or low-detail, the analysis will be weak or over-inferred. It can structure and synthesize what the customer said, but it cannot verify factual claims, observe nonverbal cues, recover omitted context, or distinguish between a one-off complaint and a widespread market pattern from a single interview alone.

It is also intentionally narrow. It should not be used as a substitute for quantitative validation, segmentation analysis, market sizing, pricing decisions, or product strategy without additional evidence. Because it is grounded in the designated speaker’s language, it may underrepresent broader team dynamics, interviewer influence, or contextual factors outside the transcript. AI was intentionally not used here for autonomous action, CRM updates, workflow routing, or analytics claims, which keeps the tool honest and focused but limits end-to-end automation.

05 · Key Insight

Useful AI systems often come from tightly constraining interpretation around one evidence source and one decision frame, not from making the model do everything.