Voice / Multimodal · Operations

Conversor

AI Conversation Editor

Drag-and-drop editor for non-technical conversation designers to build structured voice AI flows with quality checkpoints, then export to production voice AI runtimes.

01 · The Problem

Voice AI agents are typically built by engineers who write code or configuration files. Non-technical conversation designers (HR, recruiting, operations leads) can't directly author the flows — they're dependent on engineers to interpret their intent and implement it. The handoff between conversation design and technical execution is where nuance is lost and quality suffers.

02 · What the AI Does

Conversor is a visual timeline editor where users drag phases, steps, and interventions onto a canvas to build a conversation flow. The editor enforces the three-layer model (structure, behavior, assessment) and includes a Progress Gate — a quality checkpoint at the end of each phase that evaluates whether the phase achieved its purpose, not just whether questions were asked. It exports to Pipecat Flows JSON, the Parlant Python SDK, and generic LLM prompt formats.

Built on: TypeScript + React + Next.js (App Router), Tailwind CSS, dnd-kit (nested drag-and-drop), Zustand (state management), Supabase (planned for persistence).
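The three-layer model can be sketched as a small set of TypeScript types. Every name and field below is an illustrative assumption, not Conversor's actual schema:

```typescript
// Hypothetical sketch of the three-layer model; type and field
// names are assumptions, not Conversor's real data model.

// Structure + behavior: what the agent covers, and how.
interface Step {
  id: string;
  prompt: string;     // structure: what the agent should elicit
  technique?: string; // behavior: how to elicit it
}

// Assessment: the Progress Gate that closes each phase.
interface ProgressGate {
  purpose: string;    // what the phase is meant to achieve
  evidence: string[]; // signals that would indicate it worked
}

interface Phase {
  id: string;
  title: string;
  steps: Step[];
  gate: ProgressGate;
}

interface Flow {
  name: string;
  phases: Phase[];
}

// A minimal example flow in this hypothetical shape.
const screening: Flow = {
  name: "Candidate screening",
  phases: [
    {
      id: "warmup",
      title: "Warm-up",
      steps: [
        { id: "s1", prompt: "Ask about current role", technique: "open_exploration" },
      ],
      gate: {
        purpose: "Candidate is at ease and talking freely",
        evidence: ["candidate volunteers detail beyond the question asked"],
      },
    },
  ],
};
```

The point of the sketch is that assessment lives alongside structure: a phase is not complete without a gate, which is what distinguishes this model from a plain list of questions.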

03 · Design Decisions

01 · Choice

Visual timeline as the primary UI, not a flowchart/node graph

Why

A vertical timeline maps more naturally to how a conversation unfolds over time. Flowchart/node-graph editors create a spatial representation that obscures the temporal dimension of a voice interaction. The timeline spine makes phase progression visually obvious.

Constraint

Complex multi-branch conversations become harder to represent on a linear timeline. The assumption is that most voice agent flows are primarily linear with decision branches, not deeply parallel conversation graphs.

02 · Choice

Progress Gate as a first-class concept, not an afterthought

Why

Most conversation design tools treat a phase as "done" when all steps are asked. Conversor treats a phase as "done" when it achieves its purpose. The Progress Gate is the quality checkpoint — it defines what evidence would indicate the phase worked, and the agent is evaluated against it.

Constraint

Progress Gates require the designer to articulate what "success" looks like for a phase — which is harder than just listing steps. Some designers may resist the additional cognitive load.
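A gate check might look like the toy function below. In practice, judging whether evidence was observed in a transcript would likely require an LLM judge; this keyword-set stand-in, with invented names, only illustrates the pass/fail shape:

```typescript
// Toy sketch of a Progress Gate check. Real evaluation would
// likely use an LLM judge; names here are assumptions.
interface GateResult {
  passed: boolean;
  missing: string[]; // evidence criteria not yet observed
}

function checkGate(evidence: string[], observed: Set<string>): GateResult {
  const missing = evidence.filter((e) => !observed.has(e));
  return { passed: missing.length === 0, missing };
}

const result = checkGate(
  ["salary expectations stated", "notice period confirmed"],
  new Set(["salary expectations stated"])
);
// result.passed === false; result.missing === ["notice period confirmed"]
```

The useful property is that a failed gate names *which* evidence is missing, so the agent (or designer) knows what the phase still owes, rather than getting a bare pass/fail.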

03 · Choice

Intervention technique vocabulary (extensible)

Why

Designers shouldn't have to craft their own techniques from scratch. A predefined vocabulary (direct_question, open_exploration, reflective_listening, scaling, Socratic, motivational, challenge, reframe) gives designers a shared language and prevents inconsistency. The list is explicitly extensible.

Constraint

The technique vocabulary must stay synchronized with what the target runtime actually supports. A technique that exists in Conversor but not in Pipecat requires a mapping layer at export.
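The vocabulary lends itself to a string-literal union, and the mapping layer the constraint describes can be sketched as a fallback: techniques the runtime supports natively pass through, everything else degrades to a prompt instruction. The snake_case identifiers and the "native" set below are assumptions; Pipecat's real capability model would drive the actual mapping:

```typescript
// The predefined vocabulary as a union type (identifiers
// normalized to snake_case here as an assumption).
type Technique =
  | "direct_question"
  | "open_exploration"
  | "reflective_listening"
  | "scaling"
  | "socratic"
  | "motivational"
  | "challenge"
  | "reframe";

// Hypothetical set of techniques the target runtime supports
// natively; not Pipecat's actual capability model.
const runtimeNative = new Set<Technique>(["direct_question", "open_exploration"]);

// Export-time mapping: pass through native techniques, fall back
// to a generic prompt hint for the rest.
function mapTechnique(t: Technique): { native: boolean; hint: string } {
  return runtimeNative.has(t)
    ? { native: true, hint: t }
    : { native: false, hint: `Use a ${t.replace("_", " ")} style when asking.` };
}
```

A union type also means adding a technique is a compile-time event: every exporter that switches on `Technique` is forced to handle the new value.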

04 · Choice

Inline editing everywhere, no modals

Why

Modal-based editing breaks the visual continuity of the timeline. When you click to edit something in a modal, you lose sight of where it sits in the full conversation flow. Inline editing keeps the spatial context visible at all times.

Constraint

Inline editing is harder to implement well (especially for validation, keyboard nav, and focus management). It also means the data model must support partial/incomplete states — a step that's been named but not yet given a technique.
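Supporting partial states mostly means separating edit-time validity from export readiness. A hedged sketch, with assumed field names:

```typescript
// Sketch of a data model that tolerates partial states during
// inline editing: a step may exist with a name but no technique
// yet. Field names are assumptions.
interface DraftStep {
  id: string;
  name: string;
  technique?: string; // absent until the designer picks one
}

// Export readiness is a separate, stricter check than "can this
// exist in the editor" — drafts are always valid, exports are not.
function isExportReady(step: DraftStep): boolean {
  return step.name.trim().length > 0 && step.technique !== undefined;
}
```

This split is what lets the timeline accept half-finished edits without blocking the designer, while still refusing to export an incomplete flow.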

05 · Choice

Supabase for persistence (not yet wired)

Why

Supabase provides both the database and the auth layer in one service, with a JavaScript SDK that works naturally in Next.js. For a single-user v1 with expansion plans, it's the right balance of capability and simplicity.

Constraint

Supabase is a separate service that must be configured, authenticated, and deployed. The persistence layer is the main gap in the current editor — the UI is functional but nothing persists across sessions.
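Wiring persistence would mostly mean serializing the Zustand store into a row shape that supabase-js can upsert. The table and column names below are assumptions, and the actual network call (`supabase.from("flows").upsert(...)`) is left out so the sketch stays self-contained:

```typescript
// Sketch of the serialization step persistence would need once
// Supabase is wired. Column names are assumptions.
interface FlowState {
  name: string;
  phases: unknown[]; // the editor's phase objects, shape elided
}

function toFlowRow(state: FlowState, userId: string) {
  return {
    user_id: userId, // assumed column names throughout
    name: state.name,
    definition: JSON.stringify(state.phases),
    updated_at: new Date().toISOString(),
  };
}
```

Keeping serialization pure and separate from the client call also makes it testable without a live Supabase project.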

06 · Choice

Export to multiple runtime targets (Pipecat, Parlant, generic prompt)

Why

No single voice AI runtime has won the market. Conversor's value isn't tied to one runtime. By supporting multiple export targets, the editor remains runtime-agnostic and useful regardless of which platform gains traction.

Constraint

Each runtime target has its own schema, capability model, and limitations. The export layer requires a mapping from Conversor's canonical object model to each target's format — this is non-trivial work and is explicitly marked as pending.
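The simplest of the three targets, the generic prompt, shows the shape of that mapping. The output format below is an assumption; the Pipecat and Parlant exporters would each need their runtime's real schema:

```typescript
// Sketch of the generic-prompt exporter: render a canonical phase
// as markdown-ish prompt text. Format is an assumption.
interface PhaseLike {
  title: string;
  steps: { prompt: string; technique?: string }[];
  gatePurpose: string;
}

function toPromptSection(phase: PhaseLike): string {
  const lines = [`## Phase: ${phase.title}`];
  for (const s of phase.steps) {
    lines.push(`- ${s.prompt}${s.technique ? ` (technique: ${s.technique})` : ""}`);
  }
  // The Progress Gate becomes an explicit stop condition in the prompt.
  lines.push(`Do not advance until: ${phase.gatePurpose}`);
  return lines.join("\n");
}
```

Note that even this trivial target has to decide what a Progress Gate *means* in its medium (here, a "do not advance until" instruction); the structured runtimes would have to make the same decision against their own primitives.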

04 · Tradeoffs & Limits

- **No persistence in v1.** The editor is a React UI with Zustand state — refresh the page and everything is gone. The "export layer" is empty. This is a significant limitation for any real use.
- **Export targets are not yet implemented.** The UI is scaffolded, but the actual export code (generic prompt, Pipecat JSON, Parlant Python) doesn't exist yet.
- **Progress Gates are a design concept, not an enforcement mechanism.** The editor captures the gate criteria, but there's no runtime integration that actually evaluates the agent against those criteria.
- **The three-layer model has cognitive overhead.** Requiring designers to think about Structure, Behavior, and Assessment simultaneously is more demanding than just "what questions should the agent ask?" This may limit adoption among less sophisticated users.
- **No version control or undo.** If a designer makes a bad edit, there's no way to recover previous states. This is table stakes for any serious editor.

05 · Key Insight

The hardest part of building a voice AI conversation editor isn't the drag-and-drop UI — it's deciding what "done" means for each phase. Progress Gates force that question earlier, which is the point, but it means the editor has opinions about conversation design that pure canvas tools don't.