Workflow Automation · Operations

Agentic Workflow Generator

Process Documentation → Agentic Workflow Generator

Unstructured process documentation enters; a structured, multi-layer AI agent blueprint exits — ready for stakeholder review without a consultant in the loop.

01 · The Problem

Organizations that want to explore AI automation of their business processes face a consistent bottleneck: translating messy, informal process documentation into a structured, actionable AI design. This work typically requires a skilled consultant to read the documentation, decompose it into tasks, map those tasks to AI capabilities, and produce a coherent proposal — a process that is slow, expensive, and hard to scale across many processes simultaneously. Without a tool like this, the gap between "we have a process document" and "we have an AI agent design" is filled by expensive human hours or simply never crossed at all.

02 · What the AI Does

The system performs four sequential AI tasks, each building on the last:

1. Structures unstructured process documentation (text or uploaded documents) into a hierarchical, phase-by-step process outline using a standardized format.
2. Converts that outline into a Jobs-to-be-Done (JTBD) job map, generating five outcome statements per job step, each framed as either a time-minimization or risk-minimization goal.
3. Maps each job step to one or more functional AI agent designs, specifying name, description, functional requirements, technologies, and dependencies.
4. Summarizes the full agent suite into a clean, stakeholder-readable format with functionality summaries, dependencies, and hypotheses per agent.

Models used: Claude Sonnet 4.5 (claude-sonnet-4-5-20250929) at all four stages, with max_tokens: 64000.

Interface: A custom chat UI (firm-aligned brand styling) that accepts both typed messages and uploaded documents (PDF, DOCX, TXT). The user attaches a process document, sends a message, and receives the final agent blueprint as a formatted response.

Configuration differentiators vs. a blank Claude window:

· Four distinct, deeply-prompted system instructions are embedded, one per agent role, each encoding a specific professional methodology (process clarification, JTBD, functional AI design, executive summarization).
· The pipeline is strictly sequential: each node's output is the next node's input, enforcing a dependency chain that a single prompt cannot replicate reliably.
· Output formatting is explicitly constrained: no markdown # or * characters, plain readable text, optimized for copy-paste into client deliverables.
· Document input is handled natively via VellumDocument state, passed directly into the first prompt node.
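The four-stage chain can be sketched as a plain Python pipeline. Everything here is illustrative: the stage names and one-line prompts are placeholders, and run_stage stands in for the real LLM call (in production, each stage invokes claude-sonnet-4-5-20250929 with max_tokens=64000 via the platform's prompt nodes).

```python
# Illustrative sketch of the four-stage sequential pipeline.
# Stage names and prompts are placeholders, not the production prompts.

MODEL = "claude-sonnet-4-5-20250929"
MAX_TOKENS = 64000

STAGES = [
    ("clarifier", "Structure raw documentation into a phase-by-step outline."),
    ("jtbd_mapper", "Convert the outline into a JTBD job map with outcomes."),
    ("agent_designer", "Map each job step to functional AI agent designs."),
    ("summarizer", "Summarize the agent suite for stakeholders, plain text."),
]

def run_stage(name: str, system_prompt: str, user_input: str) -> str:
    """Stand-in for the real LLM call; tags its input so order is visible."""
    return f"[{name}]{user_input}"

def run_pipeline(process_doc: str) -> str:
    """Strictly sequential: each stage's output is the next stage's only input."""
    text = process_doc
    for name, prompt in STAGES:
        text = run_stage(name, prompt, text)
    return text
```

The key property being modeled is the dependency chain: no stage can be skipped or reordered, because each consumes only the previous stage's output.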

03 · Design Decisions

01 · Choice

The transformation is broken into four discrete LLM calls, each with its own system prompt and role.

Why

Each transformation (clarify → JTBD → agent design → summarize) requires a different cognitive mode and output format. Combining them into one prompt risks the model collapsing or blending the frameworks. Separation enforces clean handoffs and makes each stage independently inspectable. [Creator: add rationale if there's additional context — e.g., whether this was tested as a single prompt first]

Constraint

Each node's output is the only input to the next — no skipping, no branching. This enforces the full methodology every time.

02 · Choice

The process outline is converted to a JTBD job map before AI agents are designed.

Why

JTBD provides a structured, outcome-oriented lens that forces the system to articulate what the job executor is trying to achieve rather than just what steps exist. This produces more defensible AI agent designs because each agent is justified by a measurable outcome statement. [Creator: add rationale — was JTBD a client-requested framework or a design choice?]

Constraint

Every job step must have exactly five outcome statements, each framed as either time-minimization or risk-minimization — no other formats accepted.
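This constraint is mechanically checkable. Below is a hypothetical validator: the exact sentence pattern ("Minimize the time it takes to…" / "Minimize the risk of…") is an assumption, since the source only specifies time- or risk-minimization framing and a count of exactly five.

```python
import re

# Assumed phrasing for outcome statements; the source only requires
# time-minimization or risk-minimization framing, five per job step.
OUTCOME_PATTERN = re.compile(
    r"^Minimize the (time it takes to|risk of) .+", re.IGNORECASE
)

def validate_outcomes(statements: list[str]) -> bool:
    """True iff there are exactly five statements, each correctly framed."""
    return len(statements) == 5 and all(
        OUTCOME_PATTERN.match(s) for s in statements
    )
```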

03 · Choice

Both the agent mapper and summarizer nodes explicitly instruct the model not to use # or * characters.

Why

The output is intended for direct use in client-facing deliverables (likely Word documents or presentations) where markdown renders as literal characters. [Creator: confirm if this is the delivery context]

Constraint

Plain text output only — readability over richness.

04 · Choice

The workflow is triggered via a chat interface that accepts both text and file uploads, rather than a form-based input.

Why

Process documentation comes in many forms and sizes. A chat interface allows the user to provide context alongside the document (e.g., "focus on the approval steps") and feels lower-friction than a structured form. [Creator: add rationale if there was a specific UX decision here]

Constraint

File types limited to PDF, DOCX, TXT — no image-only formats, no spreadsheets.
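The upload gate implied by this constraint is simple to express. A minimal sketch, assuming the check happens on filename extension (the actual UI may also inspect MIME types):

```python
from pathlib import Path

# Hypothetical upload gate mirroring the stated constraint:
# PDF, DOCX, and TXT only; images and spreadsheets are rejected.
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".txt"}

def is_accepted_upload(filename: str) -> bool:
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS
```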

05 · Choice

The uploaded document is stored in workflow State as a VellumDocument and passed directly into the first prompt node, rather than extracted to text first.

Why

Native VellumDocument handling preserves document structure and delegates parsing to the model, avoiding a separate extraction step. [Creator: add rationale if there was a deliberate choice here vs. pre-extraction]

Constraint

Document is only used in Stage 1; subsequent stages operate on text outputs, not the original document.

06 · Choice

Four NoteNode objects are placed in the workflow canvas (as unused_graphs) describing each agent's role in plain English.

Why

These serve as inline documentation for anyone viewing the workflow in the Vellum editor — making the design self-explanatory without requiring a separate README. [Creator: confirm if these are for internal team use or client demos]

Constraint

Notes are decorative/documentary only — they have no effect on execution.

04 · Key Insight

When the goal is to translate a professional methodology into a repeatable AI output, the most important design decision is not which model to use — it's how to decompose the methodology into discrete, independently-verifiable transformation steps that each have a clear input, a clear output format, and a single cognitive job to do.