Agentic Workflow Generator
Process Documentation → Agentic Workflow Generator
Unstructured process documentation enters; a structured, multi-layer AI agent blueprint exits — ready for stakeholder review without a consultant in the loop.
01 — The Problem
Organizations that want to explore AI automation of their business processes face a consistent bottleneck: translating messy, informal process documentation into a structured, actionable AI design. This work typically requires a skilled consultant to read the documentation, decompose it into tasks, map those tasks to AI capabilities, and produce a coherent proposal — a process that is slow, expensive, and hard to scale across many processes simultaneously. Without a tool like this, the gap between "we have a process document" and "we have an AI agent design" is filled by expensive human hours or simply never crossed at all.
02 — What the AI Does
The system performs four sequential AI tasks, each building on the last:

1. Structures unstructured process documentation (text or uploaded documents) into a hierarchical, phase-by-step process outline using a standardized format.
2. Converts that outline into a Jobs-to-be-Done (JTBD) job map, generating five outcome statements per job step, each framed as either a time-minimization or risk-minimization goal.
3. Maps each job step to one or more functional AI agent designs, specifying name, description, functional requirements, technologies, and dependencies.
4. Summarizes the full agent suite into a clean, stakeholder-readable format with functionality summaries, dependencies, and hypotheses per agent.

Models used: Claude Sonnet 4.5 (claude-sonnet-4-5-20250929) at all four stages, with max_tokens: 64000.

Interface: a custom chat UI (firm-aligned brand styling) that accepts both typed messages and uploaded documents (PDF, DOCX, TXT). The user attaches a process document, sends a message, and receives the final agent blueprint as a formatted response.

Configuration differentiators vs. a blank Claude window:

- Four distinct, deeply prompted system instructions are embedded, one per agent role, each encoding a specific professional methodology (process clarification, JTBD, functional AI design, executive summarization).
- The pipeline is strictly sequential: each node's output is the next node's input, enforcing a dependency chain that a single prompt cannot replicate reliably.
- Output formatting is explicitly constrained: no markdown # or * characters, plain readable text, optimized for copy-paste into client deliverables.
- Document input is handled natively via VellumDocument state, passed directly into the first prompt node.
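In code terms, the pipeline is a simple fold over four system prompts. A minimal sketch, assuming the Anthropic Python SDK in place of the Vellum runtime; the prompt strings here are illustrative stand-ins, not the production instructions:

    import anthropic

    MODEL = "claude-sonnet-4-5-20250929"
    MAX_TOKENS = 64000

    # One system prompt per agent role; the real prompts encode full methodologies.
    STAGES = [
        "Restructure the input into a hierarchical, phase-by-step process outline. Plain text only.",
        "Convert the outline into a JTBD job map with exactly five outcome statements per job step.",
        "Map each job step to functional AI agent designs: name, description, requirements, technologies, dependencies.",
        "Summarize the agent suite for stakeholders: functionality, dependencies, one hypothesis per agent.",
    ]

    def run_pipeline(process_doc_text: str) -> str:
        client = anthropic.Anthropic()
        payload = process_doc_text
        for system_prompt in STAGES:
            response = client.messages.create(
                model=MODEL,
                max_tokens=MAX_TOKENS,
                system=system_prompt,
                messages=[{"role": "user", "content": payload}],
            )
            payload = response.content[0].text  # each output is the next stage's only input
        return payload

The strict chaining is the point: there is no way to reach stage four without passing through the other three.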
03 — Design Decisions
The transformation is broken into four discrete LLM calls, each with its own system prompt and role.
Each transformation (clarify → JTBD → agent design → summarize) requires a different cognitive mode and output format. Combining them into one prompt risks the model blending the frameworks or collapsing steps into one another. Separation enforces clean handoffs and makes each stage independently inspectable. [Creator: add rationale if there's additional context — e.g., whether this was tested as a single prompt first]
Each node's output is the only input to the next — no skipping, no branching. This enforces the full methodology every time.
The process outline is converted to a JTBD job map before AI agents are designed.
JTBD provides a structured, outcome-oriented lens that forces the system to articulate what the job executor is trying to achieve rather than just what steps exist. This produces more defensible AI agent designs because each agent is justified by a measurable outcome statement. [Creator: add rationale — was JTBD a client-requested framework or a design choice?]
Every job step must have exactly five outcome statements, each framed as either time-minimization or risk-minimization — no other formats accepted.
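Because the format is this rigid, stage-two output can be checked mechanically rather than by eye. A sketch of such a check, assuming a hypothetical convention in which every outcome statement opens with a time- or risk-minimization stem (the production prompt's exact phrasing may differ):

    import re

    # Hypothetical stem; adjust to the actual prompt's required phrasing.
    OUTCOME_STEM = re.compile(r"^minimize (the )?(time (it takes )?to|risk of)", re.IGNORECASE)

    def validate_job_step(statements: list[str]) -> None:
        if len(statements) != 5:
            raise ValueError(f"expected exactly 5 outcome statements, got {len(statements)}")
        for s in statements:
            if not OUTCOME_STEM.match(s.strip()):
                raise ValueError(f"not a time/risk-minimization statement: {s!r}")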
Both the agent mapper and summarizer nodes explicitly instruct the model not to use # or * characters.
The output is intended for direct use in client-facing deliverables (likely Word documents or presentations) where markdown renders as literal characters. [Creator: confirm if this is the delivery context]
Plain text output only — readability over richness.
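The constraint can also be enforced after the fact. A small guard, as a sketch; the configuration suggests the workflow relies on the prompt instruction alone, so treat this as belt-and-braces rather than shipped behavior:

    def assert_plain_text(output: str) -> str:
        # Fail loudly if markdown heading or bullet markers slipped through,
        # since # and * render as literal characters in client deliverables.
        offenders = [ln for ln in output.splitlines()
                     if ln.lstrip().startswith(("#", "*"))]
        if offenders:
            raise ValueError(f"markdown markers on {len(offenders)} line(s), e.g. {offenders[0]!r}")
        return output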
The workflow is triggered via a chat interface that accepts both text and file uploads, rather than a form-based input.
Process documentation comes in many forms and sizes. A chat interface allows the user to provide context alongside the document (e.g., "focus on the approval steps") and feels lower-friction than a structured form. [Creator: add rationale if there was a specific UX decision here]
File types limited to PDF, DOCX, TXT — no image-only formats, no spreadsheets.
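The corresponding upload gate is a few lines in any backend. In the sketch below, the allowlist is the only firm fact; the function name is illustrative:

    from pathlib import Path

    ALLOWED = {".pdf", ".docx", ".txt"}

    def check_upload(filename: str) -> None:
        suffix = Path(filename).suffix.lower()
        if suffix not in ALLOWED:
            raise ValueError(f"unsupported file type {suffix!r}; expected one of {sorted(ALLOWED)}")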
The uploaded document is stored in workflow State as a VellumDocument and passed directly into the first prompt node, rather than extracted to text first.
Native VellumDocument handling preserves document structure and delegates parsing to the model, avoiding a separate extraction step. [Creator: add rationale if there was a deliberate choice here vs. pre-extraction]
Document is only used in Stage 1; subsequent stages operate on text outputs, not the original document.
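Outside Vellum, the equivalent move with the Anthropic SDK is to send the raw file as a document content block, so the model rather than a separate extractor does the parsing. A sketch for the PDF case, assuming base64 upload (DOCX and TXT would need their own handling):

    import base64
    import anthropic

    def run_stage_one(pdf_path: str, system_prompt: str) -> str:
        # Stage 1 receives the raw document; stages 2-4 receive plain text.
        with open(pdf_path, "rb") as f:
            pdf_b64 = base64.standard_b64encode(f.read()).decode("ascii")
        response = anthropic.Anthropic().messages.create(
            model="claude-sonnet-4-5-20250929",
            max_tokens=64000,
            system=system_prompt,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "document",
                     "source": {"type": "base64",
                                "media_type": "application/pdf",
                                "data": pdf_b64}},
                    {"type": "text", "text": "Restructure this process documentation."},
                ],
            }],
        )
        return response.content[0].text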
Four NoteNode objects are placed in the workflow canvas (as unused_graphs) describing each agent's role in plain English.
These serve as inline documentation for anyone viewing the workflow in the Vellum editor — making the design self-explanatory without requiring a separate README. [Creator: confirm if these are for internal team use or client demos]
Notes are decorative/documentary only — they have no effect on execution.
04 — Key Insight
When the goal is to translate a professional methodology into a repeatable AI output, the most important design decision is not which model to use — it's how to decompose the methodology into discrete, independently-verifiable transformation steps that each have a clear input, a clear output format, and a single cognitive job to do.