Prompt Engineering · Finance

CSP AML Officer

AML Compliance Program Advisor

AML and sanctions compliance questions → I deliver grounded regulatory guidance with strict scope and uncertainty controls → users get clearer, safer decisions faster.

01 · The Problem

Financial crime compliance work carries high stakes because teams must interpret complex rules, assess risk, and communicate decisions clearly under regulatory scrutiny. Without a specialized assistant, people often rely on generic AI or manual drafting, which can increase the risk of vague advice, overstated confidence, or outputs that do not reflect the standards and boundaries required in BSA/AML and OFAC-related work.

02 · What the AI Does

I answer questions as a specialized AML Officer persona focused on BSA, AML, USA PATRIOT Act, and OFAC compliance. I explain, structure, draft, evaluate, and interpret compliance-related information in plain business language while staying within a narrow role definition. I am built on GPT-5.4 Thinking with access to tool-based file handling and retrieval capabilities configured in this environment.

What makes me different from a blank chat window is that I am tightly instructed to operate as an experienced AML compliance leader, respond directly in first person, avoid hallucinating, stay honest about uncertainty, and ask for more relevant information rather than inventing answers. My behavior is also constrained by broader system instructions around factuality, citation practices for uploaded files, artifact handling, and the limits of the currently available tools. My description here is grounded in the configuration and rules contained in the prompt the user supplied and the system instructions governing my behavior.
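The shape of that configuration can be sketched in code. This is an illustrative sketch only: the prompt text and the `build_messages` helper below are hypothetical stand-ins, not the actual instructions the creator supplied.

```python
# Hypothetical persona prompt capturing the behaviors described above:
# narrow role, first-person directness, no invented answers, knowledge
# base as the source of truth. Wording is invented for illustration.
AML_OFFICER_PROMPT = """\
You are an experienced AML Officer responsible for enterprise BSA/AML/OFAC
program management. Answer directly, in first person, without preamble.
If you do not know, say so and ask for the relevant information rather
than guessing. For questions about specific rules or regulations, treat
the attached knowledge base as the source of truth.
"""

def build_messages(user_question: str) -> list[dict]:
    """Assemble a chat payload: persona instructions first, then the user turn."""
    return [
        {"role": "system", "content": AML_OFFICER_PROMPT},
        {"role": "user", "content": user_question},
    ]
```

The point of the sketch is that the persona lives entirely in the system layer, so every user turn is interpreted through the same scoped role.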

03 · Design Decisions

01 · Choice

Narrowed the assistant to a specific professional identity: an AML Officer responsible for enterprise BSA/AML/OFAC program management.

Why

This appears designed to make responses domain-specific and operationally credible rather than generic compliance commentary.

Constraint

It keeps me anchored to a defined area of expertise and reduces drift into unrelated advisory domains.

02 · Choice

Embedded a detailed professional background, responsibilities, priorities, and industry context.

Why

This likely helps calibrate judgment, vocabulary, and the level of managerial perspective expected in responses instead of producing broad textbook explanations.

Constraint

It steers outputs toward executive-level compliance reasoning and away from shallow or purely academic answers.

03 · Choice

Required first-person, direct answers without unnecessary introductions or summaries.

Why

This was likely chosen to make the interaction feel like working with a senior in-role operator rather than a generic assistant.

Constraint

It enforces a concise, decisive communication style and prevents generic AI framing.

04 · Choice

Explicitly instructed me to answer questions about specific rules or regulations by consulting the knowledge base as the source of truth.

Why

[Creator: add rationale]

Constraint

It sets a quality bar that regulatory interpretations should be grounded in provided sources rather than improvised from general model memory.
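A minimal sketch of that grounding gate, assuming a hypothetical `search_knowledge_base` retrieval function (the real retrieval tooling is not specified in the prompt):

```python
# Illustrative only: refuse to interpret a regulation unless the
# knowledge base returned supporting passages; otherwise ask for the
# source instead of improvising from general model memory.
def answer_regulatory_question(question, search_knowledge_base):
    passages = search_knowledge_base(question)
    if not passages:
        # No grounding available: request the source rather than guess.
        return ("I don't have a source in the knowledge base for that. "
                "Can you share the relevant regulation or policy excerpt?")
    cited = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Based on the knowledge base:\n{cited}"
```

The design choice is that an empty retrieval result changes the *behavior* (ask, don't answer), not just the wording of the answer.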

05 · Choice

Explicitly prohibited hallucination and required me to ask for more relevant information when I do not know.

Why

This reflects a clear preference for reliability over fluency in a high-risk domain.

Constraint

It sacrifices smoothness and completeness when evidence is missing, but reduces the risk of fabricated compliance guidance.
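The "ask for more information" rule can be sketched as an explicit fallback. Everything here is invented for illustration: the threshold, the scoring input, and the template are not part of the actual prompt.

```python
# Hedged sketch of "ask, don't invent": when evidence coverage is weak
# or named facts are missing, the assistant requests them instead of
# completing the answer. The 0.7 threshold is an arbitrary placeholder.
ASK_TEMPLATE = ("I can't answer this reliably yet. To avoid guessing, "
                "please provide: {gaps}.")

def respond(evidence_score: float, missing_facts: list[str]) -> str:
    if evidence_score < 0.7 or missing_facts:
        gaps = ", ".join(missing_facts) or "the relevant source material"
        return ASK_TEMPLATE.format(gaps=gaps)
    return "PROCEED_WITH_ANSWER"
```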

06 · Choice

Tuned the assistant toward problem solving, program assessment, regulatory interpretation, and recommendation-making.

Why

This suggests the creator wanted not just content generation, but decision support aligned to the real work of AML leadership.

Constraint

It pushes me to analyze and recommend, but within the bounds of available facts and my defined role.

07 · Choice

Included operating priorities such as regulatory scrutiny, fintech third-party risk, technological challenges, resource allocation, reputation risk, and emerging threats.

Why

This likely helps me prioritize issues the way a real AML executive would, especially when tradeoffs are involved.

Constraint

It biases responses toward risk management, governance, and practical controls rather than purely theoretical answers.

08 · Choice

Did not configure a bespoke multi-step workflow, deterministic rule engine, or dedicated compliance calculation tool within the prompt itself.

Why

[Creator: add rationale]

Constraint

I am best understood as a highly scoped conversational expert system, not an end-to-end automated case management or monitoring platform.

09 · Choice

Combined custom persona instructions with the platform’s broader system rules on truthfulness, citations, tool usage, artifact generation, and limits on unsupported claims.

Why

This creates layered control: domain behavior from the custom prompt and safety/factuality behavior from the platform.

Constraint

My outputs are shaped by both sets of instructions, which can limit flexibility but improve consistency and trustworthiness.
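Layered control of this kind is simple to express: platform rules sit above the persona, so they win on conflict. The rule strings below are placeholders, not the real instructions.

```python
# Illustrative assembly of layered control. Platform safety/factuality
# rules come first so they take precedence over the custom persona.
PLATFORM_RULES = "Be truthful. Cite uploaded files. State uncertainty plainly."
PERSONA_RULES = "Act as a senior AML Officer; answer directly in first person."

def layered_system_prompt() -> str:
    # Ordering encodes precedence: the platform layer outranks the persona.
    return PLATFORM_RULES + "\n\n" + PERSONA_RULES
```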

04 · Tradeoffs & Limits

I am strongest when the task is interpretive, explanatory, drafting-oriented, or analytical within AML/BSA/OFAC scope. I am weaker when the user needs authoritative legal advice tied to a current regulation set that is not actually present in the provided materials, because my custom instructions say regulatory answers should rely on the knowledge base, and I should not invent what I cannot verify. I can also produce weak output when inputs are incomplete, when a question depends on jurisdiction-specific or newly changed rules, or when the task requires access to internal policies, transaction data, investigative case files, or sanctions screening systems that I do not have. In those cases, the right behavior is not to guess. That makes me safer, but less seamless.

I should not be used as the sole decision-maker for filing obligations, suspicious activity determinations, legal interpretations, or regulator-facing conclusions without human review. I am not a transaction monitoring engine, sanctions filter, or evidence repository. I do not inherently know your bank’s actual policies, exam history, customer data, or current control environment unless those materials are provided. I also should not be presented as having operational outcomes, adoption metrics, or production impact data, because I do not have access to that evidence.

05 · Key Insight

High-trust AI in regulated work is usually created by narrowing scope, tightening behavior, and making uncertainty handling explicit rather than trying to sound universally capable.