
CM Mentor Coach

AI Practice Partner for Conversational Management

Voice-based AI coach that lets CM practitioners practice real coaching scenarios between live sessions, receive scored feedback on 7 rubric dimensions, and get personalized recommendations for what to work on next.

01 · The Problem

CM practitioners — people learning Conversational Management — have limited opportunity to practice between live sessions. They study the frameworks, they understand the theory, but they don't get enough repetitions to internalize the skills. Coaching is a performance skill: you can't think your way into being good at it. You need to practice, fail, get feedback, and practice again. Without a practice partner between sessions, most learners plateau.

02 · What the AI Does

A voice-based AI practice partner that simulates coaching conversations with a learner. The learner selects a scenario (coachee name, role, presenting challenge), receives a pre-session briefing from the Mentor Coach, enters a 15-minute timed voice conversation with the AI role-playing a realistic coachee, then receives a scored debrief across 7 rubric dimensions. The Mentor Coach also offers a free-form "Learn" mode where the learner can ask questions about CM concepts and get answers grounded in the CM curriculum. Progress is tracked over time with radar charts showing score trajectory. Built on: Voice-first interface, scenario library (9 scenarios at launch), CM curriculum content as knowledge base, structured debrief scoring.
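The session lifecycle above (pick a scenario, get briefed, run a timed conversation, receive a 7-dimension debrief) can be sketched as a minimal data model. This is an illustrative sketch only; the class and field names (`Scenario`, `PracticeSession`, `rubric_scores`) are assumptions, not the product's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    # Fields mirror the scenario selection described above:
    # coachee name, role, and presenting challenge.
    coachee_name: str
    role: str
    presenting_challenge: str
    cm_level: int  # CM levels run 1-4

@dataclass
class PracticeSession:
    scenario: Scenario
    duration_minutes: int = 15  # sessions are timed at 15 minutes
    transcript: list[str] = field(default_factory=list)
    # Filled in at debrief: one score per rubric dimension (7 total).
    rubric_scores: dict[str, int] = field(default_factory=dict)

    def debrief_complete(self) -> bool:
        # A debrief is complete once all 7 rubric dimensions are scored.
        return len(self.rubric_scores) == 7

session = PracticeSession(Scenario("Elena", "Team lead", "Delegation struggles", 1))
session.rubric_scores = {f"dimension_{i}": 3 for i in range(7)}
```

Tracking progress over time then reduces to comparing `rubric_scores` across a learner's stored sessions.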

03 · Design Decisions

01 · Choice

Voice-first interaction, not text

Why

Coaching is a spoken skill. Practicing by typing changes the cognitive load and omits the aspects of communication that matter most — pace, tone, silence, verbal rhythm. A voice interface forces the learner to practice the actual modality they'll use in real coaching.

Constraint

Voice requires reliable speech recognition and generation. The AI must handle accents, pauses, interruptions, and imperfect audio without breaking the conversation.

02 · Choice

Facilitator-generated invite links, not self-registration

Why

CM is a cohort-based paid program, so enrollment must be controlled: each learner needs to be tied to a facilitator and a CM level, and billing must stay unambiguous. Open self-registration would create support and billing headaches.

Constraint

The system requires a facilitator admin panel — a separate interface for coaches to enroll learners, send invite links, and manage cohort membership.

03 · Choice

15-minute timed sessions with visible progress bar

Why

Real coaching conversations have time boundaries. The timer creates realistic pressure that mirrors actual coaching engagements. A progress bar (not a countdown clock) provides awareness without anxiety-inducing urgency.

Constraint

Some scenarios naturally conclude before 15 minutes. The debrief scores whatever happened in the window — short sessions get honest feedback, not penalized ones.
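The progress-bar-not-countdown choice reduces to rendering an elapsed fraction rather than remaining time. A minimal sketch (function name and clamping behavior are assumptions):

```python
def progress_fraction(elapsed_seconds: float, total_seconds: float = 15 * 60) -> float:
    """Fraction of the session elapsed, clamped to [0, 1].

    The UI renders this fraction as a filling bar; it never shows
    remaining time, which is what creates countdown anxiety.
    """
    return max(0.0, min(1.0, elapsed_seconds / total_seconds))
```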

04 · Choice

7-dimension rubric scoring, not a single score

Why

A single overall score tells the learner nothing actionable. Seven dimensions (taken from the CM rubric) show specifically what they did well and where they need work. A radar chart showing the shape of performance makes progress visually obvious over time.

Constraint

Scoring must be perceived as credible by the learner. If the AI scores someone low and they disagree, the system loses trust. The debrief language must be specific enough to be believable.
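Seven-dimension scoring and the radar trajectory amount to keeping per-dimension scores in a fixed order and averaging them across sessions. The dimension names below are invented placeholders (the page does not list the actual CM rubric dimensions); the structure is what matters.

```python
# Placeholder names -- the actual CM rubric dimensions are not listed here.
DIMENSIONS = [
    "listening", "questioning", "reframing", "silence",
    "goal_focus", "accountability", "presence",
]

def radar_points(scores: dict[str, int]) -> list[tuple[str, int]]:
    """Order scores consistently so radar charts are comparable across sessions."""
    return [(d, scores[d]) for d in DIMENSIONS]

def trajectory(sessions: list[dict[str, int]]) -> dict[str, float]:
    """Per-dimension mean across sessions: the 'shape of performance' over time."""
    return {d: sum(s[d] for s in sessions) / len(sessions) for d in DIMENSIONS}
```

A consistent dimension order is what makes two radar charts visually comparable; averaging per dimension, rather than overall, is what keeps the trajectory actionable.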

05 · Choice

CM-level gating on scenario library

Why

Learners progress through CM levels 1-4. Scenarios are unlocked progressively — CM1 learners see CM1 scenarios, CM2 learners see CM1+CM2 scenarios, etc. This creates a natural motivation arc: learners can see what they're working toward.

Constraint

V1 ships with 9 scenarios. If the library grows past 15-20, filtering becomes valuable. For now, the Mentor Coach recommendation engine serves as the implicit filter.
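The gating rule described above (a CM2 learner sees CM1+CM2 scenarios, and so on) is a one-line filter. A minimal sketch; the third scenario name is an invented placeholder, and the dict shape is an assumption:

```python
SCENARIOS = [
    {"name": "Elena", "cm_level": 1},
    {"name": "Marcus", "cm_level": 2},
    {"name": "Priya", "cm_level": 3},  # invented placeholder, not from the real library
]

def visible_scenarios(scenarios: list[dict], learner_level: int) -> list[dict]:
    """A learner at level N sees all scenarios at levels 1..N."""
    return [s for s in scenarios if s["cm_level"] <= learner_level]
```

The locked higher-level scenarios can still be listed (greyed out) to create the "what I'm working toward" motivation arc.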

06 · Choice

Learn mode grounded in CM curriculum, not general coaching content

Why

The learner should be able to ask "Explain reflective listening" and get an answer consistent with CM methodology — not generic coaching advice. The Mentor Coach draws from MP Handouts and CM Facilitator Guides as its knowledge base, staying in CM language.

Constraint

The knowledge base must stay current with the CM curriculum. If the curriculum changes, the knowledge base must be updated.

07 · Choice

No "cheat sheet" visible during practice

Why

Showing skill reminders during practice is like reading the textbook during an exam. The point is to internalize the skills, not reference them. Training wheels come in the pre-session briefing, not during the performance.

Constraint

CM1 learners may feel this is too hard too soon. The warm-up option (2-minute micro-skill drill before the full scenario) provides scaffolding without undermining the no-cheat-sheet principle.

04 · Tradeoffs & Limits

- **Scoring credibility is the system's existential risk.** If learners don't trust the AI's scoring, the practice partner loses its value. The debrief must be specific and defensible enough that a learner who scores low says "fair enough" rather than "the AI doesn't know what it's talking about."
- **No true human coaching substitute.** The AI role-player is a reasonable proxy for a real coachee but doesn't replicate the full complexity of a real human: emotional reactions, real stakes, relationship history. Practice with the AI is necessary but not sufficient.
- **Voice recognition quality degrades with accents, background noise, and fast speech.** Non-native English speakers may have a worse experience. This isn't yet addressed in the design.
- **The pre-session briefing creates one more thing to click through.** If learners skip the briefing and jump straight into the scenario, they miss the targeted skill recommendation. Compliance with the briefing step isn't enforced.
- **The coachee personas are fictional.** The 9 scenarios have named coachees (Elena, Marcus, etc.) who are fictional composites, not real client situations. This is both a limitation (less authentic) and intentional (privacy and legal protection for real clients).
- **Practice mode cannot capture what's not in the transcript.** The AI scores based on what's said; it can't observe body language, tone of voice (as distinct from words), or the silence between words. These are real coaching signals the system misses.

05 · Key Insight

AI practice partners for performance skills face a credibility paradox: the AI must be smart enough to give believable feedback, but not so demanding that learners feel the bar is unfair. The scoring rubric and debrief language are the trust infrastructure — if they fail, the practice partner becomes a toy, not a tool.