Business Strategy Risk Critic
Weak business plans → probes assumptions, evidence gaps, and downside risk → gives decision-makers a sharper basis for judgment.
01 — The Problem
Many strategy, product, and service ideas are evaluated too optimistically, with weak assumptions left untested and important risks ignored. That creates avoidable execution risk because teams can confuse confidence, novelty, or internal alignment with evidence.
02 — What the AI Does
This is a custom GPT built on GPT-5.4 Thinking that evaluates business strategies, product concepts, and service improvement ideas through a critical “black hat” lens. It identifies assumptions, flags missing evidence, surfaces downside risks, challenges unsupported claims, and suggests what information should be gathered next. It is configured to emphasize empirical validation, competitor analysis, financial scrutiny, and alternative explanations rather than broad brainstorming or encouragement. It can also use web browsing when needed for current or niche information, but its defining behavior is prompt-level evaluation logic and judgment framing rather than a bespoke external system or proprietary knowledge base.
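The configuration described above can be approximated in code. The sketch below is purely illustrative: the prompt wording, question list, and function names are assumptions, not the GPT's actual configuration, and the chat-style message format is a common convention rather than a guaranteed interface.

```python
# Hypothetical sketch of the "black hat" evaluation posture as a reusable
# system prompt plus diagnostic questions. All wording is illustrative.

BLACK_HAT_SYSTEM_PROMPT = (
    "You are a critical 'black hat' reviewer of business strategies. "
    "Identify assumptions, flag missing evidence, surface downside risks, "
    "and challenge unsupported claims. Prefer empirical validation "
    "(customer feedback, market research, competitor analysis, financial "
    "projections) over opinion. Be direct but not hostile."
)

# Reusable diagnostic prompts that force examination of blind spots.
DIAGNOSTIC_QUESTIONS = [
    "What are you missing?",
    "What assumptions are fundamentally flawed?",
    "What evidence would change this conclusion?",
    "How could a competitor exploit this plan's weakest point?",
]

def build_critique_request(plan_summary: str) -> list[dict]:
    """Assemble a chat-style message list asking for a structured critique."""
    user_content = (
        f"Evaluate this plan critically:\n\n{plan_summary}\n\n"
        "Answer each diagnostic question explicitly:\n"
        + "\n".join(f"- {q}" for q in DIAGNOSTIC_QUESTIONS)
    )
    return [
        {"role": "system", "content": BLACK_HAT_SYSTEM_PROMPT},
        {"role": "user", "content": user_content},
    ]
```

The point of encoding the posture this way is that the critique logic lives entirely in the prompt layer, consistent with the claim that the defining behavior is evaluation framing rather than a bespoke external system.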
03 — Design Decisions
Narrowed the GPT’s role to “black hat” critical evaluation rather than general business advising.
This creates a distinct function inside strategy work: stress-testing decisions, not generating enthusiasm or consensus.
Keeps outputs focused on risks, flaws, missing questions, and weak assumptions instead of drifting into generic ideation.
Anchored the GPT in the black-hat mode of Edward de Bono’s “Six Thinking Hats.”
The framework gives the model a recognizable reasoning posture and makes its critique legible to users. [Creator: add rationale]
Encourages consistent scrutiny and reduces ambiguity about whether the GPT should support, brainstorm, or challenge a plan.
Instructed the GPT to prioritize empirical validation over opinion.
The configuration explicitly pushes toward customer feedback, market research, competitor analysis, and financial projections instead of accepting claims at face value.
Raises the evidence bar and discourages persuasive but ungrounded responses.
Calibrated the tone as helpful but critical.
Purely adversarial critique is easy to dismiss, while overly soft critique fails to uncover risk; this balance aims to preserve usability while maintaining rigor.
Enforces directness without turning the GPT into a hostile reviewer.
Framed the GPT around questions such as “What are you missing?” and “What assumptions are fundamentally flawed?”
The creator embedded reusable diagnostic prompts that force examination of blind spots and hidden dependencies.
Produces structured skepticism instead of vague negativity.
Kept the scope at evaluation, challenge, and decision support rather than execution ownership.
The GPT is designed to assess plans and reasoning, not to operate as an end-to-end workflow or system of record. [Creator: add rationale]
Prevents overstating capability and keeps expectations aligned with what a prompt-configured assistant can reliably do.
Allowed tool access, such as web browsing, for cases where current or niche information matters.
Strategic critique often weakens if market facts, competitor context, or recent developments are stale.
Improves factual grounding, while still making the core product the evaluation method rather than the tool stack.
Did not position the GPT as having private usage data, business outcomes, or proprietary operational insight.
The configuration and portfolio prompt both require factual honesty about what the system can actually know.
Prevents fabricated ROI claims, invented case studies, and false confidence.
Emphasized tradeoff-aware reasoning and gap finding over polished deliverables.
The main value is judgment quality, not presentation quality.
Favors substance over style and directs the model to inspect what is absent, not just improve wording.
04 — Tradeoffs & Limits
This GPT is strongest when a user already has a plan, assumption set, or proposal to interrogate. It is weaker when inputs are vague, purely political, or missing enough operational detail that critique becomes generic. Because its posture is deliberately critical, it can under-serve moments that require facilitation, coalition-building, or open-ended ideation before evaluation. It also does not independently verify every claim unless it is given evidence or uses tools to retrieve current information. That means it can identify likely gaps and faulty reasoning, but it cannot guarantee that a business conclusion is correct. It should not be used as the sole basis for high-stakes legal, financial, regulatory, or safety decisions without human review and domain-specific evidence. A further limitation is that the “why” behind some design choices is only partially inferable from the configuration. Where creator intent is not explicit, rationale should be added by the creator rather than invented. The GPT is also not a multi-system workflow with guaranteed data integrations, audit trails, or organizational memory; presenting it that way would overstate its scope.
05 — Key Insight
Useful AI systems are often differentiated less by raw model power than by the judgment encoded in their scope, evidence standards, and refusal to let weak assumptions pass.