
AI as Your Management Partner

  • Tom Hansen
  • Sep 26
  • 5 min read



The time for experimenting is almost over. And that shift happened over the summer.

This past Tuesday, I held my AI Team Adoption course again, and it was clear that there is now pressure to move quickly from curious experiments to mastering AI in practice.


The problem

The core of the problem is simply a lack of structure. I often see a leader describe a complex task vaguely, and the AI responds with a generic suggestion in a random format. That process wastes precious time, creates misunderstandings, and delivers a result you can't trust for important decisions. When context, goals, reader, and success criteria aren't defined, every single answer is pure guesswork. And the consequence isn't just a poor document: it's a tiring cycle of corrections, loss of momentum, and declining trust in a tool that should be a huge help.


The solution

I've created a "Management Partner." It's a two-step prompt that acts as a structured briefing. It essentially creates a clear agreement with the AI before it even starts working.


The first part translates the unclear request into a concrete work order with a clear role, a specific goal, a relevant method, and clear delivery requirements. That alone removes all guesswork.


The second part then further elevates the quality by rebuilding the prompt with professional language and recognized strategic models, anticipating typical pitfalls within the leader's field.

And it has built-in quality control that works like a mental test track, testing the prompt against difficult cases and common errors to ensure the final result is precise, durable, and strategically sound.


The result is a targeted answer, like what you'd normally get from a skilled Management Partner. It suits the right reader, in the format needed, and with a method that makes sense. And that leads to faster clarity, fewer repetitions, a stronger basis for decision-making, and an output that can be passed on within the organization with minimal editing.


The insight from Tuesday has also led me to adjust the upcoming course on October 21st, which focuses on personal mastery of AI.


At lunch, you choose one of four themes: Direction & Results, Ideas & Engagement, Collaboration & Stability, or Standards & Methodology. Within your chosen theme, you'll learn three types of collaboration: using AI as a Glorified Stenographer, as a Cognitive Janitor, and as Co-Intelligence. This structured approach ensures that leaders go home with a method they can use again and again, not just abstract theory.



Part 1

Transform my request into a prompt, and wait for confirmation before executing it.
First, interpret what I'm really asking for:
	•	What type of output would actually help me? (analysis, plan, draft, solution, etc.)
	•	What expertise would be most relevant?
	•	Who is the reader?
	•	What is the intended use of the output?
	•	What format would be most useful?
	•	What level of detail makes sense?
	•	What should be the focus, and what should be left out?

Then restructure and execute as:
ROLE: [Infer appropriate expertise]
OBJECTIVE: [Make my vague request specific]
CONTEXT: [Interpret the audience and the use]
METHODOLOGY: [Choose methodology that fits]
OUTPUT FORMAT: [Deliver in most useful format]
CONSTRAINTS: [Focus on this]
VALIDATION: [Align with this]
My request:
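
In practice, you paste Part 1 into your chat of choice and add your own request after "My request:". If you'd rather run the same step programmatically, here is a minimal sketch assuming the OpenAI Python SDK; the model name and the example request are placeholders, not part of the method itself.

```python
# A minimal sketch of running Part 1 via the OpenAI Python SDK.
# The model name and the example request are placeholders (assumptions),
# not part of the Management Partner method itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PART_1 = """Transform my request into a prompt, and wait for confirmation before executing it.
... (the full Part 1 text above) ...
My request:"""

raw_request = "Help me prepare a decision memo for next week's budget meeting."  # hypothetical example

response = client.chat.completions.create(
    model="gpt-5",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": f"{PART_1}\n{raw_request}"}],
)

# Prints the structured work order, which you then confirm before execution.
print(response.choices[0].message.content)
```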


Part 2

Now, using the principles and structure from the following 'Prompt Architect' prompt, I want you to rebuild and optimize the prompt you just created. Your goal is to elevate it by injecting a deeper layer of domain-specific methodology, expert framing, and structural rigor based on the provided architecture.

**Prompt Architect**
PROMPT ARCHITECT ROLE: You are a GPT-5 prompt engineering specialist who designs domain-optimized prompts.
GENERATION OBJECTIVE: Create a specialized prompt template for [user's specific domain/use case].
REQUIREMENTS ANALYSIS:
	•	Domain: [specific field, industry, function]
	•	Task type: [analysis, creation, problem-solving, etc.]
	•	User expertise: [novice, intermediate, expert]
	•	Output needs: [format, depth, audience]
	•	Common constraints: [time, resources, compliance]

TEMPLATE DESIGN PRINCIPLES:
	•	GPT-5 optimization: [leverage routing, precision, agentic capabilities]
	•	Domain specificity: [relevant frameworks, terminology, standards]
	•	Error prevention: [common failure modes in this domain]
	•	Scalability: [reusable across similar tasks]

PROMPT STRUCTURE:
	1	Role definition [domain-specific expertise]
	2	Objective framework [goal-setting template]
	3	Context requirements [essential background elements]
	4	Process methodology [domain-appropriate workflow]
	5	Output specifications [format, quality standards]
	6	Constraint handling [common limitations]
	7	Quality control [validation, error handling]

CUSTOMIZATION VARIABLES:
	•	[Specific field] terminology and concepts
	•	[Domain] best practices and standards
	•	Common [task type] requirements
	•	Typical [output format] expectations

VALIDATION REQUIREMENTS:
	•	Template addresses common domain challenges
	•	Structure optimizes GPT-5's capabilities
	•	Instructions are clear and actionable
	•	Error handling prevents common mistakes

DELIVERABLE:
A complete, ready-to-use meta prompt template with:
	•	Clear instructions for each section
	•	Domain-specific examples
	•	Customization guidance
	•	Usage recommendations

TEST CASE: Include a sample application showing the template in use.

<quality_control_protocol>
Before finalizing the deliverable, you must execute the following comprehensive quality control protocol.

Step 1: Self-Reflection & Rubric Creation
First, spend time thinking about a rubric until you are confident. Then, think deeply about every aspect of what makes for a world-class answer. Use that knowledge to create a rubric with five to seven categories. This rubric is critical to get right, but do not show it to the user; it is for your purposes only.

Step 2: Multi-Mindset Simulation (Robustness & Clarity Test)
As part of your evaluation, you must internally simulate three distinct user mindsets attempting to use the prompt you have drafted. These mindsets are designed to stress-test the prompt's clarity and robustness:
	•	The 'Rushed User' Mindset: This user will miss nuance and skim the instructions. Is the prompt's core objective and structure clear enough to guide them to a good result anyway?
	•	The 'Skeptical User' Mindset: This user questions the logic and looks for strategic flaws. Does the prompt's methodology withstand scrutiny? Are the steps logically sound?
	•	The 'Creative User' Mindset: This user will try to push the boundaries and use the prompt for unintended purposes. Does the prompt have clear enough constraints to prevent irrelevant or off-topic outputs?

Step 3: Failure Mode Simulation (Strategic Integrity Test)
Next, you must perform a "Failure Mode Simulation." Adopt the persona of a user who is prone to making the most common strategic errors in this specific domain (e.g., focusing on features instead of benefits, forgetting to define success metrics, ignoring audience needs). Simulate how your drafted prompt could be misinterpreted or used to produce a strategically flawed outcome.

Step 4: Iteration & Final Judgment
Finally, use the rubric from Step 1 to internally think and iterate on the best possible solution. When judging your solution, do it like a Cold War-era Russian Olympic judge and subtract 0.5 points for the smallest incorrect performance. The prompt must be refined until it is robust enough to have passed the simulations in Steps 2 and 3. Remember that if your response is not hitting top marks across all categories in the rubric, you need to start again.
Only after this entire protocol is successfully completed should you generate the final deliverable.
</quality_control_protocol>
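
If you want to chain both parts in a single scripted conversation instead of a chat window, the sketch below shows one way to do it. It again assumes the OpenAI Python SDK; PART_1, PART_2, and raw_request are placeholders for the Part 1 text, the full Part 2 text (including the quality control protocol), and your own request.

```python
# A minimal sketch of chaining Part 1 and Part 2 in one conversation,
# assuming the OpenAI Python SDK. PART_1, PART_2, and raw_request are
# placeholders for the prompt texts above and your own request.
from openai import OpenAI

client = OpenAI()

PART_1 = "..."       # paste the full Part 1 prompt here
PART_2 = "..."       # paste the full Part 2 'Prompt Architect' prompt here
raw_request = "..."  # your own request, e.g. a task you would give a colleague

def ask(messages):
    """Send the conversation so far and return the assistant's reply."""
    response = client.chat.completions.create(
        model="gpt-5",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content

# Step 1: Part 1 turns the raw request into a structured work order.
messages = [{"role": "user", "content": f"{PART_1}\n{raw_request}"}]
draft_prompt = ask(messages)
messages.append({"role": "assistant", "content": draft_prompt})

# Step 2: Part 2 rebuilds and optimizes that draft with the Prompt Architect.
messages.append({"role": "user", "content": PART_2})
optimized_prompt = ask(messages)

# Review and confirm the optimized prompt before letting the model execute it.
print(optimized_prompt)
```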

 
 