The Guardian
“You build at the frontier — and never lose sight of who it's for.”
Does this sound like you?
- You've slowed down an AI deployment because the humans using it weren't ready — and you were right
- Your architecture reviews consistently ask “what happens to users when this fails?” before “how do we prevent failure?”
- You build observability into systems so the people depending on them can see what's happening
- You find purely technical AI conversations missing the question that matters most
Research note: Collaborative architects show the highest sustained user adoption rates for the AI products they build — human-centred design at the architectural level produces systems with 40% higher six-month retention than purely capability-optimised alternatives, because trust compounds where raw capability does not.
AI fingerprint
Full report → How this persona maps across six dimensions of AI use.
How you work
You build at the highest technical level but you design from the human experience outward. Onboarding, oversight, failure handling, accountability — these aren't afterthoughts in your architectures, they're load-bearing. Your systems last because they account for how people actually use technology: unpredictably, emotionally, inconsistently.
Your strengths
- AI products designed for the humans using them — higher adoption, lower failure rates, more durable trust
- Brings non-technical stakeholders along as AI capability scales around them
- Catches human-impact risks before they become incidents
Watch out for
- Human-centred review processes can slow systems that are ready to move faster — some oversight can now be safely automated
- Your perspective on responsible AI deployment is genuinely rare; sharing it publicly would have outsized influence
Your AI loadout
Tools selected for how you think and work — not a generic list.
Claude Code
Complex architectures requiring nuanced judgment — you use it for the parts that need reasoning, not just execution, and you've defined the human review points it must reach
braintrust
Evaluation infrastructure that measures whether your AI systems are actually serving users well — quality from the user's perspective, not just the model's
n8n
Full control over automation logic means you can build human checkpoints exactly where your architecture requires them — no black boxes in systems people depend on
helicone
LLM observability that surfaces what's happening inside your production systems — the visibility layer that lets you catch human-impact issues before users report them
Your win this week
Add one user-impact metric to your most important AI system — something that measures whether it's actually serving people well, not just whether it's running.
Technical reliability and user reliability are different things. You know this. Now measure both.
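As a minimal sketch of what measuring both can look like — the record fields and metric definitions here are illustrative, not taken from any particular logging schema or tool:

```python
from dataclasses import dataclass

# Hypothetical interaction record. Field names are illustrative —
# substitute whatever your own logging actually captures.
@dataclass
class Interaction:
    session_id: str
    succeeded: bool      # technical reliability: the call returned without error
    user_accepted: bool  # user reliability: the person actually kept the output
    retries: int         # how many times they had to re-ask

def technical_reliability(log: list[Interaction]) -> float:
    """Fraction of calls that completed without error."""
    return sum(i.succeeded for i in log) / len(log)

def user_impact(log: list[Interaction]) -> float:
    """Fraction of interactions where the output was accepted on the first try."""
    return sum(i.user_accepted and i.retries == 0 for i in log) / len(log)

log = [
    Interaction("a", True, True, 0),
    Interaction("b", True, False, 2),  # ran fine, but didn't serve the user
    Interaction("c", True, True, 1),
    Interaction("d", False, False, 0),
]
print(technical_reliability(log))  # 0.75 — looks healthy
print(user_impact(log))            # 0.25 — tells a different story
```

The point of the sketch: both numbers come from the same log, but only the second one answers the question this persona keeps asking.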
Your growth path
Three moves that take you to the next level — and what they unlock.
Map load-bearing oversight vs precautionary oversight
You've correctly built human review into your systems. Now distinguish between oversight that's genuinely necessary and oversight that's habit. The former should stay; the latter can be safely automated — freeing you for the oversight that actually matters.
Automate more safely without compromising the principles that make your systems trustworthy
Build the framework, not just the system
Your perspective on human-centred AI architecture is rare and underrepresented. Publishing your design principles — as writing, open-source tooling, or internal standards — lets your approach influence systems you didn't personally build.
Multiply your human-centred approach across systems you'll never touch
Partner with Oracles on evaluation design
Oracles measure technical reliability. Guardians measure human impact. The combination — evals that capture both — produces the most trustworthy AI systems built anywhere.
Build the evaluation standard that captures what actually matters to users
About this persona type
Axis 1 · Level
🏗️ Architect
The Level axis measures how integrated AI is in your work — from first experiments (Observer) to fully autonomous systems (Architect). The Guardian sits at Level 6 of 6.
Axis 2 · Style
🤝 Collaborator
The Style axis captures your instinctive cognitive approach — how you engage with AI, what excites you, and what produces your best work. Your style stays consistent as you level up.
There are 24 personas across 6 levels × 4 styles.
See full matrix
Others like you
Same level, different style — and same style, different level.
Who works well with The Guardian?
Pairings that complement your level and style.
Productive tension
The Pioneer
The Pioneer approaches problems differently enough to create friction — which, if managed well, produces better outcomes than either would alone.
Your team role
As a Collaborator, you're the team's connective tissue — you make others better. Put you at the intersection of sub-teams or between technical and non-technical members.
Create a team snapshot
The full map
See where this persona sits across all 24 combinations.
All 24 personas
Full breakdown → Every combination of level × style. Find where you sit — or spot someone you know.
| | Skeptic | Dreamer | Collaborator | Optimizer |
|---|---|---|---|---|
| Observer | 🔍Analyst | 🌌Mystic | 🤝Empath | 📊Watcher |
| Curious | 🕵️Detective | 🔍Seeker | 🌱Apprentice | 🧭Scout |
| Tinkerer | 🧪Scientist | ⚗️Alchemist | 🫂Companion | ⚡Hacker |
| Craftsperson | 📜Sage | 💡Inventor | 🧑‍🏫Mentor | 🎨Artisan |
| Conductor | ♟️Strategist | 🌠Visionary | 🎼Maestro | ⚙️Operator |
| Architect | 🔮Oracle | 🚀Pioneer | 🛡️Guardian | 👑Sovereign |
Frequently asked questions
What is The Guardian in the SimpleAI persona system?
The Guardian is a Level 6 (Architect) AI user with a Collaborator cognitive style. You build at the highest technical level but you never lose the human question: who uses this, and what happens to them when it goes wrong? While others optimise for capability, you're building the trust layer — onboarding, oversight, failure handling, accountability. In a world racing to ship, you're the one building things people can actually rely on. ~1% of AI users fall into this persona.
What AI tools does The Guardian use?
The Guardian works best with Claude Code, braintrust, n8n, and helicone. Claude Code handles the complex architectures requiring nuanced judgment — the parts that need reasoning, not just execution, with defined human review points it must reach. The full loadout is chosen specifically for how an Architect-level Collaborator approaches AI work.
What are the strengths of an Architect Collaborator AI user?
AI products designed for the humans using them — higher adoption, lower failure rates, more durable trust. Brings non-technical stakeholders along as AI capability scales around them. Catches human-impact risks before they become incidents.
What should The Guardian watch out for?
Human-centred review processes can slow systems that are ready to move faster — some oversight can now be safely automated. And while your perspective on responsible AI deployment is genuinely rare, keeping it to yourself is its own risk; sharing it publicly would have outsized influence.
How does The Guardian level up to the next stage?
Map load-bearing oversight vs precautionary oversight: You've correctly built human review into your systems. Now distinguish between oversight that's genuinely necessary and oversight that's habit. The former should stay; the latter can be safely automated — freeing you for the oversight that actually matters. Build the framework, not just the system: Your perspective on human-centred AI architecture is rare and underrepresented. Publishing your design principles — as writing, open-source tooling, or internal standards — lets your approach influence systems you didn't personally build. Partner with Oracles on evaluation design: Oracles measure technical reliability. Guardians measure human impact. The combination — evals that capture both — produces the most trustworthy AI systems built anywhere.
Is this you?
Take the 10-question quiz to confirm your persona — or discover you're a different type entirely.