🛡️

The Guardian

You build at the frontier — and never lose sight of who it's for.

🏗️ Level 6 · Architect · 🤝 Collaborator

The Human-Centred Architect

~1% of AI users

Does this sound like you?

You've slowed down an AI deployment because the humans using it weren't ready — and you were right

Your architecture reviews consistently ask 'what happens to users when this fails?' before 'how do we prevent failure?'

You build observability into systems so the people depending on them can see what's happening

You find purely technical AI conversations missing the question that matters most

Research note: Collaborative architects show the highest sustained user adoption rates for the AI products they build — human-centred design at the architectural level produces systems with 40% higher 6-month retention vs. purely capability-optimised alternatives, because trust compounds where reliability does not.

AI fingerprint

Full report →

How this persona maps across six dimensions of AI use.

Depth 8/10
Analysis 9/10
Creation 8/10
Speed 8/10
Automation 9/10
Breadth 9/10

How you work

You build at the highest technical level but you design from the human experience outward. Onboarding, oversight, failure handling, accountability — these aren't afterthoughts in your architectures, they're load-bearing. Your systems last because they account for how people actually use technology: unpredictably, emotionally, inconsistently.

Your strengths

  • Designs AI products for the humans using them — higher adoption, lower failure rates, more durable trust
  • Brings non-technical stakeholders along as AI capability scales around them
  • Catches human-impact risks before they become incidents

Watch out for

  • Human-centred review processes can slow systems that are ready to move faster — some oversight can now be safely automated
  • Your perspective on responsible AI deployment is genuinely rare; sharing it publicly would have outsized influence

Your AI loadout

Tools selected for how you think and work — not a generic list.

Top pick

Claude Code

Complex architectures requiring nuanced judgment — you use it for the parts that need reasoning, not just execution, and you've defined the human review points it must reach

Braintrust

Evaluation infrastructure that measures whether your AI systems are actually serving users well — quality from the user's perspective, not just the model's

n8n

Full control over automation logic means you can build human checkpoints exactly where your architecture requires them — no black boxes in systems people depend on

Helicone

LLM observability that surfaces what's happening inside your production systems — the visibility layer that lets you catch human-impact issues before users report them

Your win this week

Add one user-impact metric to your most important AI system — something that measures whether it's actually serving people well, not just whether it's running.

Technical reliability and user reliability are different things. You know this. Now measure both.
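One minimal sketch of what measuring both could look like, assuming you log each interaction with a technical outcome and a user-visible outcome. The `Interaction` fields and metric names here are hypothetical illustrations, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    completed: bool        # technical reliability: did the system finish without error?
    user_goal_met: bool    # user reliability: did the person get what they came for?

def reliability_metrics(log: list[Interaction]) -> dict:
    """Report technical and user success rates side by side."""
    total = len(log)
    if total == 0:
        return {"technical_success_rate": None, "user_success_rate": None}
    return {
        "technical_success_rate": sum(i.completed for i in log) / total,
        "user_success_rate": sum(i.user_goal_met for i in log) / total,
    }

log = [
    Interaction(completed=True, user_goal_met=True),
    Interaction(completed=True, user_goal_met=False),  # ran fine, missed the user's goal
    Interaction(completed=False, user_goal_met=False),
    Interaction(completed=True, user_goal_met=True),
]
print(reliability_metrics(log))
# → {'technical_success_rate': 0.75, 'user_success_rate': 0.5}
```

The gap between the two numbers is the signal: a system can be 75% technically reliable while serving users well only half the time.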


Your growth path

Three moves that take you to the next level — and what they unlock.

1

Map load-bearing oversight vs precautionary oversight

You've correctly built human review into your systems. Now distinguish between oversight that's genuinely necessary and oversight that's habit. The former should stay; the latter can be safely automated — freeing you for the oversight that actually matters.

Automate more safely without compromising the principles that make your systems trustworthy
2

Build the framework, not just the system

Your perspective on human-centred AI architecture is rare and underrepresented. Publishing your design principles — as writing, open-source tooling, or internal standards — lets your approach influence systems you didn't personally build.

Multiply your human-centred approach across systems you'll never touch
3

Partner with Oracles on evaluation design

Oracles measure technical reliability. Guardians measure human impact. The combination — evals that capture both — produces the most trustworthy AI systems built anywhere.

Build the evaluation standard that captures what actually matters to users

About this persona type

Axis 1 · Level

🏗️ Architect

The Level axis measures how integrated AI is in your work — from first experiments (Observer) to fully autonomous systems (Architect). The Guardian sits at Level 6 of 6.

Axis 2 · Style

🤝 Collaborator

The Style axis captures your instinctive cognitive approach — how you engage with AI, what excites you, and what produces your best work. Your style stays consistent as you level up.

There are 24 personas across 6 levels × 4 styles.

See full matrix

Others like you

Same level, different style — and same style, different level.

Who works well with The Guardian?

Pairings that complement your level and style.

Your team role

As a Collaborator, you're the team's connective tissue — you make others better. Put you at the intersection of sub-teams or between technical and non-technical members.

Create a team snapshot

The full map

See where this persona sits across all 24 combinations.

Frequently asked questions

What is The Guardian in the SimpleAI persona system?

The Guardian is a Level 6 (Architect) AI user with a Collaborator cognitive style. You build at the highest technical level but you never lose the human question: who uses this, and what happens to them when it goes wrong? While others optimise for capability, you're building the trust layer — onboarding, oversight, failure handling, accountability. In a world racing to ship, you're the one building things people can actually rely on. ~1% of AI users fall into this persona.

What AI tools does The Guardian use?

The Guardian works best with Claude Code, Braintrust, n8n, and Helicone. The top pick, Claude Code, handles complex architectures requiring nuanced judgment — you use it for the parts that need reasoning, not just execution, and you've defined the human review points it must reach. The full loadout is chosen specifically for how an Architect-level Collaborator approaches AI work.

What are the strengths of an Architect Collaborator AI user?

AI products designed for the humans using them — higher adoption, lower failure rates, more durable trust. Brings non-technical stakeholders along as AI capability scales around them. Catches human-impact risks before they become incidents.

What should The Guardian watch out for?

Human-centred review processes can slow systems that are ready to move faster — some oversight can now be safely automated. Your perspective on responsible AI deployment is genuinely rare; sharing it publicly would have outsized influence.

How does The Guardian level up to the next stage?

Map load-bearing oversight vs precautionary oversight: You've correctly built human review into your systems. Now distinguish between oversight that's genuinely necessary and oversight that's habit. The former should stay; the latter can be safely automated — freeing you for the oversight that actually matters. Build the framework, not just the system: Your perspective on human-centred AI architecture is rare and underrepresented. Publishing your design principles — as writing, open-source tooling, or internal standards — lets your approach influence systems you didn't personally build. Partner with Oracles on evaluation design: Oracles measure technical reliability. Guardians measure human impact. The combination — evals that capture both — produces the most trustworthy AI systems built anywhere.

🛡️

Is this you?

Take the 10-question quiz to confirm your persona — or discover you're a different type entirely.