AI agent personas: service design, updated
A practical guide to mapping AI agents and humans together, before autonomy makes your UX blurry.
Humans have company: agent users
AI agents have joined human users inside digital ecosystems. That shift makes parallel optimization for both user types table stakes, especially for products moving toward Level 3 to Level 5 autonomy.
This is where nuanced design thinking pays off. Agentic systems raise new questions about oversight, permissions, and control; the good news is we already have proven ways to model complexity without losing the human-centered plot.
Empathy maps and personas still do what they have always done: they turn ambiguity into clarity. Now they do it across a blended ecosystem of humans, agents, and the products that connect them.
In my work as a product strategy consultant, I’m currently helping build an agentic AI platform designed to analyze, verify and remediate user experience issues before they become fuel for churn. That means adapting the same service-design empathy and persona techniques I’ve used for years, then extending them to represent the interplay between humans, agents and product surfaces, with explicit lines of visibility and control.
This post shares practical approaches that scale. They also leave room for what matters most in agentic systems: evaluation, training and governance that stay grounded in user intent.
From here, the pattern is clear. Agents have evolved from assistants to semi-autonomous participants that hold state and make choices. They also introduce new failure modes. The symptoms are familiar: confusion, loss of control and “why did it do that?”
Nuanced UX thinking is how product teams tame complexity with clarity. Here’s what extending the craft looks like today.
Download the companion PDF, a reusable blueprint for building human and AI agent archetypes, complete with templates and source links. It’s FREE!
Empathy maps: same grid, smarter labels
Classic empathy maps translate messy qualitative inputs into patterns you can act on.
Chris Butler’s ML-era framing still holds: focus on what a system does, senses, says, thinks, and what we might be tempted to call “feels.”
For AI agents, two quadrants transfer cleanly: Says and Does. And two subtle evolutions keep the exercise useful while avoiding anthropomorphism:
Thinks → Thinks/Infers (or simply Infers)
Feels → Senses (or simply Senses)
Use these definitions as guardrails:
Says: what the agent outputs, including refusals, tool results, and error states
Does: actions it takes, including API calls, retrieval, updates, and escalation
Infers: underlying heuristics and decision logic; tag as inferred
Senses: the input signals it processes, including context, telemetry, and feedback
One discipline makes this powerful: Says + Does = observed; Infers + Senses = inferred (when evidence supports it). Cluster the evidence. Then crystallize it into bios, quotes, motivations and frustrations.
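To make the observed-versus-inferred discipline concrete, here is a minimal sketch of an agent empathy map as a data structure. The class and field names (Observation, untagged_inferences, and so on) are illustrative inventions, not part of any established framework:

```python
from dataclasses import dataclass, field
from enum import Enum

class Quadrant(Enum):
    SAYS = "says"      # observed: outputs, refusals, tool results, error states
    DOES = "does"      # observed: API calls, retrieval, updates, escalation
    INFERS = "infers"  # inferred: underlying heuristics and decision logic
    SENSES = "senses"  # inferred: input signals, context, telemetry, feedback

# Says + Does = observed; Infers + Senses = inferred
OBSERVED = {Quadrant.SAYS, Quadrant.DOES}

@dataclass
class Observation:
    quadrant: Quadrant
    note: str
    evidence: list[str] = field(default_factory=list)  # logs, traces, transcripts

    @property
    def inferred(self) -> bool:
        return self.quadrant not in OBSERVED

@dataclass
class AgentEmpathyMap:
    agent_name: str
    observations: list[Observation] = field(default_factory=list)

    def untagged_inferences(self) -> list[Observation]:
        """Inferred entries with no supporting evidence: flag these for review."""
        return [o for o in self.observations if o.inferred and not o.evidence]
```

The payoff is the last method: anything in the Infers or Senses quadrants that lacks supporting evidence gets surfaced before it hardens into a bio, quote or motivation.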
Personas: demographics for humans, systems for agents
Human personas need demographics because those attributes shape constraints and intent. Your template should reflect that structure by default.
Agent personas need the functional equivalent of demographics: the LLM(s), feedback mechanisms, training data and more. These fields explain why an agent behaves the way it does, how it learns and how far it can be trusted.
This is also where UX and model design must meet. System prompts, tools and evaluations are part of the experience. Treat them like product surfaces.
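One way to picture the parallel templates is as paired schemas. The field names below (models, feedback_mechanisms, system_prompt_ref and the rest) are hypothetical placeholders for whatever your own template tracks, not a prescribed standard:

```python
from dataclasses import dataclass, field

@dataclass
class HumanPersona:
    name: str
    role: str
    goals: list[str]
    frustrations: list[str]
    # Demographic attributes: they shape constraints and intent.
    demographics: dict[str, str] = field(default_factory=dict)

@dataclass
class AgentPersona:
    name: str
    goals: list[str]
    frustrations: list[str]  # observed failure modes, e.g. ambiguous input
    # The "demographic" equivalents for an agent:
    models: list[str] = field(default_factory=list)        # the LLM(s) in play
    training_data: list[str] = field(default_factory=list)
    feedback_mechanisms: list[str] = field(default_factory=list)
    # Prompts, tools and evaluations treated as product surfaces:
    system_prompt_ref: str = ""
    tools: list[str] = field(default_factory=list)
    evaluations: list[str] = field(default_factory=list)
```

The mirrored shape is the point: each agent field answers the same question its human counterpart does, only with system facts instead of lived context.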
A playbook for Level 3–5 autonomy
The autonomy ladder is a design choice. The framework from Feng, McDonald and Zhang describes five levels, defined by the user’s role: operator, collaborator, consultant, approver and observer.
If you’re building toward Levels 3 to 5, take this approach:
Revisit your critical journeys each time an agent enters the flow.
Spot the lonely moments: the cognitive, logistical, and creative burden points.
Assign agent strengths to those moments, with explicit permissions and reversibility.
Bring it to life: narrate the “before/after” so stakeholders feel the value.
Then balance autonomy with control. UXmatters nails the core tension: as systems become more agentic, design must define the rules of engagement, including transparency, oversight and safe intervention.
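As a sketch, the ladder and its rules of engagement can be expressed as a small policy table. The approval and reversibility mappings below are illustrative assumptions for a hypothetical product, not definitions from Feng, McDonald and Zhang:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    # The five levels, named for the user's role at each one.
    L1_OPERATOR = 1
    L2_COLLABORATOR = 2
    L3_CONSULTANT = 3
    L4_APPROVER = 4
    L5_OBSERVER = 5

# Hypothetical rules of engagement: does an action need human sign-off,
# and must the agent stick to reversible actions?
POLICY = {
    Autonomy.L3_CONSULTANT: {"requires_approval": True,  "reversible_only": True},
    Autonomy.L4_APPROVER:   {"requires_approval": True,  "reversible_only": False},
    Autonomy.L5_OBSERVER:   {"requires_approval": False, "reversible_only": False},
}

def can_act(level: Autonomy, approved: bool, reversible: bool) -> bool:
    """Gate an agent action against the level's oversight rules.
    Unknown levels fall back to the most restrictive policy."""
    rules = POLICY.get(level, {"requires_approval": True, "reversible_only": True})
    if rules["requires_approval"] and not approved:
        return False
    if rules["reversible_only"] and not reversible:
        return False
    return True
```

Writing the policy down like this forces the design conversation the UXmatters piece calls for: every level gets explicit answers about oversight and safe intervention, rather than leaving them implicit in prompt text.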
Closing inference: name your new users
Personas turn scattered signals into shared direction. They translate governance into plain language.
Whenever agents are part of your product’s social graph, give them names, constraints, goals and pain points. Then design the shared interface as if both humans and agents must succeed.
Your future UX depends on it.