Jun 11, 2025
Designing for the Invisible User
Learn how Agent Experience (AX) reshapes UX by designing for AI agents. Discover principles and examples for building human-centered, AI-ready systems.
Editor’s Note: This blog is adapted from a talk by Robin Mathew, designer at GeekyAnts. In this insightful session, he introduced the concept of designing for AI agents, or invisible users, who interact with systems on behalf of humans. Drawing from examples like Spotify’s AI DJ and tools like Notion, he explored how AX, or Agent Experience, challenges designers to think beyond screens and toward structured, system-level thinking.
Hello, I am Robin. I work as a designer at GeekyAnts, and today I am talking about something that I believe is going to be increasingly important in the way we build products—designing for the invisible user.
When we talk about users, we usually think of people—real human beings tapping on screens, navigating flows, and providing feedback. But there is a new kind of user quietly entering our systems: AI agents. These are not just assistants like Gemini or the one inside Notion. They are systems that read our interfaces, interact with our content, make decisions, and act on behalf of people. And as designers, we need to start thinking about how we build for them.
Who Is the Invisible User?
Let me explain what I mean. Imagine an AI that reads your interface and decides whether to summarise a document, draft an email, or even schedule a meeting. The user is still human, but the agent becomes a middle layer. It needs to understand your product deeply enough to make smart, context-aware decisions.
This is where the concept of AX, or Agent Experience, comes in. AX is not a replacement for UX. It is an extension of it. It asks us to consider what the system looks like, not only to humans but also to machines that must interpret and act within it. Just like we design screens and flows for people, we now need to structure our systems so they are readable, navigable, and actionable for agents.
AX Is Not a New Discipline
To be clear, AX is not separate from UX. We are not removing people from the equation. We are expanding our thinking to include another actor—the AI agent. The end user still matters just as much. AX helps us bridge the gap between the system, the agent, and the human user.
This is not the first time UX has expanded like this. Developer Experience (DX), for example, encouraged us to think about how developers interact with APIs and documentation. Similarly, AX makes us consider structured logic, metadata, and system clarity. We begin to ask questions like: Can the agent access the right data? Does it have the context to make a good decision? Is it interacting with a stable, repeatable system?
The Example of Spotify AI DJ
One of the clearest examples of this kind of thinking is the Spotify AI DJ. On the surface, it seems like a smarter playlist. But under the hood, it is doing something much more sophisticated. It uses rich behavioural data, like your listening habits, time of day, and song skips, to curate music that matches your mood. That is possible because of deep tagging (genres, moods), contextual awareness, and the ability to adapt over time.
The design work behind that is not visual. It is structural. It is about organising data in a way that the AI can interpret and act on. That is what AX asks us to consider.
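To make that concrete, here is a minimal sketch of what "deep tagging" could look like as structured data. The field names and selection logic are invented for illustration; this is not Spotify's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    genres: list[str]   # deep tagging: a track can carry several genres
    moods: list[str]    # e.g. "calm", "energetic"
    skip_rate: float    # fraction of plays the user skipped

def pick_for_mood(tracks: list[Track], mood: str) -> list[Track]:
    """Return mood-matching tracks the user rarely skips, least-skipped first."""
    matches = [t for t in tracks if mood in t.moods and t.skip_rate < 0.5]
    return sorted(matches, key=lambda t: t.skip_rate)

library = [
    Track("Sunrise", ["ambient"], ["calm"], 0.1),
    Track("Overdrive", ["electronic"], ["energetic"], 0.2),
    Track("Drift", ["ambient", "jazz"], ["calm"], 0.7),  # often skipped
]
print([t.title for t in pick_for_mood(library, "calm")])  # → ['Sunrise']
```

The point is not the algorithm, which is trivial here, but that an agent can only curate like this if the data is organised for it in advance.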
Principles of AX Design
There are a few key principles that guide how we design for agents:
- Human-Centred First: AX still begins with human users. The agent exists to serve them.
- Agent Accessibility: The agent must have access to well-documented APIs, structured metadata, and context-aware surfaces.
- Contextual Alignment: An agent cannot make good decisions without understanding the context in which it operates, including visual, semantic, and situational factors.
- Predictable Patterns: Safe, repeatable flows (like payments) must be clearly defined for agents to act reliably.
- Agent Differentiation: The system must know whether it is serving a person or an agent.
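As a rough sketch of the last principle, a system can branch on whether a request comes from a person or an agent, returning structured data and explicit actions to agents and a rendered view to humans. The `X-Agent-Id` header used below is a hypothetical convention, not an established standard.

```python
def respond(request_headers: dict, document: dict) -> dict:
    """Serve structured data to agents and a rendered view to humans.

    The X-Agent-Id header is an illustrative convention, not a standard.
    """
    if "X-Agent-Id" in request_headers:
        # Agents get the full structured document plus the actions
        # they are allowed to take on it.
        return {
            "format": "structured",
            "data": document,
            "actions": ["summarise", "draft_email", "schedule_meeting"],
        }
    # Humans get a screen-oriented rendering instead.
    return {"format": "html", "body": f"<h1>{document['title']}</h1>"}

print(respond({"X-Agent-Id": "notion-ai"}, {"title": "Q3 Plan"})["format"])
# → structured
```

Differentiating the two callers is what makes the other principles enforceable: predictable flows and agent-accessible metadata only help if the system knows when to serve them.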
The Role of the Designer Is Changing
As agents become more integrated into products, our role as designers is shifting. We are not just screen thinkers anymore. We are system thinkers.
You may have seen emerging roles like “Intent Architect.” This is someone who defines how an agent should behave in a specific use case, like summarising a blog post or recommending actions based on user input. It requires us to think about structure, hierarchy, interaction, and response patterns in a much deeper way.
This is not just about adding one more design deliverable. It is about designing for intent, not just appearance.
A Small Workshop, A Big Realisation
At the end of my talk, I walked everyone through a short activity. We tried to reverse-engineer how an AI agent might behave inside Notion when summarising a document. Where should it step in? When should it trigger? How should it signal that the summary came from the agent and not the user? And how should the user confirm or correct the agent's output?
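Those questions can be sketched as data: a summary record that carries its provenance (agent, not user) and stays in a pending state until a human confirms or corrects it. All names here are illustrative, not Notion's actual model.

```python
from dataclasses import dataclass

@dataclass
class AgentSummary:
    text: str
    author: str = "agent"    # signals provenance: the agent wrote this
    status: str = "pending"  # requires human confirmation before it sticks

    def confirm(self) -> None:
        """User accepts the agent's summary as-is."""
        self.status = "confirmed"

    def correct(self, new_text: str) -> None:
        """User edits the summary, so authorship becomes shared."""
        self.text = new_text
        self.author = "agent+user"
        self.status = "confirmed"

s = AgentSummary("Doc covers Q3 goals and open risks.")
print(s.author, s.status)  # → agent pending
s.correct("Doc covers Q3 goals, risks, and owners.")
print(s.author, s.status)  # → agent+user confirmed
```

Even this tiny model forces the design questions above into the open: provenance, trigger, and confirmation all have to live somewhere in the system.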
Through this, we began to see the complexity of AX and how human it still is. We are not designing for AI in isolation. We are designing systems where AI helps people, quietly, efficiently, and often invisibly.
The Takeaway
AI is not the end of design—it’s the beginning of a new kind of critical thinking.
As designers, we must recognise that not all users are visible. Increasingly, some of them are agents—working silently in the background. When we design with that reality in mind, and still keep human needs at the centre, we create systems that feel seamless, intelligent, and surprisingly human.
AX is still a new space, and this talk only scratches the surface. There’s a layered complexity in implementation, APIs, and development workflows. Here, I’ve focused primarily on the design lens—but the conversation has to start somewhere.