15 August 2025
Agent Interaction Guidelines (AIG) outline the emerging principles for designing how humans and AI agents collaborate. As agents reshape how software is planned, built, reviewed, and deployed, producing work at high volume and shifting human effort toward orchestration, context engineering, and review, design must adapt. These guidelines offer a foundation for creating agent interactions that fit naturally into human workflows, and a shared starting point for shaping this new era of human-computer collaboration.
When humans and agents work side by side, humans need instant certainty about who they are interacting with. The agent must signal its identity clearly so that it can never be mistaken for a person.
By default, agents should be able to work through existing UI patterns and standard actions of the platform they operate in.
Silence leads to uncertainty. When invoked, an agent should provide immediate but unobtrusive feedback to reassure the user that it has received the request.
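One way to realize this principle is to separate the acknowledgment from the work itself: the agent confirms receipt synchronously, then carries out the task in the background. The sketch below is purely illustrative; the names (`Ack`, `invokeAgent`) are assumptions, not part of any specific framework.

```typescript
// Hypothetical sketch: acknowledge an invocation immediately,
// before the agent's (potentially long-running) work begins.
type Ack = { received: true; requestId: string };

function acknowledge(requestId: string): Ack {
  // Returned synchronously so the UI can show a subtle "received"
  // state while the real task proceeds in the background.
  return { received: true, requestId };
}

async function invokeAgent(
  requestId: string,
  task: () => Promise<string>,
  onAck: (ack: Ack) => void,
): Promise<string> {
  onAck(acknowledge(requestId)); // immediate, unobtrusive feedback
  return task();                 // the actual work may take much longer
}
```

The key design choice is that the acknowledgment carries no result, only confirmation, so it can be rendered instantly without blocking on the agent.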
Agents should clearly indicate whether they’re thinking, waiting for input, executing, or finished working. Humans should be able to understand what’s happening at a glance and, when needed, inspect the underlying reasoning, tool calls, prompts, and decision logic.
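The four states named above can be modeled explicitly, with a glanceable summary for the default view and an optional trace for inspection. This is a minimal sketch under assumed names (`AgentState`, `StatusUpdate`, `glance`), not a prescribed API.

```typescript
// Hypothetical model of the visible agent states described above.
type AgentState = "thinking" | "waiting_for_input" | "executing" | "finished";

interface StatusUpdate {
  state: AgentState;
  summary: string;   // glanceable, one-line description
  trace?: string[];  // optional drill-down: reasoning, tool calls, prompts
}

function glance(update: StatusUpdate): string {
  // What the human sees at a glance; the trace stays one click away.
  const labels: Record<AgentState, string> = {
    thinking: "Thinking...",
    waiting_for_input: "Waiting for your input",
    executing: "Working...",
    finished: "Done",
  };
  return `${labels[update.state]}: ${update.summary}`;
}
```

Keeping the trace optional mirrors the guideline: the summary is always visible, while the underlying reasoning is available on demand rather than forced into view.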
When asked to disengage, an agent should step back immediately, and only re-engage once it has received a clear signal to do so.
There should be a clear delegation model between humans and agents. An agent can carry out tasks, but the final responsibility should always remain with a human.
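One possible shape for such a delegation model is propose-then-approve: the agent produces a proposal, and nothing is applied without an explicit human decision. The types and function below (`Proposal`, `Decision`, `finalize`) are hypothetical, sketched only to make the principle concrete.

```typescript
// Hypothetical delegation model: the agent proposes, the human decides,
// and responsibility for the outcome stays with the human.
interface Proposal {
  action: string;
  proposedBy: "agent";
}

interface Decision {
  approved: boolean;
  decidedBy: "human";
}

function finalize(proposal: Proposal, decision: Decision): string {
  // Without an explicit human approval, the agent's work is never applied.
  if (!decision.approved) {
    return `rejected: ${proposal.action}`;
  }
  return `applied: ${proposal.action}`;
}
```

Encoding `proposedBy: "agent"` and `decidedBy: "human"` in the types makes the division of roles unmistakable: an agent can never occupy the approving side of the exchange.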