When a business buys an AI agent, it receives an API endpoint and a chat widget. Yet the agent's visual design, including how it looks, how it introduces itself, how it presents information, and how it discloses its nature, directly affects whether customers trust it enough to engage. This guide covers five design decisions that influence agent performance, drawing on published research from Nielsen Norman Group, Salesforce, and Gartner, as well as deployment patterns observed across major AI platform launches.
The Gap Between Function and Perception
An AI agent can answer questions correctly and still fail commercially if customers do not trust the interface enough to ask those questions. Salesforce's State of the AI Connected Customer report (a double-blind survey of 15,015 consumers conducted July-August 2024 by YouGov) found that 72% of consumers trust companies less than they did a year ago, 60% say advances in AI make trust even more important, and nearly 75% of consumers want to know if they are communicating with an AI agent.
Meanwhile, a 2024 Gartner survey found that 64% of customers would prefer companies not use AI for customer service, and 53% would consider switching to a competitor if they learned AI was being used. These are not objections to capability. They are objections to experience and trust.
These numbers suggest that the visual presentation of an AI agent is not cosmetic. It is a trust signal that affects whether the interaction begins, continues, or ends. The design decisions below are ordered by their proximity to the customer's first interaction.
Five Design Decisions
1. Identity and Avatar
Major AI platforms have moved toward giving agents distinct visual identities. Intercom's bot design principles (Intercom is an AI customer service vendor) recommend that bot messages be “styled differently and clearly labeled as non-human,” and that each bot have a customized name and avatar that makes it clear the user is interacting with a bot rather than a person.
The pattern across platforms is consistent: agents with recognizable identities are treated differently from anonymous chat widgets. HubSpot's 2024 State of Service report (HubSpot is a CRM and chatbot vendor) found that 86% of CRM leaders say AI helps make customer interactions feel more personalized, and that personalization begins at the identity layer. However, the magnitude of named-agent effects on engagement has not been publicly quantified in controlled studies.
For small businesses, this means the agent benefits from a name, an icon or avatar, and a color scheme that matches the business brand. A labeled agent establishes a more specific expectation than a generic prompt, though the impact of this is not well-studied outside vendor reports.
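The identity layer can be made concrete as a small configuration object. The shape below is a sketch; the field names, the agent name "Mia", and the asset paths are all illustrative, not any vendor's actual API:

```typescript
// Hypothetical identity configuration for a deployed agent.
// Field names and values are illustrative placeholders.
interface AgentIdentity {
  name: string;          // a specific name, not "Chatbot"
  avatarUrl: string;     // brand-consistent icon, not a stock robot image
  accentColor: string;   // matches the host site's palette
  isBotLabeled: boolean; // explicit non-human label, per Intercom's guidance
}

const identity: AgentIdentity = {
  name: "Mia",
  avatarUrl: "/assets/mia-avatar.svg",
  accentColor: "#0b5fff",
  isBotLabeled: true,
};
```

Keeping identity as data rather than hard-coding it into the widget makes it easy to match each deployment to the host brand.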
2. Conversation Interface Design
The default chat widget shipped by most platforms is deliberately generic: it communicates nothing about the business deploying it. Premium agent deployments customize the interface to align with the business brand. Apple's Human Interface Guidelines establish a minimum tap target of 44x44 points for touch interfaces, a standard that applies directly to chat widget buttons and suggestion chips.
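The HIG minimum can be enforced directly in the widget's sizing logic. A minimal sketch, assuming a hypothetical `tapTargetSize` helper (the function and constant names are illustrative):

```typescript
// Apple's Human Interface Guidelines specify a 44x44pt minimum tap target.
const MIN_TAP_PT = 44;

// Clamp a requested size so no chip or button renders below the minimum.
function tapTargetSize(requestedPt: number): number {
  return Math.max(requestedPt, MIN_TAP_PT);
}
```

A small chip styled at 36pt would be silently bumped up to 44pt, while larger elements pass through unchanged.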
Observed customization patterns across platforms include brand-colored header bars, custom fonts, contextual greeting messages, and suggested question chips. Intercom's Messenger (vendor documentation) and HubSpot's chatbot UX guide (vendor blog) both provide brand customization APIs, which suggests that customization has been internally validated as improving performance, though exact metrics are not publicly available.
The minimum viable customization includes brand colors, a contextual greeting that changes by page, and pre-populated question suggestions. These changes transform the widget from a third-party tool to an integrated business feature without requiring custom development.
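Of those three, the contextual greeting is the one that needs logic rather than styling. One way to sketch it, assuming hypothetical page paths and copy that a real deployment would load from configuration:

```typescript
// Sketch of per-page greeting selection. Paths and message copy are
// placeholders; a real deployment would read these from CMS config.
function greetingFor(path: string): string {
  if (path.startsWith("/pricing")) {
    return "Questions about plans or billing? Ask away.";
  }
  if (path.startsWith("/portfolio")) {
    return "Curious about any of the projects on this page?";
  }
  return "Hi, I'm the site assistant. How can I help?";
}
```

The same branching can drive the pre-populated question chips, so greeting and suggestions stay consistent on every page.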
3. Response Presentation
How the agent presents information affects comprehension and continued engagement. Three response patterns appear across higher-quality deployments:
Structured responses break complex answers into labeled sections. This mirrors how FAQ pages and knowledge bases present information. Nielsen Norman Group's research on chunking shows that breaking content into smaller, labeled sections helps users process and remember it more effectively. Their seminal study on how users read the web found that concise text improved usability by 58%, a scannable layout added 47%, and the combined effect of all improvements reached 124%.
Rich media responses embed images, maps, calendars, and interactive elements within the conversation. This reduces the number of steps between question and resolution. Intercom recommends (vendor guidance) keeping interactions native to the conversational flow, with every element embedded in the chat thread rather than redirecting users to external pages.
Progressive disclosure reveals information in layers. The first response is brief. The agent offers to expand on specific points. This pattern reduces cognitive load while making comprehensive information available on request. Nielsen Norman Group's layer-cake scanning pattern research supports this approach: users scan subheadings and expand only the sections relevant to their task.
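Progressive disclosure can be modeled as a brief summary plus labeled sections that are held back until requested. This is one possible data shape, not a standard; every name below is illustrative:

```typescript
// A layered answer: short summary first, expandable sections on request,
// following the NNGroup layer-cake scanning pattern.
interface LayeredAnswer {
  summary: string;
  sections: { heading: string; body: string }[];
}

// The first response shows only the summary and offers the section headings.
function firstResponse(answer: LayeredAnswer): string {
  const topics = answer.sections.map((s) => s.heading).join(", ");
  return `${answer.summary} I can say more about: ${topics}.`;
}

const shipping: LayeredAnswer = {
  summary: "Standard shipping takes 3-5 business days.",
  sections: [
    { heading: "express options", body: "Next-day delivery is available for a fee." },
    { heading: "international", body: "We ship internationally; times vary by country." },
  ],
};
```

The user sees one short message with the section headings as offers; the section bodies are only rendered when a heading is chosen.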
4. Transparency Indicators
Research suggests that disclosing an agent's AI nature builds trust, provided the disclosure is handled with specificity. Salesforce's consumer survey found that nearly 75% of consumers want to know if they are communicating with an AI agent. How the disclosure is worded matters: a clear statement of what the agent knows, and what it does not, sets the right expectations.
Intercom's design principles (vendor guidance) warn specifically against using “is-typing” indicators or artificial delays that mimic human behavior, as these create expectations the bot cannot meet. Transparency about AI nature, combined with specificity about the agent's training data, sets accurate expectations from the first message.
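An opening message can carry both disclosures at once. The template below is a sketch; the wording, function name, and parameters are placeholders, not any vendor's recommended copy:

```typescript
// Illustrative opening message: discloses AI nature and training scope,
// and names the escalation path for out-of-scope questions.
function openingDisclosure(agentName: string, scope: string): string {
  return (
    `Hi, I'm ${agentName}, an AI assistant. I'm trained on ${scope}, ` +
    `so I can answer questions about that. For anything else, ` +
    `I'll connect you with a person.`
  );
}
```

Pairing the AI label with a concrete scope ("our product docs and pricing") is more informative than a bare "I am a bot" badge.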
5. Contextual Placement
Where the agent appears on a page should change how it presents itself. An agent on a pricing page should open with pricing-related prompts. An agent on a portfolio page should reference the visible work. Nielsen Norman Group's eyetracking research shows that users scan for content relevant to their current task, meaning a context-aware agent that mirrors the page topic will align with natural reading behavior.
Gartner predicts that by 2029, agentic AI will autonomously resolve 80% of common customer service issues. Contextual placement is part of what makes that resolution possible: an agent that already knows the user is on a pricing page can skip the diagnostic step entirely.
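Context-aware prompts can reuse the same mechanism as contextual greetings: a lookup from page context to suggestion chips, with a safe default. The contexts and chip copy below are hypothetical examples:

```typescript
// Hypothetical mapping from page context to suggested question chips.
const chipsByContext: Record<string, string[]> = {
  pricing: ["Compare plans", "Is there a free trial?"],
  portfolio: ["Who built this project?", "What was the timeline?"],
  default: ["What do you offer?", "How do I get started?"],
};

// Unknown contexts fall back to the default chip set.
function chipsFor(context: string): string[] {
  return chipsByContext[context] ?? chipsByContext["default"];
}
```

Because the mapping is plain data, adding a new page context is a one-line change rather than a widget redeploy.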
The Video Connection
When a business creates a product video featuring their AI agent, the agent's visual design becomes the product being shown on screen. This creates an incentive to design the agent well before producing the video.
Screen recordings of a well-designed, functioning agent can be more compelling than designed mockups. When the video shows exactly what the customer will experience, it sets accurate expectations.
Production Checklist
Before deploying an AI agent, the following design elements are worth finalizing:
| Element | Requirement | Evidence Basis |
|---|---|---|
| Brand alignment | Agent colors, fonts, and avatar match the host business | Intercom, HubSpot vendor docs |
| Contextual greeting | Opening message changes based on the page | NNGroup eyetracking, Gartner agentic AI |
| Response formatting | Answers use structured blocks where appropriate | NNGroup chunking research (124% usability gain) |
| Identity disclosure | Agent identifies itself as AI and states its training data | Salesforce (75% want AI disclosure) |
| Escalation path | When the agent cannot answer, handoff to a human is smooth | Intercom bot design principles |
| Mobile optimization | Interface works on screens as small as 320px wide | Apple HIG (44pt touch targets) |
Limitations
This guide synthesizes patterns from platform documentation and published surveys rather than from controlled experiments conducted by our team. The production checklist reflects best practices observed across deployments, not empirically validated requirements. Individual results will vary based on industry, customer demographics, and implementation quality.
Methodology
This guide synthesizes design patterns observed across published platform documentation, UX research from Nielsen Norman Group, and deployment data from enterprise AI platforms. Patterns were identified through visual analysis of deployed agents across multiple industries, not controlled experiments.
Vendor Disclosure
Several sources cited in this guide are vendors in the AI chatbot and customer service space. Intercom sells AI customer service agents. HubSpot sells CRM and chatbot tools. Salesforce sells CRM and AI platforms. Their research and recommendations may reflect incentives to promote their own products. Where vendor-sourced claims appear, they are labeled as such. Nielsen Norman Group and Gartner are independent research organizations. Apple's Human Interface Guidelines are platform documentation, not marketing.
Sources
- Nielsen Norman Group. “How Users Read on the Web.” nngroup.com.
- Nielsen Norman Group. “How Chunking Helps Content Processing.” nngroup.com.
- Nielsen Norman Group. “Text Scanning Patterns: Eyetracking Evidence.” nngroup.com.
- Nielsen Norman Group. “The Layer-Cake Pattern of Scanning Content on the Web.” nngroup.com.
- Apple. “Human Interface Guidelines.” developer.apple.com.
- Salesforce. “State of the AI Connected Customer.” salesforce.com.
- Intercom. “Principles of Bot Design.” intercom.com.
- Intercom. “How to Improve Engagement with a Customer Service Chatbot.” intercom.com.
- HubSpot. “State of Service Report 2024.” hubspot.com.
- HubSpot. “How Chatbots Can Improve User Experience.” blog.hubspot.com.
- Gartner. “Agentic AI Will Autonomously Resolve 80% of Common Customer Service Issues by 2029.” gartner.com.
- Gartner. “64% of Customers Would Prefer Companies Not Use AI for Customer Service.” gartner.com.