Text App offers a comprehensive set of reports to help you track performance, improve customer service, and manage your team more efficiently. Whether you're just starting out or running a large support operation, these insights provide a clear view of agent and AI agent activity and the results they deliver.
Reports make it easy to monitor key metrics, such as chat and ticket volume, response and resolution times, satisfaction scores, and more.
Use these insights to identify issues, measure success, and continuously improve your support strategy.
Overview
Volume and activity metrics
Understanding the scale of customer engagement is essential for assessing support performance and planning resources.
The Total cases report gives a complete picture of overall demand by combining both chats and tickets during a selected time frame. The Total chats metric then breaks the chat portion down to show how many conversations were handled entirely by AI agents, entirely by agents, or through a combination of both.
Similarly, the New tickets and Tickets solved reports provide insight into how many support issues are being generated and resolved, while Tickets closed shows the total number of tickets marked as finalized. These metrics help teams monitor workload and identify trends in customer inquiries.
Customer satisfaction (CSAT) insights
To measure service quality, several CSAT-related reports offer valuable feedback directly from customers.
The Chat CSAT and Ticket CSAT reports calculate the percentage of positive ratings compared to the total (positive and negative) responses in their respective channels.
For deeper analysis, CSAT scores are also segmented by chat type: Automated CSAT evaluates satisfaction in AI agent-only chats, Manual CSAT reflects agent-only performance, and Assisted CSAT measures satisfaction in hybrid conversations. These insights help teams pinpoint where customer satisfaction excels or needs improvement across various support models.
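Text App calculates these scores internally; as an illustration of the arithmetic described above (the function name here is hypothetical, not part of the product), the percentage might be computed like this:

```python
def csat_score(positive: int, negative: int) -> float:
    """CSAT as the share of positive ratings among all rated responses."""
    total = positive + negative
    if total == 0:
        return 0.0  # no ratings collected yet
    return positive / total * 100

# Example: 42 positive and 8 negative ratings
print(csat_score(42, 8))  # 84.0
```

Note that unrated chats or tickets are excluded: the denominator counts only responses where the customer actually left a rating.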
Visitor and engagement tracking
The Unique visitors metric tracks the number of individual visitors to a website with the Text App tracking code installed. This data is crucial for evaluating the effectiveness of marketing campaigns or identifying high-traffic periods that may require additional support staff.
The Missed chats report complements this by highlighting potential gaps in coverage, showing how often visitors leave without receiving any response, which may signal the need for staffing or process improvements.
Response and resolution efficiency
Speed of service plays a critical role in customer experience.
The Chat first response time and Tickets first response time reports show how quickly customers receive initial replies from support agents or AI agents. These metrics help evaluate team responsiveness and spot bottlenecks in the support workflow.
Additionally, the Ticket resolution time tracks the average time taken to fully resolve a customer's issue, providing a clear view of overall support efficiency. Quick responses and resolutions can significantly enhance customer satisfaction.
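The resolution-time average follows directly from ticket timestamps. A minimal sketch of that calculation, assuming hypothetical (created, resolved) timestamp pairs rather than the product's actual data model:

```python
from datetime import datetime

def average_resolution_hours(tickets: list[tuple[datetime, datetime]]) -> float:
    """Mean time, in hours, from ticket creation to resolution."""
    durations = [
        (resolved - created).total_seconds() / 3600
        for created, resolved in tickets
    ]
    return sum(durations) / len(durations) if durations else 0.0

tickets = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 13, 0)),   # 4 hours
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 12, 0)),  # 2 hours
]
print(average_resolution_hours(tickets))  # 3.0
```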
Time spent in chats
Understanding the time commitment involved in support interactions is key to productivity and planning.
The Total chat duration reflects the full time customers spend in chats, while Automated chat duration and Manual chat duration break this down by AI agent or agent involvement. These reports help gauge interaction quality and complexity. For example, longer manual chat durations may indicate more involved troubleshooting, while shorter automated chat durations might reflect effective use of AI agents for simpler queries.
Productivity metrics
To assess how effectively support resources are utilized, the system tracks the Automated chats per hour and Manual chats per hour metrics. These reveal how many conversations AI agents and agents handle on average within an hour, providing a useful benchmark for team productivity and system efficiency.
These metrics are particularly beneficial when scaling support operations or planning agent shifts based on expected chat volumes.
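The per-hour metrics above reduce to a simple rate. As a sketch under assumed inputs (the function and its parameters are illustrative, not the product's API), dividing chats handled by hours of availability gives the benchmark:

```python
def chats_per_hour(chats_handled: int, hours_available: float) -> float:
    """Average number of chats handled per hour of availability."""
    if hours_available <= 0:
        return 0.0  # avoid division by zero when no time was logged
    return chats_handled / hours_available

# Example: an agent handles 36 chats over an 8-hour shift
print(chats_per_hour(36, 8))  # 4.5
```

Comparing this rate across agents, or between AI agents and humans, is what makes it useful for shift planning against expected chat volumes.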
Chat topics
This section automatically groups chats by subject, making it easy to find conversations on specific issues. Instead of digging through hundreds of archived chats each week, users can quickly scan organized topics to stay informed and focused on what matters most.
Teammate performance vs. activity
The Teammate performance report focuses on the quality and efficiency of customer support, using metrics like chat satisfaction, first response time, chats per hour, and total chats. These insights help assess how well agents handle interactions and identify top performers who can mentor others.
In contrast, the Teammate activity report focuses on real-time availability and work presence. It shows who is online, what their current status is, and when teammates were available for chats during a given day. This provides valuable visibility for managing workload distribution and identifying when additional support might be needed.
Together, these reports offer a complete view of agent performance and operational coverage.
Automated vs. assisted chats
Automated chats are handled entirely by AI agents, without human involvement.
These interactions are typically faster, with AI agents able to manage a high volume of conversations, measured by metrics like Automated chats per hour and Automated chat duration. Customer satisfaction for these chats is tracked separately through Automated CSAT, giving insight into how well the AI agent performs on its own.
In contrast, Assisted chats involve both AI agents and human agents working together within a single conversation.
An AI agent may handle the initial query and then pass the chat to an agent for further support. These chats are measured under the Assisted CSAT metric, which reflects the shared performance of both the AI agent and the agent. Assisted chats are especially useful for streamlining simple tasks while still offering human support when needed, striking a balance between efficiency and personalized service.