💼 The Company
Worten is one of the largest and most recognized retail brands in Portugal, specializing in technology, consumer electronics, and home appliances. Present in the daily lives of millions of people, it combines a strong network of physical stores with a robust digital platform, delivering a simple, accessible, and fully integrated shopping experience.
The goal of this project was to refine the AI Chat that centralizes and integrates all store information to support sellers nationwide.
🔍 Analysis
📍 Scenario
The AI Chat was launched with limited visibility and no structured adoption strategy, resulting in low user engagement and underutilization of the feature. To address this, the business, together with the Product Owner and Product Manager, aimed to understand the reasons behind the low adoption, including discoverability issues, unclear value, reliance on the legacy tool, or usability friction.
The Product
The AI Chat was created to centralize critical store information in a single intelligent tool, empowering frontline teams with fast, reliable access to operational, commercial, and product insights. By reducing information fragmentation and dependency on multiple systems, it supports better decision-making and enables sellers to focus on delivering a consistent, high-quality customer experience across all stores.
Given that the AI Chat was intended to replace an existing solution and support daily workflows, the challenge went beyond usage metrics. It required a deeper understanding of user behavior and resistance to change to guide refinement decisions and enable more consistent, value-driven adoption.
Objective
Analyze real user interaction data to assess the effectiveness of the AI Chat. Identify usability and engagement issues using both quantitative data (analytics) and qualitative insights (screen recordings). Understand user frustrations, behaviors, and expectations during chat interactions. Generate actionable insights to improve:
- The quality and accuracy of AI responses
- The conversational tone and overall user experience
🛠️ Resources
Tools (1st round)
- Google Analytics
- Microsoft Clarity
Tools (2nd round)
- Consent Forms
- Video Recording
- Voice Recording
- Microsoft Forms
- Google Spreadsheet
- Figma
Team
- UX Research
- Team Leader
My Role
- UX Researcher
Time
- Overall: 1 month and 15 days
🎯 Approach
A mixed-methods approach (qualitative + quantitative) was used:
📚 Methodology
- Qualitative and Quantitative Analysis
- Behavioral Analysis
- Conversation Heuristic Evaluation
🔄 The Process
Using Microsoft Clarity and Google Analytics, I analyzed real user behavior throughout the tool's use. This analysis revealed unexpectedly poor performance for an AI chat, especially considering that this type of solution is now widely associated with ease of use, fast response times, and a low learning curve.
📈 1st Round Outcomes
- High rates of dead clicks (14.2%) and quick-back clicks (7.9%).
- Many users submitted vague or incomplete queries, leading to generic AI responses and frustration.
- The Chat ranked 5th in click-through rate, indicating moderate engagement but also significant room for improvement.
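The frustration-signal rates above can be reproduced from any per-click event export. The sketch below shows the calculation on synthetic data; the event `type` values and the export format are assumptions for illustration, not Microsoft Clarity's actual schema.

```python
# Sketch: computing frustration-signal rates from a hypothetical click-event
# export. Field names and event types are illustrative assumptions.
from collections import Counter

def frustration_rates(events):
    """events: list of dicts with a 'type' key, e.g. 'dead_click'."""
    counts = Counter(e["type"] for e in events)
    total = len(events)
    return {
        "dead_click_pct": round(100 * counts["dead_click"] / total, 1),
        "quick_back_pct": round(100 * counts["quick_back"] / total, 1),
    }

# Synthetic sample mirroring the reported session proportions.
sample = (
    [{"type": "dead_click"}] * 142
    + [{"type": "quick_back"}] * 79
    + [{"type": "normal"}] * 779
)
print(frustration_rates(sample))  # {'dead_click_pct': 14.2, 'quick_back_pct': 7.9}
```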
💡 Solutions Proposed in 1st round
- UX Writing improvements to guide users during interactions.
- Enhanced AI capabilities for more contextual, empathetic responses.
👣 Next Steps
- Developed a questionnaire and conducted nationwide in-store user interviews to gather further insights.
- Delivered a prioritized action plan for refining both AI behavior and user experience design.
🔃 2nd round
Demographic Data
Field research was conducted across multiple regions of the country, gathering insights from users aged 18 to 50. This geographic diversity enabled the observation of real-world behaviors and ensured that the resulting design solutions addressed both regional and demographic variations.
This approach was essential to capturing a broad spectrum of user profiles, store environments, and usage patterns. The research also accounted for key behavioral variables, such as tenure in the store (measured in months or years), to better understand how experience level influences daily interactions and workflows.
Interviewing users at both ends of the age spectrum helped surface insights that are often missed when focusing solely on average users. Older participants highlighted usability barriers and challenges related to technology adoption, while younger participants emphasized the importance of speed, efficiency, and more advanced digital features.
Questionnaire
Combining questionnaire responses with behavioral insights from Microsoft Clarity and Google Analytics, and validating them against real user experiences, proved to be the most impactful and decisive stage of the research.
Key UX Issues
- Responses frequently lack contextual grounding in real store scenarios and are delivered without a consistent tone of voice, resulting in lower perceived usefulness and credibility.
- Response variability is limited, with poor use of synonyms, alternative phrasing, and natural language patterns.
- Learnability and adoption barriers.
- Lack of consistency across omnichannel experiences.
- High cognitive load.
- The interface provides insufficient guidance on the chat's capabilities and limitations, increasing trial-and-error behavior.
- And more.
🧠 Solutions Proposed
- Train the AI to provide step-by-step guidance, not just generic summaries.
- Apply the heuristic "Match Between the System and the Real World" to make interactions feel more natural and familiar.
- Add buttons like "Was this helpful?" to collect feedback and improve learning.
- Recognize variations of the Portuguese language, not only European Portuguese.
- Refine the tone of voice to feel more empathetic and informative.
- Suggest rephrased queries or alternative options when no result is found.
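The "Was this helpful?" proposal implies a small feedback loop: log each thumbs-up/down against the response it rates, then surface the queries behind unhelpful answers as review candidates. A minimal sketch, with storage and field names assumed for illustration:

```python
# Sketch: logging "Was this helpful?" feedback per AI response so low-rated
# answers can be reviewed and used to improve the chat. Storage and field
# names are illustrative assumptions, not the actual implementation.
import datetime

FEEDBACK_LOG = []

def record_feedback(response_id, helpful, query=None):
    """Store one thumbs-up/down event for a given chat response."""
    FEEDBACK_LOG.append({
        "response_id": response_id,
        "helpful": bool(helpful),
        "query": query,
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

def unhelpful_queries():
    """Queries whose answers were flagged as not helpful (review candidates)."""
    return [e["query"] for e in FEEDBACK_LOG if not e["helpful"] and e["query"]]

record_feedback("r-101", True, "store opening hours")
record_feedback("r-102", False, "warranty policy for open-box items")
print(unhelpful_queries())  # ['warranty policy for open-box items']
```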
✅ Conclusion
The chat has strong potential as an internal support tool for retail stores, but usability issues, such as unclear, generic responses, inconsistent experiences across devices, and outdated content, were undermining trust and efficiency.
Using UX research methods such as user interviews, Microsoft Clarity analysis, and usability testing, I identified key pain points and refined the chatbot's conversation flows, tone, and response accuracy. Through continuous validation and performance tracking, I reduced bias, improved task completion, and delivered a more intuitive, consistent, and trustworthy AI experience aligned with both user and business needs.