Design an intervention to collect users' feedback on Alexa's proactive experiences

Project Brief

This project was part of an Amazon-sponsored externship mentored by Amazon UX Researcher Monica Chan. I worked alongside six other UX designers under Monica's guidance to execute this project in 16 weeks.

Designing a feedback system empowering users to refine Alexa's proactive actions for greater accuracy and trust.

Case Study

Timeline: 16 weeks

My role: Product Designer

  • Developed the majority of the activities for the workshop

  • Generated prompt cards and supporting materials

  • Co-hosted the workshop

UI Explorations

  • Assisted in user interface explorations

  • High-fidelity mockups for usability testing

  • Design iterations and final designs

Research

  • Secondary research - Competitor analysis

  • User Interviews - Total 3

  • Reddit users - Text insights

Where I made a difference

Personal Contributions

Context

Before we jump in, let's set the context

What are proactive actions?

Proactive Actions

Actions triggered without the user's input are called proactive actions. Devices with this feature identify patterns in the user's routine and suggest or perform actions accordingly. In Alexa, these are referred to as Hunches.

Today, thanks to predictive and proactive features like Hunches, one in four smart-home interactions is initiated by Alexa, enabling users to enjoy seamless assistance without needing to take action themselves. This number is only expected to grow as home automation advances.

Imagine you're pulling a late-night work session

Suddenly, Alexa turns the light off at 10pm based on your daily pattern

Confused, you have no idea what happened or why

But where does the user's feedback come in?

Why is user feedback important?

More User Feedback

Fewer Incorrect Actions

Improved User Trust

Alexa needs to inform users about the actions and collect feedback to determine if any action was performed incorrectly. This process helps Alexa improve its future actions and build user trust.

Now that we know the context, let's dive into the project

How does Alexa collect feedback?

Let's begin with what we know

Current mediums

  • Alexa phone app - A simple yes/no

  • Alexa Echo show - Digital and voice UI

Research 1

With this in mind, my team and I began exploring opportunities within this design space. We conducted extensive secondary research on…

Secondary Research

  • Digital Ethnography - Essential for tailoring proactive content to diverse cultural contexts, ensuring a more personalized and engaging user experience.

  • Literature Review - To study user agency and appropriate language

  • Conversational Feedback and Voice Command - Integral to enhancing user engagement and satisfaction with proactive experiences by incorporating conversational feedback mechanisms

  • Consumer Insights - Provides valuable data for refining proactive content strategies, aligning them with consumer expectations and preferences

  • Behavioral Psychology - Offers insights into user motivations, cognitive processes, and emotional responses, contributing to the creation of proactive experiences that align with user behavior and preferences.

  • Competitor Analysis - To learn about existing solutions and to identify gaps

Meanwhile, we began searching for participants—individuals who own an Alexa (or any home assistant) and actively use its proactive features. Sounds simple, right?

Jane Doe

"I don't know where to find Hunches"

But we ran into a problem. Everyone we found said this

Jane Doe

"Proactive experiences? What are those?"

John Doe

"What are Hunches?"

John Doe

"So that's why the lights went out the other day! I thought it was a bug"

Jane Doe

"Hunches?"

John Doe

"Where can I find Hunches?"

Turns out, people are not aware of Hunches or where to find them in the app

How can we gather feedback for a feature if no one knows the feature even exists?

Problem 1

Problem space 1


In the current model, Hunches activity is buried within the settings, requiring users to navigate four levels deep in the hierarchy. This makes Hunches difficult to locate and explains the lack of awareness among users.

So how do we solve this?

Solution


We made updates to the home screen, making Hunches more visible

A new "Hunches" section now appears on the home screen to alert users of actions initiated by Alexa

A new tag introduced in the device tile will inform users of proactive actions performed on it

Iterations and Design Rationale

(Click to expand)

After making our accessible users aware of Hunches, we conducted diary studies and interviews to understand their proactive experiences and how they preferred to be notified and to provide feedback. A few users already familiar with Hunches were also identified on Reddit and interviewed.

Moving on to Hunch alerts

  • User Interviews - Total 12

  • Reddit Users - Interviews and Chat (Total 5)

  • Co-Design Workshop - 4 Participants

  • Diary Study - 2 Participants over 3 weeks

Important Insights

  • Users generally prefer notifications to be semi-proactive, except for critical situations, where they prefer more intrusive notifications

  • Users do not prefer to provide verbal feedback or engage in verbal conversations for long

  • Users tend to give feedback only when something doesn't work as expected

  • Users were more likely to give feedback when the impact of feedback was transparent

Basically, to give feedback, users first need to be alerted

"Users do not prefer to provide verbal feedback or engage in verbal conversations for long"

I was thrilled at the start of the project, eager to dive into Voice User Interface design and craft conversational flows for users interacting with Alexa verbally. However, the insights we gathered from the majority of users suggested that verbal interaction wasn't the best option. Though we had done a lot of research and preliminary ideation on VUI,

We had no choice but to pivot.

Moving on to Hunch alerts

Co-Design

Workshop

So how do we notify users?

Problem space 2


Currently, Hunch notifications are disabled across all devices, causing users to miss updates about proactive actions. Enabling them applies to all devices at once, potentially overwhelming users with intrusive alerts from irrelevant devices and leading to notification fatigue.

So how do we notify users?

Solution (1 of 3)


When enabling devices for Alexa's Hunches, users can easily set alert types for each selected device without extra steps. This ensures they receive notifications only for critical alerts, avoiding unnecessary notifications from irrelevant devices.

How do we solve this?

Iterations and Design Rationale

(Click to expand)

It's time to collect Feedback

How can we collect contextual feedback from users without forcing them?

Current feedback collection method

Problem space 3


Alexa needs more detailed feedback to improve its future actions. The current model asks only for a simple "yes" or "no", which might not provide enough context to improve future actions.

It's time to collect Feedback

If a user clicks "No", they are given the choice to provide additional feedback

Solution (1 of 2)


We added a two-step contextual feedback system. Hunches are based on assumptions Alexa makes from the user's daily patterns. This solution makes those assumptions transparent and asks which assumption was wrong.

Iterations and Design Rationale

(Click to expand)

The Final Step

The Final Step

When we tested our feedback mechanism, users were quite happy with the transparency and the ability to make Alexa more context-aware. But it led to another problem. They said:

"Giving feedback a couple of times doesn’t sound too bad, but I honestly can’t see myself doing it for weeks or months"

So how can we encourage our users to keep providing feedback over a prolonged period of time?

Measuring Impact of the feedback

Solution


Research suggested users were more likely to give feedback when its impact was transparent. To encourage this, we made the process transparent by showcasing Alexa taking actions based on feedback previously given.

Hunches on home screen

Select medium

Let's recap by looking at the flow of the app

This is what my team and I were able to achieve in 16 weeks

Overall Impact

Key Outcomes

  • Enhanced Communication: Users reported a significant reduction in frustration as proactive actions were better communicated and perceived as less intrusive.

  • Improved Accessibility: The introduction of widgets as a passive interaction mode was met with enthusiasm, allowing users to engage with notifications seamlessly and on their own terms.

  • Greater User Satisfaction: The refined experience increased user confidence in Alexa’s Hunches feature, resulting in stronger alignment between user expectations and device performance.


The project also received strong support from our industry sponsor, who expressed confidence in the solution’s ability to improve user interactions with Alexa.

After conducting more than four rounds of usability testing


The Design Process

We followed a non-waterfall approach

Amazon externship

  • Unique Opportunity - Collaboration with a UX Researcher from Amazon

  • Learnings - Problem solving, user-centric solutions, efficiency, agile methods

Personal Significance

What kept me on the edge of my seat

  • Conversational Design - New and exciting domain

  • IoT devices - A deeper understanding of how they function

  • Exploration - Brainstorming for exciting and innovative ideas for virtual assistants and IoT devices

Smart Devices

Reflections

Working in big teams and Communication

At first, team communication was poor due to the team's large size and unfamiliar faces, leading to missed meetings and incomplete tasks. I learned that assigning ownership was the best solution. We divided roles based on interests, and each person took responsibility for getting everyone on board for their respective tasks. Over 2-3 weeks, this approach improved communication, and collaboration began happening naturally. I really liked this method and plan to use it in future large teams.

Maintaining Interest when things change

Initially, the project focused on Voice Interface design, which I was excited about, but research revealed voice wasn’t ideal, forcing us to pivot. This was demotivating, especially with limited design flexibility since we were adding to Amazon’s app. However, I found inspiration in the challenges of improving even the simplest aspects. I realized that despite changing priorities, pushing to enhance the project reignited my interest. This taught me that in the industry, things can change quickly, and the key is to adapt and make the most of the current scope.

Learning Opportunities

Monica Chan

(Amazon's UX Researcher)

The Team

Thank you
