The next-generation in-car voice system for Mercedes-Benz.
I spearheaded the complete redesign of the in-car voice system for the 2019 Mercedes-Benz system release. In-car voice systems have commonly been robotic, command-based, and limited. This redesign moved the next-generation system toward more organic speech interactions, supported by clear, understandable on-screen feedback.
The final product can be seen in the 2019 A-Class models.
Role: UX, Interaction & Visual Designer
Timeline: 2 years
Revolutionized the in-car voice system to behave more naturally and align more closely with the consumer's mental model of a voice assistant.
I started by pinpointing the existing disconnects between the speech dialogues and their corresponding on-screen interactions. My team then defined principles for both digital and dialogue interactions. From there I dove deeper into user needs: defining use cases, creating flows, prototyping, testing, producing the final screen PSDs, and visualizing the voice feedback look & feel.
why use screen content to support the dialogue?
dialogue & interaction flows
This project was a joint effort between our speech team, the development team, and my team (design). Our speech colleagues provided us with predetermined dialogue flows, which we assessed from a user-experience perspective. As a result, we proposed major dialogue changes based on our findings.
Above you can see an example of a provided dialogue flow. If a command came back with no result, the entire interaction would simply cancel. I suggested that we provide a resolution instead: continue the dialogue and offer the user a way to complete the initial command. This "resolution" pattern was adopted across the rest of the dialogue system.
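To make the pattern concrete, here is a minimal sketch of the resolution idea; the types, names, and prompts are hypothetical illustrations, not the production dialogue code:

```typescript
// Hypothetical sketch of the "resolution" pattern described above: when a command
// finds nothing, the dialogue stays open and offers a way to complete the request
// instead of cancelling the whole interaction. Names and prompts are illustrative.
interface DialogueStep {
  prompt: string;           // what the system says next
  endsInteraction: boolean; // whether the voice session closes after this step
}

function handleCommandResult(found: boolean, request: string): DialogueStep {
  if (found) {
    return { prompt: `Okay, here is ${request}.`, endsInteraction: true };
  }
  // Resolution: continue the dialogue rather than cancelling.
  return {
    prompt: `I couldn't find ${request}. Would you like to try something similar?`,
    endsInteraction: false,
  };
}
```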
The information architecture above is the result of that collaboration. It shows the final version of the primary speech interaction flow together with the resulting on-screen feedback.
layouts and on-screen interactions
I defined the on-screen interactions to work primarily with voice & touch, and secondarily with the rotary dial. For example, the tiled results are designed so that the user can either say "next page" or swipe to the next page; both inputs trigger the same paging transition to the next set of results. Concepts were user-tested to confirm usability and understandability.
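A minimal sketch of how the two input modalities can converge on the same paging transition; the types and function names here are hypothetical, not the shipped implementation:

```typescript
// Hypothetical sketch of the dual-modality paging on the tiled results: saying
// "next page" and swiping both resolve to the same paging transition.
type PagingInput =
  | { kind: "voice"; utterance: string }            // e.g. "next page"
  | { kind: "swipe"; direction: "left" | "right" }; // horizontal swipe on the tiles

function toPageDelta(input: PagingInput): number {
  if (input.kind === "voice") {
    const text = input.utterance.toLowerCase();
    if (text.includes("next page")) return 1;
    if (text.includes("previous page")) return -1;
    return 0;
  }
  // Swiping left advances; swiping right goes back.
  return input.direction === "left" ? 1 : -1;
}

// Both modalities end in the same transition to the next set of results.
function onPagingInput(currentPage: number, input: PagingInput): number {
  return Math.max(0, currentPage + toPageDelta(input));
}
```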
These wireframes outline the assets & interactions needed throughout the voice UI, as defined by me and two other design colleagues. These particular screens are visual responses to dialogue interactions, including the voice-specific area, tiled results, map results, messages, and weather.
I created all of the hero screens and layouts for the entire voice system in Photoshop: over 40 different screens, including 18 individual backgrounds for the weather screens to indicate different weather patterns by day and night (visible below).
Additionally, I defined the main feedback states to keep the user informed of where they are in the interaction (a simple model is sketched below).
- Waiting for Input
- User Speaking
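A minimal sketch of how these two states could be modeled, with each state driving its own wave treatment; the names are illustrative and only the states listed above are covered:

```typescript
// Hypothetical sketch: each feedback state maps to a distinct wave animation so the
// user always knows where they are in the interaction. Names are illustrative only.
type FeedbackState = "waitingForInput" | "userSpeaking";

const waveTreatment: Record<FeedbackState, { animation: string }> = {
  waitingForInput: { animation: "idle-pulse" },  // calm, slow wave while listening
  userSpeaking: { animation: "active-wave" },    // wave reacts to the user's voice
};

function onSpeechActivity(speechDetected: boolean): FeedbackState {
  return speechDetected ? "userSpeaking" : "waitingForInput";
}
```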
An example can be seen in the video below.
how it works
This early prototype illustrates the basic interaction flow that I defined and highlights the wave states that I designed.