"Oops, I'm out of salt"
"Chicken pod not closed"
"Magnetic stirrer not in place"
Context
This is a short excerpt from my work on a counter-top cooking robot. I was brought in when the Beta trials were being planned. The goal of this trial was to let the users handle the robot in their "everyday lives" for about two weeks. Unlike the alpha trials, where users made a meal with the team present, the team wouldn't be there to resolve errors with code.
My role
I collaborated with the industrial designer and mechanical engineers to design and test various interactions. On this page I discuss how I designed for the errors a user may encounter. First, I separated them into "Resolved by the user" and "Resolved with support".
The error testing and interaction design increased task completion to 71% and helped make the US-based Beta trials a success.
Question
How might we design error interactions in a way that feels intuitive to our users?
After talking to the designers and engineers, I understood that there are two types of errors.
1. Errors that the user can fix by taking an action.
For instance, if the robot is unable to move the stirrer ring, it says, "Check if the magnetic stirrer is plugged into its connector."
2. Errors that only inform the user that the appliance is unusable for now.
E.g., "The dispenser is unable to unload Pod A. Please wait for the support team to contact you about next steps."
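To make the distinction concrete, here is a minimal sketch of how a companion app might model the two error types. This is illustrative only: the type names, error codes, and fields are my assumptions, not the product's actual implementation.

```typescript
// Hedged sketch: all names and fields below are illustrative,
// not the product's actual API.

// Type 1: the user can resolve the error by taking an action.
interface UserResolvableError {
  kind: "user-resolvable";
  code: string;      // e.g. "STIRRER_DISCONNECTED" (hypothetical)
  message: string;   // UX copy shown on the companion app
  actions: string[]; // checklist of steps the user can take
}

// Type 2: the error only informs the user; support resolves it.
interface SupportResolvableError {
  kind: "support-resolvable";
  code: string;
  message: string;
}

type RobotError = UserResolvableError | SupportResolvableError;

// Example instances mirroring the two errors described above:
const stirrerError: RobotError = {
  kind: "user-resolvable",
  code: "STIRRER_DISCONNECTED",
  message: "Check if the magnetic stirrer is plugged into its connector.",
  actions: ["Plug the magnetic stirrer into its connector"],
};

const dispenserError: RobotError = {
  kind: "support-resolvable",
  code: "DISPENSER_UNLOAD_FAILED",
  message:
    "The dispenser is unable to unload Pod A. Please wait for the support team to contact you about next steps.",
};
```

A discriminated union like this would make it hard to forget the action checklist for user-resolvable errors while keeping support-only errors lightweight.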
Error flows for the cooking robot
Previously, error debugging was done internally, and the distinction between error types didn't matter. Now, each documented error had to be analysed to see if and how we could guide the user through the task. Much of the team knew these resolutions instinctively, so I sat down with the mechanical engineer, the development team, and the industrial designer to map some of them out.
1st round of companion app designs
Usability testing of the robotβs error interactions
Using the screens above, I conducted usability tests with 6 internal users. I tested the interactions and the UX copy.
Tasks:
I picked users who were not directly involved in the physical aspects of the robot and asked them to perform tasks in which we had so far expected errors: disconnecting the magnetic stirrer and walking through the error flow, turning off the power supply, and so on.
Some insights:
Sometimes a type I error, even after the user takes the required actions, can escalate to a type II error (see the sketch after this list).
Keeping instructions on screen required the user to keep coming back to the screen (voice was planned for a later round).
Prepare for emergencies or any danger to the surroundings by communicating them on the companion app.
Speaking in terms of "Error I" and "Error II" wasn't helpful, since users just think about what action to take next. Clarifying whether input was needed was important, especially when the user is away from the product (e.g., at the office) and needs to plan for dinner.
(other insights not revealed for confidentiality)
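On the first insight, the escalation from type I to type II could be modelled explicitly. A hedged sketch reusing the RobotError types from above; the retry threshold and helper name are hypothetical, not the team's actual logic:

```typescript
// Hypothetical escalation: if the user's corrective actions fail more
// than `maxRetries` times, hand the error off to the support team.
function escalateIfUnresolved(
  error: RobotError,
  failedAttempts: number,
  maxRetries = 2
): RobotError {
  if (error.kind === "user-resolvable" && failedAttempts > maxRetries) {
    return {
      kind: "support-resolvable",
      code: error.code,
      message:
        "We couldn't resolve this. Please wait for the support team to contact you about next steps.",
    };
  }
  return error; // unchanged: still user-resolvable, or already with support
}
```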
User flow
This outlines how a user would interact with the cooking robot's errors.
How we incorporated the feedback into the designs:
These insights fed into the information design for the new screens (sketched after this list):
State what the machine was doing when the error occurred, since it could be a long time before the user sees the message.
Clearly distinguish what the error is, and keep this element consistent across different error messages.
Separate out where user input is needed: a checklist can be more helpful than a list of instructions.
Include a section where users press Resume once the steps are done, so the recipe can continue.
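These four points map naturally onto a content model for the redesigned error screen. A minimal sketch, again with illustrative field names that are my assumptions rather than the actual design:

```typescript
// Illustrative content model for an error screen, one field per
// information-design point above.
interface ChecklistItem {
  label: string; // e.g. "Close the chicken pod lid"
  done: boolean; // user ticks items off as they complete them
}

interface ErrorScreenContent {
  machineState: string;       // what the machine was doing when the error occurred
  errorTitle: string;         // clearly distinguished, consistent across messages
  checklist: ChecklistItem[]; // steps where user input is needed
  resumeEnabled: boolean;     // Resume becomes available once the steps are done
}

// Example: the "chicken pod not closed" error from the banner above.
const podError: ErrorScreenContent = {
  machineState: "Dispensing ingredients for step 3 of your recipe",
  errorTitle: "Chicken pod not closed",
  checklist: [{ label: "Close the chicken pod lid", done: false }],
  resumeEnabled: false, // flips to true once every checklist item is done
};
```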
2nd round of companion app designs
Considering the confidentiality of the product, I've revealed minimal information.