πŸ§‚ Oops I'm out of salt πŸ§‚ ~

πŸ— Chicken pod not closed πŸ— ~

πŸ€– magnetic stirrer not in place πŸ€– ~

πŸ§‚ Oops I'm out of salt πŸ§‚ ~ πŸ— Chicken pod not closed πŸ— ~ πŸ€– magnetic stirrer not in place πŸ€– ~

Context

This is a short excerpt from my work on a counter-top cooking robot. I was brought in when the Beta trials were being planned. The goal of this trial was to let users handle the robot in their β€˜everyday lives’ for about two weeks. Unlike the alpha trials, where users made a meal with the team present, the team would not be on hand to resolve errors in code.

My role

I collaborated with the industrial designer and mechanical engineers to design and test various interactions. On this page I discuss how I designed for the errors a user may encounter. First, I separated them into β€˜Resolved by the user’ and β€˜Resolved with support’.

The error testing and interaction design increased task completion to 71% and helped make the US-based Beta trials a success.

Question

How might we design error interactions in a way that feels intuitive to our users?

From talking to the designers and engineers, I understood that there are two types of errors.

1. Errors that require action and can be fixed by the user.

For instance, if the robot is unable to move the stirrer ring, it says β€˜Check if the magnetic connector is plugged into its connector’.


2. Errors that only inform the user that the appliance is not usable for now.

E.g. β€˜The dispenser is unable to unload Pod A. Please wait for the support team to contact you about next steps.’

Error flows for the cooking robot

Previously, error debugging was done internally, so the distinction between error types didn’t matter. Now, each documented error had to be analysed to see if, and how, we could guide the user through the fix. Much of the team knew these resolutions instinctively, so I sat down with the mechanical engineer, the developer team and the industrial designer to map them out.

1st round of companion app designs

Usability testing of the robot’s error interactions

Using the screens above, I conducted usability tests with 6 internal users, testing both the interactions and the UX copy.

Tasks:
I picked users who were not directly involved in the physical aspects of the robot. I asked them to perform tasks where we had so far seen errors, such as disconnecting the magnetic stirrer and walking through the error flow, turning off the power supply, and so on.

Some insights:

  • After the user takes the required actions, a type I error could sometimes turn into a type II error.

  • Having the instructions on screen required the user to keep coming back to the screen (voice guidance was planned for a later round).

  • Prepare for emergencies, or any danger to the environment, by communicating this on the companion app.

  • Speaking in terms of Error I and II wasn’t helpful, since the user just thinks of β€˜what action to take next’. Clarifying whether input was needed was important, especially when the user is away from the product (say, at the office) and needs to plan for dinner.

(other insights not revealed for confidentiality)

User flow

This outlines how a user would interact with the cooking robot’s errors.

How we implemented feedback into designs:

These insights were fed into the information design for the new screens:

  1. State what the machine was doing when the error occurred, since it could be a long time before the user sees it.

  2. Clearly distinguish what the error is, and keep this element consistent across different error messages.

  3. Separate out where user input is needed: a checklist could be more helpful than a list of instructions.

  4. Add a section where users press Resume once the recipe steps are done.

2nd round of companion app designs

Considering the confidentiality of the product, I’ve revealed minimal information.