In-car interaction design

Francois Jordaan
Jul 7, 2016

I recently went to a fascinating IxDA (interaction design) meetup about in-car interaction design. Here’s a quick summary:

1. Driver distraction and multitasking

Duncan Brumby teaches and researches in-car UX at UCL. He described the various ways car makers try to give drivers more controls whilst avoiding driver distraction (and falling foul of regulations).

I think most of us are sometimes confused by car user interfaces (UI), and with the advent of the “connected car” we are likely to be more confused than ever.

Ever wondered what those lights on your dash mean? (Image: “Confusing car UI” by Dave)

Modern in-car UIs take different approaches. Most cars use dashboard displays, with or without touchscreens; Apple’s CarPlay takes this approach. Then there are systems like BMW’s iDrive, which pairs a dashboard display with a rotary controller on the centre console, meant to be used without looking. This avoids the inaccuracy of touchscreen input caused by vibration at speed or on bumpy roads. (So iDrive makes more sense on the autobahn, whereas touchscreen UIs make more sense when you’re mostly stuck in traffic.)

Brumby mentioned that Tesla’s giant touchscreens are not popular with drivers: their glare is unpleasant in the dark, and app interfaces often change as a result of software updates.

The other major problem is that even interfaces you don’t have to glance at (e.g. the voice interfaces so fashionable at the moment) still cause cognitive distraction. Research has confirmed what many of us instinctively know: you are less attentive when you’re on a phone call, even when using hands-free. (See UX for connected cars by Plan Strategic.) And of course voice assistants (Siri and the like) are never as accurate as they appear in advertisements. Imagine having to correct misheard words in the message you were trying to send, whilst driving.

Reduction in reaction times — RAC research 2008. From UX for connected cars by Plan Strategic

(Why, you may ask, is a hands-free phone conversation more distracting than a conversation with passengers in the car? People inside the car can see what the driver is seeing and doing. People instinctively modulate their conversation to what’s happening on the road, and drivers rely on that. A person on the other end of the phone can’t see what the driver is seeing, and doesn’t do that, unwittingly causing greater stress for the driver.)

2. ustwo: Are we there yet?

The talk by Harsha Vardhan and Tim Smith of ustwo (the versatile studio that also made Monument Valley, and who hosted the event), or more specifically ustwo / auto, was more interesting, even though I started off quite skeptical. They’ve published Are We There Yet? (PDF), their vision / manifesto for the connected car, which got quite a bit of attention (it got them invited to Apple to speak to Jony Ive). It’s available free online.

But what I found most interesting was their prototype dashboard UI — the “in-car cluster” — to demonstrate some of the ideas they talk about in the book. It’s summarised in this short video:

This blog post pretty much covers exactly what the talk did, in detail — do have a read. The prototype is also available online. (It’s built using Framer JS, a prototyping app I’ve been meaning to try out for a while.)

As I said, I started off skeptical, but I found the rationale really quite convincing. I like how they distilled their thinking down to its essence, rather than leading with some sort of “futuristic aesthetic”. They approached it as “what do drivers need to see?”, recognising that the answer could be entirely different depending on whether the driver is parked, driving or reversing.

[First published: February 2016 on the Isotoma blog.]
