Wednesday, April 10, 2013

On SXSW 2013: Vanishing Interfaces, Wearable Tech, & AIs.

Interfaces.  Interfaces everywhere.
Less than six years ago, the Apple iPhone blew our minds with a new way to think about something we thought we knew really well:  the cellphone.  A couple of years later, tablets crashed the party, giving us a big, rich interface to browse and connect with from our recliners.  Technologists like me have been scrambling ever since to find the best ways to use the capabilities of these new interfaces.

The last year has presented us with the vanguard of the Wearable Tech revolution.  With products coming out of Kickstarter and, perhaps most prominently, the Google Glass project, the equation is about to become far more complex.  How we think of technology is going to radically change, again, over the next five years.  With it, how our devices interact with one another and how we build our applications for them will need to evolve significantly.  One talk and one panel at SXSW underscored how we need to begin thinking and designing if we want to stay sane while managing a system of devices and inputs: Golden Krishna's The Best Interface Is No Interface, and How AI Is Enhancing the User Experience.

Krishna spoke specifically to the idea that we need to eliminate as many interfaces as we can, in clever ways, to enhance the User Experience.  The AI panel emphasized the changes coming to the User Experience, foreshadowed by products like Siri and Google Now.  Together, they paint a picture of how building with an eye toward streamlining interfaces with AI modules will build a new future for us - a future increasingly filled with devices of all kinds.

We Are Interface-Happy
It seems that every time I'm involved in the design of a new product, we spend an outsized amount of time on the interface:  which buttons go where, how big the text fields are, what order they appear in, what colors to use.  Which is good, right?  We want a great UX, so getting that interface right is key.

When you have a single interface, like a desktop website, this is fairly easy to maintain and design.  Add a mobile version and it becomes a little more complicated.  Add a 10'' tablet interface.  Now add in an interface for a TV, 7'' tablet, and maybe even a car dashboard.  Each adds a level of complexity and restrictions on space and sensors that may be present.

Now add in wearable tech:  smart watches, Glass, and even armbands, each with its own style of interface.  We will make ourselves crazy attempting to maintain a small solar system of devices, never mind maintaining applications for all of them.
The LG Smart ThinQ Refrigerator.
The first obvious question here is:  do you need to be in all of those places?

I would actually adjust that question a bit to:  what functions do your products have that best fit on those interfaces?

Eliminate the functions that don't make sense and streamline the ones that do.  In fact, automate as much of the process as you can!  This is something we're all already familiar with.  The most common version is data caching, so that a user doesn't have to enter their contact or credit card information over and over again on a particular site.

Here's a great example of this:  Amazon's 1-click purchase.  Why put all of those interfaces between what I want to buy and the actual purchase?  Automate it and make it seamless.
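To make the pattern concrete, here's a minimal sketch in Python of the 1-click idea:  cache the user's details once, then collapse every later purchase into a single call.  All the names here (UserProfile, one_click_purchase, the payment token) are hypothetical, not Amazon's actual system.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    shipping_address: str
    payment_token: str  # a stored, tokenized card - entered once, reused after


def one_click_purchase(profile: UserProfile, item_id: str) -> str:
    """Skip the cart, the address form, and the payment form entirely."""
    # A real system would call a payment gateway with the stored token
    # and create a shipment to the cached address.
    return f"Ordered {item_id} for {profile.name}, shipping to {profile.shipping_address}"


profile = UserProfile("Alice", "123 Main St", "tok_abc123")
print(one_click_purchase(profile, "ISBN-0451524934"))
```

The interesting part isn't the code, it's what's missing:  three or four screens of forms, replaced by data collected exactly once.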

But how does this concept apply to all of these other devices?  We're used to thinking this way with web pages, but what about with physical activities?

The Evolution of AI
Seems like a bit of a stretch, I know, but this is coming.  The vanguard is already here in Siri and Google Now.  Each represents a type of agent that knows a few things about us.  Google Now will tell you, without your asking, how long it will take to get home from work.  Both Siri and Google Now will take your voice input and perform actions that would normally require wading through a series of interfaces.

Google Now in action.
Let's look closer at the Google Now "time to get home" use case.  Normally, I would have to open up a browser, enter some data, then pull up directions from work to home.  Depending on the service I'm using, that may or may not reflect current traffic conditions.  To account for that, I might have to pull up street cameras and observe what the traffic flow looks like.  There are quite a few steps in that process.  Nothing crazy, but it's a bit of work.

Since Google Now does this already, those intermediary steps are gone and the result is automatically presented to me.  This eliminates a decent amount of interface.
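Here's a rough sketch in Python of what that proactive pattern might look like:  the agent, not the user, initiates the lookup and pushes the answer.  The travel-time lookup and notification functions are stand-ins for real services, and the 5 PM trigger is an assumption for illustration.

```python
import datetime


def estimated_travel_minutes(origin: str, destination: str) -> int:
    # Stand-in for a directions/traffic service call.
    return 34


def notify(message: str) -> None:
    # Stand-in for a push notification to the user's device.
    print(message)


def commute_agent(now: datetime.datetime) -> None:
    """Around quitting time, volunteer the commute estimate unprompted."""
    if now.hour == 17:  # assume the user usually leaves work around 5 PM
        minutes = estimated_travel_minutes("work", "home")
        notify(f"Traffic looks like {minutes} minutes home right now.")


commute_agent(datetime.datetime(2013, 4, 10, 17, 0))
```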

Siri's voice recognition allows similar interface elimination.  You can say to it, "schedule a meeting at 4 o'clock on April 16th".  Normally, you'd have to open your calendar app, swipe to your desired date, pick a time, tap, then enter some meeting details.  Voice recognition eliminates all of that work.
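Here's a toy version of that shortcut.  Real assistants use full natural-language pipelines, not a single regular expression; this just pattern-matches one phrase shape to show how a sentence can replace several taps.

```python
import re


def parse_meeting_command(text: str):
    """Turn 'schedule a meeting at 4 o'clock on April 16th' into event fields."""
    match = re.search(r"at (\d{1,2}) o'clock on (\w+ \d{1,2})", text)
    if not match:
        return None  # couldn't understand the command
    hour, date = match.groups()
    return {"title": "Meeting", "date": date, "hour": int(hour)}


event = parse_meeting_command("schedule a meeting at 4 o'clock on April 16th")
print(event)  # {'title': 'Meeting', 'date': 'April 16', 'hour': 4}
```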

Obviously, Siri and Google Now are not Skynet.  Or Johnny Five.  But they are clever bits of programming that represent a personal agent.  Voice processing allows them to take a set of spoken instructions and convert that to an action.  The better the voice recognition tech becomes, the better the agents will be able to perform and the more interfaces they will be able to eliminate.

A smaller scale example of this is the Nest thermostat.  The simple "AI" in this device learns your patterns and then begins to adjust the temperature in your home based on the inputs you have historically given it.  No more getting up in the middle of the night to adjust it.  Sure, it has an app interface you can use to control it as well as an interface on the device, but the "AI" makes those redundant and simply a backup plan.
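A crude illustration of that learning loop, in Python:  record every manual adjustment, then predict the preferred temperature for a given hour by averaging the history.  This is purely a sketch of the idea, not Nest's actual algorithm.

```python
from collections import defaultdict


class LearningThermostat:
    def __init__(self):
        # hour of day -> list of temperatures the user set at that hour
        self.history = defaultdict(list)

    def record_adjustment(self, hour, temp_f):
        self.history[hour].append(temp_f)

    def predicted_setting(self, hour):
        temps = self.history[hour]
        return sum(temps) / len(temps) if temps else None


stat = LearningThermostat()
stat.record_adjustment(22, 65.0)  # user turns it down before bed
stat.record_adjustment(22, 67.0)
print(stat.predicted_setting(22))  # 66.0 - no more getting up at night
```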

What you don't want to have your phone become.

AI Meets No UI
But let's bring this back to earth.  Not all of us have access to complex and robust voice recognition libraries and a network of camera-equipped cars.  Many of us are, however, in a position to collect or analyze large sets of data.

I don't know about you, but around my office we're often trying to figure out what's for lunch.  So let's say we want to build an app that suggests, every day, where to go for lunch.

It's easy to start with something like a mobile app that just shows you a map of the area with restaurants pinned.  The next logical addition is reviews from Yelp or Google Places.

But we still have an interface, right?  How can we eliminate it?  Why do I even need to pull my phone out of my pocket?

How about an AI that pays attention to which restaurants you like to eat at, how much you spend, when you ate at them, and what types of food you like?  Then, to eliminate any sort of interface, the app sends you a text around the time you normally go to lunch with its suggestion.  Perhaps it even includes a suggestion of things to order at that restaurant.  If you use something like a wearable health monitor, maybe it can even read your general mood and suggest that one place you love to go to get away from work stress.
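A bare-bones sketch of what that lunch agent might look like in Python:  score restaurants from eating history, then push a suggestion around the user's usual lunch hour.  The data fields and the send_text function are all made up for illustration.

```python
def score(restaurant, history):
    """Prefer places the user visits often and cuisines they favor."""
    visits = history["visit_counts"].get(restaurant["name"], 0)
    cuisine_bonus = 2 if restaurant["cuisine"] in history["liked_cuisines"] else 0
    return visits + cuisine_bonus


def send_text(message):
    print(f"SMS: {message}")  # stand-in for an SMS gateway call


def suggest_lunch(restaurants, history, current_hour):
    if current_hour != history["usual_lunch_hour"]:
        return  # stay silent until the time the user normally eats
    best = max(restaurants, key=lambda r: score(r, history))
    send_text(f"Lunch idea: {best['name']} - you seem to like {best['cuisine']}.")


history = {
    "visit_counts": {"Taco Spot": 5, "Pho House": 2},
    "liked_cuisines": {"mexican"},
    "usual_lunch_hour": 12,
}
restaurants = [
    {"name": "Taco Spot", "cuisine": "mexican"},
    {"name": "Pho House", "cuisine": "vietnamese"},
]
suggest_lunch(restaurants, history, current_hour=12)
```

Notice that nothing here requires the user to open anything:  the history is the input, and the text message is the output.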

Maybe your smart phone's app sends the suggestion to your smart watch or your Google Glass.

App collects data, sends output to wearable tech.
Sure, an interface can still exist behind this.  You can open the app and adjust parameters or inputs, but these interfaces become supporting elements, not the primary ones.  The data collected automatically becomes the primary input.

Let the robots do the work!

It's About Streamlining
So here are the brass tacks:  in just a few years, we're going to be surrounded by devices.  They're all going to have an OS, varying screen sizes, and use cases where they make a huge difference and others where they don't.  We have to be savvy in figuring out which interfaces make sense on a device and which functions can be easily performed in that form factor.

Are you going to read a book on a smart watch?  No.  But you might set a reminder to do so.  Especially if you can talk into it à la Dick Tracy, right?  All it has to do is interface with your phone to gain access to something like Siri or Google Now.

More importantly, we need to look for ways to leverage our data and the sensors built into these devices to eliminate UIs.  Whether we do that through a series of "AIs" or through sensors is a decision we product designers will have to make.

We will have to learn how to use a brain device, like a smartphone or desktop computer, to centralize control over a system of devices.  These will manage our interfaces with watches, tablets, thermostats, and the like.  Applying AI-style technology will give us numerous opportunities to streamline tedious tasks.

Are you prepared?  Are you ready to start identifying which UIs you need and killing the ones you don't?