Everyday technology is hurtling into the realm of science fiction, even magic, with new devices that are as surprising and delightful as they are useful. Developers and designers are running hard to keep up with this warp-speed pace of tech innovation, and for now, mobile devices are at the forefront. But what’s next? Trends are emerging at the hazy edges of the tech universe that hint at the future of computer interfaces, including computers without interfaces at all. Learn how to prepare for that future now.
Designer Josh Clark, author of “Tapworthy,” takes you on an expedition of this final frontier. Learn how the iPhone and other sensor-rich devices have changed how we approach computing, and explore how we can better design for sensors. Learn how more and dumber machines will make us smarter, and how our current work lays the groundwork for a future of social devices. Along the way, you’ll see how games lead the fleet, how robots can help us build our software, and why post-PC computing is about far more than phones and tablets. Finally, understand why Apple is ideally positioned to lead the way to this future, going boldly where no geek has gone before.
We are hurtling into an era of science fiction… right now! But what keeps Josh up at night is what’s next. What’s going to happen post-mobile?
Let’s look into the future and think about what it means for the work we do and the way we work…
First, where are we now? Mobiles were the first truly personal computers – not just because they’re always with us, but because they know so much about us and carry so many sensors for what’s happening around us: GPS, audio, video, motion detection, gyroscope… yet mobile is often considered the “companion” or light computing experience.
The question is not how to strip down an experience – not how to do less on mobile, but how to do more on mobile. How can we push the boundaries? How can we make use of information ghosts – all the information about us and the events we’re part of?
We need to think beyond proximity – mapping things that are nearby has been a big focus so far. How else can these devices help us in our lives?
eg.
“Shopper”, an app which rearranges your shopping list according to the store you’re in (see the sketch just below).
“IntoNow” is like Shazam for TV, so you can figure out which season and episode you’re watching.
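A minimal sketch of the Shopper idea: reorder the list to match the walking order of whichever store you’re in. The store layout data and field names here are assumptions for illustration, not Shopper’s actual API:

```typescript
interface StoreLayout {
  name: string;
  aisleOrder: string[]; // categories in walking order, eg. ["produce", "bakery", "dairy"]
}

interface ListItem {
  label: string;
  category: string;
}

// Reorder the shopping list to match the walking order of the current store;
// items the store doesn't categorise sink to the end.
function sortForStore(items: ListItem[], store: StoreLayout): ListItem[] {
  const rank = new Map(store.aisleOrder.map((cat, i) => [cat, i] as [string, number]));
  return [...items].sort(
    (a, b) => (rank.get(a.category) ?? Infinity) - (rank.get(b.category) ?? Infinity)
  );
}

const store: StoreLayout = { name: "Corner Market", aisleOrder: ["produce", "bakery", "dairy"] };
const list: ListItem[] = [
  { label: "milk", category: "dairy" },
  { label: "apples", category: "produce" },
];
console.log(sortForStore(list, store)); // apples first, then milk
```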
Then there’s augmented reality, where we can add visuals to the things around us. So far it’s been fairly gimmicky in its implementation, but there have been some compelling examples, eg. “Skinvaders”, which makes your face into the game environment.
“Word Lens” is a little more practical: it uses OCR to live-translate signs into different languages – even keeping the font.
This is really significant because it moves the interface off the device. The UI has to be designed for the environment around us, rather than the device screen.
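Roughly, a Word Lens-style loop looks like this: OCR a camera frame, translate what it finds, and paint the translation back over the original sign. Every function below (ocr, translate, drawStyledText) is a hypothetical stand-in, not Word Lens’s real internals:

```typescript
interface TextRegion {
  text: string;
  box: { x: number; y: number; w: number; h: number };
  font: string; // estimated from the source glyphs
}

// Hypothetical stand-ins for the hard parts: text detection, translation,
// and rendering styled text back into the frame.
declare function ocr(frame: ImageData): TextRegion[];
declare function translate(text: string, from: string, to: string): string;
declare function drawStyledText(frame: ImageData, region: TextRegion, text: string): void;

// For each video frame: find the text, translate it, and redraw it in place,
// keeping the estimated font so the sign still looks like itself.
function augmentFrame(frame: ImageData, from: string, to: string): ImageData {
  for (const region of ocr(frame)) {
    drawStyledText(frame, region, translate(region.text, from, to));
  }
  return frame;
}
```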
“Table Drum” turns any surface into a drum kit. These apps are replacing more traditional input methods.
“Anytouch” uses the camera to turn anything into a game controller.
Of course anyone with a Kinect knows you don’t even need objects to be controllers. Sometimes the best touch interface is no touch at all. There are other sensors that offer a more natural interaction than touch – touch is just the first mature sensor. Voice controls are still early days – but they are just around the corner, we can see them coming. Siri has opened up expectations.
Then there are the combinations to consider – we tend to think of designing for touch, OR speech, OR gesture… but they will develop together. Perhaps we’ll use a gesture to tell a device to listen to what we say next.
Gesture + Speech = MAGIC
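A minimal sketch of that combination: a gesture “arms” the microphone, so the device only obeys speech in a short window after you gesture at it. The gesture and speech sources here are hypothetical event feeds:

```typescript
type Handler = (command: string) => void;

class GestureThenSpeech {
  private armedUntil = 0;

  constructor(private windowMs: number, private onCommand: Handler) {}

  // Called when the gesture recogniser fires (eg. a hand raised toward the device).
  onGesture(): void {
    this.armedUntil = Date.now() + this.windowMs;
  }

  // Called when the speech recogniser produces a transcript.
  onSpeech(transcript: string): void {
    if (Date.now() <= this.armedUntil) {
      this.onCommand(transcript); // gesture + speech together = a command
    }
    // Speech with no recent gesture is ignored: the TV shouldn't obey every word it hears.
  }
}

const input = new GestureThenSpeech(3000, (cmd) => console.log("execute:", cmd));
input.onGesture();
input.onSpeech("play the next episode"); // runs, because the gesture armed the mic
```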
Then there is the truly free-form world of custom inputs and sensors. The medical world is doing a lot in this area. There are experiments going on with embedding blood pressure sensors in the body, even trying to download their collected data via touch (that is, put your finger on a sensor and the data is transferred).
You can get a sensor that turns plants into touch inputs! Although it seems kind of crazy, why not access your calendar using the bamboo on your desk?
Farmers are trialling monitors on cows that detect when they’re in heat and text the farmer. Cue the jokes…
Mirroring – sharing data between devices – turns dumb devices into smart ones. Link your sensor-laden phone with your traditional television. Then you have things like the Samsung TV that uses voice and gesture instead of a remote control.
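A sketch of the mirroring idea: the sensor-laden phone publishes events, and the dumb display consumes them. The WebSocket endpoint and message shape are assumptions for illustration:

```typescript
// --- phone side: forward sensor readings to the TV ---
const toTv = new WebSocket("ws://tv.local:8080/mirror"); // hypothetical endpoint

// (In real code you'd wait for the socket to open before sending.)
function publishTilt(x: number, y: number): void {
  toTv.send(JSON.stringify({ type: "tilt", x, y }));
}

// --- TV side: a traditional screen borrows the phone's sensors ---
function listenForTilt(socket: WebSocket, onTilt: (x: number, y: number) => void): void {
  socket.onmessage = (event) => {
    const msg = JSON.parse(event.data as string);
    if (msg.type === "tilt") onTilt(msg.x, msg.y); // steer the on-screen game or cursor
  };
}
```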
“Everything around us could potentially become smart devices – it’s always toasters and fridges for some reason – but what I really don’t want is all of them on different OSes with different UIs to learn. I don’t need more devices and OSes in my life!”
We get into the era of remote control.
Games often lead the way in this area – eg. games that turn your iPad into a controller for what’s happening on the TV (the “Metal Storm” plane game).
Innovation is happening in proprietary zones – standards bodies are terrible places for innovation – but lots of things are being worked out that way before they get standardised (which will help them spread without fragmentation…?)
Microsoft is almost an underdog here, but they understand the importance of the ecosystem, so they’re doing things like “SmartGlass”, which turns your phone into a controller.
Then pushing further into the future – migrating interfaces. Where the interface adapts to where you are. The most common example is plugging your phone into the car, so the call comes through your car… but you can unplug the phone and continue your call just on the phone. The phone was handling the call all the while, but its control surface migrated to the car.
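One way to think about a migrating interface in code: the phone owns the session the whole time, while its control surface attaches to whatever is nearby. The ControlSurface interface and the car/phone surfaces below are a hypothetical sketch, not any real car or phone API:

```typescript
interface ControlSurface {
  name: string;
  render(callState: string): void;
}

class CallSession {
  private surface: ControlSurface | null = null;

  // The UI migrates: plug into the car and the car renders the call...
  attach(surface: ControlSurface): void {
    this.surface = surface;
    this.surface.render("call in progress");
  }

  // ...unplug, and control falls back to another surface without dropping the call.
  detach(fallback: ControlSurface): void {
    this.attach(fallback);
  }
}

const phoneScreen: ControlSurface = { name: "phone", render: (s) => console.log("phone UI:", s) };
const carDash: ControlSurface = { name: "car", render: (s) => console.log("car UI:", s) };

const call = new CallSession();
call.attach(carDash);     // plugged in: audio and controls on the dashboard
call.detach(phoneScreen); // unplugged: same call, controls back on the phone
```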
Putting a Siri button on steering wheels – while a sad example of being stuck in proprietary solutions – is a sign we’re moving towards more powerful migratory interfaces.
Corning makes much of the glass on touch devices… they made a concept video (“A Day Made of Glass”) where they really tried to make it feel real and show how it would work in your life. One example: a bedroom mirror that’s a huge display for your tablet.
This is what Microsoft is trying to do with Windows 8 – make an OS that works across a wide range of devices. This is something we’ll all have to design for in the next few years; it’s a challenge waiting for us to tackle.
Much of Corning’s video is just not possible yet due to the cost of the materials. But higher demand usually leads to lower cost; so when might we really see this in the world?
Bill Buxton says it takes 20 years from conception to widespread use. So the things we’ll be using in five years have already been in the lab for fifteen! That means we can look into the future now…
(slides stop showing…. “uh, we’ve lost my slides – this is also part of the future!”)
Flipping contexts becomes very fun, very sexy! You can use a grab gesture at your TV and a touch on your phone, and it feels like you’re moving screenshots from the TV onto the phone – it brings natural physical gestures to our devices.
“Sifteo” game cubes are little devices that are aware of each other and also connect back to your main PC.
Just in time, not just in case.
Our PCs are just in case – everything is on there just in case we need it. Even our phones are like that with apps. But Sifteo cubes just download the little bit of software they need at the time, then discard it to make room for the next thing. Think of The Matrix, where you download the knowledge you need right there and then.
(shows Matrix clip of downloading the ability to fly a helicopter)
“Gratuitous Matrix clip…. you’re welcome”
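In code, the “just in time” pattern might look like this – a tiny sketch assuming a hypothetical module host: fetch the piece of software you need right now, run it, and keep nothing resident:

```typescript
interface Activity {
  run(): void;
}

// Download only the activity needed at this moment, run it, and let it go –
// nothing is installed "just in case".
async function runJustInTime(name: string): Promise<void> {
  const mod = await import(`https://example.com/activities/${name}.js`); // hypothetical host
  (mod.default as Activity).run();
  // No cache, no library of apps: the next activity is fetched when it's needed.
}
```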
Next we get passive devices: things that are smart enough to just do what they do, passively – that is, without your direct intervention. An example is the “Nest” thermostat, which is smart enough to detect you’re home, use wifi to check the weather, and so on. It’s a smart-dumb device.
The device itself is very simple, fairly dumb, but connects to smarter devices when they’re available. We assume devices are going to get ever more powerful, but the truth is we’re going to have a lot more dumb devices that do less – that do one simple thing.
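A sketch of that smart-dumb logic, Nest-style: the device runs one simple rule locally and opportunistically folds in smarter inputs (presence, weather over wifi) when they happen to be available. All the inputs and thresholds here are illustrative assumptions:

```typescript
interface Inputs {
  roomTempC: number;
  someoneHome?: boolean;   // from a motion sensor, if present
  forecastHighC?: number;  // from a weather service over wifi, if reachable
}

function targetTemp(inputs: Inputs): number {
  let target = 20; // the one simple thing: hold a comfortable default

  if (inputs.someoneHome === false) target = 16;     // nobody home: save energy
  if ((inputs.forecastHighC ?? 0) > 28) target -= 1; // hot day coming: ease off a touch

  return target;
}

// The dumb core rule still works even when every smart input is missing.
function heaterOn(inputs: Inputs): boolean {
  return inputs.roomTempC < targetTemp(inputs);
}
```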
How do we design for this future? We can’t be future proof, but we can be future friendly.
“metadata is the new art direction” – Ethan Resnick @studip101
We need to use metadata as one of our most important tools: structure the data well, describe it well, set up an API for our content. Let the robots do the work! Metadata gives the machines the information they need to format content appropriately.
Example: a newspaper. We understand the importance – the editorial judgement – from the layout. But how do we get that information out of the InDesign file? Do you just have an editor for every possible stream and device? The Guardian solved it with metadata: they put the editorial priority into their content and let each stream order things appropriately – the iPad edition could be an entirely different layout but still show the priority well for that stream.
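The shape of that approach, sketched below: editorial judgement travels as a priority field on each story, and every stream derives its own layout from it. The field names are illustrative, not the Guardian’s actual schema:

```typescript
interface Story {
  headline: string;
  body: string;
  editorialPriority: number; // 1 = lead story; larger numbers = less prominent
}

// Each output channel interprets the same metadata in its own way.
function leadStories(stories: Story[], slots: number): Story[] {
  return [...stories]
    .sort((a, b) => a.editorialPriority - b.editorialPriority)
    .slice(0, slots);
}

const allStories: Story[] = [
  { headline: "Election result", body: "…", editorialPriority: 1 },
  { headline: "Local sport roundup", body: "…", editorialPriority: 4 },
];

const frontPage = leadStories(allStories, 5);   // web: many slots, rich layout
const tabletCover = leadStories(allStories, 1); // iPad edition: one big cover story
```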
Presentation deprecates! Our work goes out of date, yet that’s where we tend to focus all our attention… as Tom Coates would say, “your product is not a website!” The individual containers of our services and content are not the product.
We need to look beyond the application we’re working on today, to look at the big picture.
As designers we need to work together across the whole stack; back-end developers need to design the content storage to cope with multiple displays.
What do we do?
“We have the best jobs in the world! The coolest jobs in the world in the most exciting time in the history of technology!” Think about the near future as well as how to bring this to your current work. Make something amazing!