We’re all here to share what’s possible with software,
with the web, with digital interfaces.
To see what we can do now,
and what our responsibilities to our users are,
especially with the wide range of devices available to us.
I thought a good way to frame that is also to look ahead a bit.
To look at what mobile is making possible in the very near future.
Hurtling into an era of science fiction RIGHT NOW.
For me, a lot of that started with iPhone.
1st time: childhood ideas of future actually became reality.
A personal computer. In your pocket. Packed with sensors.
Devoted last 5 years to mobile software.
What we can do with it. How to design it.
What it lets us do that’s truly new and different.
There’s a lot to think about there.
But lately, the stuff that’s been keeping me up is:
Lots of talk about post-PC. That’s where many of us are entrenched.
I’m interested in post-mobile.
What comes after mobile as we know it?
Where is all of this headed?
So let’s get started.
Friends, I propose an expedition.
Where no geek has gone before.
Exploring the hazy edges of the technology universe for emerging tech trends changing the way we interact with devices, with information.
We’re looking to the future to see how we should think about our work now.
You are here. We start our mission with where we are now.
[next] What is it that this new era of computing gives us that we haven’t had before?
Days of disco.
iPhone and the smartphones that came after:
first true personal computers.
Not because always with us, though that’s part of it.
SUPERPOWERS. Sensors are the superpowers.
It’s because they’re packed with sensors.
Our smartphones and tablets hold so much personal info, but it’s the sensors that give rich context and insight to that info and tasks.
I mean, look what they can do.
GPS, camera, audio, touch, light detector, accelerometer, compass, a GYROSCOPE.
Mobile is often considered a companion experience, mobile “lite.”
Wrong. The question is not how to strip down an experience.
Not less, more.
These devices can do more than the desktop.
Often we use sensors for immediate proximity, maps.
Where to eat dinner nearby. When the next train at my station is leaving.
But I encourage you to think beyond just geography with these sensors.
Not just what’s nearby but what’s in front of you.
Key value of both examples: saving input effort.
I don’t have to say where I am.
I don’t have to say what I’m watching.
Sensors do the work for me.
[ And for the visually impaired, audio provides an alternative interface.
Except that it’s not an interface that’s tuned just for those with disabilities.
It’s an interface for everyone.
The popularity of audio and speech and gesture for the mainstream is a huge opportunity for the accessibility community.
Instead of talking about designing for the blind, you can talk about designing for speech, for sensors, for environment. ]
Augmented reality adds new layers of understanding and insight, adding new visuals to what we can see with our puny human eyes and ears.
I think we have to be careful to use it correctly, because it’s something that can quickly become a gimmick.
A solution in search of a problem.
There are a lot of bad implementations of augmented reality.
Something that looks cool, but that you wouldn’t actually find useful day to day.
There are a few areas, though, that have been quite compelling.
The first is games.
Word Lens is an augmented reality app that uses computer vision and optical character recognition to translate text in real time from one language to another.
We’re used to augmented reality as being entirely visual, camera-based. Table Drum takes a different approach.
Table Drum is a drum machine app. There are lots of those.
What makes Table Drum different is that it doesn’t force you to use the screen. It pushes its interface into the world around you, so that you can actually “play” the table in front of you.
[next] The developers call it augmented audio.
The software uses its sensors to push beyond the screen.
Every object in front of you is suddenly a sensor, an input.
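A minimal sketch of how that augmented-audio trick might work, under two assumptions: each surface you tap has a distinct sound, and a crude spectral fingerprint (here, the spectral centroid) is enough to tell calibrated “pads” apart. The pad names and centroid values are invented for illustration; Table Drum’s real signal processing is surely more sophisticated.

```python
import numpy as np

def spectral_centroid(samples, rate=44100):
    """Crude audio fingerprint: the 'center of mass' of the spectrum, in Hz."""
    windowed = samples * np.hanning(len(samples))  # tame spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# Hypothetical calibration step: the user taps each spot once, and we
# store its fingerprint next to the drum sample it should trigger.
PADS = {"kick": 180.0, "snare": 900.0, "hihat": 3200.0}

def classify_tap(samples, rate=44100):
    """Map an incoming tap sound to the nearest calibrated pad."""
    c = spectral_centroid(samples, rate)
    return min(PADS, key=lambda pad: abs(PADS[pad] - c))
```

In a real app you’d run this on short audio buffers right after an onset is detected, then trigger the matching drum sample.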
Do you see the commonalities between these apps?
Most of these examples are replacements for traditional input.
We think of iOS devices as touch interfaces, and that’s awesome. Lots of possibility.
But crafty use of onboard sensors means you don’t have to interact directly at all.
[twitter]Table Drum uses sensors to shift its interface off the screen, into the environment. Whole desk is a drum set. http://bit.ly/tabledrum[/twitter] AnyTouch is a prototype project that uses camera vision to turn any object or surface into a game controller.
The whole world is the app’s interface.
http://vimeo.com/43108191
Who needs physical objects at all?
That’s the case with natural gesture of course.
You know this stuff from Kinect. But: emerging open projects with far more accuracy and sensitivity.
[twitter]Who needs a screen? AnyTouch prototype uses camera vision to turn any object into a game controller. http://bit.ly/anytouch[/twitter]
Leap Motion
http://leapmotion.com
Incredibly detailed accuracy, even within 1 cm of motion.
And of course there’s an API, like Kinect, where you can integrate this stuff into your own application.
All these built-in, pre-coded gestures to push, squeeze, mold, rotate.
[coarse gestures instead of fine-tuned control of fussy, precise interactions.
great possibilities for those with motor control issues]
Launches this year; distributed these gizmos to 12,000 developers to seed the market.
ASUS, the computer maker, has committed to integrating them into laptops and desktops.
Intel likewise says it’s going to start integrating its own Kinect-style gadget into its computers.
So this stuff is slated to go mainstream this year.
Devices and UI that require no interaction with the device itself.
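To make that concrete: a tracker like Leap Motion reports fingertip positions in millimeters, and a “gesture” is just geometry on top of that stream. This is a hypothetical sketch, not the real Leap SDK; the function name and 25 mm threshold are made up.

```python
import math

def is_pinch(thumb_tip, index_tip, threshold_mm=25.0):
    """Detect a 'pinch' from raw fingertip positions (x, y, z) in mm.

    A pinch is simply the thumb and index fingertips coming close
    together -- coarse geometry, no screen contact required.
    """
    return math.dist(thumb_tip, index_tip) < threshold_mm
```

The built-in push/squeeze/mold/rotate gestures are, at bottom, richer versions of the same idea: thresholds and trajectories over tracked points.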
I’ve been occupied with touch interfaces for the past five years.
[twitter]Next-gen Kinect: Leap Motion is a remarkable kit for turning your app into Minority Report. http://leapmotion.com[/twitter]
http://www.theverge.com/2012/12/18/3779310/leap-ships-10000-developer-units-paving-the-way-for-a-2013-launch
http://www.theverge.com/2013/1/3/3830394/leap-motion-asus-pc-deal
But I’m realizing...
How can we save the user from tapping on the screen at all?
That’s the point of barcodes, of computer vision recognition, of speech recognition.
One of the powerful things about sensors is that we can communicate with machines in new ways. Often in more human ways.
Touch is just the first; it’s finally mature.
But obviously speech and natural gesture are ready to pop, too.
All of this stuff is still a little unreliable.
Siri is in beta.
Kinect great for games, but wouldn’t want to run a nuclear power plant with it.
These are the things that we’re going to have to start designing interfaces for. For speech, for gesture, for facial recognition.
The combinations are especially intriguing.
Speech and gesture will necessarily develop together.
Say a word to tell the machine to start paying attention to your gestures.
When you start to combine speech and gesture, you know what you get?
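One way to sketch that speech-plus-gesture handshake: a wake word opens a short attention window, and gestures are only honored inside it. Everything here — the class name, the wake word, the five-second window — is invented for illustration.

```python
class SpeechGatedGestures:
    """Wake-word gate for a gesture interface (hypothetical sketch).

    Gestures are ignored until the user says the wake word; then they
    are accepted for a short attention window.
    """

    def __init__(self, wake_word="computer", window_s=5.0):
        self.wake_word = wake_word
        self.window_s = window_s
        self.attending_until = 0.0  # timestamp when attention expires

    def hear(self, word, now):
        """Feed in a recognized word; the wake word opens the window."""
        if word.lower() == self.wake_word:
            self.attending_until = now + self.window_s

    def gesture(self, name, now):
        """Return the gesture if we're attending, else None (ignored)."""
        return name if now <= self.attending_until else None
```

The gate is what keeps casual hand-waving from triggering commands — the same reason Kinect and Siri both lean on an explicit “start listening” signal.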
Stuff gets really interesting as engineers create custom sensors for the interfaces. Let devices talk to any arbitrary object.
Healthcare field especially innovative in turning phones and tablets into inexpensive medical devices.
http://www.popsci.com/bown/2012/innovator/proteus-digital-health-feedback-system
Proteus Digital Health Feedback System
Pill that doubles as a radio, so it can track whether you take it.
Sensor itself about the size of a grain of sand.
Same stuff you find in a vitamin.
Copper and magnesium hit your gastric acid, turning this thing into a battery.
Works like a potato battery. Transmits a snippet of code to a patch you wear on your stomach, which relays to your phone or tablet via Bluetooth.
Pill’s serial number, manufacturer, and ingredients.
saves that data to the cloud.
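Proteus hasn’t published its wire format, but the relay step is easy to imagine: the patch hands the phone a tiny record, and the phone decodes it before shipping it to the cloud. The fixed little-endian layout below — 4-byte serial, 2-byte manufacturer ID, 2-byte ingredient code — is entirely made up for illustration.

```python
import struct

# Hypothetical record layout (NOT Proteus's real protocol):
# little-endian: 4-byte serial, 2-byte manufacturer, 2-byte ingredient code.
RECORD = struct.Struct("<IHH")

def decode_pill_record(payload: bytes) -> dict:
    """Unpack one pill report, as relayed by the skin patch over Bluetooth."""
    serial, maker, ingredient = RECORD.unpack(payload)
    return {"serial": serial, "manufacturer": maker, "ingredient": ingredient}
```

From here the phone would timestamp the record and POST it to whatever cloud service the app uses.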
Whoa, right! This is some groovy Star Trek stuff here!
Like I said, we are living in a world of science fiction.
---Advanced pulmonary disease.
Sensor inside the artery near the heart, detects blood pressure changes.
Touch a sensor and it downloads the info THROUGH THE SKIN.
Shows the patient some information and tips, relays the data to the doctor.
---As sensors become more advanced, there’s more stuff we can do with our DATA, with our content.
So that’s amazing.
But taken to extremes, it can also get a little silly.
Here’s a Mickey Mouse idea, honest to god, that Disney Research came up with last month.
It’s called Botanicus Interacticus.
PLANT UI! IMAGINE THE POSSIBILITIES!
...and then send the farmer a text.
And these texts can go out in French, German, or Italian.
Cow love speaks all languages.
Forget internet of things, I want my internet of bovines.
We have cows texting when they’re in heat. That’s basically how I use SMS, too.
So we have this growing variety of sensors sharing data in all kinds of ways with all kinds of devices.
At the same time touchscreen devices are making the digital world more physical, sensors are making the physical world more digital.
Objects—and cows!—generating and sharing data.
So I would say that the big marker of where we are now is personal sensors. Things that make sense of our immediate environment.
Your device is not only a sensor for input...
but a broadcaster.
So if we embark on our mission with a quick hop...
We get to mirroring. For the most part, that simply means screen sharing.
That’s the whole idea behind AirPlay.
Mirror your screen via Apple TV to your television set.
To share photos, to share videos.
That begins to make it social.
It shares its display with other devices -- and, ultimately, with other people.
So, it’s not just a sensor.
Your device broadcasts content to dumb devices.
It becomes the sensor and receiver for those dumb devices, like TV.
Apple’s been a leader here.
But go figure, not everyone wants to build products that mirror Apple devices.
Google is working on its own AirPlay-like technology, which it says it will make open to work with any device.
http://gigaom.com/video/google-open-airplay-alternative/
Samsung, one of the few hardware makers actually making a profit, would prefer to remain independent of Apple stuff.
And so at the same time as we see this early mirroring trend, we’re also seeing a strong trend toward making more devices smart, independent of other hardware.
Like Samsung’s Smart TV.
A whole internet-enabled operating system, with speech and gesture.
Xbox built right into the TV.
And for forever we’ve been hearing about smart kitchens.
Futurists seem to be obsessed with the idea of smart refrigerators and smart toasters.
At CES this year, Samsung introduced its T9000 refrigerator with LCD and Evernote integration.
Maybe there is a place for smart appliances. But probably not as a browser or Evernote client or Twitter client.
But here’s the thing.
I DON’T WANT MORE OPERATING SYSTEMS IN MY LIFE.
I already have too many.
Too many patterns to learn, too many technologies, too many contexts.
I’m not sure that more smart devices is actually what I want.
So how do I get from here...
How do I make peace with all these devices and screens?
Part of maintaining sanity is limiting the number of smart interfaces we have to deal with.
Have a handful of smart devices that can control all the dumb devices in our lives.
Drive everything, for example, with my phone or tablet.
AirPlay lets you do that, too, though you see it far more rarely. Here’s one example.
[next] Metalstorm Wingman uses the iPad to fly your plane on the TV.
Beyond mirroring, obviously.
This is a generic device acting like a purpose-built controller or remote, working with a dumb device, the TV.
Ecosystem is crucial here.
That’s why Apple is so well suited for this era of what we’re up to.
You have to have the devices, the software, the operating system and the API that can work together.
Innovation always happens in proprietary arenas.
That’s why eras of change are so messy and fragmented.
You can’t innovate with standards.
Standards come about after innovation settles enough to choose a solution.
Not always the best solution wins, but there’s still a solution.
Until we arrive at standards for doing this stuf, it happens in proprietary ecosystems.
Microsoft is very interested in trying to break Apple’s grip here.
So this month, they introduced something they call SmartGlass.
Lets you control your Xbox with your phone and tablet, whether it’s a Windows Phone, iOS, or Android.
That’s why they unveiled Surface.