«We’re all here to share what’s possible with software, with the web, with digital interfaces. To see what we can do now, and what our ...»
The first time ever that Microsoft is getting into the hardware side of the personal computer business.
That’s why Google bought Motorola.
That’s why I wouldn’t be surprised if Microsoft buys Nokia.
That’s why Amazon makes the Kindle.
It’s not enough to have a service.
It’s not enough to have software.
It’s not enough to have hardware.
To win in this innovation stage where ecosystem matters, you have to have all three.
Eventually this stuff will settle into standards.
The web will be central here.
For now, the race is in private ecosystems.
Migrating interfaces

So that’s pretty much the state of the system as we know it.
This is where mainstream tech is pushing at the frontier.
In all of these cases, though, there’s a single smart device controlling the display.
Either its own display or a remote display on a dumb device.
But let’s push that and go further out in our expedition.
[next] Here’s where things get interesting...
An important element of the near future will be more ambiguous control.
Shared control among devices.
Primary shifting from one device to another depending on your context.
We know this concept already from the good old car phone.
[twitter]In device-to-device comms, remote control beats mirroring. But SHARED control that migrates among devices is even better.[/twitter]

Say you’ve got a car with a bluetooth phone system so that when you get a call on your mobile...
It comes through over your phone’s speakers.
Your CAR starts ringing.
You answer without taking your eyes from the road.
The car itself is your interface.
And you continue your conversation.
And then you grab the phone, mid-conversation, turn off the car, and continue your call seamlessly on the phone itself.
Do you see what happened there? [next] The interface itself shifted from one device to another.
The phone is still driving all the logic throughout, but interaction happened elsewhere while you were driving.
The car took over control of the call, even though it’s coming through the phone.
This invisible shift of control from one device to another.
[twitter]Migrating UI: In paired car phone, UI shifts from phone to car (the car rings!) and back to phone. Whoa.[/twitter]

Moving further in that direction with the new Eyes Free feature in iOS 6, where a Siri button is actually getting integrated into steering wheels.
Again, the logic is in one place, the phone, but the interface migrates.
Drive the phone from your car.
The specific hardware interface follows your context.
[equally important: speech rising as a mainstream interaction.
it’s always been important as an output for the visually disabled.
but wow, suddenly everyone might be using a variation of the screen reader.
imagine the opportunities to convince those who have dismissed accessibility work as too niche an audience. suddenly there’s a big audience for speech. and accessibility advocates can take advantage.]

Talked about Apple’s ecosystem.
And there’s opportunity there.
To ﬂip control from one Apple device to another.
Here, two iPads sharing control over a single game.
And, perhaps more interesting, two iPhones holding your letter tiles and then playing them to the iPad board in the middle.
And there you have it: the world’s most expensive board game.
But frankly the most interesting stuff is yet to come.
But it may not be a long wait.
[twitter]Scrabble with iPad and two iPhones. The world’s most expensive board game. http://j.mp/OISaY3[/twitter]

Corning is a glass company.
Obviously very invested in the success of screens everywhere, provided those screens are made of glass.
Concept video: A Day Made of Glass. They wish.
Normally I don’t much like concept videos; they’re a little too speculative.
Often wishful thinking.
But this one is unusual, because it was very clear about what’s possible and when.
So, let’s start with this girl using a tablet in her bedroom.
[twitter]Corning’s “Day Made of Glass” shows plausible near future: lots of dumb UI sharing control w/smart tablet: http://j.mp/day-glass[/twitter]

Oh, is that all?
OH... is that all that’s missing?
Just the operating system. And apps that seamlessly scale and transfer.
That’s a tall order, of course.
But wow, the hardware is here.
Now we just have to figure out how to code for it.
This is more than just mirroring, if you’ll forgive the pun.
This is the tablet giving its interface control to the mirror.
I can have just one smart CPU with all my apps and content with me at all times, and then just call up a screen wherever I am.
Corning thinks that could be just about anywhere.
Forget the tricorder. Now we’re getting into holodeck territory.
Surfaces that morph into whatever we want them to be, controlled by the devices we carry with us every day.
Now, this stuff actually exists in labs. We can do this now.
It’s already with us, just not yet on the market.
Not at much scale, and not at anything resembling an affordable price.
This is Bill Buxton. He’s at Microsoft Research.
He was one of the pioneers who invented the touchscreen in 1982.
He posits that it takes 20 years from the moment an idea is conceived in a lab to when it can go big in the mass market.
That means all the breakthrough technologies five years from now were invented 15 years ago. Waiting for us.
They’re already with us, if we just look around.
So. Let’s follow this thread of social devices, sharing control and content.
How do we create or imagine interfaces that deal with this?
Well, game systems are already being pretty creative there.
Let’s take a look at Nintendo’s Wii U, which has a touchscreen built into its controller.
In one game, the controller provides a second screen with additional info and fine-tuned control.
And it can also drive a web browser, like Microsoft’s Smart Glass.
Wow, flipping content from place to place is pretty sexy, right?
In fact, it’s even sexier with natural gesture.
This is my friend Aral Balkan.
A content-sharing hack with a Kinect, a Mac, and an iPhone.
In both examples, Wii U and Aral’s hack, it’s not so much a matter of migrating control as sharing it.
Partner devices, equal peers, each with a role.
Often that’s what we’ll be looking for. Easy ways to swap info between devices.
It actually was pretty easy with Palm devices in the late 90s: just beam contacts and meetings to another device. Somehow it got a bit harder, but I’m sure we’ll fix that.
Let’s look at another vision of how that sharing might work.
[twitter]My pal @aral hacked together Kinect, Mac, TV, phone for amazing demo of moving data among devices http://bit.ly/grab-magic[/twitter]

Gesture prototypes: bitly.com/proto-gestures

Three years ago, a few design students put together a video of paper prototypes for how smart devices might interact, looking at how screen-based gestures might evolve.
I edited it down and thought you might find it fun.
That’s pretty charming, right? But it seems kind of far-fetched.
Shaking content from one device to another, just sliding stuff from one screen to another?
Actually, we’ve already got it.
http://vimeo.com/7055121 (Jenny Redd, Kenny Hopper, Nicholas Riddle; California College of the Arts, 2009)
The game runs on your PC.
Connects to these guys via bluetooth.
They are the interface for the game that runs on the PC.
They can detect each other’s proximity.
They have accelerometers: tip and tilt content inside them, pour it from one to another.
It’s a complete little ecosystem, made for games.
Also, the little cubes download the software they need only when they need it.
This is also an element that may be part of the future.
Stephanie Rieger says that the web as we know it right now is a “just in case” web.
Shove everything in the attic.
Our phones are clogged with apps. We have to garden them, weed them out.
But what about machines that download software as they need it?
Grab the software, use it, discard it.
We have this with music and movies increasingly.
Keep the media at bay until we need it.
Starts to get pretty Matrix.
Machines that are smart enough to grab software when they need it.
What I mean by this is devices that just do their work and talk to one another without even needing us to intervene.
If devices are already smart enough to talk to each other, share control, then they can also start doing that on their own.
We’re incidental to their behavior. We’re just the legs that bring the devices into proximity.
The Nest thermostat.
Proximity sensors to know when you’re home.
Internet connection to get the weather outside.
Talks to apps and website.
Remote control and reporting.
And it’s a THERMOSTAT.
A relatively dumb device but wow, fully loaded, and in constant communication.
These loaded dumb devices are an important piece.
Fuelband. Wear it on your wrist. Tracks your activity throughout the day.
Not super smart. Just gives you points for your motion.
But then it talks to your phone or your computer whenever it comes into contact.
It pairs when it can, communicates when it can.
It also has its own display.
Doesn’t do much.
But it could. Those other devices could send information back.
You could see news headlines on the thing.
A relatively dumb computer with a dumb sensor and a dumb display.
But that talks on its own to the machines around it.
Lumo is a sensor you strap onto your waist and it buzzes if you’ve got bad posture.
That’s its own simple interface, a screenless gizmo.
But it also talks to your phone. And it has a developer API so you can build your own apps to monitor human movement.
Speaking of dumb devices.
At CES this month, the Hapifork was unveiled.
Monitors how much and how quickly you eat.
But in fact, devices are just as likely to get dumber.
And that’s a good thing.
More dumb devices doing work for us, talking to the handful of smart devices we actually interact with.
[twitter]Future isn’t just smart devices. It’s a network of dumb, passive devices, too: Nest, Fuelband, Lumo etc.[/twitter]

Philips makes a new wifi-enabled lightbulb called Hue.
They connect to a little wiﬁ hub that you can in turn control via web or app.
World’s most expensive replacement for the Clapper.
But that’s not all it does.
Choose colors from photos and the light will change to match the hue.
This isn’t just happening in the home.
In summer 2011, the City of Westminster in London approved an upgrade of 14,000 lights to internet-controlled lighting. They can actually dim the city’s lights from an iPad.
They expect to save £1 million in the district.
As component parts become cheaper (wifi in a lightbulb), it becomes trivially inexpensive to connect just about anything.
Often WITHOUT a screen (like a lightbulb or the Lumo), or with a very simple screen like the Fuelband or Nest... or get this...
http://www.westlondontoday.co.uk/wlt/content/ipads-make-street-lights-smarter

Visa and MasterCard both have credit cards with screens and keypads.
Visa unveiled theirs in Europe in 2010.
MasterCard unveiled theirs in Singapore in November 2012.
Idea is to make online payments more secure by ensuring you’re holding the actual card, not stolen card numbers.
Basically, it’s for creating one-time passcodes.
You type your PIN into the card, and the card generates a single-use numeric code.
Then you type that code into the web interface to prove you’re holding the card.
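The banks’ exact protocols are proprietary, but the general idea resembles counter-based one-time passwords in the style of HOTP (RFC 4226): card and server share a secret and a counter, and each press derives a short numeric code that the server can verify independently. A minimal sketch (the secret below is just the RFC’s test value):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (HOTP, RFC 4226).

    Card and server share `secret` and a counter; each button press
    bumps the counter and displays the next code on the card's screen.
    """
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226's published test secret and vectors:
# hotp(b"12345678901234567890", 0) -> "755224"
# hotp(b"12345678901234567890", 1) -> "287082"
```

Because the server computes the same code from the same counter, a stolen card number alone is useless: the attacker doesn’t have the secret inside the card.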
But wait a second. If our content is going to show up on little displays, how do we deal with that?
Hey, how does your website look on a 10-character LED display on a credit card or a Fuelband?
Or by speech, how does it sound?
Right now, we’re badly equipped for this. Our content is a mess, not ready for the multidevice present, let alone the future.
How do we design content and experiences for devices we haven’t even imagined yet?
If we can’t know the future, we can’t be future-proof, but we can at least be future-friendly.
And frankly, that means starting with our content.
It’s the underlying service that we need to provide: focus on the format of the raw content.
Chunked up, well described, stripped of presentation, so that even dumb devices can take the content they need and present it appropriately.
If we can do a better job of structuring our content, describing it, so that we can have meaningful APIs that let devices grab just the content they want, that’s creative control in an uncertain world.
Because we can let the robots do the work.
Metadata gives the machines information about how to pick and choose content, format it appropriately.
And a lot of times, we can have the robots manage that metadata, too.
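To make that concrete, here’s a hypothetical content chunk, stripped of presentation and carrying editorial metadata, plus a naive renderer that lets any device, from a credit-card LED strip to a tablet, pull the richest field its display can handle. The schema and field names are invented for illustration:

```python
# Hypothetical structured-content chunk: no presentation, just fields + metadata.
article = {
    "headline": "City dims its streetlights from an iPad",
    "summary": "Westminster upgrades 14,000 lights to internet control.",
    "body": ("In summer 2011, the City of Westminster in London approved an "
             "upgrade of 14,000 streetlights to internet-controlled lighting, "
             "expected to save the district a million pounds."),
    "priority": "primary",          # editorial tier: primary/secondary/tertiary
    "tags": ["smart-city", "lighting"],
}

def render(chunk: dict, max_chars: int) -> str:
    """Return the richest field that fits the device's character budget."""
    for field in ("body", "summary", "headline"):
        if len(chunk[field]) <= max_chars:
            return chunk[field]
    return chunk["headline"][:max_chars]  # tiny display: truncate the headline

# A 10-character display gets a clipped headline; a tablet gets the full body.
```

The point isn’t this particular schema; it’s that once content is chunked and described, the picking and choosing can be done by machines instead of by hand-built page layouts.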
[twitter]Content strategy is for all of us: structured content, editorial workflow, smart API critical to future mix of social devices.[/twitter]

The Guardian.
Newspaper layout = editorial judgment.
Placement/size of articles provides semantic meaning.
Primary, secondary, tertiary tiers. [point out] [slow] THERE IS CONTENT IN DESIGN CHOICES.