I just watched the video, and typed my reactions, as I had them; no idea if this will be of interest.
* * *
People still travel for meetings?
By plane?
Wait, someone is driving the car?
That's not very productive.
There are bellhops?
Why are there still bellhops in the future?
What do they do?
Why is the screen so small?
Why have a screen, if you have those perfect augmented reality glasses?
'Creating reply interface'? We still have to wait for computers?
There's still global poverty, and benefit concerts? When these people have all that fancy tech?
Damn it.
Copy and Paste is still around?
Kids are still taught long division? Why? Why do they use a pencil?
Also, won't the future be one of neural interfaces? Isn't there something wrong with interfacing two electrical signal processing machines (brain + computer) via all these muscles and optical sensors and so on?
I know there's a lot of science to be solved first; but surely the future of interfaces is that they are invisible, and built in to us?
I guess it's meant to depict a not so distant future. Either that, or you could criticise the Office team for not having the imagination to take it any further.
Kids are still taught long division? Why?
Same reason people learn it today, same reason we didn't stop the second the slide rule was invented or cheap calculators became available 20 years ago or whenever. Learning simple algorithms is important for future development.
What struck me is how lonely the video felt. People hardly talked to each other. The only emotion exchanged between people was with the mom and daughter making apple pie, and that was through the screen.
How is driving a car not a productive task? There can be cases where people *want* to drive out of preference and passion. Can we really classify work that merely seems random or unappealing to us as non-productive?
I can't help but think that the success of the iPhone and iPad has caused a big step back in usability among devices that try to copy them.
Two similar examples:
Garmin's newer aircraft GPS units have touch screens instead of knobs and buttons. The iPad has proven very popular among pilots. I can see why Garmin would decide that "touch is the future." But, while I'm flying an airplane, for my money I'd rather have knobs to grab and twist, and buttons to push and feel.
Tesla's new Model S uses one huge touch screen for its in-dash interface. Surely, if you want to change your music's volume or turn on air conditioning while driving, it's harder to hit touch targets that are Pictures Under Glass than to grab and twist a knob.
I would counter those sentiments with the Zune HD experience. Most people haven't used one, but its touch interface isn't vision-focused. With the Zune in my pocket, I can unlock it, skip a track, stop, play, and change the volume (on the screen, not a rocker on the side). I can also try to shuffle/unshuffle, but that isn't as easy.
I was actually amazed that Apple completely missed the ability to use touch when you weren't looking at the device.
This is very interesting, and relevant since I just found out the air traffic control display at the local airport in my hometown has been replaced with, to quote one guy, "a big iPad".
They're up in the air about it because now, instead of knobs and switches, informed by paper maps and pencils, they've got this fancy, 'high-tech' touch screen display - 3x5 feet of shiny glass.
There was a minor storm in a teacup about it, because it leads to a much greater potential for accidents, bad landings and congestion (it's a very busy airport)- but because it cost millions, nobody can go back to good ol' pencil, paper and brain.
I appreciate your point and prefer physical controls when driving, but obviously one major advantage of the described approach is that the same space can be used to control different things - e.g., swap out the sat-nav area for air-con controls, put up weather or traffic reports if the music is switched off, etc.
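The swap-out idea is essentially a small state machine over screen regions. A toy sketch (the panel names and the rules are invented for illustration, not taken from any real in-dash system):

```python
# Toy model of one touch surface reallocating its area by driving context.
def active_panels(music_on, ac_adjusting, navigating):
    """Return which panels currently occupy the shared screen space."""
    panels = []
    if navigating:
        panels.append("sat-nav")
    if ac_adjusting:
        # Air-con controls temporarily take over the sat-nav area.
        if "sat-nav" in panels:
            panels.remove("sat-nav")
        panels.append("air-con")
    if music_on:
        panels.append("music")
    else:
        # Freed space shows traffic reports when the music is off.
        panels.append("traffic-report")
    return panels
```

The point is just that a physical knob can only ever be its one control, while the glass surface can be any of these panels, at the cost of the tactile certainty the parent comment values.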
1) "targets" are a part of tactile interfaces, in touch interfaces we may shed the need for accurate targets (i.e. gestures can be made anywhere on a screen)
2) Confidence that comes from the physical response from tactile systems may just be an anachronism after a decade or two of built up trust in touchscreen systems
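Point 1 can be made concrete: a gesture classifier that looks only at relative motion needs no fixed targets at all, since the same swipe works anywhere on the screen. A minimal sketch (thresholds are arbitrary choices for illustration):

```python
import math

def classify_swipe(points, min_dist=30.0):
    """Classify a touch stroke by net displacement alone, ignoring
    where on the screen it started (a position-independent gesture)."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < min_dist:
        return "tap"                    # too short to count as a swipe
    if abs(dx) >= abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "swipe-down" if dy > 0 else "swipe-up"
```

Because only the start and end points matter, the user never has to hit an accurate target, which is the trade the comment describes.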
The hands are huge because so much of your brain is devoted to skillfully moving and precisely sensing things with your hands.
Your hands are basically the focus of the human body in interacting with the environment around it.
This explains how moving from buttons and sensors to a touch-sensitive experience is a major and hard-to-explain qualitative difference.
It also underscores the great point made here, that we can make devices far more suited still to the primary way we're designed to interact with the world.
Another major problem with "research visions" like this is that they portray a thoroughly "bourgeois" future. We know already that in order for every human on this planet to have basic needs taken care of, highly consumptive 1st world lifestyles like the one portrayed in the video will need to be replaced. If you've ever built anything, you'll know that it takes an immense amount of resources to obtain that kind of polish. I know that some designers like clean, shiny things, but perpetuating the meme that the future won't be characterized by rough-edges is escapist if not simply irresponsible. If we don't imagine a future for ourselves that involves patterns of behavior that are conducive to conservation of resources and supply-chain+community resilience, then I'm afraid that the only people using tools other than shovels and guns will be a super-elite living in fortified micro-cities (so perhaps it's accurate after all).
For those sympathetic to the argument of the OP, you may be interested in Bill Buxton's papers on bi-manual interaction. Bill is a huge (and early) proponent of this point of view (that computer interfaces should make full use of the capabilities of the human body): http://www.billbuxton.com/papers.html#anchor1442822
Disclaimer: I used to work in the group that produced this video.
You have to remember that while the "wow" factor (to some folks) is the screens and form factors, this video is made by a group in Office. What they're really researching and trying to demonstrate is a vision of how your personal information and your "work" information (i.e. your social circles, your coworkers, your job) interact with each other.
How can context really be used effectively with productivity in an office setting? Context is this huge term here - device form factor, the people you're with, the things you're doing, where you're at; there is a ton of information available to apps / services now about who you are, what you're doing, etc - what are scenarios in which that information is actually combined and put to good use?
They really should've made a Director's Commentary to go along with this; there's a lot of research and data behind this video along with the special effects.
Man, Bret Victor seems to have some of the most consistently interesting and inspiring articles I've seen.
I suppose he's just pointing out one area of the future to think about, but I wish he'd mentioned other ideas. I think voice and language, in particular, have some of the most room to grow to make interfaces more intuitive.
edit to add: Along this line, I've often wondered if it'd be worth learning Lojban to interact with the computer more easily. Supposedly the language is perfectly regular and well suited to that sort of thing, but I don't know for sure.
It could be easier to teach humans Lojban than computers English (or however many other languages).
Computers can probably learn English as soon as they have learned Lojban. (Does that make sense? Assume a program that can understand Lojban. Then write a program, in Lojban, that understands the basic English grammar and the major exceptions. Then add the ability to deduce minor exceptions.)
The iPad is really, really awesome. But. All that's really changed is that they've added an extra finger. (sure there are three- and four-finger gestures but those just boil down to a different kind of single-finger gesture)
Sadly, we're probably going to have to wait for the advent of supersubstances that can dynamically reconfigure their physical characteristics before we get beyond the finger-and-eye, which I doubt will happen in my lifetime (tears).
The author makes a very valid point, and it would be quite interesting to see what kinds of tactile UI designs might be achieved. But I think there's an important distinction to be made in how we build tools to solve physical problems and how we build tools to solve conceptual ones.
Apart from purely remediative technologies such as Braille, I can't think of any technology from any era of human history in which conceptual information has ever been conveyed via the tactile sense. There have never been tactile clocks, tactile books, or any kind of tactile language. When human minds attempt to import ideas from the outside world, they use the eyes and ears, not the hands.
There's certainly a real problem with the UIs presented in the MS video, but it's not that they're visually-oriented. It's that they're designed to appeal to the eyes themselves, and fail to encode information in a way that's optimally suited to the mind. The aesthetics of the UIs in that video are stunningly beautiful, but I have no idea from looking at them how I would use them as tools; each notification, dialog box, and prompt for input seems fine in isolation, but when I try to conceptually 'zoom out' and understand how each function integrates into a workflow that allows me to apply my capabilities toward fulfilling my needs, I'm completely at a loss.
There seems to be an unfortunate trend toward pure visual aesthetics in the software industry today - perhaps a cargo-cult attempt to emulate some of Apple's successes - and MS seems to be suffering from it almost as badly as the Ubuntu and Gnome folks.
You should check out the book The Myth of the Paperless Office. It reports research in which people were given tasks like writing a summary of several magazine articles: one group did it all on a computer, the other did it on paper, and the researchers watched how people actually worked. There were a lot of subtle physical interactions in the paper group, such as moving different articles closer and farther away on the table, that the computer group tried to do analogues of and failed because of the limitations of the medium. So it's not just the eyes and ears.
The article focuses on "everyday object" manipulation, but he's right about technology too: there are a wealth of common HCI tools that glass cannot accommodate.
- The textual keyboard remains one of the fastest methods of text entry. It can be used without looking, offers high bandwidth, affords both serial and chorded inputs, and works well for precise navigation in discrete spaces, like text, spreadsheets, sets of objects like forms, layers, flowcharts, etc.
- MIDI keyboards are comparable, but trade discrete bandwidth for the expressiveness of pressure modulation.
- The joystick (and associated interfaces like wheels, pedals, etc) are excellent tools for orienting. They can also offer precise haptic feedback through vibration and resistance.
- The stylus is an unparalleled instrument for HCI operations involving continuous two dimensional spaces. It takes advantage of fine dexterity in a way that mice cannot, offering position, pressure (or simply contact), altitude, angle, and tip discrimination.
- Trackballs and mice are excellent tools for analogue positional input with widely varying velocities. You can seek both finely and rapidly, taking advantage of varying grips. Trackballs offer the added tactile benefits of inertia and operating on an infinite substrate.
- Dials, wheels. A well-made dial is almost always faster and more precise than up-down digital controls. They offer instant visual feedback, precise tuning, spatial discrimination, variable velocities, can be used without looking, and can be adapted for multiple resolutions.
- Sliders. Offer many of the advantages of dials--smooth control with feedback, usable without looking--but in a linear space. They trade an infinite domain for linear manipulation/display, easier layout, and use in flat or crowded orientations.
And these are just some of the popular ones. You've got VR headsets for immersive 3d audio and video, haptic gloves or suits, sometimes with cabling for precise pressure and force vector feedback, variable-attitude simulators, etc. There are weirder options as well--implanted magnets or electrode arrays to simulate vision, hearing, heat, taste, etc...
Dedicated interfaces can perform far better at specific tasks, but glass interfaces offer reconfigurability at low cost. That's why sound engineers have physical mixer boards, writers are using pens or keyboards, artists are using Wacom tablets, nuclear physicists are staring at fine-tuning knobs, and motorcyclists are steering with bars, grips, and body positioning; but everyday people are enjoying using their iPad to perform similar tasks.
Glass isn't going to wipe out physical interfaces; it's just a flexible tool in an expanding space of interaction techniques. More and more devices, I predict, will incorporate multitouch displays alongside dedicated hardware to solve problems in a balanced way.
I HATE, HATE, HATE the way more and more cars are doing away with the dial. Dials are awesome. Especially in cars.
I can operate the dial while driving and not risk killing someone. Replacing dial controls with flush push buttons, or worse consolidated touchscreens that control virtually everything means either I don't get any control of my radio/AC/etc when I am moving or I risk running someone down because I'm too preoccupied dealing with the shitty no-affordance monstrosity of a UI you replaced a perfectly great thing with.
> A well-made dial is almost always faster and more precise than up-down digital controls.
Eight years ago I bought a microwave oven for my apartment that had a digital knob. It's a physical knob hooked to the timer, but since it's digital, it accelerates. Below one minute, each "notch" increases the time by 5 seconds, but as you go higher, each notch adds more and more time to the total until it starts adding 5 minutes per notch.
It's a fantastic input method for setting a timer. It's tactile: you feel each notch where the time changes. It's deterministic: 30 seconds is always a quarter of a turn, 2 minutes is always a bit more than one turn, 7 minutes is always a bit more than two turns, etc. And it's superior to an analogue timer: because it accelerates, you get more precision in the lower ranges and less precision in the higher ranges.
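The acceleration curve described above is easy to model: the step size depends on the current value rather than on turning speed, which is what keeps the mapping deterministic. A minimal sketch, assuming a 24-notch-per-turn encoder; the middle thresholds are guesses, since only the 5-second and 5-minute endpoints are given:

```python
def step_for(seconds):
    """Step size per notch grows with the current time, so low ranges
    are precise and high ranges are fast to reach."""
    if seconds < 60:
        return 5        # 5 s per notch below one minute (as described)
    elif seconds < 300:
        return 15       # assumed intermediate steps
    elif seconds < 600:
        return 30
    else:
        return 300      # 5 min per notch at the top end (as described)

def dial(start_seconds, notches):
    """Turn the knob clockwise by some number of notches."""
    seconds = start_seconds
    for _ in range(notches):
        seconds += step_for(seconds)
    return seconds
```

With 24 notches per turn, a quarter turn from zero is 6 notches, i.e. always 30 seconds, matching the determinism the comment praises.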
A few years ago I moved and had to buy a new microwave. Except I couldn't find one with a digital dial, all manufacturers had switched back to the shitty input method of +/- buttons again because.. I don't know. Fashion?
It makes me furious when people and companies make interfaces that are clearly inferior to existing alternatives for no good reason at all. But most consumers don't care, and here we are. :-/
It seems to me that the touchscreen, because of the reconfigurability you point out, is the lowest common denominator of input. It makes a lot of sense for a highly portable device, because you can pack many configurations into a small device.
I guess what I lament is the richness of input resolution you give up. And if general computing trends more toward this lowest common denominator, we're faced with an input paradigm that's impoverished in every vector.
Which, as I've stated before, only really affects the input part of it, and not the consumption part.
Yes – what this ultimately comes down to is the question of malleability of interface versus specificity of interface.
Capacitive touch screens offer an incredible amount of malleability compared to what we're used to in the history of user interfaces. Within two dimensions, there's simply no limit to what we can do with them. They can be reconfigured infinitely, and yet that infinity is of a low cardinality compared to the infinite number of possible user interfaces.
This difference in cardinality tricks us into thinking it can do anything.
As you say, there are still many applications for which purpose-built interfaces are vastly preferable. Until some sort of science-fiction nano-structure can increase the cardinality of the solution space, the application of malleable interfaces will have distinct limits.
The Kinect is a great example of using the most expressive form of interaction we have - our entire body. It's the right idea. The latest updates even do facial recognition (identify muscle movement). Too bad it doesn't scale to smaller devices/cramped objects. I wonder how small kinect-like tech has to get before it makes its way into smart phones, etc.
However, I think voice might be a more expressive medium there, even better than touch. Imagine being able to detect sarcasm, inflection, accent, etc!
Funny you should mention MIDI keyboards. Many musicians are now seriously using iPads to play music. For a good example check out ProjectRnL on YouTube. They use a bunch of apps, some developed by Jordan Rudess (keyboard virtuoso and keyboardist for the band Dream Theater), that let you play things you can't easily play with a conventional keyboard. He managed to replicate a lot of the features of the Continuum Fingerboard (an expensive "continuous keyboard" controller) with some iPad apps. The main thing missing is, as you stated, pressure modulation. I'm convinced, though, that something like that will be coming to tablets not too long from now.
And one more thing: voice control. Apple has a knack for refining old ideas and making them mainstream. Voice-controlled computing has been available for years. Perhaps in the future many computer interactions for the non-techie consumer will be accomplished without using their fingers to swipe.
Oki doki, so given plausible future technology, let's try to brainstorm a solution that addresses the issue of tactility in interaction. I give you the... drumroll... KinBall (Kinetic ball). The KinBall would essentially be a wireless ball, like a small juggling sack (the smaller ones), that you could interact with to control devices.
So the KinBall would have the following features:
* Gyroscope/accelerometer, so it knows which side is up, how fast it's being moved, and where it is.
* Sensors, so it can feel where it's being squeezed/pressed and how hard.
* Some kind of detection mechanism for when two balls (cough) are touching each other.
* Ability to vibrate at different frequencies, and also only partially, on different parts of the ball.
So with a device like that, you would now have to come up with a gesture language. Some ideas:
* If the future allows it, ability to change color
* Holding the ball and moving your thumb over it is "cursor mode"; pressing in that mode would be clicking (and you could "click" and hold for submenus).
* Similarly, swiping your thumb over the ball would be the swipe gesture.
* Pinch-squeeze could be a specific gesture, perhaps combined with another (like spritzing cookies :)
* If you hold the ball in your whole hand and move it forward from your chest, you could simulate resistance by varying the frequency of the vibration to "feel" interface elements.
* You could roll the ball in your hand forwards and backwards, for instance for scrolling.
* Double the balls, double the fun. With two balls you could perhaps do interesting things with the distance between them, and again simulate resistance by vibrating the balls as you bring them closer to each other.
* Social balling: you could touch someone else's ball (ahem) to transfer info, files, etc.
* You could have the ball on your desk, and it could change color or pulse in different colors for different notifications.
This kind of interface would have some interesting features. You get tactile feedback, and most gestures are pretty natural. You don't have to get smudge marks on your screens. The ball is pretty discreet and hardly visible in your hand. Heck, with a headset (for getting information, like having texts read out), you could get away with just the ball and the headset and skip the device altogether for some scenarios.
On the other hand it's another accessory you can lose and a ball in your pants might not be the best form factor.
Anyways, if Apple introduces the iBall, you know where you read it first.
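The gesture vocabulary above boils down to a dispatcher over sensor snapshots. A toy sketch of what the device's firmware might do; every field name, threshold, and gesture label here is invented, since the KinBall is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class BallState:
    """One hypothetical sensor snapshot from the KinBall."""
    squeeze: float       # 0..1, overall grip pressure
    thumb_delta: tuple   # (dx, dy) of thumb motion across the surface
    accel: tuple         # (ax, ay, az) from the accelerometer

def interpret(state, swipe_threshold=0.5, squeeze_threshold=0.8):
    """Map one snapshot to a gesture name from the proposed vocabulary."""
    if state.squeeze > squeeze_threshold:
        return "pinch-squeeze"          # the 'spritzing cookies' gesture
    dx, dy = state.thumb_delta
    if abs(dx) > swipe_threshold or abs(dy) > swipe_threshold:
        return "swipe"                  # fast thumb motion
    if abs(dx) > 0 or abs(dy) > 0:
        return "cursor-move"            # slow thumb motion = cursor mode
    return "idle"
```

A real version would have to classify over time windows rather than single snapshots (to tell a roll from a swipe), but even this toy shows the ambiguity problem: every gesture must be separable from ordinary handling of the ball.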
Another avenue for substantial progress in interface design, in the same vein as the article's proposition, is tactile feedback, available today from vendors such as Senseg.
This technology has the capacity to bring us beyond "pictures under glass", and seems ready for integration in today's devices, with proper OS and API support.
I could see combining an e-ink display with this kind of tactile feedback surface to replace the user-exposed lower half of a laptop with a device capable of contextual interfaces. Something like this would offer great potential benefits to the user, with no apparent drawbacks.
The OP is rehashing the concepts around pervasive or ubiquitous computing: the notion that computing will expand out to meet us in tangible products, as opposed to being solely accessed on dedicated computing devices.
There's been much more than "a smattering" of work in this area. Lots of really smart industrial designers and engineers have been working on these ideas for quite some time. I personally based my Industrial Design degree thesis around these concepts almost 12 years ago. Hiroshi Ishii’s Tangible Media Group at the MIT Media Lab comes to mind. The Ambient Devices Orb was a well-covered, if early and underdeveloped, attempt to bring a consumer pervasive computing device to market.
These products are here today and will continue to emerge. A recent example would be the thermostat from Nest Labs, a device that beautifully marries the industrial design of Henry Dreyfuss’ Honeywell round thermostat with a digital display, the tangible and intangible interfaces working seamlessly in concert.
Yeah, I was going to bring up the Media Lab and TMG myself (I was a solder monkey there for a while as an undergrad). Go look at some of the stuff they have on their webpage.
http://tangible.media.mit.edu/
Love the effort put into the presentation of this blog post.
Although I personally love all the shiny finger gestures, I must agree that this "vision" is only a sexy marketing trick and contains very little actual innovation, and probably even less that Microsoft will actually build, in the near future or the long run.
Given the abundance of motor skills that we have, it would indeed be lovely to have those utilized in the future, along with voice and vision, all combined in some complexly simple and elegant way of interacting. Baby steps at a time?
I see the article as pointing to incremental improvements by advocating fine motor manipulations with feedback over touching-glass-and-seeing-the-result (which is in ways harder than hitting a keyboard and at least getting some kinesthetic feedback). But I think we need to consider things in greater generality:
1) Interface designers seem universally fixated on designs that are visually and touch/kinesthetically oriented. What's missing in this is language. In a lot of ways this winds up producing interfaces that look and feel great at first blush but become pretty crappy over time, given that most sophisticated human work is tied up with using language.
2) Even the touch part of interaction seldom considers what's ergonomically sustainable. Pointing with your index finger is fabulously intuitive to start with, but it's something you'd get really annoyed doing constantly. There are lots of fine motor manipulations that will become tiring as well.
This was similar to my own reaction [0], that these concept videos don't look far enough forward.
And, maybe because I'm just a born contrarian, as the world moves toward touch-based direct-manipulation paradigms, I've personally been moving toward a more tactile, indirect paradigm. I recently bought a mechanical-switch keyboard, for example, that I'm growing more and more fond of every day. I've also started looking for a mouse that feels better in the hand, with a better weight, and better tactility to the button clicks.
The lack of tactility in touch screen keyboards has always been especially annoying to me. There's just so much information there between my fingers and the keys. I mean, there's an entire state -- the reassuring feeling of fingers resting on keys -- that's completely missing.
I accept the compromise in a phone, something that needs to fit in my pocket so I can carry it around all the time. But this makes me lament the rise of tablet computing. This is the sort of place that I refer to when I talk about tablets privileging consumption over production.
I don't think the problem is relegated to UI hardware, though. I think part of what's holding back a lot richer and more meaningful social interaction online is the fact that current social networking paradigms map better to data than to human psychology. It's the parallel problem of fitting the tool to the problem, but not the user.[1]
I'm not sure I agree with the direction he points to (if I understand him correctly). Making our digital tools act and feel more like real, physical objects is akin to 3D skeuomorphism. It's like making a device to drive nails that looks like a human fist, but bigger and harder. Better, I think, to figure out new ways to take advantage of the full potential of our senses and bodies to manipulate digital objects in ways that aren't possible with physical objects. And, please, Minority Report is not it.
"""these concept videos don't look far enough forward."""
If you look at any 'visions of the future' from the last 150 years, their visions retrospectively look silly and naive. From a forward looking perspective, though, would a true vision of the future make sense to a person seeing it? Maybe a clunky 'TV + rotary telephone' 60's vision of videophones would be more understandable/realistic/visionary than the sight of an iphone with a forward facing camera...?
Incremental steps... there's no point in looking too far forward or you won't get anywhere.
I really dislike the lack of tactile feedback on touch devices, as well.
There's technology out there to /provide/ tactile feedback on these devices - but it hasn't been rolled out to anything consumer yet, that I'm aware of.
Excellent point on the skeuomorphism, though I'm not sure what the alternative is. He mainly focuses on all these things we don't have to learn in order to know how to use, but for any kind of sophisticated functionality I think some learning will be required; it's more about making it flexible and easy to learn, and utilizing additional channels for information in both directions.
I'm actually surprised no one has mentioned Rainbows End by Vernor Vinge yet. He actually presents a vision of a very natural, expressive near future UI.
While he doesn't go into technical details about everything, he does describe interacting with "Ubiquity" through small gestures throughout the body, whether small shrugs or interacting with hands. Further, he touches on the issues surrounding flat interfaces and even the virtual 3D interfaces.
I would submit that part of the problem is that nobody really has a clue how to use our hands. Are we going to have a thingy for every subtask that we want to do? Are our computer workstations going to resemble carpenter workbenches? Probably not, if for no other reason than lack of cost effectiveness. We've got something like the Wiimote as pretty much the epitome of hand-based interaction, but it's not very precise for anything that isn't a game.
I don't mean this as a criticism of the post; I mean it as a stab at an explanation. It is a good point, and I've been complaining about the primitive point-and-grunt interfaces[1] we've had for a while, but it's not even remotely clear where to go from here without another huge leap in processing power and hardware, at the minimum encompassing some sort of 3D glasses overlay for augmented reality or something.
[1]: The mouse is point & grunt. You get one point of focus and 1 - 5 buttons (including the mousewheel as up, down, and click). For as excited as some people have been about touchscreens, they're only a marginal improvement if they're even that; you still have only a couple kinds of "clicks", and you lose a lot on the precision of your pointing. Interfaces have papered that over by being designed for your even-more-amorphous-than-usual grunting, but when you look for it you realize that touchscreens are a huge step back on precision. They'll probably have a place in our lives for a long time but they are hardly the final answer to all problems, and trying to remove the touchscreen and read vague gestures directly has even bigger precision problems.
the problem with revolutionary user interfaces is that nobody knows how to use them. when you see a "picture under glass" of a piano keyboard, you know that in order to make noise you tap the keys. if your interface is a minor incremental change from the status quo, it doesn't require education.
this vision of the future isn't just cool, it's relatable. anybody can look at the products displayed there and think "hey, i know how to use that". if you dream up some amazing new tactile user experience, it might be revolutionary but will people understand it?
Has there been any interaction research done on using something like a stress ball as an interaction device for digital environments? In my imagination such a ball would have standard accelerometers and gyroscopes, but in addition fine-grained sensing capabilities to sense different kinds of grips. It could also provide tactile feedback.
Music tech has been wrestling with novel controllers including several like the kind you speak of for a while now: http://www.nime.org. Take a look through some of those conference notes for ideas. This is from 2003: http://nime.org/2003/nime03_program.html
Actually, we (me and a student) have built exactly such a device this year. Did not have time to finish and evaluate it yet, however.
Some papers on grasp interaction with spherical objects:
Tango: www.cs.mcgill.ca/~kry/pubs/iser/iser.pdf
Graspables: www.media.mit.edu/~vmb/papers/chi09.pdf
Grasp Sensing for HCI (my paper, more generic): http://www.medien.ifi.lmu.de/pubdb/publications/pub/wimmer20...
Well, this is supposed to be a video about the future of interaction design and not the future in general. But I have two points I want to make:
- Future technology should help mankind be independent. It doesn't need to make you rich, just let you do your own thing. I don't like that someone is driving my car or waiting for me at the airport. I'd prefer that they play music or baseball instead.
- We don't need high-tech gadgets and assistance. Get out from behind your computer and go see the world. There are hundreds of millions of people around the world who are diabetic. Go and solve that; billions and maybe trillions of dollars are there.
In brief: we don't need touch screens everywhere in the future. We don't need valets; actually, having them is worse for mankind. There are huge-scale problems like disease, famine, and joblessness that need to be solved.
[+] [-] feral|14 years ago|reply
* * *
People still travel for meetings? By plane?
Wait, someone is driving the car? Thats not very productive.
There are bellhops? Why are there still bellhops in the future? What do they do?
Why is the screen so small? Why have a screen, if you have those perfect augmented reality glasses?
'Creating reply interface'? We still have to wait for computers?
There's still global poverty, and benefit concerts? When these people have all that fancy tech? Damn it.
Copy and Paste is still around?
Kids are still taught long division? Why? Why do they use a pencil?
Also, won't the future be one of neural interfaces? Isn't there something wrong with interfacing two electrical signal processing machines (brain + computer) via all these muscles and optical sensors and so on? I know there's a lot of science to be solved first; but surely the future of interfaces is that they are invisible, and built in to us?
mrshoe | 14 years ago
Two similar examples:
Garmin's newer aircraft GPS units have touch screens instead of knobs and buttons. The iPad has proven very popular among pilots. I can see why Garmin would decide that "touch is the future." But, while I'm flying an airplane, for my money I'd rather have knobs to grab and twist, and buttons to push and feel.
Tesla's new Model S uses one huge touch screen for its in-dash interface. Surely, if you want to change your music's volume or turn on air conditioning while driving, it's harder to hit touch targets that are Pictures Under Glass than to grab and twist a knob.
pedalpete | 14 years ago
I was actually amazed that Apple completely missed the ability to use touch when you weren't looking at the device.
josscrowcroft | 14 years ago
They're up in the air about it because now, instead of knobs and switches informed by paper maps and pencils, they've got this fancy 'high-tech' touch-screen display: 3x5 feet of shiny glass.
There was a minor storm in a teacup about it, because it leads to a much greater potential for accidents, bad landings, and congestion (it's a very busy airport), but because it cost millions, nobody can go back to good ol' pencil, paper, and brain.
tsunamifury | 14 years ago
1) "Targets" are a part of tactile interfaces; in touch interfaces we may shed the need for accurate targets (i.e. gestures can be made anywhere on a screen)
2) Confidence that comes from the physical response from tactile systems may just be an anachronism after a decade or two of built up trust in touchscreen systems
zach | 14 years ago
http://www.jwz.org/blog/2004/12/sensory-homunculus/
The hands are huge because so much of your brain is devoted to skillfully moving and precisely sensing things with your hands.
Your hands are basically the focus of the human body in interacting with the environment around it.
This explains how moving from buttons and sensors to a touch-sensitive experience is a major and hard-to-explain qualitative difference.
It also underscores the great point made here, that we can make devices far more suited still to the primary way we're designed to interact with the world.
tern | 14 years ago
Some more silly videos:
- Nokia (w/ AR goggles): http://www.youtube.com/watch?v=A4pDf7m2UPE
- Cisco Songdo City: http://www.youtube.com/watch?v=f1x9qU-Sav8 (this one's real!)
For those sympathetic to the argument of the OP, you may be interested in Bill Buxton's papers on bi-manual interaction. Bill is a huge (and early) proponent of this point of view (that computer interfaces should make full use of the capabilities of the human body): http://www.billbuxton.com/papers.html#anchor1442822
xpaulbettsx | 14 years ago
You have to remember that while the "wow" factor (to some folks) is the screens and form factors, this video was made by a group in Office; what they're really researching and trying to demonstrate is the vision of how your personal information and your "work" information (i.e. your social circles, your coworkers, your job) interact with each other.
How can context really be used effectively with productivity in an office setting? Context is a huge term here: device form factor, the people you're with, the things you're doing, where you are. There's a ton of information available to apps and services now about who you are and what you're doing; what are the scenarios in which that information is actually combined and put to good use?
They really should've made a Director's Commentary to go along with this; there's a lot of research and data behind this video, along with the special effects.
losvedir | 14 years ago
I suppose he's just pointing out one area of the future to think about, but I wish he'd mentioned other ideas. I think voice and language, in particular, have some of the most room to grow to make interfaces more intuitive.
edit to add: Along this line, I've often wondered if it'd be worth learning Lojban to interact with the computer more easily. Supposedly the language is perfectly regular and well suited to that sort of thing, but I don't know for sure.
It could be easier to teach humans Lojban than computers English (or however many other languages).
sounds | 14 years ago
Computers can probably learn English as soon as they have learned Lojban. (Does that make sense? Assume a program that can understand Lojban. Then write a program, in Lojban, that understands the basic English grammar and the major exceptions. Then add the ability to deduce minor exceptions.)
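The idea that a perfectly regular language is easy for a machine to parse can be made concrete with a toy sketch. To be clear, the grammar and word endings below are invented for illustration, not actual Lojban; the point is only that when every word wears its grammatical role on its sleeve, parsing needs no backtracking or exception lists.

```python
def parse_sentence(tokens):
    """Assign a role to each word purely from its final letter.

    Toy grammar, not actual Lojban: words ending in 'a' are subjects,
    'e' are verbs, 'o' are objects. Role assignment is local to each
    word, so the parse is a single linear pass.
    """
    roles = {"a": "subject", "e": "verb", "o": "object"}
    parsed = [(roles[t[-1]], t) for t in tokens]
    if [r for r, _ in parsed] != ["subject", "verb", "object"]:
        raise ValueError("ungrammatical")
    return dict(parsed)
```

An English parser, by contrast, has to resolve words whose role depends on everything around them ("time flies like an arrow"), which is why the regular language makes a plausible stepping stone.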
ender7 | 14 years ago
The iPad is really, really awesome. But. All that's really changed is that they've added an extra finger. (sure there are three- and four-finger gestures but those just boil down to a different kind of single-finger gesture)
Sadly, we're probably going to have to wait for the advent of supersubstances that can dynamically reconfigure their physical characteristics before we get beyond the finger-and-eye, which I doubt will happen in my lifetime (tears).
Gormo | 14 years ago
Apart from purely remediative technologies such as Braille, I can't think of any technology from any era of human history in which conceptual information has ever been conveyed via the tactile sense. There have never been tactile clocks, tactile books, or any kind of tactile language. When human minds attempt to import ideas from the outside world, they use the eyes and ears, not the hands.
There's certainly a real problem with the UIs presented in the MS video, but it's not that they're visually-oriented. It's that they're designed to appeal to the eyes themselves, and fail to encode information in a way that's optimally suited to the mind. The aesthetics of the UIs in that video are stunningly beautiful, but I have no idea from looking at them how I would use them as tools; each notification, dialog box, and prompt for input seems fine in isolation, but when I try to conceptually 'zoom out' and understand how each function integrates into a workflow that allows me to apply my capabilities toward fulfilling my needs, I'm completely at a loss.
There seems to be an unfortunate trend toward pure visual aesthetics in the software industry today - perhaps a cargo-cult attempt to emulate some of Apple's successes - and MS seems to be suffering from it almost as badly as the Ubuntu and Gnome folks.
aphyr | 14 years ago
- The textual keyboard remains one of the fastest methods of text entry. It can be used without looking, offers high bandwidth, affords both serial and chorded inputs, and works well for precise navigation in discrete spaces, like text, spreadsheets, sets of objects like forms, layers, flowcharts, etc.
- MIDI keyboards are comparable, but trade discrete bandwidth for the expressiveness of pressure modulation.
- The joystick (and associated interfaces like wheels, pedals, etc) are excellent tools for orienting. They can also offer precise haptic feedback through vibration and resistance.
- The stylus is an unparalleled instrument for HCI operations involving continuous two dimensional spaces. It takes advantage of fine dexterity in a way that mice cannot, offering position, pressure (or simply contact), altitude, angle, and tip discrimination.
- Trackballs and mice are excellent tools for analogue positional input with widely varying velocities. You can seek both finely and rapidly, taking advantage of varying grips. Trackballs offer the added tactile benefits of inertia and operating on an infinite substrate.
- Dials, wheels. A well-made dial is almost always faster and more precise than up-down digital controls. They offer instant visual feedback, precise tuning, spatial discrimination, variable velocities, can be used without looking, and can be adapted for multiple resolutions.
- Sliders. Offer many of the advantages of dials--smooth control with feedback, usable without looking--but in a linear space. They trade an infinite domain for linear manipulation/display and easier layout in flat or crowded orientations.
And these are just some of the popular ones. You've got VR headsets for immersive 3d audio and video, haptic gloves or suits, sometimes with cabling for precise pressure and force vector feedback, variable-attitude simulators, etc. There are weirder options as well--implanted magnets or electrode arrays to simulate vision, hearing, heat, taste, etc...
Dedicated interfaces can perform far better at specific tasks, but glass interfaces offer reconfigurability at low cost. That's why sound engineers have physical mixer boards, writers use pens or keyboards, artists use Wacom tablets, nuclear physicists stare at fine-tuning knobs, and motorcyclists steer with bars, grips, and body positioning, while everyday people enjoy using their iPads to perform similar tasks.
Glass isn't going to wipe out physical interfaces; it's just a flexible tool in an expanding space of interaction techniques. More and more devices, I predict, will incorporate multitouch displays alongside dedicated hardware to solve problems in a balanced way.
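One way to make the comparison running through this list concrete is a small capability model. The fields and the ratings below are my own rough summary for illustration, not any standard HCI taxonomy:

```python
from dataclasses import dataclass

@dataclass
class InputDevice:
    """Rough capability summary of one physical input device."""
    name: str
    continuous_dof: int   # analogue degrees of freedom (position, pressure...)
    discrete_inputs: int  # buttons, keys, or detents
    eyes_free: bool       # usable without looking
    haptic: bool          # gives physical feedback

DEVICES = [
    InputDevice("keyboard",    0, 104, eyes_free=True,  haptic=True),
    InputDevice("stylus",      4, 1,   eyes_free=False, haptic=False),
    InputDevice("dial",        1, 0,   eyes_free=True,  haptic=True),
    InputDevice("touchscreen", 2, 0,   eyes_free=False, haptic=False),
]

# Picking devices for an eyes-busy context (e.g. driving) becomes a query:
eyes_free_names = [d.name for d in DEVICES if d.eyes_free]
```

Framed this way, the touchscreen's weakness is visible at a glance: it scores on reconfigurability (not a field here) while losing on every tactile axis.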
georgemcbay | 14 years ago
I HATE, HATE, HATE the way more and more cars are doing away with the dial. Dials are awesome. Especially in cars.
I can operate a dial while driving and not risk killing someone. Replacing dial controls with flush push buttons, or worse, consolidated touchscreens that control virtually everything, means either I don't get any control of my radio/AC/etc. while I'm moving, or I risk running someone down because I'm too preoccupied dealing with the shitty no-affordance monstrosity of a UI you replaced a perfectly great thing with.
henrikschroder | 14 years ago
Eight years ago I bought a microwave oven for my apartment that had a digital knob. It's a physical knob hooked to the timer, but since it's digital, it accelerates. Below one minute, each "notch" increases the time by 5 seconds, but as you go higher, each notch adds more and more time to the total, until it's adding 5 minutes per notch.
It's a fantastic input method for setting a timer. It's tactile: you feel each notch where the time changes. It's deterministic: 30 seconds is always a quarter of a turn, 2 minutes is always a bit more than one turn, 7 minutes always a bit more than two turns, and so on. And it's superior to an analogue timer: because it accelerates, you get more precision in the lower ranges and less precision in the higher ranges, where it matters less.
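The accelerating-notch behaviour is simple to sketch in code. The thresholds and step sizes below are my guesses for illustration, not the actual microwave's values; the key property is that the step depends only on the current total, which is what makes the knob deterministic:

```python
def notch_step(total_seconds):
    """Seconds added per detent; the step grows with the current total."""
    if total_seconds < 60:     # below one minute: 5-second notches
        return 5
    elif total_seconds < 300:  # mid range: 30-second notches
        return 30
    else:                      # high range: 5-minute notches
        return 300

def turn(total_seconds, notches):
    """Apply a clockwise turn of `notches` detents to the current total."""
    for _ in range(notches):
        total_seconds += notch_step(total_seconds)
    return total_seconds
```

Because the step is a pure function of the running total, the same number of detents from zero always lands on the same time, exactly the "quarter of a turn is always 30 seconds" property described above.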
A few years ago I moved and had to buy a new microwave. Except I couldn't find one with a digital dial; all the manufacturers had switched back to the shitty input method of +/- buttons again because... I don't know. Fashion?
It makes me furious when people and companies make interfaces that are clearly inferior to existing alternatives for no good reason at all. But most consumers don't care, and here we are. :-/
joebadmo | 14 years ago
It seems to me that the touchscreen, because of the reconfigurability you point out, is the lowest common denominator of input. It makes a lot of sense for a highly portable device, because you can pack many configurations into a small device.
I guess what I lament is the richness, or input resolution, you give up. And if general computing trends toward this lowest common denominator, we're faced with an input paradigm that's impoverished in every vector.
Which, as I've stated before, only really affects the input part of it, and not the consumption part.
mortenjorck | 14 years ago
Capacitive touch screens offer an incredible amount of malleability compared to what we're used to in the history of user interfaces. Within two dimensions, there's simply no limit to what we can do with them. They can be reconfigured infinitely, and yet that infinity is of a low cardinality compared to the infinite number of possible user interfaces.
This difference in cardinality tricks us into thinking it can do anything.
As you say, there are still many applications for which purpose-built interfaces are vastly preferable. Until some sort of science-fiction nano-structure can increase the cardinality of the solution space, the application of malleable interfaces will have distinct limits.
psychotik | 14 years ago
However, I think voice might be a more expressive medium there, even better than touch. Imagine being able to detect sarcasm, inflection, accent, etc!
mattiask | 14 years ago
So the Kinball would have the following features:
* Gyroscope/accelerometer, so it knows which side is up, how fast it's being moved, and where it is
* Sensors so it can feel where it's being squeezed/pressed and how hard
* Some kind of detection mechanism for when two balls (cough) are touching each other
* Ability to vibrate at different frequencies, and also only partially, on different parts of the ball
So with a device like that, you would now have to come up with a gesture language. Some ideas:
* If the future allows it, ability to change color
* Holding the ball and moving your thumb over it is "cursor mode"; pressing in that mode would be clicking (and you could "click" and hold for submenus)
* Similarly, swiping your thumb over the ball would be the swipe gesture
* Pinch-squeeze could be a specific gesture, perhaps combined with another gesture (like spritzing cookies :)
* If you hold the ball in your whole hand and move it forward from your chest, you could simulate resistance by varying the frequency of the vibration to "feel" interface elements
* you could roll the ball in your hand forwards and backwards, for instance for scrolling
* Double the balls, double the fun. With two balls you could perhaps do interesting things with the distance between them and again simulate resistance by vibrating the balls as you bring them closer to each other
* Social balling: you could touch someone else's ball (ahem) to transfer info, files, etc.
* You could have the ball on your desk and it could change color or pulse in different colors for different notifications.
This kind of interface would have some interesting features. You get tactile feedback, and most gestures are pretty natural. You don't have to get smudge marks on your screens. The ball is pretty discreet and hardly visible in your hand. Heck, with a headset (for getting information, like having texts read to you) you could get away with just the ball and the headset, and skip the device altogether for some scenarios.
On the other hand it's another accessory you can lose and a ball in your pants might not be the best form factor.
Anyway, if Apple introduces the iBall, you know where you read it first.
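A gesture language like the one sketched above ultimately reduces to dispatching on sensor readings. As a hypothetical illustration only (every sensor name and threshold here is invented, since no such device exists), the core loop might classify readings like this:

```python
def classify(reading):
    """Map one Kinball-style sensor reading to a gesture name.

    `reading` is a dict with:
      'squeeze'     - pressure per grip zone, each in 0..1
      'thumb_delta' - (dx, dy) thumb movement since the last reading
      'roll'        - signed rotation (radians) about the hand axis
    Checks are ordered so the most deliberate gesture wins.
    """
    if max(reading["squeeze"].values()) > 0.8:
        return "pinch-squeeze"   # hard squeeze anywhere on the ball
    dx, dy = reading["thumb_delta"]
    if abs(dx) + abs(dy) > 0.5:
        return "swipe"           # fast thumb movement across the surface
    if abs(reading["roll"]) > 0.3:
        return "scroll"          # rolling the ball forwards/backwards
    return "cursor"              # light thumb contact just moves a cursor
```

The ordering matters: a hard squeeze also moves the thumb slightly, so the more deliberate, higher-threshold gestures have to be tested first.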
WiseWeasel | 14 years ago
http://senseg.com/experience/senseg-demos
This technology has the capacity to bring us beyond "pictures under glass", and seems ready for integration in today's devices, with proper OS and API support.
I could see combining an e-ink display with this kind of tactile feedback surface to replace the user-exposed lower half of a laptop with a device capable of contextual interfaces. Something like this would offer great potential benefits to the user, with no apparent drawbacks.
RyLuke | 14 years ago
There's been much more than "a smattering" of work in this area. Lots of really smart industrial designers and engineers have been working on these ideas for quite some time. I personally based my Industrial Design degree thesis around these concepts almost 12 years ago. Hiroshi Ishii’s Tangible Media Group at the MIT Media Lab comes to mind. The Ambient Devices Orb was a well-covered, if early and underdeveloped, attempt to bring a consumer pervasive computing device to market.
These products are here today and will continue to emerge. A recent example would be the thermostat from Nest Labs, a device that beautifully marries the industrial design of Henry Dreyfuss’ Honeywell round thermostat with a digital display, the tangible and intangible interfaces working seamlessly in concert.
kirillzubovsky | 14 years ago
Although I personally love all the shiny finger gestures, I must agree that this "vision" is only a sexy marketing trick and contains very little actual innovation, and probably even less that Microsoft will actually build, in the near future or the long term.
As for the abundance of motor skills that we have, it would indeed be lovely to have them utilized in the future, along with voice and vision, all combined in some complexly simple and elegant way of interacting. Baby steps at a time?
joe_the_user | 14 years ago
1) Interface designers seem universally fixated on designs that are visually and touch/kinesthetically oriented. What's missing is language. In a lot of ways this winds up producing interfaces that look and feel great at first blush but become pretty crappy over time, given that most sophisticated human work is tied up with using language.
2) Even the touch part of interaction seldom considers what's ergonomically sustainable. Pointing with your index finger is fabulously intuitive to start with, but it's something you'd get really annoyed at doing constantly. Lots of fine motor manipulations will have a hard time as well.
joebadmo | 14 years ago
And, maybe because I'm just a born contrarian, as the world moves toward touch-based direct-manipulation paradigms, I've personally been moving toward a more tactile, indirect paradigm. I recently bought a mechanical-switch keyboard, for example, that I'm growing more and more fond of every day. I've also started looking for a mouse that feels better in the hand, with a better weight, and better tactility to the button clicks.
The lack of tactility in touch screen keyboards has always been especially annoying to me. There's just so much information there between my fingers and the keys. I mean, there's an entire state -- the reassuring feeling of fingers resting on keys -- that's completely missing.
I accept the compromise in a phone, something that needs to fit in my pocket so I can carry it around all the time. But this makes me lament the rise of tablet computing. This is the sort of place that I refer to when I talk about tablets privileging consumption over production.
I don't think the problem is relegated to UI hardware, though. I think part of what's holding back a lot richer and more meaningful social interaction online is the fact that current social networking paradigms map better to data than to human psychology. It's the parallel problem of fitting the tool to the problem, but not the user.[1]
I'm not sure I agree with the direction he points to (if I understand him correctly). Making our digital tools act and feel more like real, physical objects is akin to 3D skeuomorphism. It's like making a device to drive nails that looks like a human fist, but bigger and harder. Better, I think, to figure out new ways to take advantage of the full potential of our senses and bodies to manipulate digital objects in ways that aren't possible with physical objects. And, please, Minority Report is not it.
[0]: http://news.ycombinator.com/item?id=3184216
[1]: More here: http://blog.byjoemoon.com/post/11670022371/intimacy-is-perfo... and here: http://blog.byjoemoon.com/post/12261287667/in-defense-of-the...
anjc | 14 years ago
If you look at any 'visions of the future' from the last 150 years, they retrospectively look silly and naive. From a forward-looking perspective, though, would a true vision of the future make sense to a person seeing it? Maybe a clunky 'TV + rotary telephone' 60's vision of videophones would be more understandable/realistic/visionary than the sight of an iPhone with a forward-facing camera...?
Incremental steps... there's no point in looking too far forward, or you won't get anywhere.
warfangle | 14 years ago
There's technology out there to /provide/ tactile feedback on these devices - but it hasn't been rolled out to anything consumer yet, that I'm aware of.
I can't wait for it:
http://thenextweb.com/apple/2010/10/29/electrostatic-feedbac...
Riverbed | 14 years ago
http://www.kickstarter.com/projects/740785012/touchfire-the-...
Dysiode | 14 years ago
While he doesn't go into technical details about everything, he does describe interacting with "Ubiquity" through small gestures throughout the body, whether small shrugs or interacting with hands. Further, he touches on the issues surrounding flat interfaces and even the virtual 3D interfaces.
And that's just the UI side of the novel.
jerf | 14 years ago
I don't mean this as a criticism of the post; I mean it as a stab at an explanation. It's a good point, and I've been complaining about the primitive point-and-grunt interfaces[1] we've had for a while, but it's not even remotely clear where to go from here without another huge leap in processing power and hardware, at minimum encompassing some sort of 3D glasses overlay for augmented reality or something. (Touchscreens are only an incremental point-and-grunt improvement over mice; you get a couple more gestures at the cost of precision.)
[1]: The mouse is point & grunt. You get one point of focus and 1 - 5 buttons (including the mousewheel as up, down, and click). For as excited as some people have been about touchscreens, they're only a marginal improvement if they're even that; you still have only a couple kinds of "clicks", and you lose a lot on the precision of your pointing. Interfaces have papered that over by being designed for your even-more-amorphous-than-usual grunting, but when you look for it you realize that touchscreens are a huge step back on precision. They'll probably have a place in our lives for a long time but they are hardly the final answer to all problems, and trying to remove the touchscreen and read vague gestures directly has even bigger precision problems.
notatoad | 14 years ago
This vision of the future isn't just cool, it's relatable. Anybody can look at the products displayed there and think, "Hey, I know how to use that." If you dream up some amazing new tactile user experience, it might be revolutionary, but will people understand it?