
The Future of UI

118 points | chuhnk | 16 years ago | youtube.com

83 comments

[+] sdfx|16 years ago|reply
These new UI concepts are always awe inspiring, but what is really the benefit of using them?

One of the most popular demonstrations is browsing through a bunch of photos on a huge screen with large gestures. While this looks quite impressive, it's inaccurate for specific tasks (e.g. the color-selection planes in the video), the gestures are limited, and you have to learn somewhat unintuitive ones beyond the more obvious "select", "move left" and "zoom" gestures. His more real-world examples (the table and the globe) didn't quite work, and what he could show us wasn't a step up from using a mouse.

Another favorite is the "physical elements on a table" example. This works reasonably well, but his examples again were not convincing. Using it as a wind tunnel without being able to rotate the model in three dimensions? Calculating the shadows of buildings?

But what's holding us back? Processing power? Cost? Hardware requirements? Or a general lack of use cases, of areas where this really makes sense?

[+] hop|16 years ago|reply
Their incentive is to make it look cool, wow an audience, and bring more grant money to the MIT Media Lab. Contrast this with a company that puts the pieces together and ships a useful product - like the iPad UI...

I always thought crazy concept cars were a waste of time and resources for car companies. If they instead focused on massive in-house iteration (like Apple's 10-3-1 prototyping process), better cars would be brought to market.

[+] DannoHung|16 years ago|reply
I don't know. Why did it take so long for touch screen interfaces to become huge when almost everyone loves them?

On the other hand, why are command line interfaces still the most efficient way for experts to interact with a system?

[+] levesque|16 years ago|reply
There is a benefit for a small subset of applications, those which can gain from having 3D input. Most of what he does in that demonstration is not in this category. But interfaces like that can be very nice for architecture (or computer-assisted design) and 3D data visualization (medical or otherwise). Gaming is also bound to be an interesting application. It will be nice to see what Microsoft comes up with for Natal as well.

For general computing I am not sure there is a use for this kind of interface; this stuff takes a lot of space and a long time to set up, while we are going the other way, towards smaller and more portable computers.

Also, let us not forget that gestural interfaces are very tiresome and it would be hard to use one for a long period of time.

[+] fjabre|16 years ago|reply
Everybody seems to be stuck on hand gestures and arm movement for the future, but while this looks cool I wonder just how comfortable it is to keep your arms waving about like that for hours on end. Also, it's hard to argue that 3D is always superior to 2D for presenting information. In some cases 2D is more than sufficient.

I also wonder why there isn't more talk about brain–computer interfaces. It seems to me that the most natural UI is one that can be navigated just by thinking. It might be a little Borg-like, but I can't imagine HCI going in any other direction long-term.

*http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interfac...

[+] inimino|16 years ago|reply
Physical inactivity (sitting at your desk all day moving only your fingertips) is implicated in many of the health problems most hackers are likely to suffer from and eventually die from. I think many of us would welcome the opportunity to stand up and move around a bit while still getting our work accomplished, even if it wasn't the only interface we used.

BCIs, on the other hand, would mean you can potentially finish that software module, or at least read your email, while simultaneously going for a run ...as long as you don't run into a tree or the road.

[+] gojomo|16 years ago|reply
The way things are going, the first mass-market UI that can be "navigated just by thinking" will probably require users to think in Mandarin.
[+] Groxx|16 years ago|reply
I've generally thought that too: gestures (especially when your arm is involved) are hugely tiring if you have to do them for a long time on anything larger than an iPhone. And 3D imposes extra thinking. Subtle 3D could work - we already have a minimal version, with layered windows and shadows and "3D buttons" - but nothing drastic.

And my hopes too are for brain interfaces, though I think I've got a pretty good idea of the difficulty inherent in that. I can hope, right?

[+] musclman|16 years ago|reply
Sure, it looks cool, but it appears to require a lot of slow and inefficient physical movement to accomplish the most basic of tasks. Imagine a bunch of cubicle workers sitting at their desks waving their arms around trying to navigate their computer's file system :-)
[+] mechanical_fish|16 years ago|reply
Keep in mind that, like most of the things designed at the Media Lab, this was built to look really good in demos. The movements are big and inefficient so they will show up well on stage and on camera.

In the real world the movements might be much more subtle. Certainly no broader than, say, American Sign Language, which is quite analogous to what we are reinventing here.

[+] josh33|16 years ago|reply
All technology feels lousy out of its time. He's giving the hackers/entrepreneurs the tech. It's up to us to create some valuable/entertaining/necessary functions out of it.
[+] ryanjmo|16 years ago|reply
Imagine the repetitive stress injury. My body hurts from just using the wii.
[+] EAMiller|16 years ago|reply
Agreed, every time I see this my arms feel tired.
[+] tgandrews|16 years ago|reply
Moving physical objects makes sense; touch makes sense. Learning a series of gestures to manipulate data on a screen makes less sense to me than a mouse.

The mouse is movement and control; the gestures require you to hold your hands in strange positions and move within a field of view (what the camera can see). An improved design will need to be intuitive.

[+] Scriptor|16 years ago|reply
The brain is able to make itself think that external tools are mere extensions of the body, so it's no surprise that the mouse has been successful. I wonder if the lack of any tactile response in these interfaces hinders them somewhat or if it's all the same for the brain.
[+] jules|16 years ago|reply
What would make sense, I think, is just using your hands on the table instead of a mouse. Touchpads do some of this, but they don't work very well and the surface is usually very small.
[+] johnthedebs|16 years ago|reply
"It has to be for every human being...It's been 25 years. Can there really be only one interface? There can't."

I love that, and I totally agree. I think what we're going to see in the future is applications that primarily use the interfaces they're best suited for with fallbacks to other less well-suited interfaces.

Is this going to be the future for every application? No way. The same way touch-based interfaces aren't the future for every application. But (as with any demo) he's only scratching the surface here and I believe that a UI which matches the way we already think about things has some huge implications.

[+] treblig|16 years ago|reply
At the end he mentions "[In 5 years time,] when you buy a computer, you'll get this."

I think that's about 5 years too soon on that one. Incredible demonstrations though... awesome to see science fiction become reality.

[+] ams6110|16 years ago|reply
Sigh. In 5 years' time I'm quite certain I'll still be spending 90% of my time in Emacs or a shell, just like I was doing 5, 10, 15, 20 years ago.
[+] stcredzero|16 years ago|reply
All you need is a large display and a webcam. Isn't Microsoft already doing this for gaming? The enhanced resolution Wii controller with a large flatscreen already has most of this capability.
[+] bruceboughton|16 years ago|reply
I find the conclusion of the talk hard to swallow: that these sorts of interfaces will be common in the computer you buy 5 years from now.

Why? This goes against the current major trend in the industry: mobilisation / pocket-isation. It is inconceivable that our built environments will have the required sensors, projectors, etc. to enable these interfaces. Even more than that, our computing is becoming ever more mobile. Computing has to fit our environment, not the other way round.

Maybe this stuff is the future, but it's certainly not the near future and I didn't really see much value in the interfaces demoed.

Then again, the point of R&D is to discover what doesn't work as much as what does and you can't do that without realising your ideas.

[+] daralthus|16 years ago|reply
This is still 2D. You can't interact in 3D if it is just projected in 2D. He is just pointing and flying. You wouldn't have to fly between documents if you had real 3D augmented reality. Apps would be like real 3D objects: you could touch them and manipulate their shapes the way you do with a keyboard or a door-knob; the only difference is that they wouldn't be made of real material, so they wouldn't have physical boundaries unless programmed that way. (It just depends on the needs.) I want to make a demo of this - is there somebody who wants to join?
[+] bruceboughton|16 years ago|reply
It's quite a scary thought that with a truly 3D/AR computing experience, we might not be able to tell which elements of our environment are Real World and which are virtual.

Imagine a virus that injects fake flooring into your vision where instead there is a 40ft fall!

[+] Tycho|16 years ago|reply
I'm not so sure about all'a'dat (although I do remember thinking the stuff in Minority Report was awesome, years ago) but I can see a need for 3D/depth augmentation of standard desktop interfaces. I want to be able to tuck windows away 'in the distance' or twist them round to a slanted pane (so they take up a fraction of the space but are still more or less visible/legible). I also want to step in and out of '3D mode' when making ER diagrams or UML diagrams and such, for when there's too many criss-crossing lines.

Undoubtedly these things have already been tried (I saw a nice Linux demo somewhere, years ago, with 3D windows) but I'd like them standardized, and touch-operated. It'd make a big difference IMO.

Periodically when using a cluttered interface I mutter 'this is why the need for 3D is so great' and my colleagues laugh at me. But I'm only half joking.

[+] koeselitz|16 years ago|reply
"We didn't have networks at all at the time of the Macintosh's introduction." Seriously? Seems like that estimation might be about fifteen years off to me.

Edit to say: In fact, he's an intelligent and well-versed enough guy that I'm sort of puzzled by this remark. Does anyone know what Mr Underkoffler means when he says that networks weren't around then? I think he must have something different in mind than I do.

[+] inimino|16 years ago|reply
I presume "we" there means the general market to which the Mac was introduced. Ubiquitous Internet access was years away.
[+] stcredzero|16 years ago|reply
I thought there were Ethernet LANs at Xerox PARC in the '70s.
[+] ghempton|16 years ago|reply
He kind of fumbles on the question of "what is the killer app?" You'd think it would be on the tip of his tongue considering how long he's been working on this...

That said, there really needs to be more open source software to enable this. I think we will really see some innovation once a hacker can take a $10 webcam and an open source lib and start creating software with these types of user interfaces.
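The core trick isn't even that deep. Here's a toy sketch of the idea in pure Python (in practice OpenCV would supply the capture and the image math; every name below is hypothetical, invented for illustration): difference consecutive webcam frames, find the centroid of the pixels that changed, and read a left/right swipe off how that centroid drifts over time.

```python
# Toy swipe detection via frame differencing. Frames are 2D lists of
# 0-255 brightness values; all function names here are made up for the
# sketch, not from any real library.

def motion_centroid(prev, curr, threshold=30):
    """Return (row, col) centroid of pixels that changed, or None."""
    rows = cols = count = 0
    for r, (p_row, c_row) in enumerate(zip(prev, curr)):
        for c, (p, q) in enumerate(zip(p_row, c_row)):
            if abs(p - q) > threshold:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None  # no motion between these two frames
    return rows / count, cols / count

def classify_swipe(frames, min_shift=2):
    """Call 'left'/'right' from horizontal centroid drift across frames."""
    centroids = [motion_centroid(a, b) for a, b in zip(frames, frames[1:])]
    xs = [c[1] for c in centroids if c is not None]
    if len(xs) < 2:
        return None
    drift = xs[-1] - xs[0]
    if drift > min_shift:
        return "right"
    if drift < -min_shift:
        return "left"
    return None

# Synthetic demo: a bright 2x2 "hand" moving left to right on a dark 8x8 frame.
def frame_with_blob(col):
    f = [[0] * 8 for _ in range(8)]
    for r in range(3, 5):
        for c in range(col, col + 2):
            f[r][c] = 255
    return f

frames = [frame_with_blob(c) for c in range(6)]
print(classify_swipe(frames))  # prints: right
```

A real version would need background subtraction, smoothing, and gesture segmentation, but the point stands: the entry cost is a webcam and a few dozen lines around an open source vision lib.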

[+] davidalln|16 years ago|reply
A lot of this has been done with openFrameworks (http://openframeworks.cc/). It ties together a plethora of open source frameworks, including OpenCV, with easy-to-use bindings that effectively give you a somewhat simplified version of C++. If you search YouTube for openFrameworks, you'll get a lot of demos of this technology using free libs.
[+] pedrokost|16 years ago|reply
What I really hate about operating systems is that they are the same as they were in the beginning. The only thing that has changed is the visual appearance. We still have a taskbar, a desktop, etc. Can't someone reinvent how operating systems work?
[+] frou_dh|16 years ago|reply
I respect the chops of those creating these things but I just don't feel like I'd want to use them daily. Perhaps I'm already locked in to a legacy mindset by my mid 20s!
[+] ryanjmo|16 years ago|reply
This talk seems like a whole bunch of 3-D snake oil.
[+] elblanco|16 years ago|reply
Nice first effort, but after watching all I can think is:

arms = tired

looks like a clumsy, highly particular and low volume way to sift through data