top | item 6210644

Leap Motion: Amazing, Revolutionary, Useless

173 points | kevin_morrill | 12 years ago | hanselman.com

135 comments

[+] jdietrich|12 years ago|reply
Believe it or not, this is a century-old problem.

The theremin is an electronic musical instrument, played by waving your hands in the air. It works by detecting RF capacitance between a pair of antennae and the player's body. You can see the Theremin being played at the YouTube link below.

Playing the theremin is incredibly difficult, due to the lack of tactile feedback. The human body is very poorly equipped to point precisely at an arbitrary position in free space. Only a handful of players can achieve anything better than squeaky science fiction noises and even virtuoso players struggle constantly with intonation. Modern theremin technique depends on a system of discrete hand gestures, which reduce the player's dependency upon coarse proprioception.

If the Leap Motion is to have any real utility, it will need phenomenally sophisticated software to interpret intent from hand motion rather than simply passing the hand location through as a raw input. The human body simply isn't capable of making the kind of movements that the designers of the Leap Motion seem to expect, even with a great deal of practice.

https://www.youtube.com/watch?v=Ptq_N-gjEpI

[+] lcedp|12 years ago|reply
Thank you for the insight. Really interesting. But popular sci-fi movie features like waving hands in the air, touch screens parallel to the floor, and transparent monochrome screens (not HUDs) just seem so obviously wrong to me, and I don't understand why so many people fail to see it.
[+] lifeformed|12 years ago|reply
I wonder if you could add some tactile feedback with some precision fans? A grid of fans below your hand and in front of it could provide a gradient of subtle pressure. Or maybe a fan on some sensitive servos that track each fingertip?
[+] vanderZwan|12 years ago|reply
As I understand it, another problem with the theremin is that its sensitivity depends on humidity and anything else that affects capacitance. Capacitance also depends on the shape of the hand (and arm position, and body position, etc.), so it doesn't translate directly to linear distance coordinates. That makes it hard to build reliable muscle memory. The Leap in principle should be able to avoid these problems, although so far a lot of the software written for it fails at this.

Also, a theremin is always on; you can't, for example, turn detection on or off based on the number of fingers you hold up.

[+] nairteashop|12 years ago|reply
I think the article is a bit unfair. I've been playing around with a leap for a few days and am suitably impressed.

What works very well with the device is coarse movements, especially relative hand movements. What doesn't work so well is finer gestures (1/100th of a millimeter motions of all 10 fingers? ha).

This app is a perfect example: https://airspace.leapmotion.com/apps/cyber-science-motion

You can use your hands, kept flat, to spin around / zoom a 3-D rendering of a human skull. You can also point at specific elements on the skull. Both of these coarse gestures work great, and the experience is incredible.

However, the app also unfortunately has a "click" gesture to pick apart elements of the skull: you click by spreading out your thumb and then folding it back in. It works terribly, as this fine gesture is detected maybe 50% of the time. It should simply have been left out.

I showed this app to my dad, who's a doctor, and he was blown away. He was visibly excited about the potential for a device he can use to spin around CT and MRI scans in the operating room without having to touch a mouse/joystick - currently he has a person doing this for him to keep things sterile, and this can sometimes be frustrating.

The Leap, at least in its current incarnation, reminds me a lot of Google Glass. Both Google/Leap and their proponents say the devices are going to change the world. Maybe, maybe not. Neither device works perfectly like what you see in the heavily edited demo videos. But both can be invaluable in certain specialized fields, today, as long as folks are realistic about what can be done with them.

[+] abrichr|12 years ago|reply
> I showed this app to my dad, who's a doctor, and he was blown away. He was visibly excited about the potential for a device he can use to spin around CT and MRI scans in the operating room without having to touch a mouse/joystick - currently he has a person doing this for him to keep things sterile, and this can sometimes be frustrating.

This is exactly the value proposition of my startup, TouchFree Labs. We're developing software that uses the Leap Motion Controller to allow surgeons to manipulate medical images inside of the operating room. You can see a demonstration of an early prototype here: http://www.youtube.com/watch?v=WaO-cimDOEQ. Demo starts at about 35s. (Apologies in advance for the low production value.)

Right now our bottleneck is medical expertise, and we're looking for surgeons who would be interested in collaborating with us. We're developing workflows that are tailored for different types of procedures, which requires very specialized knowledge. The application also learns the nuances of individual users' movements to improve gesture recognition, which requires lots of data.

I don't know how far away you are from Toronto, but if you could pass the message along to your dad, I'd be very grateful--if only to get some basic feedback. But if he's interested, he could be among the first surgeons in the world to use the Leap Motion inside of an operating room.

[+] emehrkay|12 years ago|reply
At this very moment I am using Apple's external trackpad. If this thing had Leap tech built in, it would, to me, be the next logical evolution of computer interfaces. I imagine the trackpad being a bit wider, and the user could just raise his hands when needed and touch in just about any other situation. *I haven't used the Leap yet
[+] rywang|12 years ago|reply
A big problem with the LEAP is that there isn't an effective way to click / select something. Pushing forward with your index finger isn't very accurate when the finger tip is also controlling the position. Hence, you always seem to miss where you intend to click. Good selection is a pretty important piece of almost any useful application.

Disclaimer: I work at 3Gear Systems (http://threegear.com), developing technology that possibly competes with the LEAP. We solve clicking by tracking the entire hand -- not just the straight finger.

[+] Qworg|12 years ago|reply
The Leap won't be a "real input device" until you can get your hands out of the flat orientation. At this point, it is a toy.

I think the article is good coverage of the state of the device.

[+] ibudiallo|12 years ago|reply
I think, from the example cited, what this author needs is a touchscreen laptop.
[+] Keyframe|12 years ago|reply
What's the lag like?
[+] mbesto|12 years ago|reply
There's a general assumption that the Minority Report interface is the interface of the future.[0] There are a few reasons why I'm saying no.

1. First and foremost, gorilla arm.[1] My presumption with the "interface of the future" is that it's needed for prolonged use. So, first things first, the interface can't be one where our arms require our hands to be higher than our elbows. Unless of course our species gets a whole lot stronger in the forearm to support such a feature. I don't see our species doing that anytime soon.

2. Feedback - Right now the feedback loop is eye->brain->hand->brain->eye (repeat) where the hand's pressure against a solid surface is the most important feedback response. With the minority report style interface we currently have a massive delay (comparatively speaking) between the brain->hand->brain loop. We also have to iterate the whole loop much more because we need to constantly assess with our eye where our hand is in 3D (not digital) space. Now let's say the technology gets much better and reduces this to 5ms. We are now bound by the differences of our synapses firing between touch and light. I could be wrong, but it's my assumption that due to the speed of light being the way that it is, that "touch" will always beat "sight" in performance.

For prolonged-use applications my bet is on adaptive surfaces. For short-term interfaces (turning a stove on, flicking a light switch, etc.) I can potentially see this Minority Report style interface happening. But does the benefit justify the cost of innovation? Personally I think we are fooling ourselves.

[0] - http://www.ted.com/talks/john_underkoffler_drive_3d_data_wit...

[1] - http://en.wikipedia.org/wiki/Touchscreen#.22Gorilla_arm.22

[+] rywang|12 years ago|reply
Very true. I work at a company building an alternative gestural input device (http://threegear.com). Here's how we have tried to address your points.

1. Gorilla arm -- keep your hands low. We support tracking and interactions literally 1cm above the keyboard / desk. We're mounting the camera above the monitor to achieve this.

2. We use gestures with built-in physical feedback. For instance, our click mechanism is a "pinch" which brings the thumb and index finger tips together. You can "feel" the physical touch event between your fingers when you trigger a command.
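
A pinch-to-click scheme like this can be sketched in a few lines. This is an illustrative assumption of how such a detector might work, not code from 3Gear's or Leap's actual SDKs; the thresholds and the fingertip-position format are made up. The asymmetric on/off thresholds (hysteresis) keep the "click" from chattering when the fingertips hover right at the boundary:

```python
import math

PINCH_ON = 25.0   # mm: fingertips closer than this -> pinch begins (assumed)
PINCH_OFF = 40.0  # mm: wider separation required before pinch releases (assumed)

def distance(a, b):
    """Euclidean distance between two 3-D fingertip positions (x, y, z tuples)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PinchDetector:
    """Turns continuous thumb/index tip positions into discrete press/release events."""

    def __init__(self):
        self.pinched = False

    def update(self, thumb_tip, index_tip):
        """Feed one frame of fingertip positions; returns 'press', 'release', or None."""
        d = distance(thumb_tip, index_tip)
        if not self.pinched and d < PINCH_ON:
            self.pinched = True
            return "press"
        if self.pinched and d > PINCH_OFF:
            self.pinched = False
            return "release"
        return None
```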

Shameless plug: an old video showing interaction with Reddit, Google Maps, browser. http://www.youtube.com/watch?v=U0WLh7WNxCI

[+] rdtsc|12 years ago|reply
Spot on.

We got one of these devices and did the demos, waving our hands around to move through Google Earth. But then our hands started getting tired, and we couldn't really see how this was better than a mouse or joystick.

Maybe it would be cool for interactive kiosks or little showroom gimmicks, but for prolonged use, forget about it.

[+] nairteashop|12 years ago|reply
I agree with you that a Minority Report style interface doesn't make sense as the sole interface to a PC. However, something like the Leap IMO makes a great secondary interface to a keyboard and mouse/trackpad.

I'm using a Leap controller right now, with BTT for Mac/Touchless for gesture-based control. As I read through a page I can simply stick my hand out and wave it up to scroll down the page - it's a phenomenal experience for passive reading as I don't have to break focus to reach out for my mouse/trackpad. I've also configured some additional coarse gestures to launch mission control etc.

Using the Leap for such brief, coarse gestures avoids both of the problems you've mentioned: my arm is resting on my desk with fingers just a few inches above my trackpad/keyboard, so no "gorilla arm" problems; the gestures are coarse, requiring very little hand-eye coordination; and the gestures are brief, so no fatigue problems.

All of this breaks down once you start trying to do any finer-control gestures, like trying to point at links and click on them like the OP tried to do. IMO the Leap should be used to augment the keyboard/mouse as a secondary interaction interface that you use occasionally.
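
A coarse wave-to-scroll mapping like the one described above could be sketched as follows. Everything here is an illustrative assumption (the gain, the dead zone, and the idea of feeding in palm velocity), not anything from the Leap SDK or Touchless:

```python
SCROLL_GAIN = 0.5   # scroll lines per mm/s of palm speed (assumed tuning value)
MIN_SPEED = 50.0    # mm/s dead zone so a resting or trembling hand doesn't scroll

def scroll_amount(palm_velocity_y):
    """Maps vertical palm velocity (mm/s) to a number of scroll lines.

    Below MIN_SPEED nothing happens, so small tremors are ignored; above
    it, scrolling is proportional to how fast the hand is moving.
    """
    speed = abs(palm_velocity_y)
    if speed < MIN_SPEED:
        return 0
    direction = 1 if palm_velocity_y > 0 else -1
    return direction * round((speed - MIN_SPEED) * SCROLL_GAIN)
```

The dead zone is what makes a gesture like this feel "brief": the hand at rest above the keyboard does nothing, and only a deliberate wave crosses the threshold.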

[+] bennyg|12 years ago|reply
A coworker and I have written some gesture recognizers for Leap in C# and Objective-C hoping to make the thing more usable and easier for other developers to write software using it. Hate to do a shameless plug, but both can be found here:

https://github.com/uacaps/MotionGestureRecognizers-ObjC

https://github.com/uacaps/MotionGestureRecognizers-CSharp

---

We're hoping to lay the community foundation for tools that make the Leap extremely usable from both a development and a user-experience standpoint. The Leap is awesome and beautiful, and we think it can be used in a myriad of applications.

[+] nine_k|12 years ago|reply
If all you do is browse the web, a touchpad is often all you need.

For text editing / word processing, a good keyboard is often all that's needed, and the use of the mouse is often discouraged by gurus.

I still can easily imagine using the Leap Motion device while editing images and especially 3D models. Even more I can imagine using it in games, especially games written with this device in mind.

I don't own the device but I've tried it. What's great is that you don't need to wave your hands in the air, Minority Report-style; moving your fingers is enough. I wish it were built into a keyboard; it would easily replace a touchpad / trackpoint while adding many more capabilities.

BTW, does anyone here remember how clumsy mice were on PCs in, say, 1992?

[+] kabdib|12 years ago|reply
Mice worked pretty well in 1992. They worked well in 1984 on the Mac, and five years before that, on Xerox hardware and LMs (though they suffered from "really small ball bearing" disease, and easily got dirty or cranky and refused to roll well).

With a mouse, you have at least one button you can signal an event with.

Imagine doing a UI where you didn't have a mouse button. All you can do is move and point. That's a Kinect, for the most part.

I haven't used a LeapMotion, but I suspect it's the same problem; there's no way to generate a discrete event. It's all fuzzy. Did your fingers touch? Did you wave in a particular way? Some fuzzy matcher is pumping out "90% probability of event X, 75% probability of event Y" every few milliseconds, and it's up to higher layers to turn this goo into decisions that people are happy with. It's hard at all layers.

I really think you need a button, a clicker. Something "hard" in the UI that slams a voice of reason into that fuzzy tower that's continually only able to /guess/ what you're trying to do.

[We wanted a clicker on Kinect. Politically impossible. I think it would have helped a lot.]
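
Turning that stream of per-frame probabilities into the discrete events the higher layers need usually comes down to hysteresis plus a hold requirement. Here's a minimal sketch of such a promoter; the threshold and frame-count values are assumptions for illustration, not from any shipping SDK:

```python
class EventDebouncer:
    """Promotes a noisy per-frame probability stream into discrete events.

    An event fires only after its probability stays at or above `on` for
    `hold` consecutive frames, and it re-arms only once the probability
    falls to `off` or below (Schmitt-trigger-style hysteresis).
    """

    def __init__(self, on=0.9, off=0.5, hold=3):
        self.on, self.off, self.hold = on, off, hold
        self.count = 0
        self.armed = True

    def update(self, p):
        """Feed one frame's probability; returns True the moment an event fires."""
        if self.armed:
            self.count = self.count + 1 if p >= self.on else 0
            if self.count >= self.hold:
                self.armed = False
                self.count = 0
                return True
        elif p <= self.off:
            self.armed = True
        return False
```

The hold requirement rejects single-frame spikes from the fuzzy matcher, and the hysteresis keeps one sustained gesture from firing twice; the cost, as the comment above notes, is that every decision arrives a few frames late.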

[+] vanderZwan|12 years ago|reply
> BTW does anyone here remember how clumsy were mice on PCs in, say, 1992?

I came here to say this. I'm studying interaction design, and while I understand the author's frustrations, the problems largely seem to me to be poor interface design around the data the Leap collects about your hand (and the visualiser demonstrates how accurate that data is).

Basically, the current basic demos try to mimic a mouse by way of extremely clunky gestures. That won't work: for this to take off, the interface needs to be designed from the start with gestures in mind. I have some ideas on how to do that, but it will require some further tinkering and testing.

The sensor itself is amazing and in my experience very reliable - although I might be biased after having tried to design gesture based interfaces with the Kinect and not succeeding due to its technical limitations and unreliability.

[+] nairteashop|12 years ago|reply
> I wish it was built into a keyboard

This may very well happen if the Leap takes off. They already have distribution deals with HP and Asus; the next logical step would be building it into the laptop. Should be quite possible since the device is small and relatively cheap.

[+] will118|12 years ago|reply
I find the Leap part of BTT (BetterTouchTool) is actually... err.. use-worthy..? Neither useful nor useless.

I've got some really cool (still a big part for me) and useful stuff working, augmenting my mouse/keyboard use. For example, a finger to the left minimises and two fingers to the right opens a list of recently used apps.

Yet I'm very conscious that everything would just be better suited to a keyboard shortcut..

I never bothered with Touchless and mouse-emulation things; years of 2D GUI design aren't suited to this kind of interface. "Midnight" is my favourite Leap app, but I think that's just an iPad app that happens to lend itself very well to Leap input too.

[+] imroot|12 years ago|reply
I did some investigative work to see whether we could use Leap Motions to replace the touch-sensitive overlays we strap to 60" TVs for our on-air traffic folks to use during their segments. The overlays run about $2500, and if we could replace them with a Leap Motion and get the same functionality at a lower cost, we could roll out the traffic application to more stations than the six or so currently using our in-house traffic application.

The first thing I noticed was that it couldn't cover the range of the 60" television we had hooked up to the traffic software, so I scaled down to a Thunderbolt display and tried again. In my tests, recognition for the thumb was sporadic, if not completely missing -- in both my (fat guy) test and the test with the local personality (non-fat guy).

I then made some changes to our software to minimize the effects of the natural movements of the hand -- I turned down the sensitivity to compensate for the normal shakes and jitters your hands have. This gave it a better feel, but the traffic reporters still missed the feeling of touching the display and watching it respond to their touch.
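
The "turn down the sensitivity" step above is essentially low-pass filtering the raw hand position. A minimal sketch, assuming nothing more than per-frame (x, y, z) tuples, might be an exponential moving average per axis; the smoothing constant here is an illustrative guess, not a value from any SDK:

```python
class JitterFilter:
    """One-pole low-pass (exponential moving average) filter per axis.

    Smaller alpha means heavier smoothing of hand tremor, at the cost of
    more lag between the real hand and the on-screen cursor.
    """

    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.state = None  # last smoothed position, or None before the first frame

    def update(self, pos):
        """Feed one raw (x, y, z) sample; returns the smoothed position."""
        if self.state is None:
            self.state = list(pos)
        else:
            self.state = [s + self.alpha * (p - s)
                          for s, p in zip(self.state, pos)]
        return tuple(self.state)
```

The trade-off is exactly what the comment describes: enough smoothing to kill the shakes also makes the cursor feel detached from the hand, which is part of why the reporters missed real touch.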

They're still neat devices (I really wanted to say neat toys, but I don't want to cheapen the work the Leap Motion folks put into this thing), but I'm having a hard time implementing them in a way that would work for us... so they're sitting on my shelf, waiting for a project that could use them (or a trip to my local hackerspace, should I not find a good project for them shortly)...

[+] CRidge|12 years ago|reply
Amazing, Revolutionary, Useless... and let's not forget buggy! And with horrible support... My device was not able to recalibrate, a problem shared by many others, if the forum is to be believed. A week or two has passed, and no reply from the makers of the device, neither on the forum nor on my bug report.

I guess I just got another hunk of junk to put in the failed-devices-closet... :-(

[+] alternize|12 years ago|reply
Same here. My device reports "bright light detected" even when there's not enough ambient light to see the keys on my keyboard. Turning off the monitor seems to help, but...
[+] Semaphor|12 years ago|reply
This is why I decided to back the Mycestro [1] and pass on the LeapMotion. While it won't support multi-hand (unless you have 2 devices) and multi-finger stuff, at least it should be able to easily recognize any motion I make with it on.

[1] http://www.mycestro.com/

[+] nixarn|12 years ago|reply
My first impressions with the Leap is similar. Got one, played around with it, felt kinda useless, haven't "touched it" since.
[+] lvs|12 years ago|reply
As another early adopter, I have to say that it's disappointing how much the company is relying on "the community" to generate their business model for them, rather than properly develop the software themselves.
[+] nsxwolf|12 years ago|reply
It's obviously not a replacement for a mouse and keyboard and never will be. I could see some useful gesture based macros, like "throw your hands up in total frustration" to rage-quit an app or open a distraction-free full screen editor.
[+] utopkara|12 years ago|reply
I find Leap Motion quite accurate. You will quickly get used to the convenience of gestures with BetterTouchTool, and wish you had it on computers which don't have Leap Motion.

Otherwise, the gestures used by apps are something that needs to be carefully crafted. For instance Touchless, the mouse replacement, simply doesn't cut it; you'll find yourself reaching for the mouse/trackball/trackpad within the first 10 seconds.

The Leap gets affected by strong light sources on the ceiling; you might want to use it facing downwards if that is an issue. Also, if you are wearing a watch or a ring, it might get confused by the reflection.

[+] tsenkov|12 years ago|reply
Disclaimer: I haven't tried Leap Motion, yet.

Did I get this right? Leap Motion vs. Kinect:

  - LM is smaller (significantly);
  - LM is cheaper (significantly);
  - LM is more accurate (significantly);
  - LM has almost no real apps (mostly concept demos).
If these are all correct, I find Scott's post nothing more than a "normal", "the competition sucks, too", Microsoft type of post.
[+] AndrewDucker|12 years ago|reply
Very different markets.

The LM is very short range, so you couldn't use it like a Kinect. And you wouldn't plug a Kinect into your PC to watch your fingers move either.

[+] sytelus|12 years ago|reply
What surprised me after a little digging was that the Leap Motion does not support point clouds. That means you can't get the 3D world as points in space from the Leap Motion; its founders say it isn't designed for that purpose. So you can't use the Leap Motion for applications such as 3D scanning. Personally I think that would be much more exciting than the ability to move windows by waving.
[+] nwh|12 years ago|reply
Is all writing seriously boiling down to animated GIFs of "reactions"?
[+] rhema|12 years ago|reply
The biggest problem with the Leap that I have seen (there is one in my lab) is that it sees hands spread out flat over the surface really well, and that's about it. If you do a thumbs-up gesture (or a rude gesture when it doesn't work), the fingers get occluded by the bottom of your hand.

Think about your hands as five friends trying to play connect at the same time, and you can imagine the kinds of occlusion problems you might face.

Still, I, and probably others, like the Leap. It's not useless; you just have to exploit it the right way, looking for natural interface design beyond a Tom Cruise movie.

The biggest free-air interaction problems are (1) making visible what the available gestures are, and (2) providing tangible or visible feedback. You don't get to see and feel the interaction like you can with a keyboard or less digitally inclined tools.

[+] deanclatworthy|12 years ago|reply
I've been experimenting with the LeapMotion at work today, and some initial observations:

- There are no apps yet that have made me go wow.

- The range is quite small.

- The motion of hovering an arm in front of you is extremely tiring after more than 10-15 minutes. Try holding your arm out in front of you for that long without moving and you'll see why.

The reason Kinect was a success is that you can take real-world activities such as dancing, jumping over obstacles, jogging (on the spot), and translate them into an interactive digital version.

With the Leap, I've yet to think of a real-world scenario where I would be waving my hands in front of my chest that would translate well into a digital experience. Conducting an orchestra would be one good application, perhaps for training conductors, but I couldn't think of anything else.

[+] pbreit|12 years ago|reply
I think the "useless" comment misses that its sweet spot utility is unknown at this point. And I'm not sure "general purpose computer input device" is going to be such a sweet spot.

The precision "problem" can obviously be addressed with software.

That said, I do believe the absence of killer utility is a problem for Leap right now, since it came out with a decent bang and now the less-than-favorable reviews are trickling in. I think they would have done themselves a significant favor by having a killer app ready from the outset. I also think they need to encourage people to look beyond simple human-computer interaction. Apparently these things could map a whole football game or count the number of people at a concert. Things like that. I also think the commercial angles will be better for business.