Leap is a great little device. My main complaint is that the API is too low-level. It would just fire events roughly every 20 ms, in a Markovian (stateless) way. For this reason, the instrument control was somewhat janky, because we didn't have the time to de-jitter hand paths / infer a tracking spline.
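The kind of de-jittering described here can be as simple as an exponential low-pass filter over the raw samples. A minimal sketch (the function names and the `alpha` value are illustrative assumptions, not part of the Leap SDK):

```python
# One-pole (exponential) low-pass filter for jittery 3D hand samples.
# alpha near 1.0 trusts new samples; alpha near 0.0 smooths heavily.
def make_smoother(alpha=0.3):
    state = {"pos": None}

    def smooth(sample):
        # sample: an (x, y, z) tuple from one ~20 ms tracking event
        if state["pos"] is None:
            state["pos"] = sample
        else:
            state["pos"] = tuple(
                alpha * s + (1.0 - alpha) * p
                for s, p in zip(sample, state["pos"])
            )
        return state["pos"]

    return smooth

smooth = make_smoother(alpha=0.5)
print(smooth((0.0, 0.0, 0.0)))   # first sample passes through: (0.0, 0.0, 0.0)
print(smooth((10.0, 0.0, 0.0)))  # pulled halfway toward the jump: (5.0, 0.0, 0.0)
```

Spline fitting over a short window would do better for path reconstruction, but even this one-liner-per-axis filter removes most visible jitter at the cost of a little latency.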
My wishlist for Leap is:
* Higher-level APIs that apply the sort of sophisticated filtering and tracking that many people are liable to roll themselves.
* Multiple Leap devices per computer. Needing a separate computer per Leap is a big barrier to working with several Leaps at once.
* ARM support. I'd love to have a Raspberry Pi for each Leap; then they would be truly mobile. For right now, I think I'm going to have to get a mini-ITX motherboard.
You need a smoothing library. I welcome the low-level access: you can always pile what you need on top, but the converse is not possible.
What would be nice is some intelligence pushed down into the Leap itself, like the ability to run Lua or JVM code directly on the device, so that realtime multi-device setups are possible.
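To make the multi-device point concrete, here is a hedged sketch (all names and the event format are hypothetical) of merging per-device timestamped event streams into one globally ordered stream — exactly the kind of thing that gets easier if each device can timestamp and preprocess locally:

```python
import heapq

def merge_streams(*streams):
    """Merge per-device event streams into one stream ordered by timestamp.

    Each stream is a list of (timestamp_ms, device_id, payload) tuples,
    already sorted by timestamp within its own device.
    """
    return list(heapq.merge(*streams, key=lambda event: event[0]))

left = [(0, "leap-A", "hand_enter"), (40, "leap-A", "pinch")]
right = [(20, "leap-B", "hand_enter"), (60, "leap-B", "swipe")]
print(merge_streams(left, right))
# [(0, 'leap-A', 'hand_enter'), (20, 'leap-B', 'hand_enter'),
#  (40, 'leap-A', 'pinch'), (60, 'leap-B', 'swipe')]
```

The hard part in practice is clock synchronization between devices, which is why on-device timestamping (or on-device code, as suggested above) matters.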
This is amazing! I bought the Leap Motion a while ago and soon forgot about it, because the hand, finger and gesture recognition was very basic and didn't feel natural (at least to me and my friends who tried it). Now they're pushing the boundaries of that small, old piece of hardware with a software update and showing what it's really capable of. I can only imagine how impressive this must feel when you use it with an Oculus Rift and interact with a virtual reality environment. The sticking-blocks-together example is just the beginning of what's possible.
EDIT: Okay, I tried some of the examples and am excited! The recognition part is excellent; I can throw things up in the air and catch them again, all very naturally. What's missing, though, is good physics. Objects glitch around my hands, wiggle in mid-air, and get stretched even when they should be solid. But I guess that's Unity's problem, not Leap Motion's.
Yeah, it's basically a holodeck. Exciting stuff. It's funny how in the shows the holodeck was always something that took so much electricity that they could only have one or two on the ship, and you'd have to book time on it. It looks like in real life holodecks are much less power-hungry than faster-than-light starship engines, and that more people desire their own holodeck than want to go to space. Makes sense: entertainment has historically been much more popular than exploration.
I wrote a case about the Leap for my MIT Entrepreneurship class and interviewed a bunch of the founding team.
These guys are great -- but they still haven't found their killer app. That's partly because the tech is ahead of its time -- but the vision is clearly excellent.
David Holz, the founder, is the brains behind this; he has been working on a better UI since he was literally a kid in Florida. See: http://www.businessweek.com/articles/2012-05-24/david-holzs-...
Their CEO, Andy Miller, is the founder of Quattro Wireless and an ex-Apple exec, and a really smart guy. So they really do have a good team, and lots of VC backing.
Perhaps this new skeletal tracking stuff will help this really catch on. Best of luck to LEAP!
It seems like the type of company that needs to be acquired for its technology, by someone with not just the vision for the technology but also the funds, and perhaps the platform, to really take advantage of it.
I just updated my LeapMotion library for Processing [0] with support for the new LeapMotion SDK. Mine provides a thin wrapper over the SDK; there are several other libraries for Processing with higher level APIs (e.g. additional gestures).
The new version is really a step up. Props to the dev team for being responsive as well: I sent them an issue during the beta, and they had pushed a fix a few days later. There are still glitches here and there, but those usually result from one hand being on top of the other or something similar.
I'm excited to see where they go with this. I'm not sold on using the Leap for my everyday computing tasks, but I think there are some great applications for it.
OK, the darn thing's only £70, so I'm going to buy one and test it out. Blog report coming.
("Only" £70? Yep. Currently, the best hand-tracking solution I'm aware of, from Animazoo, costs £13,000. £70 is a pretty good deal even if it has some limitations.)
Yup. It works just like that. Now, keep in mind that it's not going to work perfectly ALL the time: if you get into complex occlusion cases, it may not be able to reconstruct the skeleton perfectly.
It's not as good as the video implies... it's still quite easy to get it to glitch out. Notice in the demo video that they flip their hands pretty quickly; if you rotate them slowly, things get a bit funky. I guess it's pretty hard to compensate for occluded fingers.
It does work just like the video, although it is not perfect. They said on Reddit that they are going to keep pushing software updates to improve the tracking.
Big props to Leap Motion on the skeletal tracking. It was very easy to use. Took me around an hour to dust off my 3D trig and build this: www.youtube.com/watch?v=FX7pQlB6-IY&feature=youtu.be
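For flavor, the "3D trig" involved in demos like this is mostly converting a pointing-direction vector into angles. A hedged sketch, not the actual demo code — the coordinate convention (x = right, y = up, z = toward the viewer) is an assumption:

```python
import math

def direction_to_angles(dx, dy, dz):
    """Convert a pointing vector into (yaw, pitch) in degrees.

    Assumes x = right, y = up, z = toward the viewer, so (0, 0, -1)
    points straight into the screen.
    """
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    yaw = math.degrees(math.atan2(dx, -dz))      # left/right swing
    pitch = math.degrees(math.asin(dy / norm))   # up/down tilt
    return yaw, pitch

print(direction_to_angles(0.0, 0.0, -1.0))  # straight ahead -> (0.0, 0.0)
```

From there it's one more rotation matrix (or a game engine's lookAt) to drive an on-screen object from a tracked finger.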
How is this technology so much more accurate than Microsoft's Kinect? That's awesome! I love seeing the new guy develop a better product than an old giant.
It was always about the software. The hardware is just two parallax USB 3.0 cameras in one small box; there is ZERO hardware processing.
Very, very different problem domains and technology.
Consider: how much variation is there between bodies and hands? How much larger is the work envelope of the Kinect vs the Leap? How much more information do you get from the Kinect vs the Leap (depth buffer, positional audio, etc.)?
It's hands-only, so there's a limited range of movements it can track.
However, if you apply the same question to a Kinect 2 or, say, another full-body mocap suit, then the answer is: definitely.
One of these days when I have free time (no time soon, then) I shall write a routine for my mocap suits to track and give realtime feedback on various martial arts techniques.
[0] https://github.com/heuermh/leap-motion-processing