Not to be cynical, but this reminds me a little of Alan Kay's comment:
"""
By the way, Sketchpad was the first system where it was discovered that the light pen was a very bad input device. The blood runs out of your hand in about 20 seconds, and leaves it numb. And in spite of that it’s been re-invented at least 90 times in the last 25 years.
"""
I think there will be some things it's great at and some it's terrible at. Here's one for free: how about using this to create the Rosetta Stone of sign language? Or a portable sign-to-speech translator? (These are hard problems for other reasons, but this thing brings you a lot closer.)
Think of it like this: Alan Kay isn't wrong, but you could say the same thing about any input. The mouse is a very bad input device. It takes forever to use it for on-screen typing. The keyboard is a very bad input device. It can't tell how hard you're hitting each key when you do musical typing in GarageBand. The microphone is a very bad input device. Voice control is way slower than just clicking the menu item you want ...
If this thing is real, then within a couple of years there will be a dozen reasons to refuse to buy a computer without one.
One change since that time is that there is way more flexibility in the placement of displays.
This also isn't a light pen. You do not need to hold it close to the display.
Finally, there is way more casual computer interaction now. A light pen could be fine for short interactions (especially if you do not have to search for the pen first).
Painters are able to operate a brush for full working days in the studio without the elbow support that a desk supplies here. It seems that, if the benefit is great enough, people can build up the supporting muscles through repetition.
These devices should be used with the monitor turned into a drafting-table-like angled work surface instead of mounted vertically in the usual way. If you can work touch/resting into the screen, you have a much more humane setup.
I'm sad the word 'robot' hasn't appeared in this thread yet. Let's correct that.
Visual SLAM is great for medium distances, but point clouds aren't really that dense and are slow to update. Also, the lidar to make the point clouds is stupid expensive.
Add one of these guys onto your robot and you've got a really cool set of 'whiskers.' Short range, highly sensitive, super fast update. I'd love to put several of these on a robot and use that to give it a sensitive field surrounding its body.
Depending on how open the software and hardware are this will be a great addition to the robotics community.
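As a sketch of how such a "sensitive field" might work in software (hypothetical sensor angles and ranges, borrowing the classic potential-field obstacle-avoidance heuristic; nothing here is specific to the LEAP hardware):

```python
import math

def repulsion_vector(readings, max_range=0.25):
    """Fuse short-range proximity readings into a single repulsive vector.

    `readings` is a list of (angle_radians, distance_m) pairs, one per
    short-range sensor mounted around the robot's body. Nearer obstacles
    push harder, per the usual potential-field heuristic.
    """
    fx = fy = 0.0
    for angle, dist in readings:
        if dist >= max_range:
            continue  # nothing within this whisker's reach
        strength = (max_range - dist) / max_range  # 0..1, larger when closer
        fx -= strength * math.cos(angle)  # push away from the obstacle
        fy -= strength * math.sin(angle)
    return fx, fy

# Obstacle 5 cm dead ahead, nothing within range behind: net push is backward.
print(repulsion_vector([(0.0, 0.05), (math.pi, 0.30)]))
```

The resulting vector could be fed straight into the robot's velocity controller as a reflex layer underneath the slower SLAM-based planner.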
I work on adaptive mobile robots as part of my research, and I'd be very interested to see how the LEAP compares to the Kinect in this area. I submitted a developer kit request, so maybe I'll get to find out.
Also, from the Ars Technica post on LEAP:
"The company says the breakthrough in resolution comes not from the hardware, which consists of relatively standard parts, but from what CTO David Holz calls 'a number of major algorithmic and mathematical problems that had not been solved or were considered unsolvable.'"
I'm conflicted by that statement. As a current academic, I hope they publish these supposed breakthroughs, as hiding them behind trade secrets makes me sad. As an entrepreneurial-minded person, however, I understand the desire for competitive advantage.
Waiting for the load to drop so I can try to get a preorder!
Question: How can you render the "other side" of the hand at the 51+ second mark? If this is indeed possible, that's quite a remarkable technology you have.
What is the API of the dev kit like? Does it give the programmer events translated into hand kinematics (like, 'right index finger pointing forward') or is it just a cloud of points?
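To make the contrast concrete, here is what the two API styles might look like, sketched as purely hypothetical Python types (the actual SDK has not been published, so every name here is invented for illustration):

```python
from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]

# Low-level style: the device streams raw 3D points every frame and the
# application does its own model fitting.
@dataclass
class PointCloudFrame:
    points: List[Point3D]

# High-level style: the SDK fits a hand model and emits semantic events.
@dataclass
class FingerEvent:
    hand: str           # "left" or "right"
    finger: str         # "thumb", "index", ...
    direction: Point3D  # unit vector; (0.0, 0.0, -1.0) might mean "toward the screen"

def on_frame(event: FingerEvent) -> None:
    # Application code against the high-level style stays this simple.
    if event.hand == "right" and event.finger == "index":
        print("right index finger pointing", event.direction)
```

The difference matters a lot for developers: with the first style every app re-solves the hand-tracking problem; with the second, gestures become a few lines of event handling.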
How will you license the technology for others to reproduce? Will you license aggressively to maximize profit, or will you be permissive and partner with other manufacturers to make this truly ubiquitous?
Please give us some technical details to satisfy our curiosity. Perhaps I missed it on the website, but I didn't notice any mention of how it functions (IR? Sound waves?), what sort of range of distances it works in, etc.
This is very cool technology. The company was formerly "Ocuspec". One breakthrough is using very inexpensive hardware (<$5 at RadioShack) to get that sub-mm resolution. At the other end of the spectrum, they can cover a football field (and more in the future). What they are showing now is just the beginning. Kudos for rolling out with an SDK. I can only imagine what sorts of applications developers will dream up.
Very, very inconvenient. Writing in air is completely different from, and considerably more tricky than, writing on paper. Physically more demanding too. It's one of those things that sound nice in theory but aren't really practical.
I disagree with you on the details a bit, but I think viewing this as good at some things and bad at others is far more constructive than arguing that this should replace a mouse, keyboard, etc.
However, I would make the good/bad list a bit more general:
* good for manipulating UI elements that represent 3D
* bad for manipulating UI elements that represent 2D
Maps and camera interactions (CAD) are perfect examples of things that represent 3D elements. Short games are another area that can represent 3D - longer games might also work well, but the user is likely to get tired of waving his/her arms around.
Much of what we do on computers today is strictly 2D. Coding, word processing, most web browsing, email, etc. Pencil tools/drawing tools are similarly usually just a 2D activity, so using a 3D-capable tool and reducing your movements to 2D doesn't really make sense.
I'm not sure how they're handling variance in user perspective, but assuming they've got that figured out, if you were to couple this with a stereo vision setup and some form of haptic feedback a lot of companies doing 3D design (both CAD and 3D artwork) would eat this up. It won't replace a keyboard and mouse, but it would provide a much more "immersive" way of interacting with the media.
I’d love to get one of these and play with it. Will the SDK and spec for talking to it be freely available after the initial batch of preorders and free dev kits?
As a counterexample, Emotiv gave a TED talk a while ago showing off a headset that lets you control your computer with your mind. When you visit their website you discover that you can only develop with a $500 "developer edition" headset that comes with a single, nontransferrable license to use the SDK (additional licenses are $99). The consumer model of the headset only runs approved applications.
Dev Kits ship for free to 20,000 developers in 1-3 months.
Pre-orders are for consumers at $70 and ship this winter.
The idea is to give all the hackers maximum access to create awesome apps and then deliver a healthy shiny ecosystem to the consumer. Also, we'd like to see a larger shift towards people creating things, so encouraging early adopters to get aboard the coding train is a positive trend.
It's a huge new interaction space, and we're looking for innovators to explore it!
Contrast with Android, where every device is a developer device: one tickbox in the settings is all that is required. No accounts, registration, handing over money, authorizing devices, etc. (yes, I'm looking at you, Apple).
The first thing I thought after watching the video was how much money will they make with pre-orders on this site and how much would they have made with a kickstarter campaign?
I really can't imagine using this for a longer period of time. Maybe as an extension of keyboard and mouse/trackpad that you would use to scroll through pages when researching something, or stuff like that. You still need a keyboard to type as far as I can see.
That being said; I really like the idea and would love to know the tech behind it.
I can't see why there is so much negativity about end-use applications. I can see lots of potential for this: slideshow presentations, laptops (goodbye, annoying trackpad), as well as a stylus-and-tablet replacement for designers, etc.
I think the key, however, will be the recognition of subtler gestures. If you can show me a man using two hands to type, then moving them not far from the keyboard to activate simple gestures for navigating a document, I'd be really sold that this is for everybody.
There's no perceptible latency in its response to gestures -- I'm very impressed assuming it's not a rigged demo. (In videos I've seen of the Kinect the software responds to a gesture only after a noticeable fraction of a second.)
- I have to see it before I believe it.
- If this works as advertised, this company will never ship the product. They will be bought within months for a huge sum of money, even if they do not want to be bought. The reason:
- Litigation, litigation! They will need deep pockets to defend themselves against patent claims.
To me this looks amazing and although LEAP seem to be pushing for you to get rid of your mouse/keyboard, personally I think this is probably best as an addition to it. Imagine if you had one of these built into the keyboard.
You're typing an email and need to add a location: switch over to Google Maps, hands off the keyboard as you manipulate the map to get a decent resolution, 'tap' the address bar to copy it, swipe left to switch back to the email program, tap again to paste, and boom, carry on typing.
You wouldn't need to be using it all the time for it to be extremely useful.
We keyboard jockeys sometimes forget how much faster something like this would make things for users who don't know all the shortcut keys!
Forget the desktop. With the sensor being this small, I can imagine hanging this from your neck and have gesture sensing anywhere, SixthSense-style. ( http://www.pranavmistry.com/projects/sixthsense/ )
I think I'll need to demo this unit before I purchase it. I remember getting burned in the early nineties by the Power Glove's cool commercial: http://www.youtube.com/watch?v=93iDhnBcMGo
Interfaces like this look cool in movies but your hands aren't 'designed' to be above your heart for an extended period of time. Now if you had something like a drafting table with a touch screen I'd be first in line.
If you're an interface designer (I am), you should pre-order this thing. This will be a standard form of interaction in a couple of years and you should jump on it early and start figuring out the kinks. Too cool.
I'd love something that was a combination of the two: a touch surface and something like this looking 'down' toward that surface at what my hands were doing. I could make typing-like motions on the touch surface for typing. But more importantly, my hands would mostly be resting on something rather than hanging out in front of me.
danblick | 14 years ago
""" By the way, Sketchpad was the first system where it was discovered that the light pen was a very bad input device. The blood runs out of your hand in about 20 seconds, and leaves it numb. And in spite of that it’s been re-invented at least 90 times in the last 25 years. """
from http://archive.org/details/AlanKeyD1987 around 7:10
ChrisFornof | 14 years ago
Pre-orders ($70) only ship domestic (for now) around winter.
20,000 dev kits are being made. We want to ensure this tech becomes ubiquitous.
We're getting slammed with launch response. But if you guys have questions, we'll try to answer them here shortly.
-Chris Community Builder
neilk | 14 years ago
What language bindings will it ship with?
vibrunazo | 14 years ago
Any partnerships on the way already?
acgourley | 14 years ago
Good:
* Fruit Ninja - made for single-finger input, short gameplay
* Pinching and zooming maps - good because it's usually a short activity
* CAD camera interactions - good for periodic odd rotation needs, or for showing a client who doesn't know the normal movement hotkeys
* Periodic writing with the pencil tool

Bad:
* Shooters - can't turn the player around; gameplay sessions are too long
* Long-term writing or drawing - too tiring
pbreit | 14 years ago
Funding announcement: http://www.marketwatch.com/story/leap-motion-announces-1275-...