Just look at this nonsense [1]. We have brand new technology with unlimited potential for new abstractions and paradigms. So, what do we do? We make a virtual desktop workstation, the same one we've had since the '70s! We limit all rendering to a 2D surface, make it curved, and put a nebula in the background. Is this really the cutting edge of information technology?
Also, do we really need both a mechanical keyboard to sense key presses and 3D tracking of finger movements? I get it, Logitech wants to keep selling keyboards, but for a VR experience I would rather have tracking of facial features and eye movement.
That was the same thinking behind Elon Musk's startup Neuralink, but given that there really aren't any options that are substantially better, Logitech isn't being stupid.
Perhaps some combination of voice, gesture, and 3D could be useful specifically while in-game. But for productivity, I don't think there's anything better than a traditional workstation.
Voice has privacy issues. Gesture recognition is inaccurate and causes "gorilla arm" syndrome.
3D doesn't really add much, and in fact for productivity some people think even 2D Window-based GUIs are cumbersome, and that is the entire motivation for tiling window managers. From that perspective, 3D is even more chaotic and cumbersome.
WHY? There's a reason we have the keyboard, and it wasn't a limitation of technology. The reason the "typewriter" interface was carried over to the PC is because it's GOOD. This is almost as bad as when MS decided Kinect would replace all game controllers because "look, hands free!" It turns out we have hands for a reason. The speed and precision with which we can do things with our hands far exceed basically anything else short of a direct link to your brain.
Have you ever watched someone with a disability use eye tracking for a keyboard? Have you paid attention to how slow and error-prone it is? What on earth makes you want THAT as your primary input method???
I absolutely do not want facial features and eye movement to completely replace finger movement for input. I can neither control my eyes nor face at anywhere near the rate that I can control my fingers. If VR wants to replace my desktop environment for anything other than games, it'll need to retain some sort of high fidelity, high frequency input vector.
One of the first things I did with my Rift (well, after Superhot...) was try out the Virtual Desktop kit.
It's lacking. I thought it'd give me a 3D frustum to throw windows back and forward, drop Spotify out to the periphery, have VisStudio in glorious megapixel size.
Instead, you get your regular desktop screens mirrored and constrained to a 2D plane. Gods help you if you've got mismatched DPI screens, like my 1080p & 2160p pair.
(Go on, I'll await some smart sod to tell me "Product" is exactly what I'm looking for...)
I'm not sure it's nonsense. What I see is incremental change. Take the workstation, and move it to virtual space. Then add more 2D screens. Then try a different input or display method, one small step at a time. Sounds reasonable to me, much more so than re-inventing every part of the interface at the same time, and hoping to get it all within spitting distance of what's right.
Until someone invents a neural interface, keyboards might be the best option for text input. That or develop purely conversational interfaces, but the issue with conversational interfaces is they lack the privacy of a physical interface.
Keyboards are still a highly precise and versatile text input method. Voice isn't good enough yet (and you may never get good enough voice recognition, thanks to homophones and the issue of making up new words).
While just putting a window on a nebula background works as a Proof of Concept, it does need further innovation. The keyboard, however, does not.
An infinite-real-estate computing environment is a pretty damn good value prop imo; a hell of a lot better than what VR is being pushed for atm (360 video and games).
The problem with the gif is that it doesn't make any use of the "virtual" environment, but this is where vr should be heading at least as the intermediate before fully interactive vr.
> Also, do we really need both a mechanical keyboard to sense key presses and 3D tracking of finger movements? I get it, Logitech wants to keep selling keyboards, but for a VR experience I would rather have tracking of facial features and eye movement.
No. You don't get it. This doesn't do 3d finger tracking. It just figures out where the keyboard is and does some edge detection to give a rough idea of where your hands are.
Maybe, but the current state of VR makes it clumsy to use in some scenarios. I expect this to go away with time and further development, but right now there's a gap to fill. One step at a time allows us to go far.
Combine this with the new Pimax, and infinite workspaces start to become a practical reality. That could help propagate the technology further, leading seamlessly into more teleconferences that have a feeling of personal interaction... which is important, even if we as programmers don't always appreciate it.
The other day I was playing a game of Onward. It was a rare instance where the team wanted to work together on a shared goal. In the 15 seconds before the match, while planning some tactics... it occurred to me how natural the meeting felt. It was like we were all in the same room. I felt engaged with EVERYONE. That doesn't happen on conference calls.
I work from home full time, and I think the biggest downside is missing out on meaningful interactions with other people. I think VR has huge potential to bridge that gap.
> I work from home full time, and I think the biggest downside is missing out on meaningful interactions with other people. I think VR has huge potential to bridge that gap.
VR also has the potential to be way more addictive than current forms of online interaction and entertainment.
I'm reminded of what I think was a Larry Niven story (maybe Ringworld), in which the protagonist has an electrical wire implanted in the pleasure center of his brain and just sits at home pushing the button that activates it for weeks on end.
I love that natural feeling of interaction with other people in VR. When playing in Rec Room, you so often take a break from games just to have a chat with some other people, because the interaction is somehow so engaging despite the cartoony graphics.
As someone who has been a developer of virtual worlds for 17 years, I can say that this is a very incremental change, as all of the changes have been since 1994.
Personally I think we need to reimagine interfaces to the world around us, even the virtual worlds (VR/AR/MR) around us. Voice input, AI, and hand-sensing technologies could make for new ways of changing those worlds. The book Daemon by Daniel Suarez and his eSpace holds much promise. I have been experimenting with those ideas in a new VR world I have been working on for a few years.
I think motion sensing is probably the way forward; voice control is intrinsically limiting in most (crowded) environments where computers are used today.
Considering humans can ride bikes, drive cars, play instruments, etc. (including typing on keyboards!), I think that indicates that non-verbal, physical interaction is not nearly saturated as a transmission channel.
Conversely, it's hard to imagine someone verbalizing "navigate to HN" in a loud open-space office, or "Excel, create a pivot table" or whatever. I think it's fine in private spaces like your home, but in public spaces, you're implicitly broadcasting your activity to everyone around you, which I consider to be a strong negative.
Voice input works well for people who have a clear-sounding voice and a generic American accent, which is maybe 5% of the world. For people who speak English with a thick accent, don't speak it at all, or have a deep or less clear voice, this would be a massive decrease in accessibility.
I don’t personally believe in voice input. English is my second language but I have spoken it daily for close to 10 years. I can’t get Siri to understand what I say so I just keep it disabled and iPhone keeps bugging me to enable it.
We cannot rely on voice input; it is a dead end. Until we figure out how to link thoughts directly to computer inputs, keyboards will be king.
A lot of these technologies are being developed, and it's pretty awesome. However, I'll argue there's clearly space for a keyboard in the virtual world.
If you want to sculpt a model in the real world, you'd use clay and your hands. We can use VR to move away from KBM interaction for things like that.
If you want to write a book in the real world, you... type it out. It's the best way we know to get text out of your brain and somewhere else. Don't throw the baby out with the bathwater.
I'm more impressed by the videos of the hand tracking around the keyboard than the tracker in the keyboard! Is that fidelity just using the Vive's outward facing camera? They don't show any tracking gloves in the mockups.
It's actually pretty simple. If you know where the keyboard is, you can clip out that part of the camera's input. Then you just need some edge detection to get the hand silhouettes.
And actually...that seems like a portable idea. So I suppose you could do this for any tracked thing.
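For the curious, the "clip out the keyboard region, then edge-detect" idea sketches out in a few lines. This is a toy illustration in Python/NumPy, not Logitech's actual pipeline: the function name, ROI format, and threshold are made up here, and a crude gradient-magnitude threshold stands in for a real edge detector like Canny.

```python
import numpy as np

def hand_silhouette(frame, roi, threshold=30.0):
    """Crop the tracked keyboard region out of a camera frame and run a
    crude edge detection to approximate the hand silhouette.

    frame: 2D grayscale image (rows x cols)
    roi:   (top, left, bottom, right) of the keyboard, known from tracking
    """
    top, left, bottom, right = roi
    patch = frame[top:bottom, left:right].astype(float)
    # Gradient magnitude as a stand-in for a proper edge detector:
    # flat background produces ~0, hand/keyboard boundaries spike.
    gy, gx = np.gradient(patch)
    edges = np.hypot(gx, gy)
    return edges > threshold  # boolean silhouette mask

# Synthetic frame: flat background with a bright "hand" blob over the keyboard
frame = np.zeros((480, 640))
frame[200:260, 300:360] = 255.0
mask = hand_silhouette(frame, roi=(180, 280, 300, 400))
```

The mask only lights up along the blob's boundary, which is all you need to composite a rough hand outline over the virtual keyboard.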
If you want to explore this kind of thing, you can mount a camera on your HMD (Vive's is crippled), use a WebVR stack (simple), track objects using visual markers and javascript tracking libraries (jsartoolkit5 and/or tracking.js), and do selective camera pass-through AR. It's crufty, but not hard.
EDIT: You can simply use the Vive's camera, with tracking.js color tracking, especially with a small minDimension (number of pixels) threshold. Yellow is good.
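The color-tracking approach above is also easy to prototype. Here's a minimal sketch of the same idea in Python/NumPy; note this mimics the spirit of tracking.js's ColorTracker and its minDimension threshold, but the function and parameter names here are illustrative, not the tracking.js API.

```python
import numpy as np

def track_color(frame, lower, upper, min_dimension=10):
    """Find the bounding box of pixels inside an RGB color range.

    Regions smaller than min_dimension pixels on a side are rejected,
    which filters out stray noise pixels (the role of tracking.js's
    minDimension threshold).
    """
    # Per-pixel test: every channel within [lower, upper]
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    box = (ys.min(), xs.min(), ys.max(), xs.max())
    h, w = box[2] - box[0] + 1, box[3] - box[1] + 1
    if h < min_dimension or w < min_dimension:
        return None  # too small to be the tracked marker
    return box  # (top, left, bottom, right)

# Synthetic frame with a yellow marker patch (yellow = high R, high G, low B)
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:70, 80:120] = (255, 255, 0)
box = track_color(frame, lower=(200, 200, 0), upper=(255, 255, 80))
```

Yellow works well as a marker color precisely because it's rare in most rooms, so a simple per-channel range test finds it reliably.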
I understand Logitech wants to keep selling keyboards and I agree that keyboards are currently the fastest way to enter text. However, it IS the wrong direction. The better direction is what will be opened with tools such as the Vive Knuckles enabling a more complete language of gestures. It seems to me that the better way forward is to stop clinging to an old paradigm that requires bulky equipment that can obstruct the user's volumetric interactions and instead to build off of those volumetric interactions even though there is a cost of a learning curve. I doubt businesses will implement it since they run the risk of alienating current gen customers and thus losing money. I think there will be some VR experiences aimed at younger customers that will implement it and over the next decade we will wonder why we ever tried to bring a keyboard into VR/AR instead of just using our hands.
A lot of people can't touch-type. I can't, not really, I need to glance at the keyboard once in a while. My typing speed is comparable to touch-typists', which is why I never felt the need to learn to properly touch-type. The need of seeing the keyboard is problematic in very few situations in practice, and with a back-lit keyboard it's never a problem... as long as seeing my keyboard is possible at all, which was not the case in VR.
I could invest a lot of time to unlearn my current way of typing and learn proper touch-typing technique, but that's a lot of work and doing it just to be able to type in VR feels like a waste of time. With this tech I'd be able to work in VR without changing my way of typing. To me, this makes using VR for work practical for the first time. It's actually huge, and if it works well it's, to me and others in similar situation, potentially life-changing tech.
I thought the same thing so I disabled the backlight on my laptop's keyboard. After accidentally turning it on in the dark a month later I will never go back to a non back-lit keyboard.
It's hard to place your hands on the keyboard in pitch black or when you're in a weird position like lying on your back with the laptop on your chest. If you've ever partially closed your laptop so that the monitor would shine on your keyboard so you could find 'f' and 'j', you know exactly what I'm talking about.
And also TFA says "for a true typing experience you need to see your hands, and we’ve created a way to use the Vive’s existing tracking to do that".
A few ideas mentioned in the article sound interesting:
“But VR can transform and augment that trusty keyboard – so easy to disregard – into a contextually aware companion for whatever application you use, becoming a palette for your creative workflow, dynamically providing you with any commands and shortcuts you need.”
If you are playing a VR game where you are using a joystick or some other peripheral, you still may need to use your keyboard for some actions, in-game chat, or simply to pull up a browser and google something while in-game.
If you primarily have your hands on a joystick or game controller, you would have to take your VR headset off just to figure out where your keyboard/mouse are.
strictly off-topic, but has anyone else noticed that on the Yoga Book (the "Halo Keyboard") they screen-printed the physical locator nibs from the F and J keys onto the flat keyboard? :)
I could see the hand tracking being useful for people who don't know how to touch type (and have to look at their keyboard and fingers) and who need to type while using a headset.
I don't think non-fanatics are willing to go through such a painful process for a large field of view. Maybe it would be more profitable to start working on a 360° projection screen.
I think this issue is caused partly by the design of modern keyboards. Back in the day, keyboards would have clear, lasting, physical markings on the F and J keys and the 5 key on the numpad. They also had special keys that were easily recognized for caps lock, return and so on. This meant you could put your hands almost anywhere on the keyboard and always know where you were. Finding the home row without looking took a fraction of a second. Another feature that keyboards used to have, which is becoming increasingly uncommon today, is grouping of the function keys into fours. This meant you could always press the right function key without looking. Modern keyboards more often than not don't do this, and reliably pressing the right function key without looking is nearly impossible today.
Old keyboards were (generally) designed to allow for touch-typing. Modern keyboards are (generally) designed to be looked at.
I honestly don't understand how this is difficult for people. I can touch type without looking, on a keyboard which is moving randomly. It's not a parlour trick, it's just what's supposed to happen if you type much at all. If you can't touch type in the dark, there's something wrong with your skill or your equipment.
[1] https://d201n44z4ifond.cloudfront.net/wp-content/uploads/sit...
...plus the QWERTY keyboard design goes back to 1870 ( https://en.wikipedia.org/wiki/QWERTY )
That, and porn
Technology is getting closer and closer to that.
https://artoolkit.github.io/jsartoolkit5/examples/
https://trackingjs.com/examples/color_camera.html
http://www.keyboard-layout-editor.com/#/
https://joric.org/keycaps/#GB-Retro-DSA
etc.
But I'd rather use a wider range of motion for input (at some point, if someone makes that), to stay active.
The keyboard could follow the user's view... along with the silhouette of the user's hands...