drewrv|1 year ago
It's interesting that meta went through the effort of bundling an accessory but stuck with hand gestures anyway.
jayd16|1 year ago
Haptic feedback, discrete buttons and precise analog input from controllers are also very important. The downside of controllers is that your hands are full and it's just not feasible for an all day wearable.
Hopefully someone figures out a good compromise be it rings or gloves or whatever.
Morphiak|1 year ago
Electromyography is an awesome technology, among other reasons, because it can (or will) detect neural signals below the activation (movement) threshold, meaning you should be able to train yourself to type without moving your fingers. A viable path to thought control without the invasive aspects of other approaches.
Back in the nineties, I said the computer user fifty years from then would look like a hippy: headband (neural interface), sunglasses (I thought monitor, but AR is cooler), and a crystal around their neck (optical computer; maybe a miss, we'll see what the next decade brings, a slab in a pocket will do for now). Given my zero trust of end-stage capitalism near my noggin, wristbands are an excellent transitional step, as long as they're local (or can be made so; happy jailbreaking).
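The sub-threshold idea above can be illustrated with a toy signal-processing sketch. Everything here is invented for illustration (the noise levels, window size, and threshold are arbitrary, and this is not any vendor's actual pipeline): rectify a surface-EMG trace, smooth it into an amplitude envelope, and flag activity that clears a threshold set below the level that would produce visible movement.

```python
import numpy as np

def emg_envelope(signal, fs=1000, win_ms=50):
    # Full-wave rectify, then smooth with a moving average
    # to get an amplitude envelope of the muscle activity.
    rectified = np.abs(signal)
    win = max(1, int(fs * win_ms / 1000))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def detect_subthreshold_intent(signal, fs=1000, threshold=0.05):
    # Flag samples where the envelope exceeds a made-up "intent"
    # threshold: activity strong enough to measure, but (in this
    # toy model) weaker than what actually moves a finger.
    return emg_envelope(signal, fs) > threshold

# Synthetic demo: quiet baseline, a weak burst of activation, quiet again.
rng = np.random.default_rng(0)
rest = rng.normal(0.0, 0.01, 1000)   # electrode noise at rest
burst = rng.normal(0.0, 0.2, 500)    # weak voluntary activation
trace = np.concatenate([rest, burst, rest])

active = detect_subthreshold_intent(trace)
```

A real decoder would of course use per-user calibration and a learned classifier rather than a fixed threshold; the point is only that the signal exists before the movement does.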
threeseed|1 year ago
Where it falls apart is not being able to feel yourself touching objects which nothing other than a full glove is going to be able to simulate. Controllers and rings provide no benefit over Apple's approach.
jazzyjackson|1 year ago
Anyway, Logitech made an awesome little handheld keyboard for home theater PCs, called DiNovo Mini HTPC, I was able to pair it with Vision Pro.
https://www.ebay.com/itm/226367904044?_skw=Logitech+DiNovo+M...
vunderba|1 year ago
1. Tying AR to a fixed point in the world
Uses: Being able to physically walk around a life-sized 3D model of an engine, human body, etc.
2. Tying AR to a point relative to the user
Uses: Heads-up display notifications, virtual screens, etc.
These things are not mutually exclusive. Even once you've placed an "AR object" at some static absolute location, I'm sure you can scroll through the list of active processes at any time and snap it back to your body.
As somebody who hates the sedentary aspect of software engineering, I messed around with a friend's Apple Vision Pro and fell in love with the spatial computing aspect. I do a great deal of pacing when working through problems, and the ability to physically move around multiple virtualized workspaces was really engaging.
drewrv|1 year ago
I wonder if Apple decided against a controller in order to allow third-party solutions to flourish. They can take their time and see what people gravitate towards.
dagmx|1 year ago
If your only experience is the HoloLens, you’re roughly a decade out of date with how well it can work today.
There’s also not been much until the Vision Pro that combines eye tracking with hand tracking which is what’s really needed.
You should really try the Vision Pro, because it really does move hand tracking to the point where it's the best primary interaction method. Controllers might be good for some stuff, in the way an Apple Pencil is, but most interactions do not need it.
nine_k|1 year ago
While analog manipulation devices (mice, trackpads, joysticks, the 3D controllers) are good at physically precise manipulation and navigation, keys and buttons are good at symbolic / textual entry and logical / symbolic navigation with comparatively very low effort and high speed.
When VR / AR acquires a fast and low-effort symbolic input mode, comparable in efficiency to a keyboard, and it becomes possible to build highly productive interfaces driven by it, like Vim and MS Excel are driven by the keyboard, many interesting developments will happen.
threeseed|1 year ago
Its interface is unlike anything else and really can only be experienced in person. The ability to simply glance at UI controls and slightly move your hand whilst it rests on your leg really does feel like magic.
And the UI is built around it, so if you are looking at a sidebar, for example, it locks you into choosing among its options unless you look substantially away. This makes using it much easier and faster than a controller.
kybernetikos|1 year ago
Also the gestures are recognised by something you wear rather than cameras, so I'd expect them to be more reliable.
Jcowell|1 year ago
Try it out, it's really neat.
Mistletoe|1 year ago
Something named after a small rodent that we use already. And a monitor that we use already. Then you are cooking. You've invented the desktop PC.