Despite all the negativity around Google Glasses' camera, that was actually the best feature. You could really capture some great moments directly from your eye's perspective at the wink of an eye. My mother recently passed and out of all the photos and videos, my favorite was a 10 second video of me handing her flowers shot from Google Glasses. It looks like she's staring right back into my soul.
I have a three month old daughter now and I find myself fumbling about with my phone trying to take photos of her. Just last night I dropped my Pixel phone while trying to capture a photo of her. Phone is fine, but she wasn't too happy with the loud noise of my phone hitting the wood floor :) I kind of miss Google Glasses simply for the camera feature.
Instead of a minimal heads up display, I would much rather have a minimal wearable camera without all the extra functionality Google Glasses offered. Google Clips seems to be an alternative hands-free camera with different pros and cons (+I can be in the photo. -Can't capture the same type of photos from my eye viewpoint).
Just my two cents; I'm a father of 4 and I'm okay with not capturing those moments on film. And I mean that in two senses: one, I'm fine using my very imperfect memory to recall special moments, and two, because of the logistical problems with grabbing the camera, I'm not willing to risk missing out on being fully in the moment. The occasional photograph from an event or time period seems sufficient to conjure up the feelings from that time.
Regarding an always-on camera - I'll second other comments in warning that there are just so many concerns with abuse; I don't see how to get around those.
Yup, another father here: I'd love an always-on camera that I can tell to save the last 30 seconds of playback. There are a lot of moments I'd love to have on video or camera that are there-and-gone - even if I had my phone in my pocket, by the time I pulled it out, it would be too late.
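The "save the last 30 seconds" behavior is essentially a ring buffer of frames. A minimal sketch in Python, where the frame rate, buffer length, and class name are all invented for illustration:

```python
from collections import deque

FPS = 30
SECONDS = 30

class RetroCamera:
    """Keeps only the most recent SECONDS worth of frames in memory;
    'saving' just snapshots the buffer after the moment has happened."""

    def __init__(self):
        # Old frames fall off the front automatically once maxlen is reached.
        self.frames = deque(maxlen=FPS * SECONDS)

    def on_frame(self, frame):
        self.frames.append(frame)  # called continuously by the capture loop

    def save_clip(self):
        return list(self.frames)   # "tell it to save" -> copy out the last 30 s

cam = RetroCamera()
for i in range(2000):              # simulate ~66 s of continuous capture
    cam.on_frame(i)
clip = cam.save_clip()
print(len(clip))                   # 900 frames = the last 30 seconds
```

The point of the design is that capture is always running, so nothing is ever "too late" - you only decide after the fact which window to keep.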
That's all well and good, and I don't think many people are disputing the value of having more and better memories of loved ones.
The trouble is in resolving that against the kind of culture that has fostered things like the "creepshots" subreddit.
Smartphones (and GoPros) have already shown the value of being able to film and share more of our lives, but they've also shown the downsides of having our graceless moments broadcast to the world, or the many events that are now impossible to enjoy in a sea of smartphones held aloft, or, if you're an attractive woman, far more of you put online for other people to gawk at.
Until those abuses are resolved somehow, people are going to resist having a little camera attached to everyone's face.
Given Google refused to allow you to store pictures local-only, Glass' camera was... its least-used feature for me. I loved notifications without having to get out my phone (or look at my wrist). Google's absolute refusal to make photo sync optional on Glass was honestly the beginning of the end for my trust in El Goog.
> Despite all the negativity around Google Glasses' camera, that was actually the best feature
The negativity around Google Glasses amounted to a mismanaged launch. iPhones were introduced to a public already familiar 1st-hand with cell phones, camera phones, and digital cameras. The public was already passingly familiar with smart phones. Google Glasses were introduced with maximum hype to as many early adopters as possible, with no thought as to how it could backfire, and how to navigate those pitfalls. In retrospect, is it any wonder that there were so many inadvisable actions, all adding up to a societal backlash?
Google Glasses should have been rolled out to far fewer people, and in a form factor almost indistinguishable from ordinary eyeglasses.
Except Google Clips is like the camera in the movie "The Circle" and is a rabid privacy violation.
The discontent with Google Glasses was likely because it was ahead of its time.
I don't think anyone cares if you use such a glasses camera in your private space, with people who know you and are fine with it. The problem is when it becomes normal and everyone has them on all the time, in public spaces too. Then nobody can opt out anymore.
The trick is getting a camera powerful enough to get acceptable quality pictures but small enough, and blended with the surroundings enough that you don't notice it.
> You could really capture some great moments directly
That should be, "you could really capture some great moments directly on film..."
You already captured these moments; you were there; they are ingrained in your memory.
> My mother recently passed and out of all the photos [..] staring right back into my soul."
First of all, my sincere condolences.
Again, you were there. That moment, that specific wonderful moment you witnessed, nice as it is to have something to remember it by, it feels very personal, not something you could share unless you were there, and you were you.
Back in the days of photo-lab development, there existed magic moments caught on camera as well, but it was the luck of the draw. Since they were so rare, there was no fear of missing out, but any such moment caught on camera was all the more special for it.
Maybe this fear of missing out on your own life, pushing one to commit everything to camera, comes at the cost of actually missing out. Life is made better by witnessing these magic moments, but I'm unsure the drive to capture them all enhances the experience, witnessed and remembered in that particular fashion by no one but yourself.
This is actually something I would like to wear. It's like normal glasses, I am not recording videos of the people around me, and it could show me relevant information when needed without looking at my phone. I can think of a few use cases this could cover, in a very elegant way.
The nice thing is that they are completely "invisible" to other people around me.
I do think they are cool and could be useful, but not to the point of making someone that doesn't wear glasses want to wear them.
My main problem is what does it give me over wearing a smart watch? The only thing I can think of is being slightly more discreet and the "gesture" not being rude to activate when you are in company. That is something I found out quickly when I started wearing an Apple Watch - even if you are just checking a text message it appears you are checking the time and want to leave. Since then I have greatly cut down on notifications going to my watch, and have even fewer that even ping on my phone.
To me, having a camera on it is what would make it compelling, but at the same time make it creepy. Say, being able to show driving directions overlaid on the actual road, versus some floating text. Or you sit down at a desk with just a keyboard and mouse and your "displays" are only shown in your field of view, and you can customize, move and resize them as you wish.
I'm curious if the position of the "Vaunt display" will feel similar to Google Glass.
From the article:
>> It projects a rectangle of red text and icons down in the lower right of your visual field. But when I wasn’t glancing down in that direction, the display wasn’t there. My first thought was that the frames were misaligned.
The HUD in Google Glass was also outside of your normal field of view and for certain things this was a poor experience. For example, using Google maps integration I felt like I was taking my eyes off the road and felt safer simply using my smartphone mounted to my windshield.
These are going to own the security industry's human surveillance staff. Imagine being a security guard for some corporate or university campus: these are a must.
I think there's definitely a market for endurance athletes. Runners and cyclists would like to see time, speed, distance, cadence, power output, navigation, etc without having to look down at a wristwatch or bike computer. Those people are accustomed to spending a lot on sports equipment. There are existing products like the Everysight Raptor and Garmin Varia View but they're bulky or goofy looking or obstruct vision, so Intel has plenty of space to offer a better alternative.
I think cyclists would need it more than runners, but cyclists are too small of a market to be worth the R&D cost. That's why current offerings are bad. Cyclist-oriented tech has to piggyback on more popular tech, the way bike lights got a LOT better very quickly after smartphones started pushing Li-ion tech.
I agree in general that it would be useful in these situations.
As someone who wears glasses, I find it impossible to run with them because of the movement or them just falling off. Are people who wear glasses able to run with them - am I doing something wrong?
This might just be the start of a new, big, market at a time when they desperately need to diversify their sources of revenue. Seeing them achieve that through in-house innovation as opposed to copying (or buying) competing products would be great.
I'm disappointed that "programming for Vaunt will involve JavaScript". We're stuck with this horribly designed language on the web because browsers don't run anything else (natively), and there is a strong trend in the field to replace it, either with compilation of saner languages to JS or with WebAssembly. We don't need to infect another nascent market with this atrocity, let it die.
Interesting that they chose red as the display color. One of the co-creators of Google Glass said that they initially all thought red would be good, but after trying different colors the consensus was that it was terrible. This was because there wasn't enough contrast with the background environment.
It sounds like Intel's tech is fundamentally different—they paint your retina with a laser—and this may make the background issue irrelevant. And it was certainly part of the safety pitch, which was that this is a very low-powered laser. If it had blue or green in it they couldn't make this claim.
>> Google Glass said that they initially all thought red would be good, but after trying different colors the consensus was that it was terrible
They should have looked at a larger sample size. Red has been used in rangefinder viewfinders for over 30 years. The Leica M240 has red and white as options. Red is by far the best from all standpoints: contrast, legibility and versatility.
Red dot sights are used in firearms and they are also by far superior to any other color.
I am doubtful that Google's engineers put enough thought into exploring colors, whether by using a larger sample size or by cross-checking different industries.
I really like how much effort they put in to make things as natural and unobtrusive as possible.
The article mentioned there's no interaction yet; I wonder if they could track eyeball movement and use eye blinks for clicking. (Someone else mentioned a ring as an input device, which is also an excellent idea.)
I want these, and then I want an app on them that will tell me if the person standing directly in front of me is on my LinkedIn/Facebook, and if so, what their name is and what they do.
Unfortunately as the other reply says, there's no camera to see who's there. But as someone who's bad with names I generally agree, in fact I don't even need the Internet connectivity. Just let me write a note on someone - "John Smith. Met at Steve's BBQ" - and have it show up when the camera sees them again.
I don't see the point of these overpriced smartwatches and glasses that can barely do one thing (display notifications), but I want my future full AR/VR glasses, so I'm glad the early adopters pay for it.
Right now, what I'd want is a very simple, slim, long-lasting watch that would notify me of emails, messages, phone calls and notifications based on their origin (business account, work account, personal, random). It doesn't even need a display, different vibration modes and LED's would do fine.
Fitbit Ionic has a five-day battery life, give or take. But a smartwatch pales in comparison to how convenient Glass was to use: I used to be able to read and reply to texts while doing anything, including driving, safely. And not having to interrupt a conversation when I noticed the incoming call or message was unimportant was fantastic.
There are a lot of display-less smartwatches out there. Often they use the hands of the watch to display extra data and have very long-lasting batteries. I found a list that is mostly that: https://www.wareable.com/smartwatches/best-smart-analogue-wa...
It's in the article:
It's a monochrome Virtual Retinal Display (VRD), powered by a monochrome VCSEL light source and a MEMS scanner. The resolution of the image is 400x100 pixels.
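For scale, here is a back-of-the-envelope estimate of what 400x100 monochrome pixels can hold, assuming a typical 6x8-pixel monospaced glyph cell (the font metrics are an assumption, not from the article):

```python
width, height = 400, 100   # pixels, per the article
cell_w, cell_h = 6, 8      # assumed monospaced glyph cell, incl. spacing

cols = width // cell_w     # characters per line
rows = height // cell_h    # lines of text
bits = width * height      # total pixels -> framebuffer size at 1 bit/pixel

print(cols, rows, bits // 8)   # 66 chars/line, 12 lines, 5000-byte framebuffer
```

So even a monochrome 400x100 panel is comfortably enough for several lines of notification text, which matches how the device is positioned.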
Surveillance will become ubiquitous. There's no plausible scenario where it doesn't, right? Can't legislate it away once the sensors/cameras become unnoticeably small.
So we'll need new social norms to control what people share about what they learn. E.g. we already don't mention what we hear from behind bathroom doors in polite company. We'll need rules so that people can continue to operate as humans in this new paradigm.
Yeah, right: "no one will use it for every tweet notification, it will only provide contextual information." It will be used for whatever people want to use it for, and we know what people want to use it for (porn).
Now in addition to people talking out loud alone on the streets, people checking their notifications on their watch and phones at dinner, we'll have people looking at your teeth doing weird eye and head motions when you talk to them...
I watched the Verge video this morning where they tried to explain how the hologram projector works, but I am still confused. How can it ensure a sharp image shining directly through your own intraocular lens (not the glasses themselves) to the retina? If you have a "longer" or "shorter" eyeball the light ray's focus point will not be on the retina itself.
Laser light is "coherent" both spatially and temporally. Spatial coherence allows the beam to be "collimated", which basically just means that it looks like a cylinder and not a cone. That's why you can shine a laser at the moon and see the spot; most of the energy from the laser makes it to the same place, bounces back to your eyes, and you get a bright spot. Shining a flashlight doesn't work because the light spreads out too much. Technically some of the light still gets to the moon and back to your eyes, it's just below the threshold of your eye's ability to distinguish differences in brightness.
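To put rough numbers on that, the far-field divergence of a diffraction-limited Gaussian beam is approximately θ ≈ λ/(πw₀). A quick estimate for a red laser with a 1 mm beam waist (both values are assumptions chosen for illustration):

```python
import math

wavelength = 650e-9   # m, red laser (assumed)
w0 = 1e-3             # m, 1 mm beam waist at the aperture (assumed)
d_moon = 3.844e8      # m, mean Earth-Moon distance

theta = wavelength / (math.pi * w0)   # far-field half-angle divergence, radians
spot_radius = theta * d_moon          # beam radius by the time it reaches the moon

print(f"divergence ~{theta * 1e6:.0f} urad, spot ~{2 * spot_radius / 1000:.0f} km across")
```

Even this idealized beam spreads to a spot well over a hundred kilometers wide at lunar distance, which is why "most of the energy makes it to the same place" is only true relative to a flashlight, whose cone spreads orders of magnitude faster.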
For the glasses, the laser is probably being scanned using a MEMS mirror (like how a DLP TV works, sorta), and modulated in brightness periodically to create the pixels. Since there's only one "point" of contact between your lens and the beam, the lens doesn't distort the beam like it would an image. That's where the idea of focus comes in. If you were looking at a photograph with your eye, there would be many sources and colors of light. Since the lens refracts incoming light based on direction, position, and color, your lens' job is to make sure the "pixels" of the photograph stay spatially organized with respect to each other. That's what being in focus means. And since the laser has only one color and one direction, all that light stays together and makes a nice dot on your retina. The only thing left to do is make a correction to the overall distortion pattern your lens introduces, which is similar for pretty much everyone. Same reason you need to add barrel distortion before sending video to an HMD. I think that's what they were showing with that "warping" red image of the glasses' display.
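That pre-distortion step can be sketched with the simplest radial model, r' = r(1 + k·r²): a negative k barrels the image inward so the optics' opposite distortion cancels it. The coefficient here is made up; real HMD pipelines calibrate it per lens:

```python
def predistort(x, y, k=-0.2):
    """Apply radial (barrel, since k < 0) distortion to a point in
    normalized coordinates (-1..1). k is an invented coefficient;
    real pipelines fit it to the measured lens distortion."""
    r2 = x * x + y * y
    scale = 1 + k * r2        # points far from center get pulled in the most
    return x * scale, y * scale

# A corner point gets pulled toward the center;
# the lens then pushes it back out, leaving a straight image.
print(predistort(1.0, 1.0))   # approximately (0.6, 0.6) with k = -0.2
```

The center of the image (r ≈ 0) is left almost untouched, which is why distortion correction is visually a "warping" that grows toward the edges.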
bufferless - Don't wait for an image, just stream pixels as fast as you find them
clockless - No world "ticks", the world is an append-only log of "percepts"¹ which can be projected onto any time.
stochastic - Don't wait for certainty about a pixel, just push out the most probable ones first
signed distance field - Aforementioned "percepts" don't have well-defined boundaries like a polygon; instead, "fields" centered on a point describe how light moves around them. Any two fields can be trivially summed, so you can ignore most of a scene when searching for a specific pixel near a small number of local fields.
Together they allow you to supply the eyes with nearly zero-latency data with arbitrarily low computing power.
¹ As an aside, there is evidence humans don't see a "now" tick either, we perceive "fields out of time" directly, log them, and interpolate their relationship to "now" thereafter, such that we feel that we are "seeing" something which our eyes have already stopped reporting about. Thus SDFs and clockless rendering are a natural fit and map well to human perception.
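A toy illustration of the signed-distance-field idea, with the combining of two fields done as the usual min() union; the shapes and values are invented:

```python
import math

def circle_sdf(cx, cy, r):
    """Signed distance field of a circle: negative inside, zero on the
    boundary, positive outside."""
    return lambda x, y: math.hypot(x - cx, y - cy) - r

def union(f, g):
    """Two fields combine by taking the nearer surface - no polygon
    clipping, just a min() per query point."""
    return lambda x, y: min(f(x, y), g(x, y))

scene = union(circle_sdf(0, 0, 1.0), circle_sdf(3, 0, 0.5))

# Any pixel can be queried independently, in any order, at any time -
# which is what makes the bufferless/stochastic scan-out above possible.
print(scene(0, 0))    # -1.0: deep inside the first circle
print(scene(3, 0.6))  # ~0.1: just outside the second circle
```

Because every query is independent, a scanning display can ask for pixels in whatever order the beam happens to visit them, rather than waiting for a completed frame.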
I would be excited if these can display charts, graphs, pictures and video -- stuff that's too detailed for a smart watch and Alexa can't speak out loud.
I was just learning how to make a "Julia Child" omelette and would've loved to have her technique (<5 secs!) on repeat while I perform the maneuver.
Gestures might be needed for some cases, but there are many situations when you could just use the smartphone as the control device. Often I do have one hand available, but the problem using a smartphone is that it requires me to stare at the screen. For example walking in the city, driving the car.
Q. Hey, this won't just try to show me more Twitter bullshit, will it?
A. No, no, no! Heh heh! It will show you Yelp bullshit. Much better, yes?
Q. Ah... so the advertising will finally be the kind we all yearn for?
A. Yessssss!
I'm not sure I would want these. Shining a laser (even though it will be safe and well tested) right into my eyes seems a bit... risky. I presume the laser will be controlled by its own microcontroller, well tested and "unhackable"... but still, if we can learn anything from Spectre, it's that inconspicuous system parts can open a security hole. Burning holes into a retina is quickly done given the right amount of power.
There's also the thing that blue light accelerates retina cell death... I'd rather wait for some long-term studies before putting these on.
Just because you don't understand something doesn't mean it's unsafe. From the description it sounds like the laser is incapable of a high enough output to damage your eyes.
> Using a Vaunt display is unlike anything else I’ve tried. It projects a rectangle of red text and icons down in the lower right of your visual field.
Interesting choice of location -- I would have thought it would be better to put it somewhere above the normal field of view, as most people tend to look up when thinking and trying to recall something. The kind of information smart glasses offer seems like it could be more naturally accessed that way.
So the tech geek in me wants to love these things. It is so SciFi future-y. I just cannot see using them.
1. I paid to have my eyes fixed so I do not need glasses.
2. I am not sure what value any of this brings. I have not seen the killer app.
That being said, there is an irrational part of me that wants to hold out for the in-eyeball version of this. The real issue, now that I think about it, is that the killer app is the brain interface, where you can think about the information you want and have it come up. Until then... well.
I think this looks awesome. The only problem is I have two pairs of glasses - one for reading/computer and one for everything else - so I'll need two of them.
If I was them, I would add a bone conduction speaker in the stem so it can give you audible information as well.
I also like the ring input device another commenter mentioned.[1]
I can picture using this with a voice interface with a full-screen transparent overlay toggle for navigating the world. Eye-swiping seems too difficult. Couple that with sensing where your hands are and you could manipulate 3D interfaces with gestures. People talking to themselves on a Jawbone was weird enough. Now they'll be full schizo, where everyone's a talking mime with their own personal Minority Report / Terminator UI.
I can't help but think of how much more productive a factory worker or repairman would be with these glasses, if it were possible to display instructions, dimensions, parts of a reference manual, etc...
The first successful business in the "glasses with HUD" space will be one that targets businesses which manufacture complex things using human beings.
> Intel intends to attract investors who can contribute to the business with strong sales channels, industry or design expertise, rather than financial backers.
Luxottica-Essilor? They own global eyeglass distribution, both offline and online. They can take the Android market. Apple’s glasses are supposedly a couple of years away.
Whether it fits your personal style has no bearing at all on whether it's normal. I don't like v-neck shirts, so personally I never wear them, but they're definitely normal.
As someone who wears glasses that look like those every day -- I think they look great! Big glasses like these help frame my overly-large head and face features, making me look better :)
Since Intel doesn’t productize themselves, I wonder who Intel will find to build and sell it. Traditional PC companies don’t seem like a great fit, but I have a feeling the launch partners will be companies like Asus anyway.
Funny you should say that - when I was working @Intel I wrote a proposal for a Bluetooth Low Energy ring that would provide a simple input device for headless devices, with one rotary input and one press input, sufficient to scroll and click on items.
Another proposal I had was a bracelet that would sense capacitance changes in the hand upon fingers touching each other; this way you could have a 12-key "keyboard" on the phalanges of the non-opposable fingers (3 phalanges x 4 fingers) touch-able by the thumb.
Such technologies would require minimal power input and provide good interaction with any headset; but at the time Intel were not interested in the research needed to build a prototype.
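The 3x4 phalange layout maps naturally onto multi-tap text entry like an old phone keypad. A toy sketch; the (finger, phalanx) addressing and the letter groups are invented, not from the proposal:

```python
# 4 non-opposable fingers x 3 phalanges = 12 touch targets for the thumb,
# addressed here as (finger, phalanx). The letter groups are made up.
KEYPAD = {
    (0, 0): "abc", (0, 1): "def", (0, 2): "ghi",
    (1, 0): "jkl", (1, 1): "mno", (1, 2): "pqr",
    (2, 0): "stu", (2, 1): "vwx", (2, 2): "yz ",
    (3, 0): "123", (3, 1): "456", (3, 2): "789",
}

def multitap(touches):
    """Each touch is (finger, phalanx, tap_count); repeated taps on the
    same target cycle through its letters, like old phone texting."""
    return "".join(KEYPAD[(f, p)][(n - 1) % 3] for f, p, n in touches)

# "hi" spelled by thumb taps on the index finger's third phalanx:
print(multitap([(0, 2, 2), (0, 2, 3)]))  # -> "hi"
```

Since the bracelet only needs to report which phalanx was touched and how many times, the radio traffic and power budget stay tiny, which fits the "minimal power input" goal.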
IMHO, smart glasses will soon be as ubiquitous as cellphones. We as humans keep increasing our communication and multitasking. This item covers both.
It's interesting that this is on the front page at the same time as the article about Apple. I would love a product with Apple quality that would provide real value AR. Hope they make it happen.
This will lead to all glasses wearers being treated with suspicion, until such time as cameras are also readily available in shirt buttons and everything else. At that point recording will become unavoidable and therefore normalized.
It's probably better for you -- and for her -- not to have so many photos.
How many characters can it display at once? One line or multiple lines? What's the resolution? Are images supported?
I don't think a monochrome device will sell, but the display is (to me) by far the most interesting development here.
Google products come with visible and invisible strings; this, at least for now, doesn't.
epmaybe|8 years ago
defterGoose|8 years ago
For the glasses, the laser is probably being scanned using a mems mirror (like how a DLP tv works, sorta), and modulated in brightness periodically to create the pixels. Since there's only one "point" of contact between your lens and the beam, the lens doesn't distort the beam like it would an image. That's where the idea of focus comes in. If you were looking at a photograph with your eye, there would be many sources and colors of light. Since the lens refracts incoming light based on direction, position, and color, your lens' job is to make sure the "pixels" of the photograph stay spatially organized with respect to each other. That's what being in focus means. And since the laser has only one color and one direction, all that light stays together and makes a nice dot on your retina. The only thing left to do is make a correction to the overall distortion pattern your lens introduces ,which is similar for pretty much everyone. Same reason you need to add barrel distortion before sending video to an HMD. I think that's what they were showing with that "warping" red image of the glasses' display.
epmaybe|8 years ago
erikpukinskis|8 years ago
erikpukinskis|8 years ago
bufferless - Don't wait for an image, just stream pixels as fast as you find them
clockless - No world "ticks", the world is an append-only log of "percepts"¹ which can be projected onto any time.
stochastic - Don't wait for certainty about a pixel, just push out the most probable ones first
signed distance field - Aforementioned "percepts" don't have well-defined boundaries like a polygon; instead, "fields" centered on a point describe how light moves around them. Any two fields can be trivially summed, so you can ignore most of a scene when searching for a specific pixel near a small number of local fields.
Together they allow you to supply the eyes with nearly zero-latency data with arbitrarily low computing power.
¹ As an aside, there is evidence humans don't see a "now" tick either, we perceive "fields out of time" directly, log them, and interpolate their relationship to "now" thereafter, such that we feel that we are "seeing" something which our eyes have already stopped reporting about. Thus SDFs and clockless rendering are a natural fit and map well to human perception.
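To make the SDF point concrete: a field gives you the signed distance to a surface at any query point, and the common way to combine two fields is a pointwise min (the parent comment says "summed"; min is the standard union). A tiny sketch with hypothetical circle shapes:

```python
import math

def circle(cx, cy, r):
    """Signed distance field for a circle: negative inside,
    zero on the boundary, positive outside."""
    return lambda x, y: math.hypot(x - cx, y - cy) - r

def union(f, g):
    """Combine two fields: the scene's distance at a point is
    whichever shape is nearer (the standard SDF union)."""
    return lambda x, y: min(f(x, y), g(x, y))

scene = union(circle(0, 0, 1), circle(3, 0, 1))
print(scene(0, 0))    # -1.0: at the center of the first circle
print(scene(1.5, 0))  #  0.5: midway between the two circles
```

The locality claim follows from this: a renderer searching for the surface near a pixel only needs to evaluate the handful of fields whose distance values are small there, and can skip the rest of the scene.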
asah|8 years ago
I was just learning how to make a "Julia Child" omelette and would've loved to have her technique (<5 secs!) on repeat while I perform the maneuver.
jpalomaki|8 years ago
tritium|8 years ago
Tade0|8 years ago
They should really have a chat with my SO. She can always tell if someone is paying attention.
I think squinting is a pretty natural gesture that could be used for controlling this device. That is - if it's feasible to make it so.
manmal|8 years ago
There’s also the thing that blue light accelerates retina cell death... I’d rather wait for some long term studies before putting these on.
driverdan|8 years ago
lev99|8 years ago
sdrothrock|8 years ago
Interesting choice of location -- I would have thought it would be better to put it somewhere above the normal field of view, as most people tend to look up when thinking and trying to recall something. The kind of information smart glasses offer seems like it could be more naturally accessed that way.
myrandomcomment|8 years ago
1. I paid to have my eyes fixed so I do not need glasses. 2. I am not sure what value any of this brings. I have not seen the killer app.
That being said, there is an irrational part of me that wants to hold out for the in-eyeball version of this. The real killer app, now that I think about it, is the brain interface where you can think about the information you want and have it come up. Until then... well.
anotherevan|8 years ago
If I was them, I would add a bone conduction speaker in the stem so it can give you audible information as well.
I also like the ring input device another commenter mentioned.[1]
[1] https://news.ycombinator.com/item?id=16308730
earenndil|8 years ago
It would probably be too heavy, part of their 'non-intrusiveness' goal is to make it be very light.
vadimberman|8 years ago
Battery life, normal operating temperature, and reliability vs. a gimmick that 1 in 1,000 users will want.
sebringj|8 years ago
rland|8 years ago
The first successful business in the "glasses with HUD" space will be one that targets businesses which manufacture complex things using human beings.
trisimix|8 years ago
dlokshin|8 years ago
hbosch|8 years ago
walterbell|8 years ago
Luxottica-Essilor? They own global eyeglass distribution, both offline and online. They can take the Android market. Apple’s glasses are supposedly a couple of years away.
2muchcoffeeman|8 years ago
I don’t like huge thick rimmed glasses and I never will.
adrianmonk|8 years ago
eutropia|8 years ago
kazinator|8 years ago
aurizon|8 years ago
adultSwim|8 years ago
This is the kind of product I would be really interested in using. I was sad when Google Glass died after pushback in the Bay Area.
pavlov|8 years ago
Since Intel doesn’t productize themselves, I wonder who Intel will find to build and sell it. Traditional PC companies don’t seem like a great fit, but I have a feeling the launch partners will be companies like Asus anyway.
bravo22|8 years ago
yohann305|8 years ago
senectus1|8 years ago
ddalex|8 years ago
Another proposal I had was a bracelet that would sense capacitance changes in the hand upon fingers touching each other; this way you could have a 12-key "keyboard" on the phalanges of the non-opposable fingers (3 phalanges x 4 fingers) touchable by the thumb.
Such technologies would require minimal power input and provide good interaction with any headset; but at the time Intel were not interested in the research needed to build a prototype.
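The 12-key layout described above is easy to picture as a lookup table. A sketch, assuming a hypothetical phone-keypad-style assignment (one row per finger, one column per phalanx; none of these names come from the actual proposal):

```python
# Hypothetical layout: 4 fingers x 3 phalanges = 12 keys,
# selected by which phalanx the thumb is touching.
FINGERS = ["index", "middle", "ring", "pinky"]
PHALANGES = ["proximal", "middle", "distal"]

# Example assignment: number the keys 1-12, keypad-style.
KEYPAD = {
    (f, p): str(1 + fi * 3 + pi)
    for fi, f in enumerate(FINGERS)
    for pi, p in enumerate(PHALANGES)
}

def decode(finger: str, phalanx: str) -> str:
    """Map a detected thumb touch to its key."""
    return KEYPAD[(finger, phalanx)]

print(decode("index", "proximal"))  # "1"
print(decode("pinky", "distal"))    # "12"
```

Twelve keys is enough for T9-style text entry, which may be why the phone-keypad framing is a natural fit for such a bracelet.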
ourmandave|8 years ago
clueless123|8 years ago
moogly|8 years ago
nradov|8 years ago
adultSwim|8 years ago
floatboth|8 years ago
TYPE_FASTER|8 years ago
navium|8 years ago
ekblom|8 years ago
sethammons|8 years ago
trisimix|8 years ago
adultSwim|8 years ago
nvus|8 years ago
temp987654|8 years ago
[deleted]
icantdrive55|8 years ago
[deleted]
alvil|8 years ago
[deleted]
crispytx|8 years ago
That sounds awful! Like some shit from Black Mirror! Jesus, Fuck!
crispytx|8 years ago
xedarius|8 years ago
Uhhrrr|8 years ago
ryanpetrich|8 years ago
rocky1138|8 years ago
This is already the case.