Can't help but notice that we keep wanting to make content that should be flat more 3D (e.g., animated 3D backgrounds, scroll filler on corporate pages, the entire metaverse), and content that could benefit from being 3D more flat (like UI buttons losing their pseudo-3D bevels and not looking like interactive widgets at all).
Re UI, my theory is that this is partly because minimalism doesn't become dated as fast.
Related, I remember that what blew me away upon seeing the 3DS for the first time was not the 3D playfield (which looked exactly as I expected), but the UI in 3D. It really felt like a glimpse into an exciting future.
Imagine, for example, opening your favorite brand’s website and being presented with a miniature virtual storefront. You could look at their most popular products as if you were standing on a sidewalk peering into their shop.
I imagined this as instructed and was simultaneously bored and depressed.
The blog we are reading right now is Shopify's blog, so I have a theory.
Another interesting question is "Why is Shopify's blog the place talking about this, instead of somewhere else?" I remember that demo... it was amazing and then it did go nowhere. With modern face recognition and tracking, it should be entirely possible to use it for some nifty no-touch experiences.
It's a relatively invasive technique that requires a computer's camera to record details of a person's face, and that's marketable data that any commercial shopping site would likely collect and sell on to other parties. It'd be much simpler to implement arrow key / spacebar navigation through a 3D scene (still computationally expensive relative to rendering the standard HTML/CSS/JS page), but then that data couldn't be collected.
In general interactive 3D on the web still uses too much computational power on the client-side for most devices.
It's published by Shopify, so in this case it's because it likely would not have gotten published otherwise. Unfortunately E-commerce companies are not known for their "research for the fun of research".
Or, imagine your head tracking data is being uploaded and sold to the highest bidder! They could tell whether you have a health condition which they could sell you an expensive drug for! Or raise your insurance rates! Wow this IS exciting.
It has to be about shopping because pr0n actually drives new tech and that venue would not be pr0n friendly.
I would imagine this would be an obvious easy sell for CAD, so there must be something wrong with it, perhaps a patent, holding back progress. I will say that I've done CAD for projects (woodworking and making STL models for a 3D printer), and people who don't do CAD think that CAD people look at 3D animations 99% of the time. IRL, when I do CAD, I spend most of my time thinking very abstractly (how to balance the spacing of the power supply and PCB such that it looks nice, is easy to access, and accounts for thermal concerns and wire routing... or I can look up the clearance hole diameter for a 4-40 screw, and now what is a common drill size with sane tolerances to drill it, how about an eighth of an inch?)
Because this is from someone who works for Shopify and has to justify their 20% project to their bosses.
Why can’t people have a little more imagination and see this as a great new data visualization UX for things like charts or maps, instead of automatically panning it to death?
I don’t know any of the science behind it, but my experience with these kinds of ideas (including the 2007 wiimote target demo which I replicated and showed to family), is that they feel exhausting to use beyond a 15 min novelty.
My guess is that the brain detects that the motions and such are not quite right, and like a wrong eyeglass prescription, your brain and eyes work overtime trying to figure out compensation.
The thing is, with the wiimote (at least with Motion Plus) and the Switch, Nintendo really tuned the motion. I think you can give anyone aged 5-99 a wiimote (or a joycon), say "play bowling or tennis", and they can pick it up within a few seconds.
Barely anyone else puts the effort into getting gestures right. Part of that is that the developers spend so long developing the gestures, they don't know what's natural and what's not anymore.
A big example is Windows Precision Touchpad drivers. In theory they should be more natural; in practice they are terrible.
I had the same thing with compiz when it first came out. I loved it in all the videos, and for the first day, but I quickly switched back to plain old desktops.
> is that they feel exhausting to use beyond a 15 min novelty.
Anecdote aside, would this potentially change if it was done for something with utility, like data visualization? Unlike a VR headset or the original demo, you don’t need to do something radically different (i.e., hold or wear an IR light) to experience something radically different. You just start using it with your existing hardware.
> Imagine, for example, opening your favorite brand’s website and being presented with a miniature virtual storefront. You could look at their most popular products as if you were standing on a sidewalk peering into their shop.
I am imagining it, and it's worse than the current norm.
This is exactly why I think 3D TVs died out a couple years ago: once the novelty factor fades away, you realize that it's actually hindering your initial goal.
I think it's important to remember that Shopify doesn't have a singular ad product. It's a platform to build a web store on, and merchants can add whatever they want to their shop, but Shopify doesn't collect any data across shops.
That's the shop's data, not Shopify's.
Bias: my team at Shopify helps merchants launch their shop and get their initial sales.
> Look directly at the captcha, please read in a clear voice while drinking verification can.
> Whoops, looks like you may have blinked or looked away! Please try again. If you have run out of verification cans, you can say "I love Cosco" to temporarily credit your account with one VCT while we dispatch a new box to your address.
It's ironic that 15 years later, the latency between the movement of the camera and the targets has only got bigger (original video vs the first one in the "Final prototype" section).
This is super cool. I wish WebVR was still a thing and that it had gotten more love, better browser support and more libraries adept at handling head tracking, stereoscopy, etc. During that brief period where it was standard in Chrome, I happened to be bidding a bunch of interior architectural designs for a client, so I scraped together a demo script in ThreeJS that let me stitch together prerendered stereo panoramic renderings, and insert some interactive 3D objects / animations inside the skylight space. I patched in support for the pointer remote from the original Oculus Rift, and just stuck our architectural renderings on a private website and mailed the client a couple Oculus Rifts as a gift to check them out. It was a big "wow" factor and landed us a lot more work.
None of this is to say that it would be a great idea to build websites this way. Websites are made to convey the desired information quickly, so unless the desired information is a full 3D understanding of a space, I have a hard time seeing the utility of building out a 3D space as a metaphor for information. Did that make sense? Let me say it differently: we've tried a lot of visual metaphors in my lifetime for organizing and exploring information spaces. As it turns out, the simpler ones are usually the most easily grasped and therefore the most efficient at doing their job (i.e., remaining "invisible" and serving as a transparent container for information). Even 3D interfaces that do serve the purpose of conveying spatial relationships and art often fall into the trap of trying to do too much. This is not meant as a paean to minimalism (at all - I love clever interfaces, and I appreciate when information design makes you figure out a paradigm to get the most out of interacting with it, as long as "the most" is defined as more interesting views of the data I'm trying to get at). Just to say that design should showcase content, not itself, except in the case where the content is supposed to showcase the design, and then you can go nuts.
The New Nintendo 3DS was based around this, and although they abandoned it I think there's still a lot of room for something like it. It's more AR than AR is.
Nobody cared because you had to buy new hardware (the Fire Phone) to get it, at the expense of the current hardware you already really liked. In this case, you don’t have to get any new hardware.
TrackIR is an existing and mature product using the original idea.
Sure it requires extra hardware, but it's amazing in first person games that support it. You get to use your good monitor, don't get dizzy like in VR and it could be quite cheap.
I think implementing a similar thing with just a camera and computer vision is a good idea; let's make this a more common thing.
The techniques themselves are very cool and inspiring.
That said, the reason such things do/don't take off isn't about how cool they are, generally. It's about how useful they are.
Even for games, immersion and realism are secondary to gameplay. For most applications, it's hard to see how a "realistic window" is useful. A flat, paper-like screen is better to read off, for example.
Cool tech. Definitely worth exploring gaming potential. Not easy to apply to other uses. There's no benefit to making an online shop window more like a real shop window. Catalogue is what you want in any case.
We use this concept to great effect in a [CAVE](https://de.wikipedia.org/wiki/Cave_Automatic_Virtual_Environ...) where we were limited to 2D projections. The thing that many are not aware of is that stereo vision is just one of many ways we perceive depth: https://en.wikipedia.org/wiki/Depth_perception
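The core trick behind head-coupled displays like this is an asymmetric (off-axis) view frustum, recomputed every frame from the tracked head position. A sketch of the math under stated assumptions (screen lying in the z = 0 plane, head at (ex, ey, ez) in front of it); this follows the textbook generalized-perspective-projection construction, not any specific CAVE codebase:

```javascript
// Compute asymmetric frustum extents at the near plane from the viewer's
// head position. Screen edges (sl..st) and head coordinates are in meters.
function offAxisFrustum(screen, head, near) {
  const { sl, sr, sb, st } = screen; // left/right/bottom/top of the display
  const { ex, ey, ez } = head;       // head position, ez > 0 (in front)
  const scale = near / ez;           // similar triangles onto the near plane
  return {
    left:   (sl - ex) * scale,
    right:  (sr - ex) * scale,
    bottom: (sb - ey) * scale,
    top:    (st - ey) * scale,
  };
}

const display = { sl: -0.3, sr: 0.3, sb: -0.2, st: 0.2 }; // 60x40 cm screen

// A centered head gives an ordinary symmetric frustum (left ≈ -right)...
console.log(offAxisFrustum(display, { ex: 0, ey: 0, ez: 0.6 }, 0.1));

// ...while moving the head 15 cm right skews it, producing the parallax
// that makes the screen read as a window rather than a picture.
console.log(offAxisFrustum(display, { ex: 0.15, ey: 0, ez: 0.6 }, 0.1));
```

The four extents drop straight into a standard perspective matrix (glFrustum-style parameters), which is why the technique works with plain 2D projectors.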
Broadly speaking: yes! I wish we used this more. It's not hard, it looks very convincing, and subtle stuff feels quite good...
... when latencies are VR-like. When it lags even a little, it feels awful. So it tends to work pretty well for tech demos like this, and not at all when exposed to the real world.
Specialized hardware and software stacks can do a pretty good job - see the Fire Phone for an example there. It did quite well on technical notes, the experience was smooth and battery-efficient(-enough) and the illusion was almost fast enough to be transparent.
But it failed. Unless we get this kind of thing baked into everything for years, it will continue to fail, because there will be no content. And integrated webcams and their related software continue to be utter shit. We're not close to (or even moving towards) being able to use this, unless Apple surprises everyone in their next lineup and kicks off a renaissance.
..... and that's before getting into privacy issues with exposing an API that reveals info about your face. It's a harder practical realm than it seems it should be, when people have been making very effective tech demos for decades on commodity (but specific) hardware.
Less is more. We don't need 3D except in a few cases. Computer output is about information, not about trying to make things look like real-life 3D. When we read a book we just need some black-and-white text.
There are great applications for virtual reality for instance in medicine. But in general 3D is only needed when it is needed. If it is not needed then pushing it onto users becomes a distraction.
Are there any phone apps that do this kind of head tracking using the front-facing camera? I imagine this would simplify the camera calibration setup, since most phones have a known geometry relating the camera to the screen location, unlike most webcams.
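As a rough illustration of why known geometry helps: with a pinhole camera model and the intrinsics a phone vendor can bake in, a single face-detection box is enough for an approximate head position. This is a hypothetical back-of-envelope sketch; the function names, the example intrinsics, and the assumed average face width are all made up for illustration:

```javascript
const FACE_WIDTH_M = 0.16; // assumed average adult face width (a guess)

// Pinhole model: depth from apparent size, then back-project the pixel
// offset from the principal point into meters at that depth.
function headFromFaceBox(box, intrinsics) {
  const { cx, cy, widthPx } = box;      // face box center and width in pixels
  const { fPx, ppx, ppy } = intrinsics; // focal length (px), principal point
  const z = (fPx * FACE_WIDTH_M) / widthPx;
  const x = ((cx - ppx) * z) / fPx;
  const y = ((cy - ppy) * z) / fPx;
  return { x, y, z }; // meters, relative to the camera
}

// A face centered in a 1280x720 frame, 200 px wide, with a 1000 px focal
// length comes out roughly 0.8 m from the camera:
console.log(headFromFaceBox(
  { cx: 640, cy: 360, widthPx: 200 },
  { fPx: 1000, ppx: 640, ppy: 360 }
));
```

A webcam needs a calibration step to recover the focal length and the screen-to-camera offset; a phone can ship those numbers, which is the simplification the question is pointing at.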
The web is plenty immersive when there aren’t 84738473 ads and user hostile UX layouts and popups and shifting layouts and all the other things everyone ought to already know are bad.
Now we can all look forward to the day when websites refuse to load because they want camera permissions for fake depth, shiny buttons, and other random visual effects.
Also they'll analyze your face and send it to the ad tracking agency.
I have a somewhat related question/thought regarding daily desktop use: is it possible to have the sound from the speakers follow the browser or window that's playing it? For example, having a podcast playing in the top right corner while a different website in another browser tab also plays sound, but with each sound source corresponding to its on-screen location, making it easier to distinguish between the two. It would also be really helpful, when you have a ton of browser tabs open, to be able to find the one playing sound with a more physical metaphor.
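The positional part of that idea is mostly arithmetic. A sketch that maps a window's horizontal position to an equal-power stereo pan; in a browser you would feed the `pan` value into a Web Audio `StereoPannerNode`, and the function name here is made up:

```javascript
// Map a window's horizontal center to a stereo pan and per-channel gains.
function panFromWindow(windowCenterX, screenWidth) {
  // Left edge -> pan -1, center -> 0, right edge -> +1.
  const pan = (2 * windowCenterX) / screenWidth - 1;
  // Equal-power law: total loudness stays constant as the pan sweeps.
  const angle = ((pan + 1) * Math.PI) / 4; // 0 = hard left, PI/2 = hard right
  return { pan, gainL: Math.cos(angle), gainR: Math.sin(angle) };
}

// A podcast window near the right edge of a 1920 px screen:
const topRight = panFromWindow(1800, 1920);
console.log(topRight.pan); // 0.875

// A centered window splits power evenly between channels:
const centered = panFromWindow(960, 1920);
console.log(Math.abs(centered.gainL - centered.gainR) < 1e-9); // true
```

The missing piece is routing: each tab or window would need its own panner node, which, as far as I know, no mainstream desktop environment exposes today.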
Why does it always have to be about shopping.
But yes, first promote some gimmicky fun stuff to get the foot in the door and when it has arrived, show the real face.
This could be very useful in hospitals.
CAD and 3D modeling software?
The subdomain of this submission offers a clue about that.
https://youtu.be/G9FGgwCQ22w?t=183
My what?
Do these people ever leave their fantasy land?
He managed to get a brand-new-in-box one. Not spoiling whether it works or not; you'll have to watch to find out...
I would puke
No.