For those who haven't had the pleasure: developing on Tensilica Xtensa cores generally means living within 128-256KB of directly-accessible memory; a windowed register file that makes writing your own exception handlers "interesting"; a 6-year-old GCC bolted to a proprietary backend; per-seat licensing fees to use the compiler; and a corporate owner that's only halfway interested in the ecosystem they now control.
So yeah, kind of wishing it would just die and let ARM take over the embedded space.
I'm not personally opposed to Tensilica "dying", especially if it doesn't involve Cadence dying, since they are a (somewhat indirect) competitor from my point of view, but ARM is not a substitute for Tensilica. You can't extend ARM's ISA unless you license the architecture for $30M. A DSP like Tensilica's is also much more efficient than ARM on a range of tasks, and in particular, having local memory instead of caches is done for a reason. (The least efficient accelerator of all and the favorite of academics who have easy access to it, the GPU, also has this.)
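To make the local-memory point concrete, here's a minimal sketch (in C) of the classic scratchpad double-buffering pattern. The dma_start()/dma_wait() driver calls, process_tile(), and the section name are hypothetical stand-ins, not any vendor's actual API; the point is that data movement is scheduled by software and is fully deterministic:

    #include <stdint.h>
    #include <stddef.h>

    #define TILE 1024

    /* Hypothetical: the linker script places these in single-cycle local RAM. */
    static int16_t tile_a[TILE] __attribute__((section(".dram0.data")));
    static int16_t tile_b[TILE] __attribute__((section(".dram0.data")));

    /* Hypothetical DMA driver API -- illustrative names, not any vendor's. */
    extern void dma_start(int16_t *dst, const int16_t *src, size_t n);
    extern void dma_wait(void);
    extern void process_tile(const int16_t *buf, size_t n);

    /* Stream 'total' samples (a multiple of TILE) through local memory,
       overlapping each tile's DMA fill with processing of the previous one. */
    void process_stream(const int16_t *src, size_t total)
    {
        int16_t *cur = tile_a, *next = tile_b;
        dma_start(cur, src, TILE);               /* prefetch the first tile */
        for (size_t off = 0; off < total; off += TILE) {
            dma_wait();                          /* 'cur' is now resident */
            if (off + TILE < total)              /* kick off the next fill */
                dma_start(next, src + off + TILE, TILE);
            process_tile(cur, TILE);             /* all accesses hit local RAM */
            int16_t *tmp = cur; cur = next; next = tmp;
        }
    }

With a cache you only get this behavior if the prefetcher and replacement policy happen to cooperate; with a scratchpad the worst case is the typical case, which is what real-time DSP work cares about.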
As to their compiler licensing - that's what happens when you develop for a small niche: you get more expensive tools which are worse than the free ones used by the majority. But that doesn't mean the thing doesn't have its uses. I hear that a recent chip by AMD had 40 Tensilica (smallish, inaccessible to most software) cores.
The same is true about CEVA (which was mentioned in a sister thread), more or less.
CEVA's DSPs are almost the same story compiler-wise. They used to have GCC 2.95 glued to their proprietary backend, and 33% of compilations ended in a crash :-).
They got better by moving to GCC 4.4, but they still use crazy licensing.
Other DSP cores are often even worse, offering as little as an assembler and nothing else.
But the DSP core itself is nice, and very well thought out, in contrast to mainstream DSPs from TI or Freescale (now NXP).
That's the company/family powering the ESP8266 SoC, isn't it (LX106)? Is it just their DSP chips that are using that weird GCC/proprietary Frankenstein? I ask because I have an Xtensa LX106 cross-compiler for the ESP8266 chip installed which appears to be GCC 4.8.2.
Isn't that par for the course for DSPs? Nowadays I see a lot of hybrid DSPs with ARM cores that you can run whatever GCC you want on, but the main processing cores that do all that DSP goodness usually require some proprietary software from the vendor to operate.
That was my thought too. I guess it worked well for them, but it certainly wouldn't be my first choice. I wonder what the 10 custom instructions were?
ARM licensed cores don't have an easy way to add instructions, but they do have the TCM bus which might be low latency enough, depending on what they were trying to do.
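For the curious, using TCM from C typically looks something like this sketch; the section names here are illustrative (the real ones depend on your part and linker script), but the pattern of pinning hot code and data into tightly-coupled RAM is standard:

    #include <stdint.h>

    /* Illustrative section names -- the real ones depend on the part and
       linker script, and startup code must copy .itcm_text into ITCM. */
    #define DTCM __attribute__((section(".dtcm_data")))
    #define ITCM __attribute__((section(".itcm_text")))

    #define TAPS 64

    DTCM static int32_t coeffs[TAPS];    /* coefficients in data TCM */
    DTCM static int32_t history[TAPS];   /* delay line in data TCM */

    /* Runs from instruction TCM: no flash wait states, no cache misses,
       so the cycle count is the same on every call. */
    ITCM int32_t fir_step(int32_t x)
    {
        int64_t acc = 0;
        for (int i = TAPS - 1; i > 0; i--)  /* shift the delay line */
            history[i] = history[i - 1];
        history[0] = x;
        for (int i = 0; i < TAPS; i++)      /* multiply-accumulate */
            acc += (int64_t)coeffs[i] * history[i];
        return (int32_t)(acc >> 15);        /* Q15 output scaling */
    }

That buys you the determinism of local memory, but not new instructions - which is exactly the gap being pointed out above.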
> Holographic Processing Unit (HPU) chip used in its virtual reality HoloLens specs
It seems to be insurmountably hard for press to understand that virtual reality and augmented reality are related but distinct concepts.
Beyond that, interesting teardown. The 10W number is great, but my guess is that a lot of the high-power processing happens in the sensor suite, because it's using 4 repurposed IR sensor/receiver combos to relay depth data in a highly structured way. That means this processor is the glue between the IMU and the RGBD camera combo. I think in the end this can't scale down to the consumer side with this approach, not to mention the other hindrances to scaling down with the projection system.
> It seems to be insurmountably hard for press to understand that virtual reality and augmented reality are related but distinct concepts.
The media write in the language understood by the reader, not the jargon of the domain expert. Once the AR-VR distinction filters down to most readers, media writers will dutifully follow suit.
An ordinary person's understanding of holograms (i.e. as a shimmery 3D light projection, like in Star Wars) is actually quite close to AR, which presumably is why MS have chosen to use that term. It's a bit more immediately understandable than Augmented Reality.
> It seems to be insurmountably hard for press to understand that virtual reality and augmented reality are related but distinct concepts.
Microsoft hasn't helped in this case, with all their blabbering on about "holograms" when as far as I can tell, the images it produces are not holograms in any conventional sense of the word (although the diffraction-based wave guiding is pretty cool.)
That cracked me up. Is that blue circle a gaze tracker? Because it keeps pointing at virtual boobs whenever he looks at the models :)) Not to mention sending the avatar to the kitchen, priceless :D
If the first run is on a 28nm process, does this suggest that a second generation on e.g. Intel's 14nm process might yield a drastically more powerful HoloLens in the current form factor, or at least a more compact one with all the same capabilities (assuming the optics were also compacted)?
There's a whole ton of stuff I'm not taking into account, such as the physical size of the current HPU, but my guess is that, going by the process alone, this is the largest point of improvement.
Pure speculation: 28nm masks and manufacturing are significantly cheaper than 16FF and newer, which likely helps meet the cost target of HoloLens at the volume they're forecasting. Moving to 16FF or Intel's 14nm allows for more processing cores at a fixed die size, but also makes it more expensive.
There's a reason it's called the bleeding edge. :)
I have absolutely no knowledge of the physics involved, but it looks like the projection style that is used has a very limited field of view that won't be resolved with more processing power. I don't see how shrinking the optics could help this.
Because of this, it looks like future immersive improvements will be restricted, but I would be very happy to have it explained to me why I am wrong :)
Edit: my pessimism around FOV was from reading this: http://doc-ok.org/?p=1274
The main thing people complain about in the HoloLens is the field of view. I don't know how much these chips are the bottleneck for that, but it seems like a problem you can throw more cores at, so I'm sort of expecting them to solve it that way; I don't know a whole lot about hardware, though.
Depending on what format the IP was provided in, and the level at which Microsoft made their customization, it's not necessarily possible to just transplant it onto a newer process node.
No. In fact, pure performance will very likely be lower on the 14nm process. They might get better power/performance, but I would be willing to bet that they won't even achieve that. The only thing guaranteed is higher density, a.k.a. more of those Tensilica cores.
Well, this explains a lot. In my university (Russia, Mathematics and Mechanics Faculty of SPbU) there is a lot of investment in computer vision work, and it is almost impossible to build fast, low-TDP software for CV without DSP/FPGA/etc. hardware.
Yeah, but 10W all getting dissipated inside a single 12x12mm BGA package? Isn't that a lot? It sounds to me like that'd be running very hot...
I had a project with a Raspberry Pi2 which was only drawing ~2.5W all up (including the wifi dongle), and it got flaky when the sun shone on the case and got it a bit warm.
Several independent review articles about HoloLens mention that the limited view (relatively small view angle) is underwhelming and off-putting. (You can immediately spot a sponsored article if it doesn't mention that issue at all.) The iPhone 6 and high-end Android smartphones already have augmented reality apps with SLAM technology that is far more impressive than what MS PR is suggesting HoloLens might one day be able to offer - the E3 presentations in 2015 and 2016 were faked, as we have since learned.
> The iPhone 6 and high-end Android smartphones already have augmented reality apps with SLAM technology that is far more impressive
This is the most wrong comment I've seen on HN in a long time. The SLAM in Hololens is incredibly impressive and nothing on any smartphone is one hundredth as good, with the possible exception of the still unreleased Project Tango phone.
Have you even tried one? The small field of view is disappointing, but only because what is in it is so impressive. I'm not sure what you're talking about with the smartphones, but I'm pretty sure they can't show objects in a room with natural looking distances and depth of field like the Hololens can.
These kinds of comments are really dismissive. Sure, MS PR did fire up the hype train a bit too much, but in the end they only want to show what it will be capable of in the future, as they are not selling the current version to consumers yet.
I'm pretty sure it will eventually reach the level of the E3 presentations when hardware gets faster/smaller/cheaper; for now it's just a tech demo, and they have a lot more to actually show than Magic Leap, for example.
https://translate.google.com/translate?sl=de&tl=en&js=y&prev... (scroll down a bit)
Machine translated:
Compared to VR solutions, however, HoloLens has a significantly limited field of view: particularly when viewing video on the large virtual canvas, there was an ugly proscenium effect no matter how far away it was. In addition, with black-and-white content the glasses showed distinct RGB artifacts - similar to the rainbow effect caused by the color wheel of DLP projectors.
This is a GPU rather than a DSP (the tricky part is rendering, not filtering). Core count is a bit of a pointless metric unless you know how big/powerful a core is.
I like the idea of a self-contained VR headset with no wires, but I don't think we're going to see one for about a decade; there's just too much processing power needed for a realistic experience. I hope I'm wrong!
Edit: It appears this is in fact a DSP (digital signal processor). That's a huge amount of power for dedicated signal processing; I'm intrigued to know what can be done with it.
I think you're wrong about this being a GPU. The first bullet point in the slide titled "HPU: Architecture" says "Sensor aggregator with environment and gesture processing". I don't see any indication of the HPU doing rendering tasks.
The ones that manage to get software support end up better in the long run.
But I suppose in the case of MS, they will have lower-level information that lets MS compilers target it directly, without intermediaries.
Not only a violation of the GPL, but of code owned by the FSF, and even by Stallman himself.
That's a bold move. And very douchey.
14nm is outdated, by the way. TSMC says they'll start shipping 10nm parts by the end of the year: http://en.ctimes.com.tw/DispNews.asp?O=HJZ4GC65UYSSAA00NW
But even then, low bar....
Please watch some of the available videos on YouTube and you will see that HoloLens is a really impressive device (despite the fact that it's just the 1st version).
------
PS: Microsoft announced a couple of days ago that HoloLens is "ready for business" [1]
[1] https://www.microsoft.com/microsoft-hololens/en-us/commercia...
The Cherry Trail SoC has the GPU hardware.
http://www.theverge.com/2016/8/16/12503948/intel-project-all...