mco | 9 years ago
Firstly, your claims about virtual memory in general purpose CPUs are misleading: its purpose is memory virtualization, and I wouldn't want a system without it in the presence of multiple processes (how can you trust every process not to shoot down another by accidentally accessing a wrong memory location?).
Ultimately, our hardware will become more specialized/heterogeneous, and we'll have many accelerators for various tasks, but there will likely always be a general purpose CPU at the heart of the system (that will have virtual memory, caches, etc.); for an overview, I enjoyed [1]. I see what you're building as another accelerator for inherently parallel latency-insensitive workloads (like you find in HPC). In a way, GPUs (+ Xeon Phi) cater to these markets today (benchmarks against these would be useful).
Second, I remember the previous post [2], where you claimed the system you are building relies on a RISC ISA, but now you claim it has changed to VLIW. You said yourself before "[...] stick to RISC, instead of some crazy VLIW or very long pipeline scheme. In doing this, we limit compiler complexity while still having very simple/efficient core design, and thus hopefully keeping every core's pipeline full and without hazards [...]"
What is the rationale behind this? Do you think you'll be able to manage compiler complexity now?
Any response is much appreciated!
trsohmers | 9 years ago
As for why we think "this time is different": it's a combination of good ideas and timing. I 100% agree with you that in 50 years of von Neumann derivatives, basically all the low-hanging fruit (and plenty higher up) has been attempted, and thankfully I can say I've learned from a lot of those attempts. Rather than being an entirely new concept, I think we have gone back to some fairly old ideas, to the time before hardware-managed caches, and thought about simplicity when it comes to what it actually takes to accomplish computational goals. A lot of the hardware complexity around the memory system (our big focus at REX) that started being added back in the mid/late '80s predates much of the attention later put into compilers. While I am proud of what we have done on the hardware side, I think most of the credit will go to the compiler and software tools if we are successful, as that is what enables such a powerful and efficient architecture. Ergo, we have the advantage of ~30 years of compiler advancements (plus a good amount of our own), giving us the luxury to remake the decision in favor of software complexity over hardware complexity... plus 30 years of fabrication improvement. Couple that with Intel's declining revenues, the end of easy CMOS scaling, and established portability tools (e.g. LLVM, which we have used as the basis for our toolchain), and I think this is the best time possible for us.
When it comes to virtual memory: why would you need your memory space to be virtualized (which requires address translation) in order to have segmentation? We use physical addresses since it saves a lot of time and energy at the hardware level, but that doesn't mean software can't implement the same features and benefits that virtual memory, garbage collection, etc. provide. The way our memory system as a whole (and in particular our Network on Chip) behaves, and its system of constraints, play a very large role in this, but I can't/don't want to go into the details of that publicly right now. It may seem a bit hand-wavy, but we do not see this as a limitation or real concern for us, and unless you want to write everything in assembly, the toolchain will make this no different from C/C++ code running on today's machines.
In the case of GPGPUs for HPC, we have the advantage of being truly MIMD rather than SIMD, plus a big improvement in power efficiency, programmability, and cost. We'd win in the same areas (I guess tie on programmability) against the Xeon Phi for benchmarks like LINPACK and STREAM, but the one benchmark I am especially looking forward to is HPCG (and anything else that stresses the memory system along with compute). While NVIDIA and Intel systems on the TOP500 list struggle to get 2% of their LINPACK score on HPCG [0], we should be performing 25x+ better. Based on our simulations, we should perform roughly equally across all three BLAS levels, which has been unheard of in HPC since the days of the original (Seymour-designed) Cray machines.
Of course, my naivety from 2 years ago haunts me now ;) When the linked comment was written, I had yet to "see the light". Only once I understood (through my co-founder, the brilliant Paul Sebexen) the 'magic' that is possible when a toolchain has enough information to make good compilation decisions did I realize that the simplicity of a VLIW decoding scheme made the most sense (and gave us a lot of extra abilities). About three months after I made that comment we started down this path, and early prototyping applying those ideas to existing VLIW and scratchpad-based systems led to our DARPA and later seed funding. It is only because our hardware is so simple (and mathematically elegant in its organization) that the compiler can efficiently schedule instructions and memory movement. While I've only lived through a small fraction of the last 50 years of computer architecture, I think of myself as a very avid historian of it, and it really shocks me that no one has gone about thinking of the memory system quite like we have. I totally agree with my younger self on long pipelines, though.
TL;DR: We think we'll succeed because we are combining old hardware ideas with new software ideas to make (in our opinion) the best architecture, and this is the best time for a new fabless semiconductor startup. We have actually built the mythical "sufficiently smart compiler", thanks to some very clever (but simple) hardware that lets people program effectively for it. We think we will be more energy efficient, performant, and easier to program than our competition in our target areas (HPC, high-end DSP).
[0] http://www.hpcg-benchmark.org/downloads/sc15/hpcg_sc15_updat...
Keyframe | 9 years ago
When you say you rest your high hopes on the toolchain, aren't you a bit scared of what happened to Itanium? Intel had the toolchain under their own R&D, and it still failed because they couldn't deliver. I'm interested to hear more about the mythical "sufficiently smart compiler" and how it relates to your architecture.