stuntprogrammer's comments
stuntprogrammer | 8 years ago | on: Hobbes – A language and an embedded JIT compiler
The FPGAs were used mainly for feedhandlers and there was a different DSL for that (compiling to verilog).
It was indeed rather something to see :)
stuntprogrammer | 9 years ago | on: Japan to Unveil Pascal GPU-Based AI Supercomputer
stuntprogrammer | 9 years ago | on: 1.1B Taxi Rides on Kdb+/q and 4 Xeon Phi CPUs
stuntprogrammer | 9 years ago | on: K100.1-1966 Safety Code and Requirements for Dry Martinis (1966) [pdf]
stuntprogrammer | 9 years ago | on: Intel discloses “vector+SIMD” instructions for future processors
You are correct that, generally speaking, most STL-heavy code would be hard to vectorize and unlikely to gain much advantage. (Plus there are the valarray misadventures.) You will sometimes see clang and gcc vectorize std::vector code if it is simple enough and they can assume strict aliasing. Intel's compiler has historically been less aggressive about assuming strict aliasing.
Various proposals are working through the standards committee to add explicit support for SIMD programming. E.g. if something like http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n418... were to be standardized, we could write matrix multiply explicitly as:
using SomeVec = Vector<T>;
for (size_t i = 0; i < n; ++i) {
  for (size_t j = 0; j < n; j += SomeVec::size()) {
    SomeVec c_ij = A[i][0] * SomeVec(&B[0][j], Aligned);
    for (size_t k = 1; k < n; ++k) {
      c_ij += A[i][k] * SomeVec(&B[k][j], Aligned);
    }
    c_ij.store(&C[i][j], Aligned);
  }
}
For my own work on vector languages and compilers I've had an easier time of it, since they have been designed to enable simpler SIMD code generation.
stuntprogrammer | 9 years ago | on: Intel discloses “vector+SIMD” instructions for future processors
That said, Intel have announced the emergency "Knights Mill" processor, jammed into the roadmap between KNL and Knights Hill. It's specifically targeted at deep learning workloads, and one might expect FP16 support. They had a bullet point suggesting 'variable' precision too. I would guess that means Williamson-style variable fixed point. (I also guess that the Nervana "Flexpoint" is a trademarked variant of it.)
I assume the FPGA inference card supports fp16. And Lake Crest (the first Nervana chip, sampling next year) will support Flexpoint, of course. I would expect subsequent Xeon / Lake Crest successor integrations to do the same.
Fun times..
Aside on the compiler work -- I think it's not that hard to emit this instruction, at least for GEMM-style kernels where it's relatively obvious.
stuntprogrammer | 9 years ago | on: Japan plans 130-petaflops supercomputer
stuntprogrammer | 9 years ago | on: Intel preferentially offers two customers Skylake Xeon CPUs
https://cloudplatform.googleblog.com/2016/11/power-up-your-G...
stuntprogrammer | 9 years ago | on: How We Knew It Was Time to Leave the Cloud
https://scalableinformatics.com/assets/documents/Unison_Peta...
Surprisingly cost-effective given the massive performance and support. Cloud is great for some things, but not everything.
stuntprogrammer | 9 years ago | on: A Look at How Traders and Economists Are Using the Julia Programming Language
stuntprogrammer | 9 years ago | on: Welcoming Adrian Cockcroft to the AWS Team
stuntprogrammer | 10 years ago | on: The K Language
Occasionally someone pops up wanting to start a company to compete with them, but they're typically offering paper of dubious value, covered in slime.
In any event, I'm not particularly interested in going to war to steal their market share. I do worry that they're going to go into a long decline into irrelevancy, though -- I'd very much prefer that not happen. I'd rather they/FD write a new generation building on the best of Arthur's work, plus some things more suited to new platforms and workloads. There are big opportunities there.
stuntprogrammer | 10 years ago | on: The K Language
It's not just knowing an APL, though -- far from it. For the chi/nyc roles it's a mix of hardcore low-level stuff and some market knowledge. For the west coast, a mix of low-level stuff (less hardcore) and more distributed-systems stuff.
stuntprogrammer | 10 years ago | on: The K Language
You are correct, though, that their model over the years has been to extract a large amount, up front, from a small number of users. They (mostly FD) have failed to make the leap to users outside finance due to what I'd call cultural reasons. They also lack strong technical leadership, imnsho (they're, at heart, not an engineering company).
I'd certainly take a swing at doing an open source version for them, but it's not clear to me that they'd know how to play it.
stuntprogrammer | 10 years ago | on: The K Language
That sounds like the low-end base salary for ~entry level. It probably gets topped up with a ~$100k bonus. There are a couple of places that hire many people but don't pay well.
At what I think is the high end, I see about an offer per quarter of $500-900k base with a $1-2M bonus (usually guaranteed for the first year) for a pure technology role. Add more front-office work and the bonus potential shoots up (but it's a tough gig at the moment, at least).
For context, non-kdb principal-level roles I've seen on the west coast top out at around $1M, mostly in RSUs, from, say, a $250k base at the usual names.
I've seen offers in the $1-10M range on the east coast to build competitive technology. Amusingly, on the west coast, I've only seen the stereotypical "be my technical cofounder and get screwed on comp" offers to do so.
I've been doing non-finance stuff for quite a while now, though, so I'm not sure what's become of it. Given business challenges in that particular sub-field, who knows..