stuntprogrammer's comments

stuntprogrammer | 8 years ago | on: Hobbes – A language and an embedded JIT compiler

Secret sauce, at least for the team involved, and well-received. Nice combination of extreme perf and decent productivity for them.

I've moved into non-finance stuff for quite a while now though, so not sure what's become of it. Given business challenges in that particular sub-field, who knows..

stuntprogrammer | 8 years ago | on: Hobbes – A language and an embedded JIT compiler

Not kdb+, but a proprietary (internal-only) language that it heavily influenced was designed around execution on GPU clusters.

The FPGAs were used mainly for feedhandlers and there was a different DSL for that (compiling to verilog).

It was indeed rather something to see :)

stuntprogrammer | 9 years ago | on: Intel discloses “vector+SIMD” instructions for future processors

The problem is less the spurious DRAM accesses etc., as awful as they would be. The compiler problem is really a mix of 1) understanding enough about fixed-bound, unit-stride loops over nonoverlapping memory (or transforming accesses into that shape) and 2) data layouts that prevent that. E.g. while there are well-understood data layouts at each point of the compilation pipeline, it's hard in general for compilers to profitably shift from array-of-structs to struct-of-arrays layouts.

You are correct that, generally speaking, most STL heavy code would be hard to vectorize and unlikely to gain much advantage. (Plus there are the valarray misadventures). You will sometimes see clang and gcc vectorize std::vector if the code is simple enough, and they can assume strict aliasing. Intel's compiler has historically been less aggressive about assuming strict aliasing.

Various proposals are working through the standard committee to add explicit support for SIMD programming. E.g. if something like http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n418... were to be standardized we could write matrix multiply explicitly as:

  using SomeVec = Vector<T>;
  for (size_t i=0; i<n; ++i) {
    for (size_t j=0; j<n; j+=SomeVec::size()) {
      // c_ij accumulates one SIMD-width strip of row i of C
      SomeVec c_ij = A[i][0] * SomeVec(&B[0][j], Aligned);
      for (size_t k = 1; k < n; ++k) {
        c_ij += A[i][k] * SomeVec(&B[k][j], Aligned);
      }
      c_ij.store(&C[i][j], Aligned);
    }
  }
For my own work on vector languages and compilers I've had an easier time of it since they have been designed to enable simpler SIMD code generation.

stuntprogrammer | 9 years ago | on: Intel discloses “vector+SIMD” instructions for future processors

Current publicly announced AVX512 does not support fp16. Skylake Server (SKX) and Knights Landing (KNL) are at a disadvantage here. They've not publicly said anything about extensions in Knights Hill (the long announced successor to KNL).

That said, Intel have announced the emergency "Knights Mill" processor jammed into the roadmap between KNL and Knights Hill. It's specifically targeted at deep learning workloads and one might expect FP16 support. They had a bullet point suggesting 'variable' precision too. I would guess that means Williamson style variable fixed point. (I also guess that the Nervana "flexpoint" is a trademarked variant of it).

I assume the FPGA inference card supports fp16. And Lake Crest (the first Nervana chip sampling next year) will support flexpoint of course. I would expect subsequent Xeon / Lake Crest successor integrations to do the same.

Fun times..

Aside on the compiler work -- I think it's not that hard to emit this instruction at least for GEMM style kernels where it's relatively obvious.

stuntprogrammer | 10 years ago | on: The K Language

I shouldn't say too much. But more than one large firm has explored writing an in-house replacement and floated actual $ offers.

Occasionally someone pops up wanting to do a company to compete with them but they're typically offering paper of dubious value, covered in slime.

In any event, I'm not particularly interested in going to war to steal their market share. I do worry that they're going to go into a long decline to irrelevancy though -- I'd very much prefer that not to happen. I'd rather they/FD write a new generation building on the best of arthur's work plus some things more suited for new platforms and workloads. There are big opportunities there.

stuntprogrammer | 10 years ago | on: The K Language

For his sins, Fermin got dragged into the q/kdb+ world. I assume he's still using it at MS :-)

stuntprogrammer | 10 years ago | on: The K Language

Drop me a note (email in profile). I'm not a recruiter, these are the offers I've been contacted with, through my personal network based on past gigs.

It's not just knowing an APL though -- far from it. For the chi/nyc it's a mix of hardcore low-level stuff, and some market knowledge. For the west coast, a mix of low level stuff (less hard core) and more distributed systems stuff.

stuntprogrammer | 10 years ago | on: The K Language

It's not obvious to me that an open core business can sustain the necessary margins to be interesting as an engineering company rather than a glorified services business. There have been very few examples (a friend of mine argues it's just RedHat).

You are correct though that their model over the years has been to extract a large amount, up front, from a small number of users. They (mostly FD) have failed to make the leap to users outside finance due to what I'd call cultural reasons. They also lack strong technical leadership imnsho (they're, at heart, not an engineering company).

I'd certainly take a swing at doing an open source version for them but not clear to me that they'd know how to play it.

stuntprogrammer | 10 years ago | on: The K Language

A lot of the decisions that some might quibble with are not actually Arthur's end of things. The former CEO drove a lot of it.

stuntprogrammer | 10 years ago | on: The K Language

In the interests of transparency..

That sounds like the low-end base salary for ~entry level. It probably gets topped up with ~100k bonus. There are a couple places that hire many but don't pay well.

At what I think is the high end, I see about an offer per quarter of $500-900k base with $1-2M bonus (usually guaranteed for the first year) for pure technology role. Add more front office work and the bonus potential shoots up (but it's a tough gig at the moment at least).

For context, non-kdb principal level roles I've seen on the west coast top out at around $1M, mostly in RSUs, from, say, a $250k base at the usual names.

I've seen offers in the $1-10M range on the east coast to build competitive technology. Amusingly, on the west coast, I've only seen the stereotypical "be my technical cofounder and get screwed on comp" offers to do so.
