throwmeawaysoon's comments

throwmeawaysoon | 4 years ago | on: AMD Receives Approval for Acquisition of Xilinx

>come up with a "better" (performant, cheaper, easier to use, etc.) solution than GPUs for ML applications

you're probably aware, but xilinx themselves are attempting this with their versal aie boards, which are (in spirit) similar to GPUs in that they group together a fabric of programmable SIMD-type compute cores.

https://www.xilinx.com/support/documentation/architecture-ma...

i have not played with one, but i've been told (by a xilinx person, so take it with a grain of salt) that the flow from high-level representation down to that arch is more open

https://github.com/Xilinx/mlir-aie

throwmeawaysoon | 4 years ago | on: AMD Receives Approval for Acquisition of Xilinx

>Lattice has been by far the favorite of the FOSS community

i'm interested in the OSS flows but i haven't dug in yet. so some questions (if you have experience): isn't it only for their ice40 chips? and how smooth is the flow from RTL to bitstream to deployment?

one hesitation i have with jumping in is that i'm working on accelerator type stuff, so my designs typically need on the order of 30k-50k LUTs. will yosys+nextpnr let me deploy such a design to some chip?
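for context, my (untested, secondhand) understanding of the oss ice40 flow, where `top.v`/`top.pcf` are placeholder file names and the part flags depend on your actual board:

```shell
# synthesize with yosys targeting ice40, emitting the netlist as json
yosys -p 'synth_ice40 -top top -json top.json' top.v
# place-and-route with nextpnr (hx8k/ct256 here are placeholder part/package)
nextpnr-ice40 --hx8k --package ct256 --json top.json --pcf top.pcf --asc top.asc
# pack the textual asc into a bitstream and flash it over usb
icepack top.asc top.bin
iceprog top.bin
```

(the hx8k tops out around 7.6k LUTs, so a 30k-50k LUT design would need something bigger like an ecp5, which nextpnr also targets.)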

throwmeawaysoon | 4 years ago | on: AMD Receives Approval for Acquisition of Xilinx

this is true in general but

1) vivado webpack edition (i.e. free) lets you generate (and flash) a bitstream for some of the smaller chips. i know it at least works for the artix-7 family because i've been doing it every day lately

2) for the artix-7 (and some lattice chips) you supposedly can use OSS (https://github.com/SymbiFlow/prjxray). i haven't tried it yet, but one problem i can foresee is that the OSS tools won't infer primitives like brams and dsp blocks. in fact the symbiflow people (i think?) explicitly call this out as a work-in-progress part of the project.

some useful links:

https://arxiv.org/abs/1903.10407

https://github.com/YosysHQ/nextpnr

https://www.rapidwright.io/

throwmeawaysoon | 4 years ago | on: Launch HN: OneChronos (YC S16) – Combinatorial auctions market for US equities

>We've developed the tech in house

i'm not often impressed but that's quite impressive. kudos to you.

i currently work on deep learning compilers (as a phd student) but i'm interested in basically all of these things (compilers, combinatorial optimization, auction theory). i know lpage mentioned that you're hiring, but i'm curious what roles you're hiring for (your careers page is light on details).
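for anyone unfamiliar with why a combinatorial auction needs heavy optimization machinery: the winner-determination problem (pick the revenue-maximizing set of non-overlapping bundle bids) is NP-hard in general. a toy brute-force sketch, my own illustration with made-up numbers, nothing to do with onechronos' actual matching engine:

```python
from itertools import combinations

def winner_determination(bids):
    """Brute-force winner determination for a combinatorial auction:
    choose the value-maximizing subset of bids whose bundles are disjoint
    (each item can be sold at most once).
    bids: list of (bundle: frozenset, price: float)."""
    best_value, best_sel = 0.0, ()
    n = len(bids)
    for r in range(1, n + 1):
        for sel in combinations(range(n), r):
            bundles = [bids[i][0] for i in sel]
            # disjointness check: total item count equals size of the union
            if sum(len(b) for b in bundles) != len(frozenset().union(*bundles)):
                continue
            value = sum(bids[i][1] for i in sel)
            if value > best_value:
                best_value, best_sel = value, sel
    return best_value, best_sel

# toy example over three items A, B, C
bids = [
    (frozenset("AB"), 10.0),   # bundle bid on A+B together
    (frozenset("C"), 4.0),
    (frozenset("ABC"), 12.0),  # all-or-nothing bid on everything
    (frozenset("A"), 6.0),
    (frozenset("B"), 5.0),
]
print(winner_determination(bids))  # → (15.0, (1, 3, 4))
```

note the exponential loop over bid subsets; real exchanges solve this with ILP/branch-and-bound instead, which is presumably part of the "deep" ip.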

throwmeawaysoon | 4 years ago | on: Launch HN: OneChronos (YC S16) – Combinatorial auctions market for US equities

i'm not an economist or a game theorist so i don't remember the details, but this paper talks about how certain market designs lead to untruthful bidding, albeit in the context of second-price auctions:

https://www.cs.cmu.edu/~sandholm/vickrey.IJEC.pdf

lpage might be alluding to something to do with their proxy bidder implementation, but the above paper actually discusses how proxy bidders themselves can lead to untruthful bidding (so maybe lpage is suggesting their implementation is better?).
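fwiw, the textbook baseline: in a single-item sealed-bid second-price auction, bidding your true value is weakly dominant (the interesting part of the paper is where this breaks down). a minimal sketch, my own toy with made-up values:

```python
def second_price_utility(my_bid, my_value, other_bids):
    """Utility of bidding my_bid in a sealed-bid second-price auction
    when my true value is my_value: you win iff your bid tops all
    others, and you pay the highest competing bid, not your own."""
    highest_other = max(other_bids)
    if my_bid > highest_other:
        return my_value - highest_other  # winner pays second price
    return 0.0  # lose: no payment, no utility

# sweep over deviations in one scenario: no deviation beats truthfulness
my_value, others = 10.0, [7.0, 4.0]
truthful = second_price_utility(my_value, my_value, others)
assert all(second_price_utility(b, my_value, others) <= truthful
           for b in [0.0, 3.0, 6.9, 7.1, 9.0, 11.0, 50.0])
print(truthful)  # → 3.0 (win, pay the second price of 7)
```

the intuition is that your bid only decides whether you win, never what you pay, so shading it can only cost you trades you wanted.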

throwmeawaysoon | 4 years ago | on: Launch HN: OneChronos (YC S16) – Combinatorial auctions market for US equities

poking around on your socials, it seems like you've been building for ~5 years and are just now officially launching, after (i'm guessing) a capital injection from yc.

since the core ip is "deep" as you say, i'm guessing it cost quite a bit to develop, unless you built all of the components yourselves, which, while possible, seems unlikely given the technical complexity of each piece (you and whoever else is on the engineering team seem smart, but this looks like "research edge" tech along several dimensions).

so i'm curious whether you paid the development costs up front (either with your own money or FFF) or validated and raised in small pieces. if the latter, i'm curious how one does that for such a complex product/service.

lots of assumptions in the above - feel free to disabuse me of my ignorance.
