superfx's comments

superfx | 6 years ago | on: HK's extradition law: Not just HK people have reason to fear Chinese “justice”

This strikes me as the quintessential problem with autocracies. Sometimes one gets extremely efficient governments in the short term, when the autocrat is competent and not entirely corrupt, but in the long run whatever short term gains were had are squandered by corruption and greed. Democracy is inefficient in the short term but efficient in the long run.

superfx | 7 years ago | on: The West Coast is beating the East Coast on transportation?

"E-scooters aren't a reliable way to get anywhere yet, and who knows if they'll ever be, not to mention that they are not for everyone. My grandmother is not going to ride one -- nor my wife, for that matter, nor should the kids. But the Subway is a common denominator."

I used to think so, but some European cities really do offer counterexamples. I'm thinking of places like Munich, Vienna, and Copenhagen. It's not uncommon to see people there who, by American stereotypes, wouldn't be expected to ride scooters: moms with kids, men in suits, etc. Perhaps the urban cultural gap is so vast that what you're saying is indeed true of the US, but I wouldn't take it as a given.

superfx | 7 years ago | on: Learning Dexterity

It looks that way because they're moving rapidly from one face configuration to another. But there's no way that's happening at random. I would guess that even just holding the cube steady in a dynamic grip is quite difficult.

superfx | 8 years ago | on: End-to-end differentiable learning of protein structure

I would say the biggest thing is obviously the architecture: coupling LSTMs with the geometric units that spit out the actual 3D structure, which can then be directly optimized via the dRMSD loss function. That's the biggest point of distinction from everything else out there (no contact map prediction, etc.). So it really is about end-to-end differentiability IMO, which hasn't been done before.

As for why it took so long, it is and it is not fine-tuning. Getting RGNs to train _at all_ was a rather difficult process, and required a lot of fiddling around. But since I got them working, I haven't actually spent all that much time fine-tuning them, and so I expect there to be a lot of low-hanging fruit in terms of optimizing performance (starting from the baseline I found).
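(For readers unfamiliar with dRMSD: it scores a predicted structure against the native one using only pairwise internal distances, which makes it invariant to rigid-body rotation/translation and differentiable with respect to the coordinates, so it can serve as a training loss. A minimal NumPy sketch of the general idea, not the paper's actual implementation:)

```python
import numpy as np

def drmsd(pred, true):
    """Distance-based RMSD between two (N, 3) coordinate arrays.

    Compares the pairwise-distance matrices of the two structures, so
    the score is unchanged by rotating or translating either one.
    """
    # Pairwise distance matrices via broadcasting: (N, N)
    d_pred = np.linalg.norm(pred[:, None, :] - pred[None, :, :], axis=-1)
    d_true = np.linalg.norm(true[:, None, :] - true[None, :, :], axis=-1)
    # Use each unordered pair once (upper triangle, excluding diagonal)
    iu = np.triu_indices(pred.shape[0], k=1)
    return np.sqrt(np.mean((d_pred[iu] - d_true[iu]) ** 2))
```

Because it is built from differentiable operations on the coordinates, the same computation written in an autodiff framework gives gradients that flow back through the geometric units into the LSTM.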

superfx | 8 years ago | on: End-to-end differentiable learning of protein structure

Re drug discovery: oftentimes in “rational” drug design, medicinal chemists try to make small molecules that bind snugly into a binding pocket on the protein. Having the structure of the protein aids greatly in that process.

superfx | 8 years ago | on: End-to-end differentiable learning of protein structure

I do think, however, that protein folding is very much understudied in the ML community, relative to, say, the big three of vision, NLP, and speech. The lack of standardized data sets and benchmarks, not to mention the need for domain knowledge, has made it difficult to get into the field.

superfx | 8 years ago | on: End-to-end differentiable learning of protein structure

Hi! I’m the author of the paper. Not sure why you say Rosetta isn’t mentioned? It’s extensively referenced throughout the paper, discussed in the discussion section, and is one of the top-5 CASP servers compared against in the results section.

Also as for how it’s different from what’s described in the paper, that’s the topic of the introduction of the paper. Rosetta uses both fragment assembly and co-evolution methods.

superfx | 8 years ago | on: Andrew Ng is raising a $150M AI Fund

I took CS221 from Andrew in 2006 (or was it 2007?). Even more has changed since then ;-) It was my second ML course, after taking Daphne Koller's punishing CS229. Even back then, I knew ML would sweep the world pretty soon.