rademacher | 5 years ago | on: Thanks for the Bonus, I Quit
rademacher's comments
rademacher | 5 years ago | on: Are deep neural networks dramatically overfitted? (2019)
This paper provides theoretical insights into why and how deep learning can generalize well despite its large capacity, complexity, possible algorithmic instability, non-robustness, and sharp minima, responding to an open question in the literature. The authors also discuss approaches to provide non-vacuous generalization guarantees for deep learning and, based on their theoretical observations, propose new open problems and discuss the limitations of their results.
rademacher | 5 years ago | on: The matrix calculus you need for deep learning (2018)
[1] https://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf
rademacher | 5 years ago | on: What Data Can’t Do
rademacher | 5 years ago | on: Julia: A Post-Mortem
All the arguments against Julia are basically that Python has a lot of momentum and it takes time and effort to switch to a new language. I think Julia should really seek to displace MATLAB as a near-term goal.
rademacher | 5 years ago | on: Julia adoption keeps climbing
rademacher | 5 years ago | on: A New Satellite Can Peer Inside Buildings, Day or Night
The range resolution of the image is inversely proportional to the bandwidth of the waveform, and the azimuth (cross-range) resolution is inversely proportional to the distance the satellite traverses during collection (the synthetic aperture length).
This article is a bit of an exaggeration and Capella is certainly not the first or only SAR service.
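The scaling above can be sketched with the standard back-of-the-envelope SAR formulas; the system parameters below are hypothetical illustrative values, not Capella's actual specs:

```python
# Rough SAR resolution estimates.
# Slant-range resolution:   dr = c / (2 * B)        (finer with more bandwidth)
# Azimuth resolution:       da = lambda * R / (2 * L) (finer with a longer aperture)
c = 3e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Slant-range resolution in meters for a waveform of the given bandwidth."""
    return c / (2 * bandwidth_hz)

def azimuth_resolution(wavelength_m, slant_range_m, aperture_m):
    """Cross-range resolution in meters for a given synthetic aperture length."""
    return wavelength_m * slant_range_m / (2 * aperture_m)

# Hypothetical X-band system: 500 MHz bandwidth, ~3.1 cm wavelength,
# 600 km slant range, 5 km synthetic aperture.
print(range_resolution(500e6))                 # 0.3 m
print(azimuth_resolution(0.031, 600e3, 5e3))   # 1.86 m
```

Doubling the bandwidth halves the range resolution cell, and flying a longer synthetic aperture shrinks the azimuth cell, which is why collection geometry matters as much as the radar hardware.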
rademacher | 5 years ago | on: Image Scaling Attacks
rademacher | 5 years ago | on: Surviving Disillusionment
Some of us just aren't cut out to work in a big corporate environment. From what I've seen, large technical companies are made up of two sets: the technical set and the manager/business set. Unfortunately, the manager set seems to wield a disproportionate amount of influence and power and is therefore "valued" more. I'm sure there are smaller companies that could persuade the folks who are leaving to stick around the industry. But if they've been successful and are mid-career, they may have priced themselves out of those opportunities.
rademacher | 5 years ago | on: A Programmer’s Intuition for Matrix Multiplication
rademacher | 5 years ago | on: Fourier Filtering
rademacher | 7 years ago | on: Millennials Didn’t Kill the Economy. The Economy Killed Millennials
Anecdotal evidence suggests that a lot of new grads with technical degrees are getting offers around $200k in the Bay Area and Seattle (these are the other Millennials). Within five years they're making more than many boomers ever did over a very successful 30+ year career. Clearly this is only a small subset of Millennials, but it is an example of income concentration. I'm not sure whether this discrepancy between "classes" of new grads existed previously; does anyone have insight?
rademacher | 7 years ago | on: Vinod Khosla is willing to litigate California’s coast for the rest of his life
[1] https://www.theatlantic.com/family/archive/2018/12/rich-peop...
rademacher | 7 years ago | on: AlphaFold at CASP13: What just happened?
rademacher | 7 years ago | on: Nasdaq Acquires Quandl to Advance the Use of Alternative Data
I think your example is really justifying a "machine learner" that has some domain expertise and doesn't blindly apply algorithms to some array of numbers.
rademacher | 7 years ago | on: Pythran as a bridge between fast prototyping and code deployment
"Written in the productivity language Julia, the Celeste project—which aims to catalogue all of the telescope data for the stars and galaxies in the visible universe—demonstrated the first Julia application to exceed 1 PF/s of double-precision floating-point performance (specifically 1.54 PF/s)." [1]
[1] https://www.nextplatform.com/2017/11/28/julia-language-deliv...
rademacher | 7 years ago | on: Pythran as a bridge between fast prototyping and code deployment
Was there anything specific that turned you off from Julia?
rademacher | 7 years ago | on: Pythran as a bridge between fast prototyping and code deployment
The Julia language was designed to address the two-language problem, and at least from these benchmarks it looks pretty competitive [1]. I imagine that over time Pythran may fix some of its limitations and beat Julia in most benchmarks.
[1] https://github.com/fluiddyn/BenchmarksPythonJuliaAndCo/tree/...
rademacher | 7 years ago | on: Deep learning pioneer Yoshua Bengio is worried about AI’s future
rademacher | 7 years ago | on: Deep learning pioneer Yoshua Bengio is worried about AI’s future
A quick glance at the definition of "moral" gives "a person's standards of behavior or beliefs concerning what is and is not acceptable for them to do," which suggests that morals may be fluid. Are killer robots necessarily any less moral than killer humans? We seek to replace humans with "robots" in many cases under the assumption that they perform better. I suppose in the case of killer robots this could mean more effective killing, or perhaps it could mean more accurate strikes and fewer civilian casualties? (I'm not saying I'm an advocate for military AI, just posing some questions.)
Finally, suggesting that we need to focus less on incremental progress when DL still isn't completely understood seems premature. I'm not sure another great leap in AI is on the horizon until there is a leap in computational power or a new framework is discovered.