rademacher's comments

rademacher | 5 years ago | on: Are deep neural networks dramatically overfitted? (2019)

I haven't read this paper yet, so I can't speak to its quality, but it appears to address the same questions as this post. Bengio is a coauthor, so maybe that's a good sign. Here's the abstract:

This paper provides theoretical insights into why and how deep learning can generalize well, despite its large capacity, complexity, possible algorithmic instability, nonrobustness, and sharp minima, responding to an open question in the literature. We also discuss approaches to provide non-vacuous generalization guarantees for deep learning. Based on theoretical observations, we propose new open problems and discuss the limitations of our results.

https://arxiv.org/abs/1710.05468

rademacher | 5 years ago | on: What Data Can’t Do

The problem is that in high dimensions, knowing the distribution, or even characterizing it fully from data, is incredibly difficult (curse of dimensionality). I think the real assumption in ML is just that there is some low-dimensional space that characterizes the data well, and that ML algorithms find the directions along which the data is roughly constant.
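That low-dimensional-structure assumption is easy to see numerically. A minimal sketch (all numbers hypothetical): generate points that live near a 3-dimensional subspace of a 50-dimensional space, and the singular value spectrum exposes the hidden structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 points in 50 dimensions that actually live near a 3-dimensional
# subspace, plus small isotropic noise.
basis = rng.standard_normal((50, 3))
latent = rng.standard_normal((1000, 3))
data = latent @ basis.T + 0.01 * rng.standard_normal((1000, 50))

# The singular value spectrum reveals the low-dimensional structure:
# three large singular values, then a sharp drop to the noise floor.
s = np.linalg.svd(data - data.mean(axis=0), compute_uv=False)
print(s[:4])  # first three dwarf the fourth
```

The "directions where the data is constant" are exactly the singular vectors past the drop, where the variance is essentially noise.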

rademacher | 5 years ago | on: Julia: A Post-Mortem

Julia only hit v1.0 in 2018. It's doing pretty well for such a young language, in my opinion.

All the arguments against Julia basically boil down to Python having a lot of momentum, and switching to a new language taking time and effort. I think Julia should really seek to displace MATLAB as a near-term goal.

rademacher | 5 years ago | on: A New Satellite Can Peer Inside Buildings, Day or Night

SAR has been around since the 1970s and works on the same principles as CT scans (the projection slice theorem). Think of observing an image by rotating it and projecting it orthogonally to the rotation direction. Take a bunch of these measurements (essentially a Radon transform), and you can invert the process with the back projection algorithm.

The resolution of the image improves with the bandwidth of the waveform (range) and with the distance traversed by the satellite during the collection (cross-range, i.e., the synthetic aperture).
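The projection slice theorem underlying all of this can be verified numerically in a few lines: the 1-D FFT of a projection of the image equals the corresponding central slice of the image's 2-D FFT. A sketch for the zero-rotation projection (summing along one axis):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))  # stand-in for a scene reflectivity map

# Project the image along the y axis (the zero-rotation projection).
proj = img.sum(axis=0)

# Projection slice theorem: the 1-D FFT of that projection equals the
# ky = 0 slice of the 2-D FFT of the image.
slice_from_2d = np.fft.fft2(img)[0, :]
print(np.allclose(np.fft.fft(proj), slice_from_2d))  # True
```

Collect such slices over many rotation angles and you have sampled the full 2-D spectrum, which is why back projection can reconstruct the image.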

This article is a bit of an exaggeration and Capella is certainly not the first or only SAR service.

rademacher | 5 years ago | on: Image Scaling Attacks

Typically when you downsample, you're going to want to lowpass filter first, then apply whatever downsampling kernel you want with the correct stride. Since the filter is lowpass (think: take the Fourier transform, keep an inner smaller square of the spectrum, and invert), you can embed the poison image entirely in that retained frequency band. Now play with the power: if we downsample by a factor of 4, assume the original image keeps only a quarter of its power while the poison image, sitting entirely in the kept band, loses none. So right off the bat we're scaling up the poison image's relative power by a factor of the downsampling ratio; for example, we might go from the poison image having 1/4 the power of the true image to the two having equal power. The other aspect is that if the interpolation kernel and stride are known, we can just make sure the poison image has large values at those specific pixels and further increase the gain.
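That last point, about a known kernel and stride, is the extreme case and can be sketched directly: with nearest-neighbor downsampling, only the pixels the stride lands on survive, so a poison pattern planted at exactly those locations completely replaces the original content in the small image. A toy illustration (all values hypothetical):

```python
import numpy as np

stride = 4
big = np.zeros((64, 64))  # "clean" source image
poison = np.arange(16 * 16, dtype=float).reshape(16, 16)  # hidden target

# Plant the poison only at the pixels a stride-4 nearest-neighbor
# downsampler will sample; the other 15/16 of the pixels stay clean.
big[::stride, ::stride] = poison

small = big[::stride, ::stride]  # nearest-neighbor downsample
print(np.array_equal(small, poison))  # True: the small image IS the poison
```

At full resolution only 1 pixel in 16 is touched, so the big image still looks clean to a human, while the downsampled image is entirely attacker-controlled.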

rademacher | 5 years ago | on: Surviving Disillusionment

I think this suffices as a summary, "The other reality is the frustration and drudgery of operating in a world of corporate politics, bureaucracy, envy and greed— a world so depressing, that many people quit in frustration, never to come back."

Some of us just aren't cut out to work in a big corporate environment. From what I've seen, large technical companies are made up of two sets: the technical set and the manager/business set. Unfortunately, it seems that the manager set wields a disproportionate amount of influence and power and is therefore "valued" more. I'm sure there are smaller companies that could make the folks leaving stick around the industry. But if they've been successful and are mid-career, they may have priced themselves out of those opportunities.

rademacher | 5 years ago | on: Fourier Filtering

They use wavelet transforms in JPEG 2000. Natural images tend to be sparse with respect to wavelet transforms.
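The sparsity claim can be illustrated with a single-level 2-D Haar transform written from scratch (a simple stand-in for the biorthogonal wavelets JPEG 2000 actually uses): on a smooth image, nearly all of the energy concentrates in the low-pass subband, and the detail coefficients are close to zero.

```python
import numpy as np

def haar2d(img):
    """One level of the orthonormal 2-D Haar transform."""
    # Rows: averages and differences of adjacent pixel pairs.
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Columns: the same step applied to both halves.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)  # low-pass subband
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

# A smooth synthetic "natural" image: a slow 2-D gradient.
x = np.linspace(0, 1, 64)
img = np.outer(x, x)

ll, lh, hl, hh = haar2d(img)
detail = np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()])
energy_ratio = np.sum(detail**2) / np.sum(img**2)
print(energy_ratio)  # tiny: nearly all energy sits in the low-pass band
```

That concentration is what makes the coefficients compressible: quantize or discard the near-zero detail coefficients and you lose very little.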

rademacher | 7 years ago | on: Millennials Didn’t Kill the Economy. The Economy Killed Millennials

I think this is really a tale of two Millennials, and it's a phenomenon that actually affects all generations currently. There is a concentration of wealth in this country, and wage increases in the bottom 90% have not kept up with the cost of goods, limiting the purchasing power of this group. The issue is exacerbated by the rising cost of education, leading to higher levels of debt for Millennials.

Anecdotally, a lot of new grads with technical degrees are getting offers around $200k in the Bay Area and Seattle (these are the other Millennials). Within 5 years they're making more than many boomers ever did over a very successful 30+ year career. Clearly this is only a small subset of Millennials, but it is an example of concentration. I'm not sure whether this discrepancy between "classes" of new grads existed previously; does anyone have insight?

rademacher | 7 years ago | on: AlphaFold at CASP13: What just happened?

The author does make a point of discussing what business a team like DeepMind has researching the folding problem. The solution is of no apparent value to the parent company Alphabet, and yet they were still funded. Perhaps this has to do with the attitudes or values of "modern" tech companies? Historically, there seems to have been a cyclical nature to the volume of basic research in industry, peaking with Bell Labs, sinking with the rise of Welch, and now coming back with the Googs and Facebooks.

rademacher | 7 years ago | on: Nasdaq Acquires Quandl to Advance the Use of Alternative Data

Isn't there utility in accepting the null hypothesis? It's almost as valuable to know that there is no signal in the data as to know the opposite, i.e., knowing where not to look for information.

I think your example is really justifying a "machine learner" that has some domain expertise and doesn't blindly apply algorithms to some array of numbers.

rademacher | 7 years ago | on: Pythran as a bridge between fast prototyping and code deployment

Here is a reference for the comment above, with a brief excerpt:

"Written in the productivity language Julia, the Celeste project—which aims to catalogue all of the telescope data for the stars and galaxies in in the visible universe—demonstrated the first Julia application to exceed 1 PF/s of double-precision floating-point performance (specifically 1.54 PF/s)." [1]

[1] https://www.nextplatform.com/2017/11/28/julia-language-deliv...

rademacher | 7 years ago | on: Pythran as a bridge between fast prototyping and code deployment

I used to use only MATLAB, which for a lot of research code is actually nice for getting a prototype running quickly. Now that I have the freedom to choose, I typically use Julia, as I'm trying to gain skills in open-source languages that are actually valuable in the job market. The choice of Julia over Python is probably due to my nature of going against the grain, which flies in the face of my previous point.

Was there anything specific that turned you off from Julia?

rademacher | 7 years ago | on: Deep learning pioneer Yoshua Bengio is worried about AI’s future

So he's saying that concentration of "wealth" is bad, and war is bad.

A quick glance at the definition of "moral" gives "a person's standards of behavior or beliefs concerning what is and is not acceptable for them to do," which suggests that morals may be fluid. Are killer robots necessarily any less moral than killer humans? We seek to replace humans with "robots" in many cases under the assumption that they perform better. I suppose in the case of killer robots this could mean more effective killing, or perhaps it could mean more accurate strikes and fewer civilian casualties? (I'm not saying I'm an advocate for military AI, just posing some questions.)

Finally, suggesting that we need to focus less on incremental progress when DL still isn't completely understood seems premature. I'm not sure another great leap in AI is on the horizon until a leap in computational power or a new framework is discovered.
