latently's comments

latently | 8 years ago | on: The Limitations of Deep Learning

The brain is a dynamic system and (some) neural networks are also dynamic systems, and a neural network with a single hidden layer can approximate any continuous function arbitrarily well (the universal approximation theorem). Thus, a neural network can in principle approximate brain function arbitrarily well given enough time and space. Whether that simulation is conscious is another story.
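As an illustrative sketch of that approximation claim (my toy example, not from the comment): a single hidden layer of randomly initialized tanh units, with only the output weights fit by least squares, can already approximate a smooth target like sin(x) closely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function to approximate on [-3, 3]
x = np.linspace(-3, 3, 500)[:, None]
y = np.sin(x).ravel()

# One hidden layer: random tanh features (weights and biases fixed at random)
n_hidden = 200
W = rng.normal(scale=2.0, size=(1, n_hidden))
b = rng.normal(scale=2.0, size=n_hidden)
H = np.tanh(x @ W + b)  # hidden activations, shape (500, n_hidden)

# Fit only the output layer by linear least squares
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ coef

mse = np.mean((y_hat - y) ** 2)
print(f"MSE: {mse:.2e}")
```

With more hidden units (and trained hidden weights) the same construction extends to any continuous target on a compact domain, which is all the theorem promises.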

The Computational Cognitive Neuroscience Lab has been studying this topic for decades and has an online textbook here:

http://grey.colorado.edu/CompCogNeuro

The "emergent" deep learning simulator is focused on using these kinds of models to model the brain:

http://grey.colorado.edu/emergent

latently | 8 years ago | on: Higher-level causation exists (but I wish it didn’t)

An interesting technicality from the post and paper is that the measure of causal information (mutual information between the initial and final state) bears some resemblance to the Lyapunov exponent as it is used to measure whether a system is on the edge of chaos. When the largest exponent is 0, the system neither diverges nor converges exponentially when the initial conditions are perturbed slightly; the system is then said to be on the edge of chaos and to have good generalization ability. Anywhere else the system is either damped or chaotic, and you don't expect "interesting" stuff to happen there, such as higher-order "causal" effects. (Seriously though, why are people so obsessed with causality when it's clear that there is almost never one "causal" description? Let it go!)
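To make the damped/edge-of-chaos/chaotic distinction concrete, here is a sketch (my illustration, unrelated to the paper's measure) estimating the Lyapunov exponent of the logistic map x → r·x·(1−x): negative in the damped regime, near 0 at the onset of chaos, and positive (about ln 2 at r = 4) in the chaotic regime.

```python
import math

def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=10000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the orbit average of log|f'(x)|, where f'(x) = r*(1 - 2x)."""
    x = x0
    for _ in range(n_transient):  # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter

print(lyapunov_logistic(2.9))  # damped regime: negative exponent
print(lyapunov_logistic(4.0))  # chaotic regime: close to ln 2
```

The edge of chaos sits where this estimate crosses zero as r is swept upward.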

latently | 8 years ago | on: Banning exploration in my infovis class

The word explore is actually great in a data analysis context. The notions of exploratory vs confirmatory analysis are widely used, and exploratory means exactly what your students think it means. Just make sure they don't explore all of the data at once, otherwise they will have to go collect more so that they can confirm what they found when they were exploring.
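One way to operationalize that advice (my sketch, not the author's): partition the data once, up front, into an exploration set and a held-out confirmation set, and touch the confirmation set only to test hypotheses formed during exploration.

```python
import random

def split_explore_confirm(records, explore_frac=0.5, seed=42):
    """Split the data once: explore freely on one part, and reserve the
    other part untouched for confirming whatever the exploration finds."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * explore_frac)
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
explore, confirm = split_explore_confirm(data)
print(len(explore), len(confirm))  # 50 50
```

The fixed seed makes the split reproducible, so nobody can quietly re-deal the data after peeking.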

latently | 8 years ago | on: Have We Forgotten about Geometry in Computer Vision?

In my opinion you are overconfident in the foundations of mathematics. Like deep learning models, math works. Why and how it works is open to interpretation in both cases; we don't have a complete understanding of either. It is that lack of complete understanding that makes each a black box.

latently | 9 years ago | on: Ask HN: Who is hiring? (March 2017)

Latently | Deep Learning Engineer | Boulder, CO | REMOTE

Latently is a stealth-mode, pre-revenue startup looking for engineers who want to earn sweat equity alongside the founders. We have substantial hardware support through IBM's Global Entrepreneur program, and you'll be training state-of-the-art recurrent neural networks on unstructured text on a sweet GPU cluster. If you have the luxury of time, this is a great opportunity. Send resumes to [email protected]

latently | 9 years ago | on: Data on the uselessness of LinkedIn endorsements

A better way to do this analysis would have been to create an extremely sparse matrix with one column per endorsement category, where each value is the (normalized) number of endorsements, and then try to predict various aspects of coding performance from it.
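A minimal sketch of that setup, with made-up categories and counts (the real LinkedIn data isn't available here): build a sparse person-by-category count matrix and row-normalize it, ready to feed into any regressor against a measured performance score.

```python
import numpy as np
from scipy import sparse

# Hypothetical endorsement categories and per-person counts, for illustration
categories = ["python", "machine-learning", "sql", "leadership"]
cat_index = {c: i for i, c in enumerate(categories)}

people = [
    {"python": 12, "sql": 3},
    {"machine-learning": 7, "leadership": 1},
    {"python": 2, "machine-learning": 2, "sql": 9},
]

# Assemble the sparse matrix in COO form, then convert to CSR
rows, cols, vals = [], [], []
for r, endorsements in enumerate(people):
    for cat, count in endorsements.items():
        rows.append(r)
        cols.append(cat_index[cat])
        vals.append(count)

X = sparse.csr_matrix((vals, (rows, cols)),
                      shape=(len(people), len(categories)), dtype=float)

# Normalize each row so heavily-endorsed profiles don't dominate
row_sums = np.asarray(X.sum(axis=1)).ravel()
X_norm = sparse.diags(1.0 / row_sums) @ X

print(X_norm.toarray())
```

From here, regressing a coding-performance measure on `X_norm` would test whether any endorsement category carries signal at all.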

Definitely wouldn't endorse the author for machine learning :)
