wall_words's comments
wall_words | 10 years ago | on: Leaf: Machine learning framework in Rust
wall_words | 10 years ago | on: Returning multiple values from functions in C++
wall_words | 10 years ago | on: Benchmarks for Blaze, A high-performance C++ math library
wall_words | 11 years ago | on: Show HN: Paperman – LaTeX editor with a Markdown feel
wall_words | 11 years ago | on: Sile, a typesetting system inspired by TeX and InDesign
The syntax is very similar to LaTeX, but it's more modern in that it has native support for fonts and images, and uses Lua as its scripting language.
wall_words | 11 years ago | on: A visual proof that neural nets can compute any function
What are you talking about? Deep learning is one of the hottest areas of research today, and much of it centers on neural networks. NNs are the state of the art in several domains. Case in point: http://image-net.org/challenges/LSVRC/2014/results. Every one of the top entries uses convolutional networks; in fact, almost all of the entries do.
The fact that the loss function of a neural network can be highly nonconvex is part of what makes these models so effective in the domains where they are used. See this presentation by Yann LeCun for more info: http://www.cs.nyu.edu/~yann/talks/lecun-20071207-nonconvex.p...
"ML theory has essentially never moved beyond convex models, the same way control theory has not really moved beyond linear systems. Often, the price we pay for insisting on convexity is an unbearable increase in the size of the model, or the scaling properties of the optimization algorithm ... This is not by choice: nonconvex models simply work better. Have you tried acoustic modeling in speech with a convex loss? ... To learn hierarchical representations (low-level features, mid- level representations, high-level concepts....), we need “deep architectures”. These inevitably lead to non-convex loss functions."
This isn't to say that NNs are going to solve all our problems, but claiming there has been a shift in interest away from NNs is absurd.
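The nonconvexity LeCun is talking about shows up even in toy cases. A minimal sketch (plain Python, with hypothetical weights chosen purely for illustration): a two-hidden-unit tanh network has a permutation symmetry, so swapping the hidden units gives two distinct zero-loss minima. The midpoint between those minima has strictly higher loss, which a convex loss surface could never allow.

```python
import math

def net(x, w):
    # Two tanh hidden units with fixed output weights of 1.0 each.
    a, b = w
    return math.tanh(a * x) + math.tanh(b * x)

def loss(w, data):
    # Sum-of-squares regression loss over the dataset.
    return sum((net(x, w) - y) ** 2 for x, y in data)

# Targets generated by the (arbitrary) weight vector (2.0, -1.0).
data = [(x / 10, net(x / 10, (2.0, -1.0))) for x in range(-20, 21)]

w1 = (2.0, -1.0)   # a global minimum: loss is exactly 0
w2 = (-1.0, 2.0)   # swapping the hidden units gives a second minimum
mid = (0.5, 0.5)   # midpoint of the two minima

# Convexity would require loss(mid) <= (loss(w1) + loss(w2)) / 2 = 0,
# but the midpoint computes a different function and its loss is positive.
print(loss(w1, data), loss(w2, data), loss(mid, data))
```

Since the loss violates the midpoint inequality between two of its own minima, it cannot be convex, and this is with just two hidden units; deep architectures only compound the effect.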
wall_words | 11 years ago | on: Early Modern Recipes (1600-1800) in a Modern Kitchen
It's remarkable that English written some four centuries ago is still more or less comprehensible, albeit with a little effort.
wall_words | 11 years ago | on: Conrod – A Rust GUI Library
C++11 and C++14 have done a lot to let users write clean, efficient code. I probably would not be using C++ today if I were forced to write in C++03 style. I'm still not convinced that C++ has gotten any easier to learn over time, but for those who know how to use it well, few languages are better for writing clean, efficient, and portable code.
It's good to see alternatives to Torch, Theano, and TensorFlow, but it's important to be honest with the benchmarks so that people can make informed decisions about which framework to use.