tarxzvf's comments
tarxzvf | 3 years ago | on: Dall-E 2 illustrations of Twitter bios
tarxzvf | 4 years ago | on: It looks like you’re trying to take over the world
- Replace 'AI' with 'God'; does the argument still make sense?
- Exponents still take time: roughly 33 doublings (2^33 ≈ 8.6 billion) just to reach the current world population, and that assumes no hitches along the way.
- Solomonoff / Bekenstein / Gödel - name your favorite limiting theorem.
- For any optimization method we can literally construct a learning problem that it can never successfully learn (a sketch follows this list). Take it a step further and you have a communication channel where the AI listens to everything and understands nothing.
- Was any force ever able to get close to world domination? At one point in history the US had nuclear weapons and no one else did. Was that edge enough?
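Here's a minimal sketch of the construction from the fourth bullet (the names and setup are mine, purely illustrative): against any deterministic predictor, diagonalize by always emitting the opposite of its next guess.

    # Hypothetical sketch: build a bit sequence that a given deterministic
    # predictor gets wrong on every single step.
    def adversarial_sequence(predictor, length=20):
        """`predictor` maps the list of past bits to a guess in {0, 1}."""
        history = []
        for _ in range(length):
            guess = predictor(history)   # ask the learner what comes next
            history.append(1 - guess)    # emit exactly the opposite bit
        return history

    # Example victim: predict the majority bit seen so far.
    def majority_vote(history):
        return int(sum(history) > len(history) / 2)

    seq = adversarial_sequence(majority_vote)
    errors = sum(majority_vote(seq[:i]) != bit for i, bit in enumerate(seq))
    print(f"{errors}/{len(seq)} mispredictions")   # 20/20: it learns nothing

Swap in any deterministic learner and the result is identical, which is exactly the channel that listens to everything and understands nothing.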
When we get closer to manufacturing universal intelligence, its more impressive incarnations will look more like countries and corporations than omnipotent deities. The problems we'll have to face will have more to do with consciousness and human rights than with alignment. Alignment is really more about automation at incomprehensible scale, where the clash between dimensionality reduction and Goodhart's Law becomes absurd.
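To make that last clash concrete, here is a toy construction of my own (nothing canonical about the numbers): hill-climb a one-dimensional proxy of a hundred-dimensional objective and watch the unmeasured dimensions drift.

    import numpy as np

    rng = np.random.default_rng(0)

    def true_score(x):
        # What we actually care about: progress on x[0], with the other,
        # unmeasured dimensions kept near zero.
        return x[0] - np.sum(x[1:] ** 2)

    def proxy_score(x):
        # The one-dimensional measurement the optimizer actually sees.
        return x[0]

    x = np.zeros(100)
    for step in range(1, 5001):
        candidate = x + rng.normal(scale=0.1, size=100)
        if proxy_score(candidate) > proxy_score(x):   # accept iff the proxy improves
            x = candidate
        if step % 1000 == 0:
            print(f"step {step}: proxy={proxy_score(x):8.2f}  true={true_score(x):10.2f}")

The proxy climbs steadily while the true score plunges: every accepted step also random-walks the 99 dimensions the metric never sees. Goodhart's Law in miniature.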
tarxzvf | 4 years ago | on: Stages of Denial
tarxzvf | 4 years ago | on: Difficult math is about recognizing patterns
"Both these properties, predictability and stability, are special to integrable systems... Since classical mechanics has dealt exclusively with integrable systems for so many years, we have been left with wrong ideas about causality. The mathematical truth, coming from non-integrable systems, is that everything is the cause of everything else: to predict what will happen tomorrow, we must take into account everything that is happening today.
Except in very special cases, there is no clear-cut "causality chain," relating successive events, where each one is the (only) cause of the next in line. Integrable systems are such special cases, and they have led to a view of the world as a juxtaposition of causal chains, running parallel to each other with little or no interference."
- Ivar Ekeland
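A concrete stand-in for the point (the logistic map is my choice of example, not Ekeland's): in a chaotic, non-integrable system, two presents that agree to ten decimal places produce unrecognizably different futures within a few dozen steps.

    # The logistic map at r = 4 is a standard chaotic toy system.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.3, 0.3 + 1e-10   # two "todays" that agree to ten decimal places
    for step in range(60):
        if abs(a - b) > 0.5:
            print(f"trajectories fully decorrelated by step {step}")
            break
        a, b = logistic(a), logistic(b)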
tarxzvf | 4 years ago | on: The Local Minima of Suckiness
tarxzvf | 4 years ago | on: Fooling Neural Networks [pdf]
This paper provides some interesting results on the weakness inherent in universal priors: https://arxiv.org/abs/1510.04931
tarxzvf | 4 years ago | on: Fooling Neural Networks [pdf]
In short, it's not only that you can devise adversarial examples that find the blind spots of the function approximator and fool it into mispredicting; it's that for any optimization-based learning algorithm you can abuse its priors and biases to construct an environment in which it will perform terribly. This is a fundamental, inherent feature of how we go about machine learning, by equating it with optimizing functions, and we will need a paradigm shift to get around it.
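To see the blind-spot half on the simplest possible function approximator, here is a hedged sketch with a linear classifier (this is the textbook linearity story, not anything specific to the paper): in high dimension, a tiny per-coordinate nudge aligned against the weights is enough to flip the prediction.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 1000
    w = rng.normal(size=d)   # weights of a "trained" linear classifier
    x = rng.normal(size=d)   # an input, classified by sign(w @ x)

    margin = w @ x
    # Per-coordinate budget just large enough to flip the sign; it is tiny
    # because d small, coordinated changes add up to a large dot product.
    eps = 1.1 * abs(margin) / np.abs(w).sum()
    x_adv = x - eps * np.sign(w) * np.sign(margin)

    print(f"per-coordinate perturbation: {eps:.5f}")
    print(f"clean margin: {margin:+.3f}   adversarial margin: {w @ x_adv:+.3f}")

The second half of the claim, abusing the learner's priors to build a hostile environment, is the diagonalization story from the universal-priors paper I linked in the other comment.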
It's curious to me how most of these results have been known for decades, yet most researchers seem dead set on ignoring them.