tarxzvf's comments

tarxzvf | 3 years ago | on: Dall-E 2 illustrations of Twitter bios

This is impressive. Yet before you go "AGI is nigh," ask yourself a simple question: will this spiral in or spiral out? If we feed everything the model comes up with back as training data, will we get Endless Forms Most Beautiful or will we get an equilibrium?
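
A toy version of the "spiral in" case, assuming the simplest possible setup (fit a Gaussian, sample from the fit, refit on the samples; the sample size and generation count are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(42)

    n = 20                 # samples per "generation"
    mu, sigma = 0.0, 1.0   # start from the real distribution

    for gen in range(1, 201):
        data = rng.normal(mu, sigma, size=n)  # model output becomes training data
        mu, sigma = data.mean(), data.std()   # refit on our own output
        if gen % 50 == 0:
            print(f"generation {gen:3d}: sigma = {sigma:.2e}")

With finite samples the fitted variance drifts downward generation after generation, so the loop contracts toward a point: an equilibrium, not Endless Forms Most Beautiful.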

tarxzvf | 4 years ago | on: It looks like you’re trying to take over the world

Yes. The only force we know of that achieves this is undirected, and no single part of it stays on top for long. Contrast that with superintelligence: a single entity that does not evolve but optimizes in a directed way.

tarxzvf | 4 years ago | on: It looks like you’re trying to take over the world

Honestly astonished that superintelligence is a mainstream idea. The story it tells makes sense only if you never bother to dig beneath its surface.

- Replace 'AI' with 'God': does it still make sense?

- Exponentials still take time. Even with no hitches, it takes 33 doublings (2^33 ≈ 8.6 billion) to get from one individual to the current world population.

- Solomonoff / Bekenstein / Gödel - name your favorite limiting theorem.

- For any optimization method we can literally construct a learning problem that it can never successfully learn (a toy construction follows this list). Take it a step further and you have a communication channel where the AI listens to everything and understands nothing.

- Was any force ever able to get close to world domination? At one point in history the US had nuclear weapons and no one else did. Was that edge enough?
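
The toy construction promised above, as a minimal Python sketch. The learner and its predict/update interface are hypothetical stand-ins, but the diagonalization defeats any deterministic predictor:

    def adversarial_sequence(learner, xs):
        """Let the learner commit first, then set each true label to the
        opposite of its prediction: 100% error, by construction."""
        labels = []
        for x in xs:
            guess = learner.predict(x)   # learner commits first
            truth = 1 - guess            # environment answers adversarially
            learner.update(x, truth)
            labels.append(truth)
        return labels

    class MajorityLearner:
        """A toy deterministic learner: predicts the majority label so far."""
        def __init__(self):
            self.ones = 0
            self.total = 0
        def predict(self, x):
            return 1 if self.total and 2 * self.ones > self.total else 0
        def update(self, x, y):
            self.ones += y
            self.total += 1

    # Every single prediction is wrong; swap in any deterministic learner
    # and the same environment-builder defeats it.
    adversarial_sequence(MajorityLearner(), range(20))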

When we get closer to manufacturing universal intelligence, its more impressive incarnations will look more like countries and corporations than omnipotent deities. The problems we'll have to face will have more to do with consciousness and human rights than with alignment. Alignment is really about automation at incomprehensible scale, where the clash between dimensionality reduction and Goodhart's Law becomes absurd.
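
That clash fits in a few lines. A hedged toy with made-up numbers: the true goal lives in two dimensions, the metric keeps one, and optimizing the metric hard drives the goal off a cliff:

    import numpy as np

    rng = np.random.default_rng(0)

    def true_goal(x):
        # What we actually care about: both dimensions matter,
        # and too much x[0] eventually hurts.
        return x[0] + x[1] - 0.1 * x[0] ** 2

    def proxy(x):
        # The measurable, dimensionality-reduced stand-in: sees only x[0].
        return x[0]

    x = np.zeros(2)
    for _ in range(1000):
        candidate = x + rng.normal(size=2)
        if proxy(candidate) > proxy(x):   # optimize the proxy, hard
            x = candidate

    print(f"proxy     = {proxy(x):9.1f}")      # large and growing
    print(f"true goal = {true_goal(x):9.1f}")  # deeply negative: Goodhart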

tarxzvf | 4 years ago | on: Stages of Denial

Just replace K and JavaScript with German and English to see the vacuity of this argument. Any of several possible representations can become native to one's thinking; the question is which is a better aid in reaching some non-arbitrary goal. The only merit of K presented and emphasized here was the supposed brevity of its programs. Personally, I've found the habitable zone somewhere that allows for more air between ideas.

tarxzvf | 4 years ago | on: Difficult math is about recognizing patterns

Everything is pattern matching (or memorization). You can use this approach to semi-automate solutions to a known, existing class of problems, but how do you come up with anything new? How did Paul Cohen come up with the forcing technique? Who figured out probabilistic proofs as a possible vector of attack?

"Both these properties, predictability and stability, are special to integrable systems... Since classical mechanics has dealt exclusively with integrable systems for so many years, we have been left with wrong ideas about causality. The mathematical truth, coming from non-integrable systems, is that everything is the cause of everything else: to predict what will happen tomorrow, we must take into account everything that is happening today.

Except in very special cases, there is no clear-cut "causality chain," relating successive events, where each one is the (only) cause of the next in line. Integrable systems are such special cases, and they have led to a view of the world as a juxtaposition of causal chains, running parallel to each other with little or no interference."

- Ivar Ekeland

tarxzvf | 4 years ago | on: The Local Minima of Suckiness

Feels like a take from an alternate reality. I can't think of a single great developer I know who wasn't self-taught. In my experience, if you've got the will, drive, attitude, and curiosity, you've had them for a while, and any given situation can only slow or accelerate your pace. And if you don't have them, you don't have them, and no sage shove from the outside is going to help.

tarxzvf | 4 years ago | on: Fooling Neural Networks [pdf]

What exactly are the right priors for general intelligence? And keep in mind: whichever prior you choose, I can design a learning problem where it will lead you astray (toy sketch below).

This paper provides some interesting results on the weakness inherent in universal priors: https://arxiv.org/abs/1510.04931
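
The toy sketch, assuming the simplest Bayesian setting; the specific numbers are made up and not from the paper. Give the learner a strong Beta(90, 10) prior (heads 90% of the time) and hand it a coin that is actually heads 10% of the time:

    import random

    alpha, beta = 90.0, 10.0   # strong, confidently wrong prior
    p_true = 0.1               # environment chosen to punish that prior

    random.seed(1)
    for n in range(1, 101):
        flip = 1 if random.random() < p_true else 0
        alpha += flip
        beta += 1 - flip
        if n in (1, 10, 50, 100):
            print(f"after {n:3d} flips: P(heads) = {alpha / (alpha + beta):.2f}")

After 100 flips the posterior mean has only crawled from 0.90 to around 0.50; until it gets all the way down, the learner's best guesses are systematically wrong. Swap in any informative prior and an adversary can pick the environment the same way.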

tarxzvf | 4 years ago | on: Fooling Neural Networks [pdf]

The problem goes much deeper than these adversarial examples. The main issue is Solomonoff uncomputability (or the No Free Lunch theorems for search and optimization, or any of the other hard limiting theorems).

In short, it's not only that you can devise adversarial examples that find the blind spots of the function approximator and fool it into mispredicting; it's that for any learning or optimization algorithm you can abuse its priors and biases to create an environment in which it will perform terribly (sketch below). This is a fundamental and inherent feature of how we go about machine learning, namely equating it with optimizing functions, and we will need a paradigm shift to get around it.
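
A minimal sketch of "abusing priors and biases," using the classic deceptive trap function rather than anything from the PDF: a greedy hill-climber's bias toward local improvement is exactly what steers it away from the global optimum.

    import random

    N = 20

    def trap(bits):
        # Fitness rises as ones are removed... except all-ones,
        # which is the isolated global optimum (fitness N + 1).
        ones = sum(bits)
        return N + 1 if ones == N else N - 1 - ones

    def hill_climb(bits, steps=10_000):
        for _ in range(steps):
            i = random.randrange(N)
            neighbor = bits[:]
            neighbor[i] ^= 1
            if trap(neighbor) > trap(bits):   # the optimizer's bias
                bits = neighbor
        return bits

    random.seed(0)
    result = hill_climb([random.randint(0, 1) for _ in range(N)])
    print(sum(result), trap(result))   # 0 19: stuck at all-zeros, never
                                       # reaching the all-ones optimum at 21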

It's curious to me how most of these results have been known for decades, yet most researchers seem dead set on ignoring them.
