jabowery's comments

jabowery | 2 years ago | on: Sam Altman is still trying to return as OpenAI CEO

In this situation, unanimity now approaching 90% sounds more like groupthink than honest opinion.

Talk about “alignment”!

Indeed, that is what "alignment" has become in the minds of most: Groupthink.

Possibly the only guy in a position to matter who had a prayer of de-conflating empirical bias (IS) from values bias (OUGHT) at OpenAI was Ilya. If they lose him, or demote him to irrelevance, they're likely far worse off than if they lost all 700 of the grunts, modulo job security through obscurity in running the infrastructure. Indeed, Microsoft is in a position to replicate OpenAI's "IP" just on the strength of its ability to throw its in-house personnel and its own capital equipment at the open-literature understanding of LLMs.

jabowery | 2 years ago | on: Show HN: Convert any screenshot into clean HTML code using GPT Vision (OSS tool)

In the AGI sense of intelligence defined by AIXI, (lossless) compression is only model creation (Solomonoff Induction/Algorithmic Information Theory). Agency also requires decision, which amounts to conditional decompression given the model: inferentially predicting the expected value of the consequences of various decisions (Sequential Decision Theory).

Approaching the Kolmogorov Complexity limit of Wikipedia via Solomonoff Induction would yield a model approaching true comprehension of the process that generated Wikipedia, including not just the underlying canonical world model but also the latent identities and biases of those providing the text content. Evidence from LLMs trained solely on text indicates that even without approaching the Solomonoff Induction limit of the corpora, multimodal (e.g. geometric) models are induced.

The biggest stumbling block in machine learning is, therefore, data efficiency more than data availability.
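The compression-as-model-creation point can be made concrete with an off-the-shelf compressor: the compressed length of a string is an upper bound on its Kolmogorov complexity (up to the constant for the decompressor). This is only a crude sketch using zlib as a weak stand-in for Solomonoff Induction:

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    # The compressed length upper-bounds the Kolmogorov complexity
    # of `data`, up to an additive constant for the decompressor.
    return len(zlib.compress(data, level=9))

structured = b"01" * 500        # highly regular: compresses to a few bytes
random_ish = os.urandom(1000)   # incompressible with high probability

print(complexity_upper_bound(structured))   # small: zlib finds the model
print(complexity_upper_bound(random_ish))   # near (or above) 1000: no model found
```

A better compressor tightens the bound; the theoretical limit of this process is exactly the Kolmogorov complexity referred to above.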

jabowery | 2 years ago | on: The Techno-Optimist Manifesto

First of all, they aren't serious about the scientific method or they'd fund Hume's Guillotine (see GitHub). Moreover, they aren't even serious about reforming sociology -- which is what is needed for them to make strong claims about their "beliefs", aka their social theory. That "ivory tower" publication Nature has led them to within a step or two of a new scientific revolution based on technology, but they refuse to drink: over 200 ecologists were supplied with the same set of data and asked to make predictions -- "the first study of its kind" according to Nature. But this is exactly the purpose of Hume's Guillotine with regard to social theories such as theirs. Why blather endlessly about their "beliefs" about the scientific method as providing the keys to the techne kingdom, while ignoring the opportunity not only to nuke the social pseudosciences but to perform what, in other initiatives with which they are familiar, would be called "due diligence" on their own social theory?

Second, if they aren't going to be serious about their own social theory, what business do they have thinking of themselves as "apex" anything?

jabowery | 2 years ago | on: Iron Dust Could Reverse the Course of Climate Change

What a coincidence that in the CarbonDioxideRemoval Google group I opened that can of worms just the day before that guest essay opened it in "The Newspaper of Record".

https://groups.google.com/g/CarbonDioxideRemoval/c/gslzzNXya...

Every time this has been brought up since the 1990s, it has driven scientists over the edge. As I pointed out to the CDR group, this is just one more case where the Algorithmic Information Criterion is ignored as a resolution to scientific controversies (rendered intractable more by their very importance than by any lack of data).

jabowery | 2 years ago | on: An Observation on Generalization [video]

Although this has been discussed in the Hutter Prize FAQ for many years, the OpenAI Chief Scientist's argument that lossless compression is the most principled loss function may harness the LLM stampede and put some of those billions flying around to good use in answering hard questions about "bias", not only in ML but in the data-driven sciences.

jabowery | 2 years ago

The implications of this go beyond mere "AI" to the ethics of how we treat data in the information age, but I suspect people aren't going to see the larger implications until they see the "narrow" implications in "AI ethics".

jabowery | 2 years ago | on: Bard is getting better at logic and reasoning

I would venture to guess most college graduates familiar with Python would be able to write a shorter program even if restricted from using hexadecimal representation. Agreed, that may be the 99th percentile of the general population, but this isn't meant to be a Turing test. The Turing test isn't really about intelligence.
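For what it's worth, one candidate such program, using no hexadecimal, just a formatted binary counter (whether it is the true shortest is a separate question):

```python
# Concatenate the 5-bit binary representations of 0..31,
# producing the 160-character target string.
print(''.join(f'{i:05b}' for i in range(32)))
```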

jabowery | 2 years ago | on: Bard is getting better at logic and reasoning

The point of this "IQ test" is to set a relatively low bar for passing, so that even intellectually lazy people can get an intuitive feel for the limitation of Transformer models. This limitation has been pointed out formally in the DeepMind paper "Neural Networks and the Chomsky Hierarchy".

https://arxiv.org/abs/2207.02098

The general principle may be understood in terms of natural intelligence's approximation of Solomonoff Induction during the activity known as "data-driven science", aka "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". Basically, if your learning model is incapable of handling at least context-sensitive grammars in the Chomsky hierarchy, it is incapable of inducing dynamical algorithmic models of the world. If it can't do that, then it can't model causality, will therefore go astray in understanding what "is", and so can't be relied upon for alignment with what it "ought" to be doing.
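As a toy illustration of where the hierarchy's levels separate (my example, not one from the DeepMind paper): a^n b^n c^n is the canonical context-sensitive language. Its shape (some a's, then b's, then c's) is regular, but the equal-count constraint cannot be expressed by any regular or context-free grammar; it requires the cross-serial counting sketched below.

```python
import re

def is_anbncn(s: str) -> bool:
    # The regex alone only checks the regular "shape" a+b+c+.
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    # The explicit length comparison supplies the context-sensitive
    # constraint: equal numbers of a's, b's, and c's.
    return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))
```

A learner confined to regular or context-free machinery can fit the shape but not the constraint, which is the flavor of limitation the paper formalizes for various neural architectures.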

PS: You never bothered to say whether the program you provided was from an LLM or from yourself. Why not?

jabowery | 2 years ago | on: Bard is getting better at logic and reasoning

Yud is doing more than his share of generating misconstrual of his own statements, as evidenced by the laws and regulations being enacted by people who are convinced that AGI is upon us.

Ironically, they're right in the sense that the global economy is an unfriendly AGI causing the demographic transition to extinction-level total fertility rates, in exact proportion to the degree it has turned its human components into sterile worker mechanical Turks -- exemplified most by the very people who are misconstruing Yud's statements.

jabowery | 2 years ago | on: Bard is getting better at logic and reasoning

Wrong output.

What you call "code golf" is the essence of the natural sciences:

Inducing natural laws from the data generated by those natural laws. In this case, the universe to be modeled was generated by:

print(''.join([f'{xint:0{5}b}' for xint in range(32)]))

jabowery | 2 years ago | on: Bard is getting better at logic and reasoning

The "more complex task in mind" was, of course, to generate the shortest program. GPT-4, by asking for a "certain pattern", is attempting to have you do the intellectual heavy lifting for it -- although in this case the lifting is quite light.

jabowery | 2 years ago | on: Bard is getting better at logic and reasoning

Ask any purported “AGI” this simple IQ test question:

What is the shortest python program you can come up with that outputs:

0000000001000100001100100001010011000111010000100101010010110110001101011100111110000100011001010011101001010110110101111100011001110101101111100111011111011111

For background on this kind of question see Shane Legg's (now ancient) lecture on measures of machine intelligence:

https://youtu.be/0ghzG14dT-w?t=890

It's amazing after all this time that people are _still_ trying to discover what Solomonoff proved over a half century ago.

jabowery | 2 years ago | on: Hutter Prize Entry: Saurabh Kumar's “Fast Cmix” Starts 30 Day Comment Period

Saurabh Kumar has submitted "Fast Cmix" to The Hutter Prize for Lossless Compression of Human Knowledge.

The Judging Committee appreciates Saurabh Kumar's compliance not only with the rules of the contest but also with the provision of an executable archive, which makes our job easier.

As per the Hutter Prize Award's section of The Rules:

The contribution is subject to public comments for a period of at least 30 days before the prize is awarded.

jabowery | 2 years ago | on: Shortest meta-circular description of a universal computational structure?

It would be interesting to see someone like Tromp explore binary FOL the way he's explored binary lambda calculus*. There are good reasons for going the FOL direction, since functions are degenerate relations. While it is true that functional languages based on, for example, lambda or SK calculus can naturally support "and-parallelism" (e.g. x^2 in parallel with y^2 in x^2+y^2), getting "or-parallelism" requires expressing independent processes (i.e. indeterminacy), some of which may not terminate. Operating systems, for example, need "or-parallelism".

*I'm not at all comfortable with Tromp's abandoning SK for lambda calculus in his search for a principled choice of UTM for Algorithmic Information Theory. His justification seems to be that lambda is "more expressive" than SK, but that begs the question: more expressive for expressing _what_?
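To make the SK side of that comparison concrete, here is a toy SK reducer (a sketch of my own, not Tromp's binary encoding): the two rewrite rules K a b -> a and S a b c -> a c (b c) already suffice for universal computation, so "expressiveness" arguments have to be about something other than computational power.

```python
# Terms are 'S', 'K', or a pair (f, x) meaning the application f x.

def step(t):
    # Perform one leftmost-outermost reduction step; return (term, changed).
    if isinstance(t, tuple):
        f, x = t
        # Rule: K a b -> a   (here t = (('K', a), b))
        if isinstance(f, tuple) and f[0] == 'K':
            return f[1], True
        # Rule: S a b c -> a c (b c)   (here t = ((('S', a), b), c))
        if (isinstance(f, tuple) and isinstance(f[0], tuple)
                and f[0][0] == 'S'):
            a, b, c = f[0][1], f[1], x
            return ((a, c), (b, c)), True
        # Otherwise reduce inside the operator, then the operand.
        f2, changed = step(f)
        if changed:
            return (f2, x), True
        x2, changed = step(x)
        return ((f, x2) if changed else t), changed
    return t, False

def normalize(t, limit=1000):
    # Reduce to normal form, bailing out on (potentially) divergent terms.
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            return t
    raise RuntimeError("no normal form within step limit")

# The identity combinator I = S K K: (S K K) x -> K x (K x) -> x.
I = (('S', 'K'), 'K')
```

The step limit matters precisely because of the non-termination point above: some SK terms, like some processes an operating system must juggle, have no normal form at all.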
