jabowery | 1 year ago | on: China Is Outspending the U.S. to Achieve the 'Holy Grail' of Clean Energy [video]
jabowery | 1 year ago | on: Panic at the Job Market
https://ota.polyonymo.us/others-papers/NetAssetTax_Bowery.tx...
When we got a law passed to privatize space launch services back in 1990
https://www.youtube.com/watch?v=boLdXiLJZoY
we were in the midst of a quasi-depression, so I decided to address the problem of private capitalization of technology with the aforelinked proposal.
jabowery | 1 year ago | on: Brain overgrowth dictates autism severity, new research suggests
jabowery | 1 year ago | on: Compiling with Constraints
Any predicate can be considered a constraint. Types are constraints. While it may be reasonable to have syntactic sugar for type declarations that, at compile time, is transformed into predicates, it is unreasonable to lard a completely different kind of semantics on top of an already adequate semantics such as first-order logic.
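The types-as-predicates point can be sketched minimally (all names here are hypothetical, for illustration only): a declaration like `x : int` desugars into an ordinary unary predicate, and an arbitrary predicate constrains a variable in exactly the same way a "type" does.

```python
# Types-as-predicates sketch: a "type declaration" is just syntactic sugar
# for a unary predicate that constrains a variable.

def is_int(x):          # what a declaration like `x : int` desugars to
    return isinstance(x, int)

def is_pos(x):          # an arbitrary predicate constrains exactly like a "type"
    return isinstance(x, int) and x > 0

def constrained(value, *predicates):
    """Admit `value` only if every predicate (constraint) holds."""
    for p in predicates:
        if not p(value):
            raise ValueError(f"constraint {p.__name__} violated by {value!r}")
    return value

constrained(3, is_int, is_pos)   # passes: 3 satisfies both constraints
```

Under this view the compiler needs only one semantics, predicate satisfaction, rather than a separate type system layered on top.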
https://groups.google.com/g/comp.lang.prolog/c/8yJxmY-jbG0/m...
jabowery | 2 years ago | on: Georg Cantor and His Heritage
https://www.academia.edu/93528167/Interval_Arguments_Two_Ref...
jabowery | 2 years ago | on: Claude 3 model family
Claude 3 (as Double AI coding assistant): print('0000000001000100001100100001010011000111010000100101010010110110001101011100111110000100011001010011101001010110110101111100011001110101101111100111011111011111')
jabowery | 2 years ago | on: Learning Theory from First Principles [pdf]
It's ok* to depart from that starting point in creating subtheories but if you don't start there you'll end up with garbage like the last 50 years of confusion over what "The Minimum Description Length Principle" really means.
*It is, however, _not_ "ok" if what you are trying to do is come up with causal models. You can't get away from Turing-complete codes if you're trying to model dynamical systems, even though dynamical systems can be thought of as finite state machines with very large numbers of states. To make optimally compact codes you need Turing-complete semantics executing on a finite state machine that just so happens to have a really large but finite number of flip-flops or some other directed cyclic graph of universal gates (e.g., NOR, NAND).
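The compact-codes point can be illustrated with a crude stand-in: using a general-purpose compressor's output length as a proxy for description length, a rule-generated string has a short description while incompressible random bytes do not, even though both are just finite bit strings.

```python
import os, zlib

# Description-length contrast: a highly regular string has a short "program"
# (approximated here by its zlib-compressed length), while random bytes
# have no exploitable structure and stay near their literal length.
regular = b"01" * 5000          # 10,000 bytes generated by a tiny rule
random_ = os.urandom(10000)     # 10,000 bytes of noise

len_regular = len(zlib.compress(regular, 9))
len_random = len(zlib.compress(random_, 9))

assert len_regular < len_random  # the rule-generated string compresses far better
```

zlib is of course not Turing-complete, which is exactly the comment's point: it only bounds the true (algorithmic) description length from above.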
jabowery | 2 years ago | on: Deep distilling: Automated discovery of algorithms from data
You can tell everyone "who is anyone" is in hysterics when this gets virtually no attention at Salamandar's old stomping ground here at ycombinator.
jabowery | 2 years ago | on: John Walker, founder of Autodesk, has died
* The other person with an office opening into Keith's work area at Memex was Ron Resch https://www.historyofcg.com/pages/university-of-utah/
jabowery | 2 years ago | on: John Walker, founder of Autodesk, has died
Fallback positions from the idealized "roadmap" are what happens when VCs get involved with a system that offers that Zero To One advantage -- but you have to have a One to offer the VCs, which Memex didn't. The question then becomes how much of your road map can be recovered or, perhaps more to the point, do you even _want_ to recover in the light of ground truth experience? At present there is a lot of potential for Information Centric Networking that would be more likely realized in a Ship-Dumbed-Down-Decentralized-Xanadu1994 alternative universe than is likely to be realized now.
jabowery | 2 years ago | on: John Walker, founder of Autodesk, has died
Why was Brendan Eich under such pressure from VCs to throw together a scripting language over a weekend?
jabowery | 2 years ago | on: John Walker, founder of Autodesk, has died
Why didn't I step in and help poor Keith? Ever heard of Croquet's TeaTime?
https://dl.acm.org/doi/abs/10.1145/1094855.1094861
I was in a position to resurrect at least _that_ much of the original work I'd done at Viewtron Corp. of America based on David P. Reed's PhD thesis, and Reed was just down the street from us at Interval Research at that time, which rather tempted me away from helping Keith, even if I'd been authorized to do so, which I wasn't.
jabowery | 2 years ago | on: Learning Universal Predictors
There is actually more at stake here than machine learning. This gets to the root of "bias" in the scientific method. Imagine what horrors, what risks, what chaos would be ours if a truly objective information criterion for causal model selection were to exist! Why, virtually every "sociologist" would be hauled to Hume's Guillotine in a Reign of Terror!
https://github.com/jabowery/HumesGuillotine
But to be clear, Marcus and I have a disagreement about the pragmatics of such an approach to dispute processing in the natural sciences. He believes, for example, that the dispute over climate change should be handled by the standard processes in place within academia. My approach differs, based on my hard-won experience with reforming institutional incentives:
https://jimbowery.blogspot.com/2018/04/necessity-and-incenti...
When it comes to multi-trillion-dollar scientific questions, the conflicts of interest become so intense that you really need to apply a gold standard for objectivity, and that standard is a single number: how big is your executable archive of the data in evidence?
While I understand the machine learning world looms as a rival to "unbiased" academic research, it nevertheless remains true that even in this emerging "marketplace of ideas" there is no formal definition of "bias" that disciplines discourse and thereby guides development at the institutional, let alone technical, level. Everyone is weighing in with fuzzy notions of "bias" that betray intense motivations, when there has been, for over 50 years, a very clear and present mathematical definition.
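The "size of your executable archive" criterion can be sketched as a two-part code (all names here are illustrative assumptions, not the comment's own mechanism): score a candidate model by the bytes needed to state it plus the compressed bytes of whatever residual it fails to explain, and prefer the smaller total.

```python
import zlib

# Two-part MDL sketch: score = (bytes to state the model) +
# (compressed bytes of the residual the model fails to explain).
# A crude stand-in for "how big is your executable archive of the data".

data = [2 * i for i in range(1000)]           # data generated by a hidden rule

def mdl_score(model_src):
    predict = eval(model_src)                 # the model is executable code
    residual = bytes((d - predict(i)) % 256 for i, d in enumerate(data))
    return len(model_src.encode()) + len(zlib.compress(residual, 9))

good = mdl_score("lambda i: 2 * i")           # captures the generating rule
null = mdl_score("lambda i: 0")               # explains nothing

assert good < null  # the true rule yields the shorter total description
```

The single number disciplines model selection with no room for motivated fuzziness: either your archive is smaller or it isn't.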
jabowery | 2 years ago | on: The Optimal Choice of Hypothesis Is the Weakest, Not the Shortest
jabowery | 2 years ago | on: Getting Lossless Compression Adopted for Rigorous LLM Benchmarking
Take, for instance, the unprincipled definition of "parameter count", not only in the LLM scaling-law literature but also in the zoo of what statisticians call "Information Criteria for Model Selection": https://en.wikipedia.org/wiki/Model_selection#Criteria
The reductio ad absurdum of "parameter count" is arithmetic coding where an entire dataset can be encoded as a single "parameter" of arbitrary precision.
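The reductio can be made concrete with a toy arithmetic coder over exact rationals (the fixed symbol probabilities are an assumption for illustration): the entire symbol sequence collapses into one number of arbitrary precision, i.e. one "parameter".

```python
from fractions import Fraction

# Arithmetic-coding reductio: a whole dataset becomes a single rational
# number -- one "parameter" of arbitrary precision.
probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
symbols = sorted(probs)

def encode(data):
    low, width = Fraction(0), Fraction(1)
    for s in data:
        for t in symbols:          # narrow to the sub-interval assigned to s
            if t == s:
                break
            low += width * probs[t]
        width *= probs[s]
    return low                     # any point in [low, low+width) names the data

def decode(x, n):
    out = []
    for _ in range(n):
        cum = Fraction(0)
        for t in symbols:
            if x < cum + probs[t]:
                out.append(t)
                x = (x - cum) / probs[t]   # rescale for the next symbol
                break
            cum += probs[t]
    return "".join(out)

x = encode("abacab")
assert decode(x, 6) == "abacab"    # one number recovers the whole dataset
```

Counting that number as "one parameter" makes any parameter-count-based information criterion trivially gameable, which is the absurdity.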
By contrast, the algorithmic bit of information (whether part of an executable instruction or program literal) is an unambiguous quantity up to the choice of instruction set. If you want to quibble about that instruction set choice, take it up with John Tromp https://tromp.github.io/cl/cl.html because what I'm about to propose obviates that along with a lot of other "arguments".
Since any executable archive of any kind of data can serve as a model of the world generating that data, it follows that any executable archive of any text corpus can serve as a language model with a rigorous "parameter count". Therefore, a procedure which runs LLM benchmarks against any such executable archive as a language model, contributes a uniquely rigorous data point to the literature on LLM scaling laws.
So, what I'm proposing is that authors of lossless compression algorithms consider adding a command-line option that, at the end of decompression, saves the state of the decompression process in a file that can be read back in and executed as a language model -- with the full understanding that these language models will perform very poorly on the vast majority of LLM benchmarks. The point is not to produce high quality language models. The point is to increase rigor in the research community by providing some initial data points that exemplify the approach.
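The compressor-as-language-model idea can be sketched with an ordinary stream compressor standing in for the saved decompressor state (zlib here is an assumed stand-in, not the proposal's mechanism): score each candidate next character by how many extra compressed bytes it costs to append to the context.

```python
import zlib

# Compression-as-language-model sketch: a continuation is more probable
# the fewer extra compressed bytes it costs to append to the context.
def next_char_scores(context, candidates):
    base = len(zlib.compress(context.encode(), 9))
    return {c: len(zlib.compress((context + c).encode(), 9)) - base
            for c in candidates}

ctx = "the quick brown fox jumps over the lazy dog. the quick brown f"
scores = next_char_scores(ctx, "oxz")
best = min(scores, key=scores.get)   # cheapest continuation under the compressor
```

As the comment says, such models will score terribly on most LLM benchmarks; the value is that their "parameter count" (the archive size) is rigorous, giving the scaling-law literature an unambiguous anchor point.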
jabowery | 2 years ago | on: Bayesians moving from defense to offense
https://jimbowery.blogspot.com/2017/07/fusion-energy-prize-a...
https://youtu.be/boLdXiLJZoY