Cladode's comments
Cladode | 1 year ago | on: Differentiable Logic Cellular Automata
Cladode | 1 year ago | on: Dart Macros and Focus
Some modern languages (Haskell, Scala) compensate for the expressivity that library writers would otherwise lack with higher-kinded types and principled support for ad-hoc polymorphism (e.g. typeclasses), thus reducing the need for metaprogramming. Notably, Haskell and Scala also have unusually principled support for metaprogramming.
As a heuristic, I would suggest that using metaprogramming for small or medium-sized normal ("business") code is a sign that something may be suboptimal, and it might be worth considering a different approach (either to how the business logic is implemented or to the chosen programming language).
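To give a flavour of typeclass-style ad-hoc polymorphism without Haskell at hand, here is a minimal Python sketch (my own toy example) using `functools.singledispatch`, the closest standard-library analogue: each overload is ordinary code dispatched on argument type, so no code generation or metaprogramming is involved.

```python
from functools import singledispatch

# Ad-hoc polymorphism via dispatch on argument type: each "instance"
# is plain code, registered separately -- no macros, no codegen.
@singledispatch
def pretty(value):
    return repr(value)          # default instance

@pretty.register
def _(value: int):
    return f"int:{value}"

@pretty.register
def _(value: list):
    return "[" + ", ".join(pretty(v) for v in value) + "]"

print(pretty([1, "x"]))         # -> [int:1, 'x']
```

Haskell's typeclasses do this resolution statically at compile time; `singledispatch` resolves at runtime, but the library-extension pattern is the same.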
Cladode | 1 year ago | on: DeepSeek's AI breakthrough bypasses industry-standard CUDA, uses PTX
I wonder if you could point me to concrete examples where people write PTX rather than CUDA? I'm asking because I just learned CUDA since it's so much faster than Python!
Cladode | 1 year ago | on: Dualities in functional programming
Category theory is an API for mathematics that was developed with specific applications in mind, applications that the API seeks to unify and make easier to think about. Those application domains are algebraic geometry, algebraic topology, and homological/homotopical algebra. Every API comes with trade-offs: typically an API makes one domain easier at the cost of making other domains harder. Example: CSS is Turing complete, and I think CSS is really good at helping with styling webpages, but I would not want to write a compiler in CSS.
Computer scientists like myself, who read from page 150 onwards, have just found the API, stylised for algebraic geometry, algebraic topology, homological/homotopical algebra, ..., not that useful for applications in computer science. Unlike the first 50 pages, which have been very useful. More specifically, we found the cost of using purely categorical APIs not worth the benefits in many application domains. Maybe we are missing something, maybe we overlooked something. But, given computer science's investments in category theory since the 1990s, I'd like to see more evidence for it!
To conclude with a concrete example: why would I write a compiler using an API for homotopical algebra?
Cladode | 5 years ago | on: Statement on New York Times Article
I think we are seeing a wounded animal's fight for survival ....
Cladode | 5 years ago | on: Statement on New York Times Article
Most important perhaps is that new media like Substack are in direct competition with traditional newspapers like the NYT. Coase's great insight (in: The Nature of the Firm, 1937) was that firms exist in order to reap economies of scale. Traditional newspapers reaped economies of scale from printing, paper distribution, and subscriber and advertiser management. Essentially all of this is gone. What modern newspapers scale on is branding and selling influence, but this is in direct contradiction with strong journalists' interests (journalists do not like to be told by their editors what to write and how). Until recently, top journalists could not go it alone, since they lacked the expertise to handle monetisation of their writing. This changed with the likes of Substack, which centralises (automates) subscriber management and technical infrastructure, but without editorship. Hence, top writers are increasingly moving away from traditional newspapers to something like Substack, with Greenwald and Scott Siskind being two high-profile examples. They won't be the last.
Newspapers see the writing on the wall and fight back.
Cladode | 6 years ago | on: Mozilla’s plan to fix internet privacy
replaced Christian with
It has often been noted, notably by Nietzsche and Carl Schmitt, that many political concepts in the western world emerged out of Christianity. That's probably not a feature unique to Christianity, but rather because religions tend to co-evolve with successful states, as narratives that help stabilise said states.
Cladode | 6 years ago | on: Mozilla’s plan to fix internet privacy
woke/Fox News Cult.
I recommend a little less parochialism, and more historical scholarship.

All populist politics needs to appeal to its clientele with simplistic, easy-to-grasp utopias that are claimed to be within easy reach once "we" win power, whether it's the classless society of Marxism, notions of paradise in various religions (e.g. the seventy-two houri in Sunni Islam), libertarianism's coercion-free optimal resource allocation, anarchism's absence of social hierarchy, Robespierre's "liberté, égalité, fraternité", and many others. "Woke" culture is clearly a contemporary evolution (rebranding) of the socialist tradition emerging out of the French revolution, made politically potent by Lenin, Stalin & comrades, channeled into the modern western world via Gramsci's cultural hegemony, and nowadays spread by powerful branding organisations like Avaaz [1] and Purpose [2].
[1] https://en.wikipedia.org/wiki/Avaaz
[2] https://en.wikipedia.org/wiki/David_Madden_(entrepreneur)
Cladode | 6 years ago | on: Haskell Problems for a New Decade
Be that as it may, seL4 cannot currently be verified automatically, and that's not due to code size (8,700 lines of C and 600 lines of assembler). It's hard to see what the obstacle to mechanisation could be other than logical complexity.
Model theory is not some magical deduction-free paradise. The reason we care about model theory in logic is that we want to have evidence that the deductive system at hand is consistent. Building models of Peano arithmetic in ZFC, for example, but also relating constructive and classical logic through double negation, or embedding type theory in set theory, are all about relative consistency. It's easy to make a FOM (= foundation of mathematics) unsound, and logical luminaries including Frege, Church and Martin-Löf managed to do so. Those soundness proofs in essence transform a proof of falsity in one system into a proof of falsity in the other, and relative consistency is the best we can get in order to increase our confidence in a formal system. It is true that traditional model theory, as for example done in [1, 2, 3], doesn't really foreground the deductive nature of ZFC; it's taken for granted, and reasoning proceeds at a more informal level. If you want to mechanise those proofs, you will have to go back to deduction.
[1] W. Hodges, Model Theory.
[2] D. Marker, Model Theory: An Introduction.
[3] C. C. Chang, H. J. Keisler, Model Theory.
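To make the double-negation point concrete, here is a minimal sketch (my own toy encoding, formulas as nested tuples) of the Gödel-Gentzen translation, which maps each classical propositional theorem to an intuitionistic one and thereby transports a proof of falsity from one system to the other; this is exactly the relative-consistency pattern described above.

```python
# Goedel-Gentzen double-negation translation on a tiny formula AST.
# Formulas are nested tuples: ('var','p'), ('not',f), ('and',f,g),
# ('or',f,g), ('imp',f,g).

def neg(f):
    return ('not', f)

def gg(f):
    op = f[0]
    if op == 'var':
        return neg(neg(f))                      # p  ->  ~~p
    if op == 'not':
        return neg(gg(f[1]))                    # ~A -> ~(A^N)
    if op == 'and':
        return ('and', gg(f[1]), gg(f[2]))
    if op == 'or':                              # A v B -> ~(~A^N & ~B^N)
        return neg(('and', neg(gg(f[1])), neg(gg(f[2]))))
    if op == 'imp':
        return ('imp', gg(f[1]), gg(f[2]))
    raise ValueError(op)

# Excluded middle (p v ~p) translates to an intuitionistically
# provable formula, even though it is not intuitionistically valid itself.
lem = ('or', ('var', 'p'), ('not', ('var', 'p')))
print(gg(lem))
```

The translation is purely syntactic, which is the point: mechanising a relative-consistency argument means working at this deductive, symbol-pushing level.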
Cladode | 6 years ago | on: Haskell Problems for a New Decade
TLC ... is commonly used to ... at least as "deep" as those used in seL4, and often deeper
What you are in essence implying here is that the seL4 verification can be handled fully automatically by TLC. I do not believe that without a demonstration ... and please don't forget to collect your Turing Award!

One problem with model checking is that you handle loops by unrolling them a few times (an approximation), and in effect you only verify properties that are not affected by such approximations. In other words, extremely shallow properties. (You may use different words, like "timeout" or "unsound technique", but from a logical POV, it's all the same ...)
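To illustrate the unrolling point with a toy example (my own, not from TLC): bounded exploration reports "no violation" for any bug that lies deeper than the unrolling bound, which is exactly why only shallow properties survive the approximation.

```python
# Toy bounded "model check" of the property "counter never exceeds 10"
# for a simple loop. The violation occurs at iteration 11, so it
# escapes any unrolling bound k < 11.

def step(counter):
    return counter + 1

def bounded_check(k, bound=10):
    """Explore the loop unrolled k times; return the depth of a
    violation, or None if none was found *within the bound*."""
    counter = 0
    for i in range(k):
        counter = step(counter)
        if counter > bound:
            return i + 1   # property violated at this depth
    return None            # says nothing about deeper behaviour

print(bounded_check(5))    # -> None: the approximation misses the bug
print(bounded_check(20))   # -> 11: deep enough unrolling finds it
```

Real bounded model checkers are vastly more sophisticated, but the logical shape of the guarantee is the same: "no counterexample up to depth k", not "no counterexample".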
the model theory ... rather than the semantic rules of a model theory
All mathematics is deductive.
ZFC is a deductive theory, HOL is a deductive theory, HoTT is a deductive theory, MLTT is a deductive theory, Quine's NF is a deductive theory.

With that, what mathematicians call a model is but another deductive theory; e.g. the model theory of Peano arithmetic happens in ZFC, another deductive theory. The deductive/non-deductive distinction is really a discussion of different kinds of algorithms. Deduction somehow involves building up proof trees from axioms and rules, using unification. It could be fair to say that concrete DPLL implementations (as opposed to textbook presentations), which are based on model enumeration, non-chronological backtracking, random restarts, clause learning, watched literals etc., don't quite fit this algorithmic pattern. I am not sure exactly how to delineate deductive from non-deductive algorithms; that's why I think it's an interesting question.
SMT solvers are rarely used alone,
I agree, but model checkers, type checkers for dependent types, modern testing techniques, and (interactive) provers all tend to off-load at least parts of their work to SAT/SMT solvers, which makes the opposition between deductive and non-deductive methods unclear.

* * *
BTW I am not arguing against fuzzing, concolic testing, model checking etc. All I'm saying is that they too have scalability limits, just that the scale involved here is not lines of code.
Cladode | 6 years ago | on: Haskell Problems for a New Decade
In software verification, a sound technique is ...
In other words, tests are not sound ... Anyway, we are quibbling about the meaning of words, so this is unlikely to be fruitful.
Cladode | 6 years ago | on: Haskell Problems for a New Decade
can check most properties expressible in TLA+
Lamport's TLA contains ZF set theory. That makes TLA super expressive. Unless a major breakthrough has happened in logic that I have not been informed about, model checkers cannot verify complex TLA properties for large code bases fully automatically. Let's be quantitative in our two dimensions of scalability:

- Assume we 'measure' the complexity of a property by the number of quantifier alternations in the logical formula.
- Assume we 'measure' the complexity of a program by lines of code.
(Yes, both measures are simplistic.) What is the most complex property that has been established in TLA fully automatically, with no hand tweaking etc., for more than 50,000 LoC? And how does that complexity compare to, for example, the complexity of the formula that has been used in the verification of seL4?
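For concreteness, here is what the first 'measure' looks like as code, a toy sketch of my own that assumes the property is already in prenex form, written as a string of quantifier blocks (A = forall, E = exists):

```python
# Count quantifier alternations in a prenex quantifier prefix.
# "AEAE" means: forall ... exists ... forall ... exists ...

def alternations(prefix):
    return sum(1 for a, b in zip(prefix, prefix[1:]) if a != b)

print(alternations("A"))     # -> 0: a simple, safety-style property
print(alternations("AEAE"))  # -> 3: a deep, highly alternating property
```

Most properties that model checkers establish in practice sit at 0 or 1 alternations; functional-correctness statements of the seL4 kind sit much higher.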
So not deductive.
So first-order logic is not deductive, because it doesn't yield a direct proof of FALSE?

There is an interesting question here: what is the precise meaning of "deductive"? Informally it means: building proof trees by instantiating axiom and rule schemes. But that is vague. What does that mean exactly? Are modern SAT/SMT solvers doing this? The field of proof complexity thinks of (propositional) proof systems simply as poly-time functions onto propositional tautologies.
Cladode | 6 years ago | on: Haskell Problems for a New Decade
Model checkers check deep
Which deep properties have you got in mind?

DPLL is based

DPLL is based on a form of resolution, but in real implementations you mostly simply enumerate models, and backtrack (maybe with some learning) if you decide to abandon a specific model.
Cladode | 6 years ago | on: Haskell Problems for a New Decade
I'm not currently working with Lean, but I will start a large verification project in a few months. We have not yet decided which prover to go for. We may write our own.
Cladode | 6 years ago | on: Haskell Problems for a New Decade
Anyway, nobody doubts that Lean's logic is a dependently typed formalism: "Lean is based on a version of dependent type theory known as the Calculus of Inductive Constructions" [1]. I thought the discussion here was about using dependent types in programming languages. Note that logics and programming languages are not the same thing.
[1] J. Avigad, L. de Moura, S. Kong, Theorem Proving in Lean.
Cladode | 6 years ago | on: Haskell Problems for a New Decade
remove the barrier between types and terms then ...
... you will lose type inference, and hence Haskell becomes completely unusable in industrial practice.
Cladode | 6 years ago | on: Haskell Problems for a New Decade
Lean. It's implemented in a dependently typed programming
Lean is implemented in C++ [1, 2]. There's a message in there somewhere. The message is probably something along the lines of: if you want performance, use a low-level language.
Cladode | 6 years ago | on: Haskell Problems for a New Decade
better scalability than deductive
This is misleading. There are two notions of scalability:

- Scalable to large code bases.
- Scalable to deep properties.
Deductive methods are currently the only ones that scale to deep properties. Model checking et al. are currently much better at scaling to large code bases. Note however two things: First, Facebook's Infer tool, which scales quite well to large code bases, is partly based on deductive methods. Secondly, and most importantly, under the hood in most current approaches, SAT/SMT solvers do much of the heavy lifting. Existing SAT/SMT solvers are invariably based on DPLL, i.e. resolution, i.e. a deductive method.
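For the curious, the "enumerate models and backtrack" flavour of DPLL mentioned in this thread fits in a dozen lines of Python. This is a toy sketch of my own, without the unit propagation, clause learning, restarts and watched literals of real solvers:

```python
# Minimal DPLL-style SAT search by model enumeration with backtracking.
# Clauses are lists of nonzero ints (DIMACS style: -2 means "not x2").

def dpll(clauses, assignment=()):
    # Simplify the clause set under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                       # clause already satisfied
        rest = [lit for lit in clause if -lit not in assignment]
        if not rest:
            return None                    # clause falsified -> backtrack
        simplified.append(rest)
    if not simplified:
        return set(assignment)             # all clauses satisfied: a model
    var = abs(simplified[0][0])            # branch on an unassigned variable
    return (dpll(simplified, assignment + (var,))
            or dpll(simplified, assignment + (-var,)))

# (x1 v x2) & (~x1 v x2) & (~x2 v x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))
```

The backtracking on a falsified clause is where the kinship with resolution lives: each conflict corresponds to a resolution step in the implicit refutation proof.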
Cladode | 6 years ago | on: Haskell Problems for a New Decade
- Hoare logic soundly over-approximates
- Incorrectness logic soundly under-approximates
Sound under-approximation is extremely useful when you want to prove that a certain problem must arise in code, rather than may arise. The problem with logic-based program verification has often been that your prover could not prove that a program was correct, but, due to the over-approximation of traditional Hoare logic, the best you could learn from this failure was (simplifying a great deal) that something may go wrong. That is not particularly useful in practice, since often this is a false alarm.
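A toy Python contrast (my own example, simplifying both logics to the extreme): an interval-style over-approximation can only say the division may fail, possibly a false alarm, while an under-approximation exhibits a concrete input on which it must fail.

```python
# Program under analysis: y = abs(x); result = 100 / (y - 5).

# Over-approximation (interval abstract domain, Hoare-logic style).
# Abstract abs on an input range [lo, hi] with lo <= 0 <= hi gives
# y in [0, max(|lo|, |hi|)], so we can only report a *possible* bug.
def may_divide_by_zero(lo, hi):
    y_lo, y_hi = 0, max(abs(lo), abs(hi))
    return y_lo <= 5 <= y_hi               # could y - 5 be zero?

# Under-approximation (incorrectness-logic style): exhibit a concrete
# input on which the error *must* occur -- no false alarms possible.
def must_divide_by_zero(inputs):
    return next((x for x in inputs if abs(x) - 5 == 0), None)

print(may_divide_by_zero(-10, 10))          # -> True ("may go wrong")
print(must_divide_by_zero(range(-10, 11)))  # -> -5 (a real witness)
```

The first analysis is sound for proving absence of bugs, the second for proving their presence; the asymmetry is exactly the over/under distinction above.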