pinealservo's comments

pinealservo | 9 years ago | on: Fuchsia: a new operating system
It's true that seL4 is not a magic safety or security bullet; in particular, their FAQ says that they haven't completed extending their proofs to DMA safety via VT-d or SystemMMU on ARM, so DMA-accessing services would have to be separately verified. And its particular feature set may not be appropriate for all situations. But if its API does work for you, it would be foolish not to at least consider using it.
It is really unreasonable to dismiss seL4 simply because it took them a lot of effort to create it. That effort is now done and can be re-used and magnified by further effort.
pinealservo | 9 years ago | on: Lisp Quickstart
Racket is not just Scheme anymore, but it does still support various versions of Scheme as well as a number of Scheme-based languages. It has a package management system and a fairly large set of packages for basic tasks. The core system, base libraries, and major libraries are well documented and it's got a great built-in IDE.
Guile also does more than just Scheme, but it's been the "official extension language of GNU" for a very long time, so it's got bindings to a lot of libraries and it's used in some interesting applications and tools.
Chicken Scheme also comes with a package system, 'eggs', that has a pretty nice collection of practical packages available for easy installation. You can run your programs via the interpreter, or you can compile them via the compiler, which goes through C so it's got great C interop.
There are a lot of other options with a variety of strengths and weaknesses, but those are good ones to look at for starting out and doing scripting duties.
pinealservo | 10 years ago | on: New Adventures for Elm
The root of this particular family tree is essentially a fusion of the lambda calculus-inspired parts of Algol 60 and Lisp with some new ideas for syntax that were promoted by Peter Landin in his highly influential paper from 1966, "The Next 700 Programming Languages". He called this language ISWIM; it was studied extensively but never directly implemented. Landin was also associated with Dana Scott and Christopher Strachey in their foundational work on programming language semantics.
Robin Milner was interested in the relationship between the logic underlying the semantics of programming languages and the possibility of using computers to prove propositions in that logic (a logic developed by Dana Scott in support of his semantics work). At Stanford, he and a small team (including Whitfield Diffie, who later went into cryptography...) developed a system called Stanford LCF (for Logic of Computable Functions). This was an interactive system in which the user states a main goal in the logic, then splits it into subgoals and further subgoals until they can be solved directly. Proofs were represented by data structures, built up by proof-manipulation commands that corresponded to the inference rules of the system.
Milner moved to Edinburgh in 1973, where he worked on a successor version, Edinburgh LCF. Stanford LCF was limited by the size of the proof data structures; for Edinburgh LCF, Milner had the idea to discard the proofs themselves but store their results, i.e. the theorems. The proof steps would be performed, but not recorded. To ensure that theorems could only be constructed via valid proofs, a meta-language was developed with a static type system that would only allow data structures corresponding to valid theorems to be built. It also allowed more sophisticated proof development, since the meta-language was a full higher-order programming language modeled after Landin's ISWIM. Exceptions were included to deal with the possibility that a particular proof strategy might fail. This version was implemented in Lisp.
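To make the LCF kernel idea concrete, here's my own toy sketch in Python (not Edinburgh LCF's actual ML): the theorem type is guarded so that the only way to obtain a theorem value is through functions corresponding to inference rules. (The rules `assume_refl` and `conj` here are hypothetical examples, not LCF's real rule set.)

```python
# Toy sketch of the LCF-kernel idea: values of type Thm can only be
# constructed by the inference-rule functions below, so every Thm that
# exists was built by a valid proof.

class Thm:
    """A proven conclusion. Not constructible outside the rules."""
    _key = object()  # capability guarding the constructor

    def __init__(self, key, concl):
        if key is not Thm._key:
            raise TypeError("theorems can only be built by inference rules")
        self.concl = concl

    def __repr__(self):
        return f"|- {self.concl}"

# Two made-up inference rules for illustration:
def assume_refl(t):
    """Axiom schema: |- t = t."""
    return Thm(Thm._key, f"{t} = {t}")

def conj(a, b):
    """From |- p and |- q, conclude |- p /\\ q."""
    return Thm(Thm._key, f"({a.concl}) /\\ ({b.concl})")

print(conj(assume_refl("x"), assume_refl("y")))  # |- (x = x) /\ (y = y)
```

In real Edinburgh LCF the guarantee comes from ML's abstract types rather than a runtime check, but the effect is the same: the proof itself is forgotten, and the type discipline ensures only valid theorems survive.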
LCF spread to other universities; it was early on split into two parallel tracks, ML and Caml. Both have continued to be strongly associated with implementation of theorem proving systems. ML and LCF have become Standard ML and HOL; Caml has become OCaml and Coq. A lot of programming language research has also gone into ML and OCaml as languages in themselves due to their close association with the logic underlying semantics of programming languages.
Meanwhile, a language called PAL was developed in 1968 at MIT in response to ISWIM and Strachey's ideas on programming languages. David Turner, while starting his Ph.D. research at Oxford, got access to the PAL sources and used a simplified form of the language as the basis for his lectures on functional programming at St. Andrews; he called this language SASL. It was initially just a blackboard language, but a colleague surprised him by implementing it in Lisp.
In 1976, Turner changed the semantics of SASL from eager to lazy, based on a lazy version of Landin's SECD machine. Somewhat later in his career, he combined ideas from lazy SASL, a cut-down version of SASL called KRC, and the ML-originated Hindley-Milner type system to form a language called Miranda. And Miranda is one of the primary influences on Haskell.
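The eager-to-lazy shift is easy to illustrate with thunks (my own sketch of the general idea, not SASL's actual mechanism): if the tail of a list is a delayed computation, "infinite" data structures become unproblematic, because only the parts you inspect ever get evaluated.

```python
# A lazy cons stream: the head is a value, the tail is a zero-argument
# thunk that is forced only on demand.

def nats_from(n):
    """The infinite stream n, n+1, n+2, ..."""
    return (n, lambda: nats_from(n + 1))  # delayed tail

def take(k, stream):
    """Force just the first k elements of a stream."""
    out = []
    while k > 0:
        head, tail_thunk = stream
        out.append(head)
        stream = tail_thunk()  # force one more cell
        k -= 1
    return out

print(take(5, nats_from(0)))  # [0, 1, 2, 3, 4]
```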
So, this leaves out a lot of other languages, such as HOPE and Lazy ML, that were also developed in this space. But what's interesting to me is how all these languages, from ML to Haskell, are so strongly related to Algol, and thus to Pascal, C, Java, and the other languages more familiar to industry.
That got a lot bigger than I intended; I hope someone takes some interest from the digression.
pinealservo | 10 years ago | on: WebAssembly: a binary format for the web
To summarize:
1. Much smaller representation, which will mean faster download times
2. More control-flow structure retained, which means less needs to be rebuilt by JIT compilers
3. As a corollary of #2, easier and more helpful 'decompiling' of code blobs
4. Another corollary of #2: easier analysis for safety properties
pinealservo | 10 years ago | on: WebAssembly: a binary format for the web
GC is generally a good thing precisely because it prevents use-after-free errors and enables programming at a higher level without worrying about memory management. There is often a performance cost to this, it's true, but for large classes of programs it's a cost that's worth paying.
pinealservo | 10 years ago | on: Why 5 x 3 = 5 and 5 and 5 Was Marked Wrong
Although it seems very common to define multiplication as repeated addition in dictionaries and materials for kids, it is in fact only a valid definition for a rather narrow conception of numbers, i.e. the natural numbers. It doesn't work without exceptions for the Integers, the Rationals, or the Reals. Considering that we want students to eventually be able to deal with the Real numbers, I think it would be better to avoid defining multiplication to be something that doesn't work outside of the Naturals! We would be in quite a pickle trying to explain the calculation of the area of a circle in terms of repeated addition...
By calling what they're teaching the 'repeated addition strategy', it seems like they've thought about this; it is indeed a strategy for calculating the product of two natural numbers. But that makes docking a point all the more perplexing, because by the commutative property of multiplication, both repeated-addition schemes are equally valid strategies for computing the same product. And commutativity is generally an axiom, not a derived theorem, in the more general setting, because multiplication is not generally defined in terms of repeated addition; in general, the axioms only say that multiplication distributes over addition: https://en.wikipedia.org/wiki/Field_(mathematics)
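As a quick sanity check of that point, here are both repeated-addition readings of 5 x 3 side by side; they compute the same product, which is exactly what commutativity licenses:

```python
# Both "repeated addition" interpretations of a x b agree on the
# naturals; commutativity is what makes them interchangeable strategies.

def repeated_addition(a, b):
    """Interpret a x b as "add b to itself a times"."""
    total = 0
    for _ in range(a):
        total += b
    return total

assert repeated_addition(5, 3) == 15  # 3 + 3 + 3 + 3 + 3
assert repeated_addition(3, 5) == 15  # 5 + 5 + 5
```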
My kids are actually going through this phase of their curriculum right now, and I know that here, at least, they do teach the commutative property of multiplication fairly quickly after multiplication is introduced. So I'm not really sure what the pedagogical point of grading the assignment this way would be, but perhaps there is one. Fortunately my kids have not run afoul of this kind of thing.
pinealservo | 10 years ago | on: Apple doesn’t allow Chaos Computer Club tvOS application
pinealservo | 10 years ago | on: GNU Hurd 0.7 has been released
An interesting data point, though: QNX had a very clean and modularized 'microkernel-like' network stack called 'io-net'. But due to throughput issues in some situations, they switched to a new architecture a few years ago called 'io-pkt'. This is essentially the kernel networking code from BSD transplanted to a process in QNX; one advantage of this is that there's a large stock of drivers available to port and a lot of people are familiar with the BSD networking model, but some of the lesser-used corners of it weren't fully debugged when I was doing network protocol hacking, and in general it made me sad.
Anyway, you can definitely run full desktop environments on it, and you can especially run a full tablet environment if you buy a recent Blackberry tablet, as RIM now owns QNX and uses it as their latest Blackberry OS. This was, unfortunately, a step backwards in their openness and embrace of Open Source.
pinealservo | 11 years ago | on: Patoline: A modern digital typesetting system
pinealservo | 11 years ago | on: Types Are The Truth
Types in general can be far more expressive than most people know. Very expressive types are not necessarily easy to work with, though.
pinealservo | 11 years ago | on: Types Are The Truth
Assuming you have some sort of specification for your program, you can use a formal system (such as a static type system) to describe aspects of the specification. It's up to you to correctly model the requirements in the system, but it's similarly up to you to model the requirements of the system in your (far larger and more complex) program. The types ensure that the program fits the aspects of the specification you were able to describe with them.
pinealservo | 11 years ago | on: Contiki: The Little-Known Open Source OS That Rules the Internet of Things
In other words, you're thinking like a hobbyist rather than a product designer, or someone making tools for products. That's fine, but realize that embedded development for actual products is a completely different world than hobby raspberry pi hacking. I think the hobbyist embedded development community is awesome, BTW, just rather different than commercial embedded development.
pinealservo | 12 years ago | on: Sugar Love
Maybe you've been reading particularly good advice about wealth, but I can assure you that far more people than "nobody" tell the poor to just acquire the willpower to spend less and earn more.
I agree with your overall point, but I think the same issue plays out in individuals' economic behavior as well as in their eating behavior. People who are advantaged in either arena don't realize it and don't understand their own advantages, and thus give bad advice to the less-advantaged even when they mean well.
pinealservo | 12 years ago | on: Systems Past: The software innovations we actually use
I have some issues with the style of the rest of the article, though. It contains a lot of very interesting, thesis-supporting facts, but they are couched among arbitrary, dubious claims ("only 8 software innovations...") that don't seem well supported on their own.
I mean, yes, you say there are eight and then list eight, but I am not left convinced that those are the ONLY eight. You say that all languages (aside from a bit of backpedaling in the footnotes) are descended from FORTRAN, which is a pretty bold claim to make, but the justification you provide seems to reduce "descended from" to a mostly meaningless "probably borrowed some ideas from" that is hard to base any value judgement on. Surely not all ideas in FORTRAN were misguided!
The whole rest of the article continues in this pattern, distracting from basically good points with brash and sometimes bizarre (from my perspective, at least) statements that seem to betray a wide but very spotty understanding of computing history. Granted, that history has been chaotic and not terribly well documented, but that ought to give one second thoughts about writing in such a definitive and dismissive tone.
I want to repeat that I agree with the general premise, and I think that it's unfortunate that I came away from the article feeling like I disagreed with it due to the problems noted above. I had to re-read the intro to remember the intent. Hopefully this criticism is accepted in the constructive sense in which I offer it, as I think that there's some great insight there that could be more effectively conveyed.
pinealservo | 12 years ago | on: Systems Past: The software innovations we actually use
Most generally, a function is not a number; it is a relation between two sets. One can define such things in a number of ways, but one way is to describe the relation via a formal language such as the lambda calculus. The lambda calculus is not technically a "programming language"; it is a formal system of meaningless symbols together with a set of rules for forming, manipulating, and eliminating them.
Although there is no fixed relation between reduction rules in the lambda calculus and physical resources in a computer, one can still count the number of reduction steps needed to take a function applied to a specific form in its domain to the corresponding form in its co-domain, and this provides a reasonable stand-in for comparing the resource usage of similar computer programs.
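Step counting can be made concrete with a toy normal-order reducer (my own capture-naive sketch; it assumes all bound variable names in a term are distinct, which sidesteps variable capture):

```python
# A tiny normal-order (leftmost-outermost) beta reducer for the untyped
# lambda calculus. Terms: ('var', x) | ('lam', x, body) | ('app', f, a).
# The number of beta steps to normal form is the cost measure.

def subst(t, x, v):
    """Substitute v for free occurrences of x in t (capture-naive)."""
    kind = t[0]
    if kind == 'var':
        return v if t[1] == x else t
    if kind == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def step(t):
    """Do one leftmost-outermost beta reduction; report whether one fired."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                       # a beta redex
            return subst(f[2], f[1], a), True
        f2, did = step(f)
        if did:
            return ('app', f2, a), True
        a2, did = step(a)
        return ('app', f2, a2), did
    if t[0] == 'lam':
        b, did = step(t[2])
        return ('lam', t[1], b), did
    return t, False

def normalize(t):
    """Reduce to normal form, counting beta steps."""
    steps = 0
    while True:
        t, did = step(t)
        if not did:
            return t, steps
        steps += 1

# Church numeral two applied to the identity and z: ((\f.\x. f (f x)) I) z
two = ('lam', 'f', ('lam', 'x',
       ('app', ('var', 'f'), ('app', ('var', 'f'), ('var', 'x')))))
ident = ('lam', 'y', ('var', 'y'))
term = ('app', ('app', two, ident), ('var', 'z'))

nf, steps = normalize(term)
print(nf, steps)  # ('var', 'z') 4
```

Comparing the step counts of two terms denoting the same function gives exactly the kind of resource comparison described above, without ever tying a reduction step to a fixed amount of machine work.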
So, really, computation is not completely foreign to mathematics, and mathematical functions and "computing functions" are not completely different animals, just... distantly related? Some languages are more mathematical in their approach than others.
pinealservo | 12 years ago | on: Systems Past: The software innovations we actually use
So, they are not strictly functions in the normal set-theoretic sense, but they are (mostly?) mathematically accurate continuous functions between Scott Domains. As the semantics are not formally defined, there is a limit to what you can say about them, but there is an interesting paper entitled "Fast and Loose Reasoning is Morally Correct" that shows that treating Haskell as if it worked in terms of total set-theoretic functions is a reasonable thing to do in some practical circumstances in the use of Haskell.
If you want really pure, total set-theoretic functions in a programming language, you will have to go to a total functional language such as Coq or Agda. You lose Turing-completeness when you stick to total functions, though, and most people only type-check their functions rather than actually running them (this is not as crazy as it sounds--these languages are primarily used to help formulate formal proofs, which are finished when they pass the type-checker).
In any case, the bit in the blog about FORTRAN and everything after it conflating procedures and algebraic functions strikes me as nonsense, at least without further explanation to clarify or justify it.
pinealservo | 12 years ago | on: Why C/C++ Is My One True Love
On the one hand, C has a tiny runtime that can be omitted or replaced. On the other, it doesn't have all sorts of things that would be helpful but not require runtime overhead.
My biggest complaint is lack of a module system. #include is a stupid ugly hack. Second would be the way it doesn't let you portably define and work with low-level data representations in a standard way. I don't count manual bit masking and shifting for this.
I would also really prefer a more advanced type system, including parametric polymorphism integrated with the module system, but not via templates.