I’ve been a proponent of the somewhat unique utility of the concatenative calculus going on two decades now, so it is great to see a thoughtful attempt at a modern and practical implementation. Looking forward to seeing how it progresses.
> The first three intrinsics, swap, clone, and drop, directly correspond to the three common structural rules from proof theory... By restricting the use of these intrinsics, we can cover the full range of substructural logics.
> There will inevitably be some computations which can be more compactly or efficiently expressed in the untyped concatenative calculus than in the untyped lambda calculus, and vice versa.
What would be an example of a computation more compactly expressed in the untyped concatenative calculus than in the binary lambda calculus [1] (where compactness can be directly measured as size in bits) ?
I suppose that your programs could also be represented in binary, with 3 bits per symbol encoding the 6 intrinsics plus the two symbols '[' and ']'; the end of a program could then be encoded by an unmatched ']' at the end.
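For concreteness, such a 3-bit encoding could look like the sketch below. The intrinsic names beyond swap, clone, and drop are assumptions (the post only names those three explicitly); the encoding scheme itself is exactly the one proposed above.

```python
# Hypothetical 3-bit encoding of the 6 intrinsics plus '[' and ']'.
# Only swap, clone, and drop are named in the post; the rest are guesses.
SYMBOLS = ["swap", "clone", "drop", "quote", "compose", "apply", "[", "]"]
CODE = {s: format(i, "03b") for i, s in enumerate(SYMBOLS)}

def encode(program):
    """Encode a token list; an unmatched trailing ']' marks end of program."""
    return "".join(CODE[tok] for tok in program) + CODE["]"]

def decode(bits):
    toks, depth = [], 0
    for i in range(0, len(bits), 3):
        sym = SYMBOLS[int(bits[i:i+3], 2)]
        if sym == "[":
            depth += 1
        elif sym == "]":
            if depth == 0:          # unmatched ']' terminates the program
                return toks
            depth -= 1
        toks.append(sym)
    return toks

prog = ["[", "clone", "apply", "]", "clone", "apply"]
assert decode(encode(prog)) == prog
```

With the terminating ']', a program of n symbols costs 3(n+1) bits, which makes size comparisons against the binary lambda calculus direct.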
If you take a close look at that sentence, I was careful to limit the compactness claim to this pair of languages :-) I'm not surprised that there are pairs of languages where one is uniformly more compact or efficient than the other.
Formal methods have been used successfully for decades; it's not just a pipe dream. Perfect software should ideally be something like ultra-low-defect software, though (that's the term the AdaCore folks use).
There are also other projects that aim to make formal software development much easier [0][1] and of course there's SPARK Ada.
The formal verification aspect doesn't really have anything to do with Forth, but I do expect the multistack concatenative calculus to be much more ergonomic than applicative languages for linearity, which will play an important role in formal verification. We'll have to see how it all plays out, though.
But that's my goal. I'm not really interested in writing just another programming language. Even if I fail miserably, hopefully breaking the work up into small pieces and publishing as I go will at least inspire others with similar goals.
1. Is the "main stack" (the unnamed one, the one your program begins with focus on) specially identifiable in some way? I note further down this comment tree that, if the "main stack" is simply whichever one happens to be in focus, there are some exciting possibilities for coroutines that are simply resumed in focus of the stack that they treat as their main.
2. Are stack names always statically known in Dawn? Put differently, can you generate fresh stack names at runtime? If so, what does that look like, and how does that impact a theoretical reduction from multi-stack Dawn to single-stack Dawn? If not, can you envision additional intrinsics for supporting coroutines without requiring the user program to virtualize stacks manually?
3. Do you have any thoughts on concurrency in general in Dawn? I'm a big fan of the mental model offered by concurrent logic paradigms (especially Saraswat's concurrent constraint programming), where concurrent systems interact monotonically over shared logical variables. I think there's a subtle and exciting alignment between that concept and your named stacks. Does this line up with any of your thoughts?
4. (A bonus) What happens when an operation is attempted on an empty stack? I assume some form of exceptional case occurs, but in the face of concurrency, would it make sense to yield execution until something else puts an element onto the stack? (Probably not, because of FIFO semantics, but it's a fun question!)
This is just to note that the [clone apply] clone apply pattern for recursion is very close to the way the lambda calculus Y combinator reduces via graph rewrites, after it is translated into a graph [1]. The graphical reductions lead to just a pair of trivalent nodes, one of them a sort of fan-out (clone) and the other an application [2]. There is no lambda term which can be directly translated into this graph, because of the wiring of the graph.
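To make the cycle concrete, here is a minimal small-step sketch (in Python, with assumed semantics: quotations are pushed, clone duplicates the top of the stack, apply unquotes it) showing that [clone apply] clone apply reduces back to itself:

```python
def step(stack, program):
    """One small-step reduction; quotations are represented as tuples."""
    word, rest = program[0], program[1:]
    if isinstance(word, tuple):              # a quotation: push it
        return stack + [word], rest
    if word == "clone":                      # duplicate top of stack
        return stack + [stack[-1]], rest
    if word == "apply":                      # unquote top of stack
        return stack[:-1], list(stack[-1]) + rest
    raise ValueError(word)

omega = [("clone", "apply"), "clone", "apply"]
state = step([], omega)                      # push the quotation
cycle = step(*step(*state))                  # clone, then apply
assert cycle == state                        # the machine loops forever
```

The fixed point here mirrors the fan-out/application pair in the graph picture: clone plays the fan-out and apply the application node.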
I like your project, I have been a fan of both Haskell and Factor, and happy to see somebody trying to unify their qualities. I like Haskell's type system and robust fundamentals, but I also like the effectively syntax-less nature of Factor, and the evaluation order being the same as read order.
I have a question: Why not base this project on Factor, which has a pretty decent existing implementation and compiler? (But few people use it..)
I think one could add better typing to Factor by creating a separate module for words (like docs and tests are now) which would describe the types of each word (and also other stipulations). Then these could be type checked separately.
As an aside, I always thought that it would be nice if a programming language could specify "stipulations", kinda like assertions but instead of having them inside word definitions (programs), have them outside (so that you could refer to them). For example, if something is a monad instance, the stipulation could specify that it has to follow monad laws. It would be then possible to automatically verify these stipulations (like QuickCheck does it, for example) or just assume they are true and use them for automated program deriving (or during compilation for optimization). A word annotated as having a certain type is just a special case of stipulation. Factor already has some stipulations like that in the form of additional word annotations (e.g. the word is free of side effects).
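A QuickCheck-style check of the monad-laws stipulation could look like the following sketch for the list monad (all names are illustrative; this is not Factor's actual annotation machinery):

```python
import random

# Randomized check of the three monad laws for the list monad,
# in the spirit of the "stipulation" idea above.
def unit(x):
    return [x]

def bind(xs, f):
    return [y for x in xs for y in f(x)]

def random_kleisli():
    n = random.randint(0, 3)
    return lambda x: [x + i for i in range(n)]

for _ in range(100):
    x = random.randint(-10, 10)
    xs = [random.randint(-10, 10) for _ in range(random.randint(0, 4))]
    f, g = random_kleisli(), random_kleisli()
    assert bind(unit(x), f) == f(x)                       # left identity
    assert bind(xs, unit) == xs                           # right identity
    assert bind(bind(xs, f), g) == \
           bind(xs, lambda y: bind(f(y), g))              # associativity
```

A stipulation checker could either run such tests, or (as suggested) simply assume the laws hold and exploit them for rewriting and optimization.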
> to democratize formal software verification in order to make it much more feasible and realistic to write perfect software
To me that sounds like it will only appeal to people who are curious by nature or inclined to try new programming languages, as the language seems like an unfinished typed APL dialect; but perhaps I didn't get the idea well enough.
Could you add some practical examples next to the theoretical points at each step?
Were you considering explicitly splitting the stack and the queue (similarly to http://nsl.com/k/xy/xy.htm)? In the current "flat" notation, the active word to be applied is implicit.
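For readers unfamiliar with XY, here is a minimal sketch of what an explicit stack/queue split looks like (illustrative semantics, not Dawn's or XY's actual implementation): X is the data stack, Y is the queue of pending words, so the active word is always the visible head of Y.

```python
from collections import deque

def run(program):
    """Evaluate a program with an explicit stack X and word queue Y."""
    X, Y = [], deque(program)
    while Y:
        word = Y.popleft()           # the active word is explicit
        if callable(word):
            word(X, Y)
        else:
            X.append(word)           # literals are pushed
    return X

# A few illustrative words operating on the (stack, queue) pair:
swap = lambda X, Y: X.append(X.pop(-2))
dup  = lambda X, Y: X.append(X[-1])
add  = lambda X, Y: X.append(X.pop() + X.pop())

assert run([2, 3, dup, add, swap]) == [6, 2]
```

Because words receive Y as well as X, control operators can be expressed by editing the queue, which is exactly what the flat notation leaves implicit.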
"In fact, Turing completeness can be achieved with as few as two intrinsics", are you referring to the SK combinators, or in terms of the intrinsics in your language?
Dawn looks very different to program in than a traditional language. Will that always be true, or can you (and do you want to!) build layers on top that are more standard-looking?
jandrewrogers | 4 years ago
andrewflnr | 4 years ago
Dang, I think you buried the lede.
smaddox | 4 years ago
tromp | 4 years ago
[1] https://tromp.github.io/cl/Binary_lambda_calculus.html
smaddox | 4 years ago
adenozine | 4 years ago
Okay.
I like Forth too, but that's laying it on a tad thick, imo.
MaxBarraclough | 4 years ago
[0] https://github.com/zetzit/zz
[1] https://github.com/dafny-lang/dafny
smaddox | 4 years ago
smaddox | 4 years ago
kryptiskt | 4 years ago
Twisol | 4 years ago
1. Is the "main stack" (the unnamed one, the one your program begins with focus on) specially identifiable in some way? I note further down this comment tree that, if the "main stack" is simply whichever one happens to be in focus, there are some exciting possibilities for coroutines that are simply resumed in focus of the stack that they treat as their main.
2. Are stack names always statically known in Dawn? Put differently, can you generate fresh stack names at runtime? If so, what does that look like, and how does that impact a theoretical reduction from multi-stack Dawn to single-stack Dawn? If not, can you envision additional intrinsics for supporting coroutines without requiring the user program to virtualize stacks manually?
3. Do you have any thoughts on concurrency in general in Dawn? I'm a big fan of the mental model offered by concurrent logic paradigms (especially Saraswat's concurrent constraint programming), where concurrent systems interact monotonically over shared logical variables. I think there's a subtle and exciting alignment between that concept and your named stacks. Does this line up with any of your thoughts?
4. (A bonus) What happens when an operation is attempted on an empty stack? I assume some form of exceptional case occurs, but in the face of concurrency, would it make sense to yield execution until something else puts an element onto the stack? (Probably not, because of FIFO semantics, but it's a fun question!)
clone_apply | 4 years ago
This coincidence is curious.
[1] you can check this at https://mbuliga.github.io/quinegraphs/lambda2mol.html#y_comb...
where you can see how the term Y (\x.x) reduces. In the λ>mol textarea, delete what is written there and just copy/paste
(\g.((\x.(g (x x))) (\x.(g (x x))))) input
then click the λ>mol button. Then the start button. Stop whenever you want.
[2] the graph akin to a pair clone - apply can be seen at https://mbuliga.github.io/quinegraphs/quinelab.html#howtolab
where you copy/paste this graph as written in the mol notation:
A input y x^FOE x output y
"A" is the apply node and "FOE" is a clone node (say).
js8 | 4 years ago
dwenzek | 4 years ago
For instance, how is the main example of the introduction [1] interpreted?
Notably, how should the second use of $y be interpreted, given that it only makes sense because the former y value was duplicated by the first $y call?
[1] https://www.dawn-lang.org/posts/introducing-dawn-(part-1)
nudpiedo | 4 years ago
artemonster | 4 years ago
atsmyles | 4 years ago
dash2 | 4 years ago
artemonster | 4 years ago
AnthonBerg | 4 years ago
kragen | 4 years ago
johnthescott | 4 years ago