Ada is a criminally underrated tool that is unfortunately probably doomed to take a perpetual back seat to Rust, despite Rust not solving all the problems Ada does. It's really sad that so many people's idea of safe programming is limited strictly to memory safety, and that because Ada's baseline memory safety (SPARK is a different story) isn't as robust as Rust's borrow checker (in the sense that Ada doesn't have one, and instead just avoids dynamic allocation whenever possible), they dismiss it as a relic of the past.
Ada's type system, SPARK, general clarity on behavior, etc. allows you to structure programs in a manner that makes it hard to Hold It Wrong, especially when dealing with embedded firmware that has all sorts of hardware gotchas. I haven't gotten the chance to use the Tasking primitives in anger yet, but I have a strong suspicion that they're going to bail my ass out of a lot of problems just like the rest of the language has.
My team started at a new employer and made the jump from C to Ada for MCU firmware. We control things that spin real fast and our previous experiences with C definitely resulted in some screwups that left you weak in the knees for a bit. There was some initial hesitation but nobody has any interest in going back now. Rust was floated but we're all glad we didn't opt for it -- memory safety on a system that never allocates memory doesn't hold a candle to Ada's bitfield mapping/representation clauses, ranged types, decimal types, reference manual, formal verification options, concern from the powers that be about providing a stable and trustworthy environment for doing shit that you really don't want to get wrong, etc.
Ada is a lot of fun and a great thing, which is ruined (and blessed) by the fact that there's de facto only one implementation and one company behind it out in the open, and that it's semi-closed and a licensing PITA. There were improvements over the years by AdaCore, but I think this altogether hurt the adoption of such a great language: there's no other wide-open implementation (like Rust has). If you want to see an extreme example of that kind of hurt, take a look at Allegro CL and Franz; imagine having that out in the open and what it'd do for CL. But at least CL has great open alternatives like SBCL, whereas Ada doesn't.
The popularity of a programming language is not always about what the language offers. I would say a comprehensive, well-documented, mature set of standard libraries for its target audience is far more important (R, Python, and Go are notable examples). Last time I checked, Ada doesn't even have a de facto, high-quality TLS/crypto library, let alone the various essential protocol/format codecs, yet the core team (AdaCore I assume) puts a lot of resources into offering a few sophisticated flagship IDEs that potential hobbyists would never use (they already have vim, emacs or vscode). I understand that as a business they have to sell something for revenue, and they cannot sell standard libraries. So that's probably the dilemma that keeps us from having the nice things Ada needs to take off.
For what it's worth, many Rust developers (including myself) are also Ada fans.
Note that ranged types, decimal types, etc. can fairly easily be emulated in Rust, with what I find is a clearer error mechanism.
SPARK is, of course, extremely cool :) There are several ways to work with theorem provers and/or model checkers in Rust, but nothing as nicely integrated as SPARK so far.
It suffered from high compiler prices when it had its opportunity, plus Modula-2 and Object Pascal being good enough for those who cared about safety on 16-bit home computers.
It also didn't help that for the UNIX vendors that had Ada compilers, like Sun, Ada was an additional purchase on top of the development SKU that already included C and C++.
I'm a bit disappointed that we've ended up with Rust in the kernel but not Ada. The kernel relies on GCC and there's already an Ada compiler in GCC, so it wouldn't require adding another compiler as a build requirement like Rust does.
There are a couple of major advantages that Ada could have over Rust in the Linux kernel for safe drivers:
1. Having the option to use SPARK for fully verified code provides a superset of the correctness guarantees provided by Rust. As mentioned in the parent comment, Rust focuses purely on memory safety, whereas SPARK can provide partial or complete verification of functional correctness. Even without writing any contracts, just preventing things like arithmetic overflows is incredibly useful.
2. Representation clauses almost entirely eliminate the need for the error-prone manual (de-)serialisation that's littered all over Linux driver code: https://www.adaic.org/resources/add_content/standards/22rm/h...
I never even knew that this "Ancient Language" had dependent types. Always thought it was a "modern" invention of snazzy newer academic languages like Idris, etc.
But it's easy to figure out why it didn't become popular: C, C++, and every other top-10 language had free compilers available to everyone. Ada didn't during the explosive era of computing expansion. Also, not a problem nowadays with IDE auto-complete/snippets, but the language was too verbose for the older generation of programmers.
Memory safety and the borrow checker are useful even in the absence of dynamic memory allocation. This still doesn't bring Rust and Ada to the same place, but it is important to clarify that piece.
> We control things that spin real fast and our previous experiences with C definitely resulted in some screwups that left you weak in the knees for a bit.
Ha, you could almost read this as a stuxnet joke
I really wanted to use Ada, or at least learn it. The concepts are nice, but I gave up when I started looking into Unicode support. It was wild, a bit discouraging. Or has the situation changed? What's the Unicode status in Ada?
Coming from the type theory side with only a passing glance at Ada, I am nevertheless sure: this is not what type theorists mean when they talk about dependently typed languages. Such languages derive from the formulation of Per Martin-Löf (also called Intuitionistic Type Theory), they include dependent sum and dependent product types, and they allow type checkers to prove complex statements about code. (The history of dependent types is intertwined with the history of formalizing mathematics; dependent types were designed to encode essentially arbitrary mathematical statements.)
The interesting feature of Ada here seems to be what it calls "subtype predicates". As you've explained, these come in a "dynamic" flavor, which are a nice syntax for runtime assertions, and a static flavor, which are compile-time checked but restricted to certain static expressions (per https://ada-lang.io/docs/arm/AA-3/AA-3.2#p15_3_3.2.4).
An example of something you can do in a dependently typed language is write a sorting function in such a way that the type checker proves that the output will always be in sorted order. I am pretty sure this cannot be done in Ada; checking at runtime does not count!
I do believe (having heard from multiple sources) that Ada's type system was ahead of its time and its success in creating practical programs that are likely to be correct is probably underrated. But I'm not here just to legislate semantics; one should be aware that there is something vastly more powerful out there called "dependent types" (even if that power is not likely to come into most people's day-to-day).
(Unfortunately Wikipedia is quite poor on this topic; you will see, for example, that on the Talk page someone asked "Is Ada really dependently typed?" two years ago; no reply. And it makes no sense to say that Ada has "tactics" but not "proof terms"; tactics are a way of generating proof terms. There are many better resources out there (especially ones associated with the languages Agda, Coq (currently being renamed Rocq), and Lean, e.g. https://lean-lang.org/theorem_proving_in_lean4/dependent_typ...). But be warned, there is no "short version": dependent types cannot be explained in a sentence, and they are not something you will arrive at with enough "hacking away".)
> An example of something you can do in a dependently typed language is write a sorting function in such a way that the type checker proves that the output will always be in sorted order. I am pretty sure this cannot be done in Ada; checking at runtime does not count!
It actually can be done in Ada, but not purely with the type system; instead we rely on SPARK, which converts Ada code and passes it through various automatic theorem provers. Some examples of fully proven sorting functions are here: https://github.com/AdaCore/spark2014/blob/master/testsuite/g...
You can also see from the above code just how good theorem provers and SPARK are now, given the reasonably low number of assertions required to both prove that the output is sorted and prove that the input and output contain the same elements, not to mention all the hidden proofs relating to integer overflow, out-of-bounds access, etc.
You could maybe do all this with types and SPARK, but it's not the approach that would usually be taken.
Do note however that the "proof" part of dependent types requires being able to evaluate arbitrary parts of the program at "compile time". (As a fact of the matter, there isn't even a clean phase separation between compile time and run time in dependently-typed languages; the distinction can only be reintroduced after-the-fact via "program extraction".) So, in a sense, types may depend on values in a dependently-typed language but this is merely an elaborate form of meta-programming, it need not establish a true dependence from runtime values. Whether Ada qualifies as a 'true' dependently-typed language, in this view, would depend on how strong its forms of guaranteed compile-time evaluation and meta-programming are.
It does look like the "discriminant" system of Ada shares key similarities with what dependently-typed languages call a "dependent sum", a generalized record type where "later" elements of the record can depend on "earlier" ones.
"Dependent products", on the other hand, can be viewed as an extended version of generic functions, although they may also suffice to account for e.g. the examples given by OP of Ada's runtime range types and runtime-sized arrays. The key here being that the representation of a type is indeed given as a "function" of the value parameters that are "depended" on.
Maybe they're not implying this kind of limited dependent type system but surely it is still dependently typed? It's just not the "full fat" dependent typing.
Another example of a language with limited dependent typing is Sail. It has "lightweight" dependent types for integers and array lengths (pretty similar to Ada from what it sounds like).
It's very good in my experience - it lets you do a lot of powerful stuff without having to have a PhD in formal verification (again, sounds similar to Ada).
> An example of something you can do in a dependently typed language is write a sorting function in such a way that the type checker proves that the output will always be in sorted order.
Yeah you can't do that but you can have the type checker say things like "n^m is positive if n is positive or even" or "foo(x) -> (bits(n), bits(m)) with m+n=x" (not the actual syntax). I'm pretty sure you can't do that stuff in a type system without dependent types right?
Would you say Lean is a somewhat learnable language for somebody who only has cursory exposure to functional programming and static types? I’ve almost exclusively used typescript for the last few years, except for some clojure in the last few months.
Sometimes I find a neat language, but my very TS-oriented brain has a hard time getting into it.
For anyone interested in trying Ada, I've written an installer that brings in the entire toolchain with a copy-and-paste command line on macOS and Linux: https://getada.dev
Here is some previous discussion:
Show HN: Getada: rustup-like installer for Ada's toolchain/package manager 194 points | 115 comments: https://news.ycombinator.com/item?id=40132373
There is a quickstart with a link to a large tutorial: https://www.getada.dev/how-to-use-alire.html
If you wanna try out Ada without even installing anything, you can also check out https://learn.adacore.com/
One other neat thing about discriminated records is that you're not limited to just a single field with a variable size, you can also write something like this:
type My_Record (A, B : My_Integer) is record
X : My_Array (1 .. A);
Y : My_Array (1 .. B);
end record;
A record that's created from this will have those arrays tightly packed rather than leaving space at the end of the first one like you might expect (this might be compiler-dependent, but it's definitely how GCC does it). Also note that these values can be set at runtime, so this isn't just a generic in disguise (although in Ada you can also instantiate generics at runtime).
Very interesting. As a Zig enthusiast I've long suspected that Ada has the ability to express more invariants and types that we will want in Zig (some, like ranges, are planned, last I heard). I'm definitely interested to learn more about Ada.
I've seen a single company that does warehouse management software out of Sweden that advertises (in job listings) that they're using it. Otherwise, it's pretty slim pickings if you're not applying in its wheelhouse (high integrity systems -- aerospace, defense, medical, etc).
If you do microcontroller firmware development, I'd say it's perfectly reasonable to float it for a smaller project and just give it a spin. The language is significantly more modern/sane than C, so you're not really exposing yourself to much talent risk. There are no gaping holes in the environment, experienced firmware devs will adjust easily, and new devs will feel more at home with the facilities provided.
There are probably more jobs available for VHDL (a language influenced by Ada) than for Ada itself. Of course, as a hardware description language, that puts you on the hardware side of things. Also worth noting that it's more popular in Europe (Verilog seems to have won out in the US).
I am not versed in Ada, but Ada does not seem to have dependent types at all; in fact, the author doesn't seem to understand what dependent types are. All his examples seem to revolve around arrays and bounded integers, so I will stick to those examples (although DT are far richer than that).
In a language with dependent types you don't merely have arrays with arbitrary bounds, you get a proof that array access is always within bounds. AFAICT this is missing in Ada, even when SPARK is employed. Similarly with bounded integers: in a DT language you don't get runtime bounds checks, it's all compile-time proofs.
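As a toy sketch of what that looks like, here's the idea in Lean 4 (illustrative only, not a claim about Ada's semantics):

```lean
-- `List.get` takes an index of type `Fin xs.length`: a natural number
-- bundled with a proof that it is below the list's length. The lookup is
-- therefore total and needs no runtime bounds check.
example : Nat := [10, 20, 30].get ⟨1, by decide⟩

-- An out-of-bounds index is a *type error*, not a runtime failure:
-- `[10, 20, 30].get ⟨5, by decide⟩` fails to elaborate, since `5 < 3` is false.
```

The proof obligation (`1 < 3` above) is discharged at compile time; nothing about it survives into the compiled program.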
In Ada, even the name of the feature makes it pretty clear that it's just runtime verification:
type My_Record (Top, Bottom : My_Integer) is record
Field : My_Array(Bottom .. Top);
end record
with Dynamic_Predicate => Bottom <= Top;
In a language with DT or refinement types it wouldn't be a dynamic predicate at all: it would be a compile-time proof!
SPARK does attempt to prove certain properties of programs automatically, including things like bounds checking, which is great, but it's all best-effort and nowhere near the same level of capability as DT (or even refinement types). Of course it's far more lightweight (although it can be argued that systems based on refinement types are just as lightweight).
It's very clear that people developing SPARK know a lot about type theory and formal verification, and use as much of the theory as possible to make verification of Ada programs as cheap and ergonomic as possible, but to claim that Ada has DT is quite a stretch. People are using DT to do formal verification of C programs but that doesn't mean that C has dependent types.
> you get a proof that array access is always within bounds
But the way you get 'proofs' in dependently-typed languages is just by building a tree of function evaluations that demonstrate a given logical inference at compile time, and that disappear (are translated to a unit type) when 'program extraction' is performed on the code. This is ultimately a kind of meta-programming. And in a DT language you would still need runtime checks within your extracted code to ensure that program inputs provided at runtime satisfy the logical conditions that the "proof" part of the program relies on: the need for runtime checking is thus reduced to a bare minimum, but not fully eliminated.
One thing that’s kind of interesting about SPARK in particular - all the contracts get compiled to why3ml as an intermediate step before running through the solvers. If there are any VCs that can’t be discharged using the automatic provers, you can manually prove them using Coq: https://blog.adacore.com/using-coq-to-verify-spark-2014-code
I think the lines between what some consider true DT and what is possible w/ Ada might be more blurred than people expect.
You don't need explicit language support to do this -- it's a fairly common practice in videogame development, but we usually call it an arena/scratch buffer. Ryan Fleury has a wonderful article where he goes into different memory management strategies at length [1].
It's just a static buffer that you can use for temporary allocations, e.g. to return an array of 50 int32's to a parent stack frame you can just allocate `50*sizeof(int32)` bytes on that buffer and return a pointer to the beginning, instead of a heap-allocated `std::vector<int32>`. Every allocation advances the head of the buffer, so you can have multiple allocations alive at the same time.
Every once in a while you need to free the buffer, which just means resetting its head back to the start - in the case of games the beginning of a new frame is a convenient point to do that.
This doesn't solve dynamic memory in general, as it can't be used for long-lived allocations, it just saves you a bunch of temporary heap allocations. It doesn't solve all problems that RAII does either, as RAII is often used for resources other than memory - files, sockets, locks, ...
[1] https://www.rfleury.com/p/untangling-lifetimes-the-arena-all...
Technically one does not need the second stack to implement this. One can use the main stack to place all dynamically sized things. The trick is then, on return, to copy the dynamically sized thing so that it sits just before the caller's return address stored on the stack. The caller will then see it as if the new thing was allocated on its stack after the call.
But using the second stack is just simpler, avoids the extra copies, and is more compatible with the mainstream ABI.
Not really much other than tangential mentions. I did write an addendum to the OP going over the secondary stack as well as where you still use normal dynamic allocation in Ada: https://nytpu.com/gemlog/2024-12-27-2
All my knowledge on it comes from some documentation on configuring the secondary stack for embedded targets and from comments+diagrams in the GCC GNAT source code (and maybe I saw a conversation on it on the comp.lang.ada Usenet group at some point…): https://docs.adacore.com/gnat_ugx-docs/html/gnat_ugx/gnat_ug... https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=gcc/ada/libgnat/...
Forth is another language with separate control flow and data stacks, for similar reasons, although at a much lower level - the stacks are a part of the language spec that is fully exposed to you, not just an implementation detail.