Veliladon | 4 months ago

> The ability to make number types that were limited in their range is really useful for certain classes of bugs.

Yes! I would kill to get Ada's number range feature in Rust!
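For anyone unfamiliar with the Ada feature: you can declare something like `type Percent is range 0 .. 100` and the compiler enforces the bounds. A rough hand-rolled stand-in in today's stable Rust (the `Percent` type here is purely illustrative, not anything from std):

```rust
// A hand-rolled approximation of Ada's `type Percent is range 0 .. 100`.
// Built-in pattern types would let the compiler generate this for us.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Percent(u8);

impl Percent {
    /// Returns None instead of accepting an out-of-range value.
    fn new(v: u8) -> Option<Percent> {
        (v <= 100).then_some(Percent(v))
    }
}

fn main() {
    assert!(Percent::new(42).is_some());
    assert!(Percent::new(101).is_none());
}
```

The check happens once at construction, so every `Percent` in circulation is known to be in range, which is the class of bugs the range type eliminates.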

weinzierl | 4 months ago

It's being worked on under the name "pattern types", mainly by Oli (oli-obk) Scherer I think, who has an Ada background.

Can't tell you what the current state is, but this should give you the keywords to find out.

Also, here is a talk Oli gave in the Ada track at FOSDEM this year: https://hachyderm.io/@oli/113970047617836816

afdbcreid | 4 months ago

AFAIK the current status is that it's internal to std (used to implement `NonNull` and friends) and not planned to be exposed.

There were some talks about general pattern types, but it's not even approved as an experiment yet, let alone at the RFC or stabilization stage.
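For the curious, the user-visible payoff of those internal range types is the niche optimization on the `NonZero` family: because zero is excluded from the value range, `Option` can use it as the `None` representation. A quick check in stable Rust:

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // The niche left by the excluded zero lets Option<NonZeroU32>
    // fit in the same 4 bytes as a plain u32.
    assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());

    // Zero is rejected at construction; everything else round-trips.
    assert!(NonZeroU32::new(0).is_none());
    assert_eq!(NonZeroU32::new(7).unwrap().get(), 7);
}
```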

pjmlp | 4 months ago

That feature actually comes from Pascal and Modula-2, and only later made its way into Ada.

For some strange reason people always associate it with Ada.

jdrek1 | 4 months ago

I would guess that Ada is simply better known. Keep in mind that tech exploded in the past ~3.5 decades, whereas those languages are much older and lost the popularity contest. If you ask most people about older languages, the replies other than the obvious C and (kind of wrongly, but well) C++ get thin really quickly. COBOL, Ada, Fortran, and Lisp are probably the ones people are most aware of, but other than that?

Veliladon | 4 months ago

For me it's because I learned Ada in college.

18-year-old me couldn't appreciate how beautiful a language it is, but in my 40s I finally do.

sehugg | 4 months ago

Turbo Pascal could check ranges on assignment with the {$R+} directive, and Delphi could check arithmetic overflow with {$Q+}. Of course, nobody wanted to waste the cycles to turn those on :)
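For comparison, Rust's version of that trade-off is overflow checks that are on by default in debug builds and off in release (unless you set `overflow-checks = true` in the Cargo profile), with explicit `checked_*` / `wrapping_*` methods when you want a specific behavior regardless of build mode. A small stable-Rust sketch:

```rust
fn main() {
    let x: u8 = 250;

    // Explicit checked arithmetic: None instead of a silent wraparound.
    assert_eq!(x.checked_add(5), Some(255));
    assert_eq!(x.checked_add(6), None);

    // Explicit wrapping, when wraparound is actually what you want.
    assert_eq!(x.wrapping_add(6), 0);
}
```

Making the choice per call site, rather than a global compiler switch, means the "wasted cycles" argument only applies where you actually asked for the check.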

prerok | 4 months ago

I would argue that was one of the reasons why those languages lost.

I distinctly remember the arguments about functions that only worked on an array of 10. Oh, you want an array of 12? Copy-paste the function so it takes an array of 12. What a load of BS.

It took Pascal years to drop that constraint, but by then C had already won.

I never ever wanted the compiler or runtime to check a subrange of ints. Ever. Overflow as a program crash would be better, and that I do find useful, but arbitrary ranges chosen by the programmer? No thanks. To make matters worse, those checks apply even to intermediate results.

I realize this opinion is based only on my own experience, so I would appreciate a counterexample where it's a benefit (and yes, I worked on production code written in Pascal, a French variant even, and the code after migrating it to C was hilariously more readable and maintainable).
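For what it's worth, the array-of-10 complaint is exactly what const generics address in modern Rust: one function covers every length, and the length is still checked at compile time. A minimal sketch:

```rust
// One generic function instead of one copy per array length.
// N is a compile-time constant, so no size information is lost.
fn sum<const N: usize>(a: [i32; N]) -> i32 {
    a.iter().sum()
}

fn main() {
    assert_eq!(sum([1, 2, 3]), 6);   // N = 3
    assert_eq!(sum([10; 12]), 120);  // N = 12, no copy-paste needed
}
```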