top | item 45475118

plainOldText | 4 months ago

Nim was inspired by Ada & Modula, and has subranges [1]:

  type
    Age = range[0..200]

  let ageWorks = 200.Age
  let ageFails = 201.Age
Then at compile time:

  $ nim c main.nim
  Error: 201 can't be converted to Age
[1] https://nim-lang.org/docs/tut1.html#advanced-types-subranges

wucke13|4 months ago

I know quite a few people in the safety/aviation domain who dislike subranges, because they insert run-time checks that are not easily traceable to source code, thus escaping the trifecta of requirements/tests/source code (each of which must be traceable to/covered by the others).

Weirdly, when going through the higher assurance levels in aviation, defensive programming becomes more costly, because it complicates the satisfaction of assurance objectives. SQLite (whose test suite reaches MC/DC coverage, the most rigorous coverage criterion asked for in aviation) has a nice paragraph on the friction between MC/DC and defensive programming:

https://www.sqlite.org/testing.html#tension_between_fuzz_tes...

nine_k|4 months ago

Ideally, a compiler can statically prove that values stay within the range; it's no different than proving that values of an enumeration type are valid. The only places where a check is needed are conversions from other types, which are explicit and traceable.
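To make that concrete in Java rather than Nim (a hypothetical sketch; the class and method names are mine, not any library's): a wrapper type whose constructor is private, so the one runtime check lives at the single explicit conversion from `int`. Everything downstream that holds an `Age` needs no further checks.

```java
// Hypothetical sketch: the only place a runtime check is needed is the
// explicit, traceable conversion from a wider type (plain int).
final class Age {
    static final int MAX = 200;
    final int value;

    private Age(int value) { this.value = value; }

    // The single conversion point; everywhere else, Age is valid by construction.
    static Age of(int raw) {
        if (raw < 0 || raw > MAX)
            throw new IllegalArgumentException("out of range: " + raw);
        return new Age(raw);
    }
}
```

The check is easy to point at in a review or a requirements trace, because it exists in exactly one place in the source.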

jordanb|4 months ago

This is basically excuses made by C people for using a language that wasn't designed for, and isn't suitable for, safety-critical software. "We didn't even need that feature!"

Ada's compile time verification is very good. With SPARK it's even better.

Runtime constraints are removable via pragma, so there's no tradeoff at all in having them in the language. A single pragma turns them into static-analysis annotations with no runtime consequences.

vlovich123|4 months ago

I like how better, more reliable code is more expensive to certify, and the problem is somehow the code rather than the certification criteria/process being flawed.

naasking|4 months ago

> as it inserts run-time checks that are not easily traceable to source code

Modifying a compiler to emit a message at every point where a runtime check is auto-inserted should be pretty simple. If this were really that much of an issue, it would have been addressed by now.

zeroq|4 months ago

Can you help me understand the context in which this would be far more beneficial than having a validation function, like this in Java:

  int validate(int age) {
    if (age <= 200) return age;
    else throw new Error();
  }

  int works = validate(200);
  int fails = validate(201);

  int hmmm = works + 1;

dwattttt|4 months ago

To elaborate on the sibling's compile-time vs. run-time answer: if it fails at compile time you'll know it's a problem, and then have the choice not to enforce that check there.

If it fails at run time, it could be the reason you get paged at 1am because everything's broken.

lock1|4 months ago

Like other sibling replies said, subranges (or more generally, "refinement types") are mostly about compile-time guarantees. Your snippet illustrates a potential footgun: a post-validation operation might unknowingly violate the invariant.

It's a good fit for the "Parse, don't validate" article (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...). Instead of writing a function that accepts `int` and returns `int` or throws an exception, create a new type that enforces "`int` less than or equal to 200":

  class LEQ200 { ... }
  LEQ200 validate(int age) throws Exception {
      if (age <= 200) return new LEQ200(age);
      else throw new Exception();
  }

  LEQ200 works = validate(200);
  // LEQ200 fails = validate(201);
  // LEQ200 hmmm = works + 1; // compile error in Java
  LEQ200 hmmm = works.add(1); // throws an exception, or returns Haskell's Either / Rust's Result
Something like this can be simulated with Java classes, but it's certainly not ergonomic and quite unconventional. It's beneficial if you're trying to encode a lot of compile-time guarantees, reducing the risk of something like `hmmm = works + 1;` slipping through.

This kind of compile-time type voodoo requires a different mindset compared to cargo-cult Java OOP. Whether something like this is ergonomic or performance-friendly depends on the language's support for it.
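For reference, here is a compilable version of the sketch above (names hypothetical, mine, not from any library). The key move is that `add` routes back through `parse`, so arithmetic can never silently escape the invariant:

```java
// Hypothetical runnable "parse, don't validate" sketch: the private
// constructor makes parse() the only way in, so every LEQ200 that exists
// provably satisfies value <= 200.
final class LEQ200 {
    final int value;

    private LEQ200(int value) { this.value = value; }

    // Parse: turn an unvalidated int into a proof-carrying LEQ200, or fail.
    static LEQ200 parse(int age) {
        if (age > 200)
            throw new IllegalArgumentException("greater than 200: " + age);
        return new LEQ200(age);
    }

    // Arithmetic re-enters through parse, re-establishing the invariant.
    LEQ200 add(int n) { return parse(value + n); }
}
```

With this, `LEQ200.parse(200).add(1)` throws instead of quietly producing a 201 that the rest of the program trusts.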

baq|4 months ago

Yeah it’s something that code would compile down to. You can skip Java and write assembly directly, too.

jb1991|4 months ago

It’s a question of compile time versus runtime.

arzig|4 months ago

What happens when you add 200+1 in a situation where the compiler cannot statically prove that this is 201?

plainOldText|4 months ago

Your example also gets evaluated at compile time. For more complex cases I wouldn't be able to tell you, I'm not the compiler :) For example, this gets checked:

  let ageFails = (200 + 2).Age

  $ nim c main.nim
  Error: 202 can't be converted to Age
If the compiler cannot prove it statically at compile time, the type conversion raises a RangeDefect at runtime, e.g.:

  import std/strutils

  stdout.write("What's your age: ")
  let age = stdin.readLine().parseInt().Age
Then, when you run it:

  $ nim r main.nim
  What's your age: 999
  Error: unhandled exception: value out of range: 999 notin 0 .. 200 [RangeDefect]

mr_00ff00|4 months ago

How does this work for dynamic casting? Say like if an age was submitted from a form?

I assume it’s a runtime error or does the compiler force you to handle this?

ajdude|4 months ago

If you're using SPARK, it'll catch it at compile time if there's any possibility of a value falling outside that constraint. Otherwise, it'll throw an exception (Constraint_Error) at runtime for you to catch.
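The runtime half of that looks the same in most languages. A hypothetical Java analog (standing in for Ada's Constraint_Error; the class and field names are mine): convert the untrusted form field at the boundary and handle the failure right there.

```java
// Hypothetical sketch of catching a range violation at the trust boundary:
// parse the untrusted string, apply the range check, and handle the
// exception where the conversion happens.
final class FormHandler {
    static String handleAgeField(String field) {
        try {
            int raw = Integer.parseInt(field.trim());
            if (raw < 0 || raw > 200)               // the range check
                throw new IllegalArgumentException("out of range: " + raw);
            return "ok: " + raw;
        } catch (IllegalArgumentException e) {
            // NumberFormatException is a subclass, so this covers bad input too.
            return "rejected: " + field;
        }
    }
}
```

Everything past this point can assume a valid age; only the boundary code has to deal with failure.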