They've discovered how to write dynamically typed code correctly, or at least a philosophy of it. It's not "discovering static typing", because that concern doesn't come up in statically typed languages. (TypeScript is, for this particular purpose, still effectively a dynamically typed language.)
I remember writing Python and Perl where functions largely just assumed you passed them the correct types (with isolated exceptions where it made sense), years before JavaScript was anything but a browser language for little functionality snippets. It's a dynamic-language antipattern for every function to constantly, defensively check all of its inputs for type correctness, because despite being written for nominal "correctness", that checking is fragile, inconsistent between definitions, often wrong anyhow, slow, and complicates every function it touches, to the point that it essentially eliminates the advantages of a dynamic language in the first place.
Dynamic languages have to move some responsibility for being called with correct arguments to the caller, because checking arguments for correctness is difficult and at times simply impossible. If the function is called with the wrong arguments and blows up, you need to blame the caller, not the called function.
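A minimal sketch of that style (hypothetical names): validation happens exactly once at the boundary where untrusted data enters, and inner functions simply trust their callers.

```typescript
// Inner function: no defensive typeof checks; the caller owns correctness.
function area(width: number, height: number): number {
  return width * height;
}

// Boundary function: parses untrusted input exactly once...
function handleRequest(raw: { width?: unknown; height?: unknown }): number {
  const width = Number(raw.width);
  const height = Number(raw.height);
  if (Number.isNaN(width) || Number.isNaN(height)) {
    throw new Error("width and height must be numbers");
  }
  // ...and everything past this point can assume clean data.
  return area(width, height);
}
```

If `area` blows up on bad input, the bug is at whichever call site skipped the boundary, not in `area` itself.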
I observe that in general this seems to be something that requires a certain degree of programming maturity to internalize: Just because the compiler or stack trace says the problem is on line 123 of program file X, does not mean the problem is actually there or that the correct fix will go there.
I’ve seen something similar happen in Rust as well (and I do consider it an antipattern).
Some libraries take a `TryFrom<RealType>` as input instead of `RealType` itself. The return value is then polluted with the error type of the potential conversion failure.
This is a pain to work with when you’re passing the exact type, since you basically need to handle an unreachable error case.
Functions should take the raw types which they need, and leave conversion to the call site.
It's annoying, but not for the error handling. To the contrary, I think the error handling is actually improved by this pattern.
If you manually convert beforehand you easily run into working with a Result<Result<T, E>, E>.
What I find annoying about the pattern is that it hinders API exploration through intellisense ("okay, it seems I need a XY, how do I get one of them"), because the TryFrom (sort of) obscures all the types that would be valid. This problem isn't exclusive to Rust, though: OO APIs that only have a base class in the signature but really expect some concrete implementation are similarly annoying.
Of course you can look up "who implements X"; it's just an inconvenient extra step.
And there is merit to APIs designed like this - stuff like Axum in Rust would be significantly more annoying to use if you had to convert everything by hand.
Though often this kind of design feels like a band-aid for the lack of union types in the language.
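For comparison, with union types the handful of concrete inputs an API really supports stays visible right in the signature, instead of hiding behind a conversion trait. A hedged sketch (made-up API):

```typescript
// The signature itself documents exactly what callers may pass.
type PortLike = number | string;

function parsePort(input: PortLike): number {
  const port = typeof input === "number" ? input : Number.parseInt(input, 10);
  if (!Number.isInteger(port) || port < 0 || port > 65535) {
    throw new RangeError(`not a valid port: ${input}`);
  }
  return port;
}
```

Intellisense on `PortLike` shows the two valid shapes directly, no "who implements X" lookup needed.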
It's definitely pretty annoying, though not because of the errors. Actually, the error handling might even be the biggest benefit.
If the conversion fails, I can't continue with the function call.
I think there is an important observation in it though: That dynamic, loosely-typed languages will let you create code that "works" faster, but over the long run will lead to more ecosystem bloat - because there are more unexpected edge cases that the language drops onto the programmer for deciding how to handle.
Untyped languages force developers into a tradeoff between readability and safety that exists to a much lesser degree in typed languages. Different authors in the ecosystem will make that tradeoff in different ways.
In my experience, this only holds true for small scripts. When you're doing scientific computing or deep learning with data flowing between different libraries, the lack of type safety makes development much slower if you don't maintain strict discipline around your interfaces.
If we're trying to solve problems with good design, take endpoint1 and endpoint2 and let the function sort them. Having max and min parameters is itself a bad design choice; the function doesn't need the caller to work that out. Why should the caller have to order the ends of the interval? It adds nothing but the possibility of calling the function wrong. So in this case:
export function clamp(value: number, endpoint1: number, endpoint2: number): number {
  return Math.min(
    Math.max(value, Math.min(endpoint1, endpoint2)),
    Math.max(endpoint1, endpoint2)
  );
}
Well, if your language has a sufficiently strong type system (namely, dependent types), you can take proofs of some properties as arguments. Example in Lean:
def clamp (value min max : Float) (h : min ≤ max) : Float :=
  if value < min then min else if value > max then max else value
In a compiled language, it takes one or two machine instructions to test
assert!(b >= a);
Works in C, C++, Go, Rust...
Amusingly, nowhere does the original article mention that it is only about JavaScript.
Languages should have compile time strong typing for at least the machine types: integers, floats, characters, strings, and booleans. If user defined types are handled as an "any" type resolved at run time, performance is OK, because there's enough overhead dealing with user defined structures that the run time check won't kill performance.
(This is why Python needs NumPy to get decent numeric performance.)
Many libraries throw an exception, panic, or silently swap the parameters at runtime.
To detect this at compile time, you would need either min and max to be known at compile time, or a type system that supports value-dependent types. None of the popular languages supports this. (My language, 'Bau', which is of course not popular, supports value-dependent types to avoid array-bounds checks.)
You don't need to. One if statement to check that is not a problem. The problem occurs when you also have a bunch of other ifs to check all kinds of other stuff that a type system would handle for you, like nullability, incorrect types, etc.
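To make the contrast concrete, here's a sketch: with static types, only the semantic precondition is left to check at runtime; the compiler already rules out null, undefined, and wrong-type inputs.

```typescript
function clamp(value: number, min: number, max: number): number {
  // The one check a type system can't express for us: argument ordering.
  if (min > max) {
    throw new RangeError("min must be less than or equal to max");
  }
  // Nullability and non-number inputs are already impossible here.
  return Math.min(Math.max(value, min), max);
}
```

Compare that single if with the pile of defensive checks the untyped version needs.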
Personally I just write JS like a typed language. I follow all the same rules as I would in Java or C# or whatever. It's not a perfect solution and I still don't like JS but it works.
```
export function clamp(value: number | string, min: number | string, max: number | string): number {
  if (typeof value === 'string' && Number.isNaN(Number(value))) {
    throw new Error('value must be a number or a number-like string');
  }
  if (typeof min === 'string' && Number.isNaN(Number(min))) {
    throw new Error('min must be a number or a number-like string');
  }
  if (typeof max === 'string' && Number.isNaN(Number(max))) {
    throw new Error('max must be a number or a number-like string');
  }
  if (Number(min) > Number(max)) {
    throw new Error('min must be less than or equal to max');
  }
  return Math.min(Math.max(Number(value), Number(min)), Number(max));
}
```
> Oh, look, somebody just re-discovered static typing.
If you're going to be smug, at least do it when you're on the right side of the technology. The problem the article describes has nothing to do with the degree of static typing a language might have. You can make narrow, tight, clean interfaces in dynamic languages; you can make sprawling and unfocused ones in statically typed languages.
The problem is one of mindset --- the way I'd put it, an insufficient appreciation of the beauty of parsimony. It has nothing to do with any specific type system or language.
Yep, I’ve seen this in Swift: a dozen overloads for functions and class initializers to support umpteen similar, but different, types as input. Sloppy schema design reveals itself in combinatorial explosions of type conversions.