
Munksgaard | 9 months ago

I agree that the terminology is not ideal, but I think there's a huge difference between JS's "weak types", i.e. abundant implicit conversions, and e.g. Elixir's "strong types", where `1 + "foo"` is a runtime error. I don't care if we call the latter something else, though. Any good suggestions?

That said, I prefer having both strong and static typing, but that's another argument.
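To make the distinction concrete, here is a minimal sketch. Python is used as a stand-in for the "strong types" side (it behaves like the Elixir example for `+`); the JS behavior is only described in a comment, not executed:

```python
# Strong runtime typing: mixing int and str in + is a runtime TypeError,
# never a silent coercion (Python shown here; Elixir behaves similarly).
try:
    result = 1 + "foo"
except TypeError:
    result = "TypeError"

# In a weakly typed language like JS, the int would instead be coerced
# to a string, and 1 + "foo" would evaluate to "1foo".
```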


zbentley | 9 months ago

I'd suggest "high-cast" and "low-cast". They draw attention to the thing people usually mean when they talk about strong (not static) typing: whether a language's operations bias towards automatically coercing types to produce a non-type-error result. High-cast languages tend to require explicit type conversion; low-cast languages tend towards both implicit conversion and more complex behaviors when more than one type is supplied to a given operation. Also, the terms pun nicely with "high-cost" and "low-cost".

That said, it's still a spectrum and there's a lot of subjectiveness here. Everyone agrees that '1 + "foo"' is meaningless, but what about string multiplication? If a language documents that an integer multiplied by a string repeats the string, is that weakly typed/low-cast, or is it just documented multiplication operator behavior? If string multiplication is a whole separate operator, is that more strongly typed (and if so, are we all gonna be able to sleep at night since that means Perl 5 is more strongly typed than Python)?
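The string-multiplication case can be shown directly in Python, which documents `*` between an int and a str as sequence repetition while still rejecting `+` between them (Perl 5, by contrast, uses a separate `x` operator for repetition):

```python
# Documented operator overloading: int * str is sequence repetition.
assert 3 * "ab" == "ababab"
assert "ab" * 3 == "ababab"

# Yet int + str is still a runtime TypeError, not a coercion:
try:
    3 + "ab"
    coerced = True
except TypeError:
    coerced = False
assert not coerced
```

Whether that repetition behavior makes Python "low-cast" or is simply a documented multiplication operator is exactly the subjective question raised above.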

That subjectiveness extends into the domain of hidden runtime costs, as well. Theoretically, any iterable of hashable items can be passed to a language's implementation of "HashSet::union(items)". But the implementation/performance of "union()" might differ based on the type of the iterable: should we be allowed to pass a lazy iterator which produces values after arbitrary custom computations? Many languages say "yes" here, but some consider collecting/each-ing the iterator something that must be explicit, so the cost/exhaustion/side-effectfulness of the iteration is made clear.

How about unioning a set with a vector, versus another set? Very different algorithmic behavior happens inside the union if another hash set is supplied instead of, say, a static array or linked list; while the complexity for non-lazy unions is always O(N), the average complexity/wallclock performance may be very different. Rust's stdlib, for example, discourages this kind of heterogeneous union (not, I suspect, out of a desire for high-cast explicitness, but because it wants to encourage use of its lazy O(1) union system instead).

Are the answers to those questions part of the high-cast/low-cast (or strong/weak type system) spectrum, or are they just specific choices made by each language's collections library? Ask 10 programmers, and I suspect you'll get a lot of different answers.
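Python's `set.union` is one example of the permissive "any iterable" policy described above: it accepts lists, generators, and other sets alike, silently exhausting lazy iterators in the process. A small sketch:

```python
base = {1, 2, 3}

# union() accepts any iterable of hashable items, not just other sets:
from_list = base.union([3, 4])

# A lazy generator is also accepted, and is fully exhausted right here,
# running whatever computation it wraps as a hidden cost of the union.
from_gen = base.union(x * 2 for x in range(3))

assert from_list == {1, 2, 3, 4}
assert from_gen == {0, 1, 2, 3, 4}
```

A stricter (higher-cast, in the proposed terminology) design would force the caller to collect the generator explicitly first, making the iteration cost visible at the call site.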