Your elevator should not have automatic doors; doors are restrictive. They stop you from quickly jumping out of the elevator if you decide you actually want to stay on the first floor.
Sure, we’ve seen some pretty gnarly accidents, and there is no reasonable situation where risking death is a sane choice.
But ask yourself: is it the elevator's job to prevent an accident? If you think so, I suggest you never leave your home again, as safety is your own concern.
Like and subscribe for other posts like “knife handles? What an idiot” and “never wear a helmet you coward”.
Unironically, the biggest flame-wars I ever saw on forums back in the day were over whether mandatory bike helmets made cycling safer or more dangerous.
And of course they only believe this because they don't lose a finger when they write outside of bounds, or fall down the shaft when a pointer accidentally ends up null.
Did you even read the second half of the post? The author's answer to your concerns is testing. He suggests relying on tests rather than on a strict type system that forces you to design everything upfront.
Personally, I can see arguments for both approaches - stricter types or more tests.
> The rules of the language insist that when you use a nullable variable, you must first check that variable for null. So if s is a String? then var l = s.length() won’t compile. ...
> The question is: Whose job is it to manage the nulls. The language? Or the programmer? ...
> And what is it that programmers are supposed to do to prevent defects? I’ll give you one guess. Here are some hints. It’s a verb. It starts with a “T”. Yeah. You got it. TEST!
> You test that your system does not emit unexpected nulls. You test that your system handles nulls at it’s inputs.
Am I reading or quoting this wrong?
Just some pros of static type checking:

- You can't forget to handle the null cases (how can you confirm your tests didn't miss some permutation of null variables somewhere?).
- It's 100% exhaustive for all edge cases and code paths across the whole project.
- It handholds you while refactoring; changing a field from non-null to nullable later in a complex project is going to be a nightmare relying on just tests, especially if you don't know the code well.
- It's faster than waiting for a test suite to run.
- It pinpoints the line where the problem is (vs. having to step through a failed test).
- It provides clear, concise, and accurate documentation (instead of burying this info across test files).
And the more realistic comparison is that most programmers aren't going to write lots of unhappy-path tests for null edge cases anyway, so you'll be debugging via runtime errors if you're lucky.
Static typing here is so clearly better and less risky to me that I think expecting tests instead is... irresponsible? I try to be charitable, but I can't take it seriously anymore, if I'm honest.
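To make the refactoring point concrete, here's a minimal Java sketch (all names hypothetical): changing a return type from a plain, possibly-null `String` to `Optional<String>` breaks every caller at compile time, so the compiler hands you the complete list of places that need updating.

```java
import java.util.Optional;

public class NullPros {
    // Hypothetical example: a middle name that may be absent.
    // Before a refactor this might have returned a plain (nullable) String;
    // returning Optional<String> instead breaks every caller at compile
    // time, so the compiler lists every place that needs updating.
    static Optional<String> middleName(String raw) {
        return Optional.ofNullable(raw);
    }

    // Every caller is forced to decide what "absent" means.
    static int middleNameLength(String raw) {
        return middleName(raw).map(String::length).orElse(0);
    }

    public static void main(String[] args) {
        System.out.println(middleNameLength("Quincy")); // 6
        System.out.println(middleNameLength(null));     // 0
    }
}
```

No test suite has to remember the null permutation here: the type of `middleName` carries that information to every call site.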
> Every time there’s a new kind of bug, we add a language feature to prevent that kind of bug.
That's why learning the more academic, 'non-practical' aspects of computer science is sometimes beneficial. Otherwise very few will naturally develop the abstract thinking that lets them see that an uncaught exception and a null pointer are exactly the same 'kind of bug.'
Anyway, the author got it completely upside down. The stricter mental model of static typing came first (in more academic languages like Haskell and Ocaml). Then Java etc. half-assed them. Then we have Swift and Kotlin and whatever trying to un-half-ass them, while keeping some terminology from Java etc. so as not to scare Java etc. programmers.
I suppose it's somewhat accurate to claim that Haskell and Ocaml historically preceded Java (or even Objective-C). But Java wasn't inspired by those academic languages; it was inspired by C, a then widely used real-world language with only partial static typing.
(Not saying Java's attempt to remedy C's problems wasn't half-assed — it was.) The trend to plug holes is primarily motivated by empirical evidence of bug classes, not by the elegance of academic research.
As Bjarne Stroustrup famously quipped:
> “There are only two kinds of languages: the ones people complain about and the ones nobody uses.”
Swift, Kotlin, Rust, and C++ are attempts to become languages that everyone complains about, not Haskell or Ocaml.
I could not agree less. The line of reasoning here is: relying on the type system to prevent errors lets people write fewer tests. Tests are the only way to assure quality. Therefore fewer tests is bad, and powerful type systems are bad too, because they cause you to act against software quality.
Furthermore, Uncle Bob sets up this weird opposition between the programmer and the type system, as if the latter is somehow fighting the former, rather than being a tool in their hand.
I think that sadly this is just the narrative of a man whose life’s work consists of convincing people that there is a silver bullet, and it is TDD.
While I consider Uncle Bob a bad programmer, there is some merit to this article. This paragraph was particularly prescient:
>But before you run out of fingers and toes, you have created languages that contain dozens of keywords, hundreds of constraints, a tortuous syntax, and a reference manual that reads like a law book. Indeed, to become an expert in these languages, you must become a language lawyer (a term that was invented during the C++ era.)
And this was written before Swift gained bespoke syntax for async-await, actors, some SwiftUI crap, actor isolation, and maybe other things; honestly, I don't even bother to follow it anymore.
I agree, but I think his point applies more to Haskell than, say, Kotlin. There is a balance between type strictness and productivity: go too far in one direction and you get horribly buggy code; go too far in the other and you have a language that is grindingly slow to develop in.
Another thing I don't think a lot of people appreciate is that types have sharp diminishing returns catching the kind of bugs tests are good at catching, and vice versa.
What I read between the lines: “I have such a fragile ego that I feel offended when a tool points out a mistake I made. I feel intellectually rewarded by doing the same busywork over and over again. I don’t want to change the way I do my work at all. I feel left behind when people other than me have great ideas about language design.”
What I read between your lines: "I don't want to think about the code at all. It should only compile if it has no bugs. I don't like rapid prototyping. I feel stupid when people other than me feel they can program effectively with fewer safeguards."
I understand what the author says, but in my experience, "Nullable Types" and "Open/Sealed Classes" are two different subjects and...
1) For "Nullable Types", I see that it is VERY good to think about whether some type can be null or not, or to use a type system that does not allow nulls (so you need some "unit" type), and to handle these scenarios appropriately. I think it is OK that the language enforces this; it really, really helps you avoid bugs and catch errors sooner.
2) For "Open/Sealed Classes", my experience says you never (or very rarely) know that a class will need to be extended later. I work with older systems. See, I don't care if you, the original coder, marked this class as "sealed", and it does not matter if you wrote tons of unit tests (like the author advocates): my customer wants (or needs) me to extend that class, so I will need to do a lot of language hacks because you marked it as sealed. So, IMHO, marking a class as "open" or "sealed" works for me as a hint only; it should not limit me.
Sealed classes are just retrofitting sum types onto the JVM. If Kotlin could have used "enum" for it, they probably would have, like Swift and Rust did.
The main point of sealed classes is exhaustive `when` expressions:
return when (val result = something()) {
    is Result.Success -> result.value
    is Result.Failure -> result.error
}
If another subclass appeared at runtime, then the code would fall off the end of that when expression.
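The same guarantee can be sketched in Java (hypothetical names; requires Java 14+ switch expressions): model the result as an enum, and a switch expression must cover every constant or it won't compile.

```java
public class Exhaustive {
    // Hypothetical result type modeled as an enum, in the spirit of
    // the "sealed classes are retrofitted sum types" point above.
    enum Result { SUCCESS, FAILURE }

    static String describe(Result r) {
        // A switch *expression* (Java 14+) must be exhaustive: removing
        // either case below is a compile-time error, not a runtime
        // fall-through.
        return switch (r) {
            case SUCCESS -> "ok";
            case FAILURE -> "boom";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(Result.SUCCESS)); // ok
        System.out.println(describe(Result.FAILURE)); // boom
    }
}
```

Because the compiler knows the full set of constants, "falling off the end" is impossible for any value the type admits.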
> The question is: Whose job is it to manage the nulls. The language? Or the programmer?
> These languages are like the little Dutch boy sticking his fingers in the dike. Every time there’s a new kind of bug, we add a language feature to prevent that kind of bug. And so these languages accumulate more and more fingers in holes in dikes. The problem is, eventually you run out of fingers and toes.
I'm going to try my best to hide my rage at just how awful this whole article is, and try to focus my criticism. I can imagine that reasonable people can disagree as to whether `try` should be required to call a function that throws, or whether classes should be sealed by default.
But good god, man, the null reference problem is so obvious: it's plainly and simply a bug in the type system of every language that has it. There's basically no room for disagreement here. If a function accepts a String, and you can pass null to it, that's a hole in the type system. Because null can't be a String. It doesn't adhere to String's contract. If you try to call .length() on it (or whatever), your program crashes.
The only excuse we've had in the past is that expressing "optional" values is hard to do in a language that doesn't have sum types and generics. And although we could've always special-cased the concept of "Optional value of type T" in languages via special syntax (like Kotlin or Swift do, although they do have sum types and generics), no language seems to have done this... the only languages that seem to support Optionals are languages that do have sum types and generics. So I get it, it's "hard to do" for a language. And some languages value simplicity so much that it's not worth it to them.
But nowadays (and even in 2017) there's simply no excuse any more. If you can pass `null` to a function that expects a valid reference, that language is broken. Fixing this is not something you lump in with "adding a language feature for every class of bug", it's simply the correct behavior a language should have for references.
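A minimal Java sketch (hypothetical names) of exactly this hole: the signature promises a String, but the type checker never objects when a call site hands it null.

```java
public class NullHole {
    // The signature promises a String, but Java lets any call site
    // pass null anyway -- null doesn't honor String's contract.
    static int shout(String s) {
        return s.length(); // NullPointerException when s is null
    }

    public static void main(String[] args) {
        System.out.println(shout("hi")); // 2
        try {
            shout(null); // compiles without complaint...
        } catch (NullPointerException e) {
            System.out.println("...and crashes at runtime");
        }
    }
}
```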
The author argues against strong type systems and language features that prevent classes of bugs, and instead encourages developers to write "lots of tests" for things that a type system would prevent.
The author's thesis seems to be that it's preferable to rely on the programmer who wrote bugs to write even more bugs in tests, in order to have some benefit over a compiler or type system that can prevent these things from happening in the first place?
So obviously it's an opinion and he's entitled to it, but (in my own opinion) it is so, so, so, on its face, just flat-out wrong that I'm concerned it's creating developers who believe that writing the many tests that languages and compilers would save you from writing (along with the bugs) is a valid solution to preventing null pointer dereferences.
There's also another strong argument against favoring tests over types, which is maintainability. If I change a variable's type from non-nullable to nullable, I instantly get a complete list of all the places where I have to handle that, which makes it much, much faster to generalize the logic. But in a dynamic language, all the tests written by the previous developer were written at a time when this variable was assumed to never be null, and they use non-null values for all test vectors, so good luck using that to find the places that need to be fixed.
On top of that, every test that could have been omitted due to a type system incurs an extra maintenance tax that you have to pay when you change the API.
I think you're missing the point, though: he did not say that adding those constraints was a bad idea per se, but that they lead down a bad path. If the path is "just add more guard-rails", then we will get to a point where we lose the "soft" part of "software", and you'll find yourself starting over each time you need to change something, because at that point the language you chose was already the first wall of a bad program architecture (ease of change).
I think it can sometimes be helpful to have some protective features, as long as they are not excessive. They should not be mandatory and should not be too difficult to avoid. I program in C, and I do not use most of the warnings since they are excessive (in my opinion), but the warning about implicitly using an integer as a pointer is sensible (and should be an error), since you can easily explicitly override it (with the specific pointer type that you need) if that is really what you meant.
Tests are good regardless of what the programming language tries to do, since not all bugs can be avoided by the programming language, and not all bugs should be avoided by the programming language. It should let you write the program instead of stopping you.
However, there are situations where, due to the workings of the implementation, there is no reasonable way to compile code that violates something it has specified, or where certain extra things may be required in the compiled code (making it bigger and/or slower in certain circumstances) when certain things are open. For example, consider if a different calling convention is needed for functions that can throw an exception (although in this case it should not require a try block if the calling function also says it can throw an exception), or if some functions in an open class need to be compiled in a special way to account for the possibility that parts of the class will be overridden (both of these examples are hypothetical; I do not know whether any of this is actually true). It is also possible that certain optimizations are possible or not depending on whether a class is open, or something is nullable, etc. Due to things like that, specifying such features in the program might sometimes be necessary in order for the compiler to work correctly.
I like the idea of in-language control of loose/strict typing.
A project, "main", makefile or compiler parameters, should be able to set minimum strictness for everything it composes/references.
Individual modules should be able to set their looseness, and be parsed and implemented accordingly, with runtime dispatching, template expansion, etc. instead of compile time.
When the two collide, we get an error, and the developer decides which one needs to give way, or makes an explicit exception for the current state of development. So all typing choices are always clear and enforced.
Language highlighting of loose vs. strict would also be nice.
Not quite the same as safe/unsafe, but similar.
But I also like the idea of a typed Forth, so what do I know?
For me there is a clear problem in all those languages: the exception paradigm opens a second way to exit a function. This is clearly a burden for every programmer. It is also a burden for the machine: you have to have RTTI, inconvenient stack unwinding, and perhaps generic types. Nullable types are also a bit of a letdown: first we specify a "reference" kind of type so we never have to deal with null violations, then we allow NULL to express an empty state. Better to have Result return types that carry a clear message: Result and Value. Also have a real Reference type with empty and value; by accessing an empty value you get back the default value. I think C# has mastered that really nicely, but it's far from perfect.
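A rough sketch of that Result idea in Java (hypothetical names; not C#'s actual design): the error becomes part of the return type, so there is no second exit path from the function.

```java
public class Results {
    // A tiny Result carrying either a value or an error message.
    static final class Result {
        final Integer value;  // null when this is an error
        final String error;   // null when this is a success
        private Result(Integer value, String error) {
            this.value = value;
            this.error = error;
        }
        static Result ok(int v)       { return new Result(v, null); }
        static Result err(String msg) { return new Result(null, msg); }
        boolean isOk() { return error == null; }
    }

    // Instead of throwing, failures come back through the return value.
    static Result parsePositive(String s) {
        try {
            int n = Integer.parseInt(s);
            return n > 0 ? Result.ok(n) : Result.err("not positive");
        } catch (NumberFormatException e) {
            return Result.err("not a number");
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePositive("42").isOk()); // true
        System.out.println(parsePositive("-1").error);  // not positive
    }
}
```

The caller always gets a Result back on the one normal exit path; no stack unwinding machinery is involved.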
Initially I was impressed by the null detection. Then I found out about defaults. Way worse than null.
C and Go can demand a bit of ceremony with manual error checks. Things get bad if you forget to do the checks.
Java's checked exceptions forced error-checking, which is a little verbose, and most of the time you can't 'handle' them (for any meaning other than logging), so you just rethrow.
C# went with unchecked exceptions. But with default values, there's no need to throw! Avoid Java's messy NPE encounters by just writing the wrong value to the database.
But there are multiple ways to have execution leave a function.
The function could return.
The function could be aborted by SIGKILL.
The function could be aborted by a non-SIGKILL signal that preserves subsequent execution invariants.
The function could be aborted by a non-SIGKILL signal that doesn't preserve subsequent execution invariants (SIGSEGV in most--but not all--cases).
The function could abort(2) (not always the same as SIGABRT, but usually).
The function could overflow its stack (not always the same as abort(2)).
The computer could lose power.
...and that's without violating the spirit of the law with weird instruction-level stuff (e.g. is pop/jmp the same as ret? If I move the stack pointer and re-set all the argument registers, did I return?)
Surely needing to change some class declarations is better than bugs that take all day to track down?
And sure, as a programmer I can consider every NPE case along with all the others, but if the language can take care of that for me, I'll let it.
I disagree with the article, but also some of these examples are complete straw-men. In Kotlin you have nullable types, and the type checker will complain if you use it as a non-nullable type. But you can always just append !! after your expression and get exactly the same behavior as in Java and get a null pointer exception, you don't have to handle it gracefully as the author is suggesting. Tests checking that you gracefully handle nulls in a language without null types are fucking tedious and boring to write. I would take a language with null types over having to write such tests any day.
Kotlin's final-by-default is also just that - a default. In Java you can just declare your classes `final` to get the same behavior, and if you don't like final classes then go ahead and declare all of them open.
I also disagree with the author's claim that languages with many features require you to be a "language lawyer", and that more simplistic languages are therefore better. It's of course a balance, and there are examples of languages like C++ and Haskell where the number of features becomes a clear distraction, but for simpler things like null types and final-by-default, the language is just helping you enforce the conventions that you would need anyway when working with a large code base. In dynamically typed languages you just have to be a "convention lawyer" instead, and you get no tool support.
Your last sentence makes a very good point. And Uncle Bob's tool is to have loads of tests.
I suppose it's all just a balance: simplicity versus expressiveness, foot guns versus inflexibility, conciseness versus ceremony, dev velocity versus performance in production.
I'm okay with shifting some of the burden of common errors from developer into the language if that improves reliability or maintainability. But Martin has a point in that no guard rails can ever prevent all bugs and it can be annoying if languages force lots of new ceremony that seems meaningless.
> Now, ask yourself why these defects happen too often. ... It is programmers who create defects – not languages.
> And what is it that programmers are supposed to do to prevent defects? ... TEST!
Unfortunately, altering people's behavior by telling/commanding/suggesting that they do so, whether or not supported by perfect reasoning, rarely if ever succeeds.
It's overwhelmingly the case that people, including programmers, do what they do in reaction to the allowances and bounds of a system and so it is far more effective to alter the system than attempt to alter the people.
total yikes for the entire thing. "What if a function needs to return null" or "throw an error" is not a fundamentally different concept than "what if a function needs to return a totally different type".
Isn't that the classic argument "Real C programmers don't write defaults!" ?
The one that companies have spent billions of dollars fixing, including creating new restrictive languages?
I mean, I get the point of tests, but if your language obviates the need for some tests, it's a win for everyone. And as for the "how much code will I need to change to propagate this null?", the type system will tell you all the places where it might have an impact; once it compiles again, you can be fairly sure that you handled it in every place.
> For example, in Swift, if you declare a function to throw an exception, then by God every call to that function, all the way up the stack, must be adorned with a do-try block, or a try!, or a try?.
Funnily enough, Uncle Bob himself evangelised and popularised the solution to this. Dependency Inversion. (Not to be confused with dependency injection or IOC containers or Spring or Guice!) Your call chains must flow from concrete to abstract. Concrete is: machinery, IO, DBs, other organisation's code. Abstract is what your product owners can talk about: users, taxes, business logic.
When you get DI wrong, you end up with long, stupid call-chains where each developer tries to be helpful and 'abstract' the underlying machinery:

UserController -> UserService -> UserRepository -> PostgresConnectionPoolFactory -> PostgresConnectionPool -> PostgresConnection

(Don't forget to double each of those up with file-names prefixed with I - for 'testing'* /s )
Now when you simply want to call userService.isUserSubscriptionActive(user), of course anything below it can throw upward. Your business logic to check a user subscription now contains rules on what to do if a pooled connection is feeling a little flakey today. It's at this point that Uncle Bob 2017 says "I'm the developer, just let me ignore this error case".
What would Uncle Bob 2014 have said?
Pull the concrete/IO/dependency stuff up and out, and make it call the business logic:
UserController:
    user? <- (UserRepository -> PostgresConnectionPoolFactory -> PostgresConnectionPool -> PostgresConnection)
    // Can't find a user for whatever reason? Return 404, or whatever your coding style dictates
    result <- UserService.isUserSubscriptionActive(user)
    return result
The first call should be highly-decorated with !? or whatever variant of checked-exception you're using. You should absolutely anticipate that a DB call or REST call can fail. It shouldn't be particularly much extra code, especially if you've generalised the code to 'get thing from the database', rather than writing it out anew for each new concern.
The second call should not permit failure. You are running pure business logic on a business entity. Trivially covered by unit tests. If isUserSubscriptionActive does 'go wrong', fix the damn code, rather than decorating your coding mistake as a checked Exception. And if it really can't be fixed, you're in 'let it crash' territory anyway.
* I took a jab at testing, and now at least one of you's thinking: "Well how do I test UserService.isUserSubscriptionActive if I don't make an IUserRepository so I can mock it?" Look at the code above: UserService is passed a User directly - no dependency on UserRepository means no need for an IUserRepository.
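A rough Java sketch of that shape (all names hypothetical, echoing the pseudocode above): the controller owns the fallible lookup, and the business logic is a pure function of a User, trivially unit-testable with no mocks and no IUserRepository.

```java
public class Subscriptions {
    // Hypothetical business entity.
    static final class User {
        final long subscriptionExpiresAtMillis;
        User(long expires) { this.subscriptionExpiresAtMillis = expires; }
    }

    // Pure business logic: no repository, no connection pool, no mocks.
    static boolean isUserSubscriptionActive(User user, long nowMillis) {
        return user.subscriptionExpiresAtMillis > nowMillis;
    }

    // The controller layer is where the fallible lookup lives; here the
    // DB fetch is faked with a nullable parameter standing in for
    // "row not found".
    static String handleRequest(User maybeUser, long nowMillis) {
        if (maybeUser == null) {
            return "404"; // can't find a user, bail out early
        }
        return isUserSubscriptionActive(maybeUser, nowMillis)
                ? "active" : "expired";
    }

    public static void main(String[] args) {
        User u = new User(2_000L);
        System.out.println(handleRequest(u, 1_000L));    // active
        System.out.println(handleRequest(null, 1_000L)); // 404
    }
}
```

Because the service never touches IO, nothing below it can throw upward into the business logic.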
I was rewriting a mod for Rimworld recently. As Rimworld is built on Unity, it's all some sort of C#. I heard people say it's a wrong kind of C#, but since a) I had no choice and b) I never wrote any C# before I cannot tell.
First, C# proudly declares itself strongly typed. After writing some code in Zig (a project just before this one, also undertaken as a learning opportunity, and not yet finished), I was confused. This is what is called strongly typed? C# felt more like Python to me after Zig (and Rust). Yes, there are types. No, they are not very useful in limiting the expression of absurdity or helping the expression of intent.
Second, tests. How do you write tests for a mod that depends on an undocumented 12-year-old codebase plus half a dozen other mods? Short answer: it's infeasible. You can maybe extract some kind of core code from your mod and test that, but that doesn't help the glue code, which is easily 50-80% of any given mod.
So what's left? I have a great temptation to extract that core part and rewrite it in Zig. If Unity's C#-flavor FFI would work between Linux and Windows, if marshalling data would not kill performance outright, if it wouldn't scare off potential contributors (and it will, of course), if, if...
I guess I wanted to say that tests are frequently overrated and not always possible. If the language itself lends a hand, even one as small and wimpy as C#'s, don't reject it as some sort of abomination.
Discussed here, two years before this article was written: https://www.destroyallsoftware.com/talks/ideology
> [...] (a term that was invented during the C++ era.)
...like it's some sort of relic, or was in 2017
virtualized|21 days ago
direwolf20|21 days ago
ZeroClickOk|21 days ago
1) For "Nullable Types", I see that it is VERY good to think about if some type can be null or not, or use a type system that does not allow nulls, so you need some "unit" type, and appropriately handle these scenarios. I think it is ok the language enforces this, it really, really helps you to avoid bugs and errors sooner.
2) For "Open/Sealed Classes", my experience says you never (or very rarely) know that a class will need to be extended later. I work with older systems. See, I don't care if you, the original coder, marked this class as "sealed", and it does not matter if you wrote tons of unit tests (like the author advocates), my customer wants (or needs) that I extend that class, so I will need to do a lot of language hacks to do it because you marked as sealed. So, IMHO, marking a class as "open" or "sealed" works for me as a hint only; it should not limit me.
mh2266|21 days ago
The main point of sealed classes is exhaustive `when` expressions:
If another subclass appeared at runtime, then the code would fall off the end of that when expression.ninkendo|20 days ago
> These languages are like the little Dutch boy sticking his fingers in the dike. Every time there’s a new kind of bug, we add a language feature to prevent that kind of bug. And so these languages accumulate more and more fingers in holes in dikes. The problem is, eventually you run out of fingers and toes.
I'm going to try to best to hide my rage with just how awful this whole article is, and try to focus my criticism. I can imagine that reasonable people can disagree as to whether `try` should be required to call a function that throws, or whether classes should be sealed by default.
But good god man, the null reference problem is so obvious, it's plain and simply a bug in the type system of every language that has it. There's basically no room for disagreement here: If a function accepts a String, and you can pass null to it, that's a hole in the type system. Because null can't be a String. It doesn't adhere to String's contract. If you try to call .length() on it (or whatever), your program crashes.
The only excuse we've had in the past is that expressing "optional" values is hard to do in a language that doesn't have sum types and generics. And although we could've always special-cased the concept of "Optional value of type T" in languages via special syntax (like Kotlin or Swift do, although they do have sum types and generics), no language seems to have done this... the only languages that seem to support Optionals are languages that do have sum types and generics. So I get it, it's "hard to do" for a language. And some languages value simplicity so much that it's not worth it to them.
But nowadays (and even in 2017) there's simply no excuse any more. If you can pass `null` to a function that expects a valid reference, that language is broken. Fixing this is not something you lump in with "adding a language feature for every class of bug", it's simply the correct behavior a language should have for references.
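For reference, this is what closing the hole looks like in practice; a minimal Kotlin sketch (function names are my own):

```kotlin
fun shout(s: String): String = s.uppercase()   // s can never be null here

fun describe(s: String?): String {
    // `s.length` alone would not compile: String? is a different type
    // from String. The compiler forces the null check before use.
    return if (s != null) "length ${s.length}" else "no string"
}

fun main() {
    println(shout("hi"))        // HI
    // shout(null)              // compile error: null is not a String
    println(describe(null))     // no string
}
```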
andrewjf|26 days ago
The author's thesis seems to be that it's preferable to rely on the programmer who wrote the bugs to write even more bugs in tests, rather than on a compiler or type system that can prevent these things from happening in the first place?
So obviously it's an opinion and he's entitled to it, but (in my own opinion) it is so, on-its-face, flat out wrong that I'm concerned it's creating developers who believe that writing piles of tests (which languages and compilers could save you from writing, along with the bugs) is a valid approach to preventing null pointer dereferences.
ulrikrasmussen|21 days ago
On top of that, every test that could have been omitted due to a type system incurs an extra maintenance tax that you have to pay when you change the API.
zzo38computer|20 days ago
Tests are good regardless of what the programming language tries to do, since not all bugs can be avoided by the programming language, and not all bugs should be avoided by the programming language. It should let you write the program instead of stopping you.
However, there are situations where, due to the workings of the implementation, there is no reasonable way to compile the program when it specifies something that is then violated, or where certain extra things may be required in the compiled code (making it bigger and/or slower in certain circumstances) when certain things are open. For example, consider a language in which a different calling convention is needed for functions that can throw an exception (although in this case it should not require a try block if the calling function also says it can throw an exception), or in which some functions in an open class must be compiled in a special way to account for the possibility that parts of the class will be overridden (both of these examples are hypothetical; I do not know whether any of this is actually true). It is also possible that certain optimizations are or are not possible depending on whether a class is open, something is nullable, etc. Because of things like that, specifying such features in the program might sometimes be necessary for the compiler to work correctly.
Nevermark|20 days ago
A project's "main", makefile, or compiler parameters should be able to set a minimum strictness for everything it composes/references.
Individual modules should be able to set their looseness, and be parsed and implemented accordingly, with runtime dispatching, template expansion, etc. instead of at compile time.
When the two collide, we get an error, and the developer decides which one needs to give way, or makes an explicit exception for the current state of development. So all typing choices are always clear and enforced.
Language highlighting of loose vs. strict would also be nice.
Not quite the same as safe/unsafe, but similar.
But I also like the idea of a typed Forth, so what do I know?
mrkeen|21 days ago
Initially I was impressed by the null detection. Then I found out about defaults. Way worse than null.
C and Go can demand a bit of ceremony with manual error checks. Things get bad if you forget to do the checks.
Java's checked exceptions forced error-checking, which is a little verbose, and most of the time you can't 'handle' them (in any sense other than logging), so you just rethrow.
C# went with unchecked exceptions. But with default values, there's no need to throw! Avoid Java's messy NPE encounters by just writing the wrong value to the database.
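The failure mode described here — a silent default where an error belongs — isn't C#-specific; a Kotlin sketch of the same trap (the record and field names are made up):

```kotlin
// Parsing a user record where the "age" field is missing.
fun ageOrDefault(fields: Map<String, String>): Int =
    fields["age"]?.toIntOrNull() ?: 0   // silently falls back to 0

fun ageOrError(fields: Map<String, String>): Int =
    fields["age"]?.toIntOrNull()
        ?: error("missing or malformed age")   // fails loudly instead

fun main() {
    val record = mapOf("name" to "bob")   // no "age" at all
    println(ageOrDefault(record))         // 0 -- a wrong value, happily stored
}
```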
zbentley|18 days ago
The function could return.
The function could be aborted by SIGKILL.
The function could be aborted by a non-SIGKILL signal that preserves subsequent execution invariants.
The function could be aborted by a non-SIGKILL signal that doesn't preserve subsequent execution invariants (SIGSEGV in most--but not all--cases).
The function could abort(2) (not always the same as SIGABRT, but usually).
The function could overflow its stack (not always the same as abort(2)).
The computer could lose power.
...and that's without violating the spirit of the law with weird instruction-level stuff (e.g. is pop/jmp the same as ret? If I move the stack pointer and re-set all the argument registers, did I return?)
ulrikrasmussen|21 days ago
Kotlin's final-by-default is also just that - a default. In Java you can just declare your classes `final` to get the same behavior, and if you don't like final classes then go ahead and declare all of them open.
I also disagree with the author's claim that languages with many features require you to be a "language lawyer", and that simpler languages are therefore better. It's of course a balance, and there are languages like C++ and Haskell where the number of features becomes a clear distraction, but for simpler things like nullable types and final-by-default, the language is just helping you enforce the conventions you would need anyway when working with a large code base. In dynamically typed languages you just have to be a "convention lawyer" instead, and you get no tool support.
repelsteeltje|21 days ago
I suppose it's all just a balance: simplicity versus expressiveness, foot guns versus inflexibility, conciseness versus ceremony, dev velocity versus performance in production.
I'm okay with shifting some of the burden of common errors from developer into the language if that improves reliability or maintainability. But Martin has a point in that no guard rails can ever prevent all bugs and it can be annoying if languages force lots of new ceremony that seems meaningless.
djoldman|21 days ago
> And what is it that programmers are supposed to do to prevent defects? ... TEST!
Unfortunately, altering people's behavior by telling/commanding/suggesting that they do so, whether or not supported by perfect reasoning, rarely if ever succeeds.
It's overwhelmingly the case that people, including programmers, do what they do in reaction to the allowances and bounds of a system and so it is far more effective to alter the system than attempt to alter the people.
mh2266|21 days ago
Total yikes for the entire thing. "What if a function needs to return null" or "throw an error" is not a fundamentally different concept from "what if a function needs to return a totally different type".
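That "may fail" is just another return type is exactly what a sum type expresses; a Kotlin sketch with invented names:

```kotlin
sealed class ParseResult {
    class Ok(val value: Int) : ParseResult()
    class Err(val message: String) : ParseResult()
}

// Instead of returning null or throwing, the function returns a
// "totally different type" for the failure case, and the caller
// must take both cases apart explicitly.
fun parseAge(s: String): ParseResult =
    s.toIntOrNull()?.let { ParseResult.Ok(it) }
        ?: ParseResult.Err("not a number: $s")

fun show(r: ParseResult): String = when (r) {
    is ParseResult.Ok -> "age ${r.value}"
    is ParseResult.Err -> "error: ${r.message}"
}
```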
nitnelave|21 days ago
The one that companies have spent billions of dollars fixing, including creating new restrictive languages?
I mean, I get the point of tests, but if your language obviates the need for some tests, it's a win for everyone. And as for the "how much code will I need to change to propagate this null?", the type system will tell you all the places where it might have an impact; once it compiles again, you can be fairly sure that you handled it in every place.
mrkeen|21 days ago
Funnily enough, Uncle Bob himself evangelised and popularised the solution to this. Dependency Inversion. (Not to be confused with dependency injection or IOC containers or Spring or Guice!) Your call chains must flow from concrete to abstract. Concrete is: machinery, IO, DBs, other organisation's code. Abstract is what your product owners can talk about: users, taxes, business logic.
When you get DI wrong, you end up with long, stupid call-chains where each developer tries to be helpful and 'abstract' the underlying machinery:
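The call-chain example didn't survive extraction; here is a hedged reconstruction of the kind of stack being mocked (all names invented):

```kotlin
// Every layer 'helpfully' wraps the one below it, so the top of the
// chain is transitively coupled to connection pools and sockets.
class ConnectionPool { fun connection(): String = "conn" }

class Database(private val pool: ConnectionPool) {
    fun query(sql: String): String = "row via ${pool.connection()}"
}

class UserRepository(private val db: Database) {
    fun findUser(id: Int): String = db.query("select * from users where id=$id")
}

class UserService(private val repo: UserRepository) {
    fun isUserSubscriptionActive(id: Int): Boolean =
        repo.findUser(id).isNotEmpty()   // business logic buried under IO
}
```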
(Don't forget to double each of those up with file-names prefixed with I - for 'testing'* /s )
Now when you simply want to call userService.isUserSubscriptionActive(user), of course anything below it can throw upward. Your business logic to check a user subscription now contains rules on what to do if a pooled connection is feeling a little flaky today. It's at this point that Uncle Bob 2017 says "I'm the developer, just let me ignore this error case".
What would Uncle Bob 2014 have said?
Pull the concrete/IO/dependency stuff up and out, and make it call the business logic:
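This code block also appears to have been lost; a sketch of the shape described — IO pulled up to the edge, pure logic underneath (names invented to match the surrounding prose):

```kotlin
import java.time.LocalDate

// Pure business entity: no IO, no repository anywhere in sight.
data class User(val name: String, val subscribedUntil: LocalDate)

class UserService {
    // Pure: operates on the User it is handed directly.
    fun isUserSubscriptionActive(user: User, today: LocalDate): Boolean =
        !user.subscribedUntil.isBefore(today)
}

// The concrete/IO edge: this call can fail and should carry whatever
// checked-error decoration the language offers.
fun loadUser(id: Int): User =
    User("bob", LocalDate.parse("2030-01-01"))   // stand-in for a real DB call

fun main() {
    val user = loadUser(1)   // may fail; handle that here, at the edge
    println(UserService().isUserSubscriptionActive(user, LocalDate.parse("2025-06-01")))
}
```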
The first call should be highly decorated with !? or whatever variant of checked exception you're using. You should absolutely anticipate that a DB call or REST call can fail. It shouldn't be much extra code, especially if you've generalised the code to 'get thing from the database' rather than writing it out anew for each new concern.
The second call should not permit failure. You are running pure business logic on a business entity. Trivially covered by unit tests. If isUserSubscriptionActive does 'go wrong', fix the damn code rather than decorating your coding mistake as a checked exception. And if it really can't be fixed, you're in 'let it crash' territory anyway.
* I took a jab at testing, and now at least one of you's thinking: "Well how do I test UserService.isUserSubscriptionActive if I don't make an IUserRepository so I can mock it?" Look at the code above: UserService is passed a User directly - no dependency on UserRepository means no need for an IUserRepository.
lstodd|21 days ago
First, C# proudly declares itself strongly typed. After writing some code in Zig (a project just before this one, also undertaken as a learning opportunity, and not yet finished), I was confused. This is what is called strongly typed? C# felt more like Python to me after Zig (and Rust). Yes, there are types. No, they are not very useful in limiting the expression of absurdity or helping the expression of intent.
Second, tests. How do you write tests for a mod that depends on an undocumented 12-year-old codebase plus half a dozen other mods? Short answer - it's infeasible. You can maybe extract some kind of core code from your mod and test that, but that doesn't help the glue code, which is easily 50-80% of any given mod.
So what's left? I have great temptation to extract that core part and rewrite it in Zig. If Unity's C#-flavor FFI would work between linux and windows, if marshalling data would not kill performance outright, if it won't scare off potential contributors (and it will of course), if, if...
I guess I wanted to say that tests are frequently overrated and not always possible. If the language itself lends a hand, even one as small and wimpy as C#'s, don't reject it as some sort of abomination.