Great lessons! Although the list doesn't include one that I had to learn the hard way, multiple times:
Your throwaway prototype will be the codebase for the project.
In my career, I've seen multiple throwaway prototypes that were hacked together. None of them ended up in the waste bin.
"Just add X and Y and you're done". "But this is just a quick hack to show the interface, there is nothing behind it!" "Yes, and the interface is fine, so why start from scratch? Just add the rest".
Now I know better: I never build throwaway prototypes anymore; whatever I write is always a base for the project to come.
Once I wrote non-prototype, non-throwaway code to start a large project. After about a month of work on that, due to a dead disk and my own stupidity, I lost all of it. It was throwaway code after all. I was about as upset and angry as I ever get.
Then I sat down and rewrote it in about a week, and it was much better the second time, and turned out to be the start of a code base that's still in daily use and evolving 25 years later.
The lesson I took from that is, don't write throwaway code, but given a chance, throw it away anyway.
Once I wrote a quick demo of a classic content-heavy portal with our in-house portal system (at the time).
It took me 2 days over a weekend.
Shipped it for the demo with some partner, then forgot about it.
6 months later, an angry customer wrote to my boss about the "Energy Saving Monitoring System" somebody sold to them stating that some kind of SSO wasn't working as expected.
My boss handed the case to me and I, flabbergasted as I was, proceeded to delve deeper into the strange report about that system we were sure we'd never written, let alone sold to anybody.
Long story short, it was my quick demo of a content-heavy portal, turned into that monster of an "Energy Saving Monitoring System" - whatever that is - by the partner we demoed it to, and sold to several customers as a finished product.
Some researchers wanted a 128-bit space for the binary address, Cerf recalled, but others said, "That's crazy," because it's far larger than necessary, and they suggested a much smaller space. Cerf finally settled on a 32-bit space that was incorporated into IPv4 and provided a respectable 4.3 billion separate addresses.
"It's enough to do an experiment," he said. "The problem is the experiment never ended."
In a somewhat troubled company I worked at, and where I honestly stayed a bit too long, I once resorted to writing a prototype, in bash, and in the most idiosyncratic way possible with two goals in mind:
1) Make sure that I didn't have to handle some messy deployment when it inevitably would become rebranded to production code.
2) Make sure nobody in their right mind would try to extend the functionality and keep me responsible for fixing the inevitable wreck.
Obviously it was put into production, still is, and has not been extended with anything except the initial functionality.
I don't know if that counts as a success or a failure, but at least deployment was never an issue!
Keep your development environment deterministic and instantly rebuildable.
You should ideally be able to take your hard drive out, put in a new one, install a fresh OS, run 1 script, and be completely up and running in less than an hour.
As an added bonus, this means that you can replicate your development environment on any machine very quickly and easily, which makes losing your laptop during travel only an inconvenience rather than a travesty.
I've hacked together some scripts for that here [1], although I really should convert them to ansible or something.
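For what it's worth, the one-script idea can be sketched in a few lines of Python. Everything here is hypothetical (the package list, the dotfile paths, the `bootstrap` name); a real setup would live in whatever tool you prefer, be it shell, Ansible, or Nix:

```python
#!/usr/bin/env python3
"""Sketch of a one-shot machine bootstrap. Package names and
dotfile paths are illustrative, not a real configuration."""
import subprocess
from pathlib import Path

APT_PACKAGES = ["git", "build-essential", "curl"]   # hypothetical list
DOTFILES = {".gitconfig": "dotfiles/gitconfig"}     # home target -> repo source

def bootstrap(home: Path, dry_run: bool = True) -> list[list[str]]:
    """Collect the setup commands; print them, and execute only when
    dry_run is False. Returning the list makes the plan inspectable."""
    steps = [["sudo", "apt-get", "install", "-y", *APT_PACKAGES]]
    for target, source in DOTFILES.items():
        steps.append(["ln", "-s", source, str(home / target)])
    for cmd in steps:
        print("+", " ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)
    return steps

if __name__ == "__main__":
    bootstrap(Path.home())  # dry run by default: shows what it would do
```

The dry-run default is the useful part: you can diff the planned steps against a fresh machine before trusting the script with sudo.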
I've always agreed in principle, but fell short in practice until I started using Nix[1]. Nix makes it significantly easier to pull off by verifying the hashes of all inputs, eliminating non-determinism in builds (timestamps, usernames, etc), eliminating dependencies on OS packages/files/etc, and allowing you to pin to a specific snapshot of all packages. Pardon the evangelism, I'm just a happy user.
I agree. It's a game changer when you can easily replicate all or specific parts of your dev setup with one command, everywhere (remote server, new laptop, dev vm, even containers).
I wrote a script for this too (actually using Ansible), but then everything got a bit out of hand and now it's a framework (as those things go): https://freckles.io
I’ve had to rebuild my machine at work twice and some people look at you like you told them your dog died. We never had it automated but we did have it written down.
One of those times, we got a bad batch of laptops. Over a summer, five of us had to go through the same thing. The only rough part was the whole-disk encryption. That took longer than the rest combined.
"You should ideally be able to take your hard drive out, put in a new one, install a fresh OS, run 1 script, and be completely up and running in less than an hour."
This is great if you can pull it off, but if you have a lot of disjointed systems plus maybe custom hardware, it gets really hard to automate the process. In general I agree though. I also have learned to just live with the defaults of my IDE(s) instead of setting them up to my taste every time.
One thing I have learned in almost 30 years of SW development: work somewhere where the CEO has at least a basic idea of your work and sees value in your work, not just cost. Work somewhere where people care for the craft and it’s not only about “business” goals.
> Keep a record of "stupid errors that took me more than 1 hour to solve"
I’ve been doing this for the last three years, but electronically, and it’s amazing. In my home folder there’s a giant text file that’s like my personal log file. It gets a new line at least once an hour, when cron pops up a box and I type in what I’m working on. And for those times when I’m working on stuff that’s perplexing, I write to it as a stream of consciousness — “tried this, didn’t work” “wonder if it could be due to xyz” “yep, that was it.”
I’ve used it a few dozen times to either look up how I solved a weird problem, or why I decided to do something a certain way. I’ve even built pie charts of how I spent my time in the last quarter by sampling it Monte Carlo style.
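A minimal sketch of that kind of append-only log, in Python; the file name, entry format, and `sample` helper are made up for illustration (the real thing is just cron plus a text box):

```python
"""Hypothetical personal work log: timestamped append-only text file,
plus Monte Carlo sampling to estimate where the time went."""
import datetime
import random
from pathlib import Path

LOG = Path("worklog.txt")  # illustrative location; a home folder works too

def log(entry: str) -> None:
    """Append one timestamped line; cheap enough to call from a cron popup."""
    stamp = datetime.datetime.now().isoformat(timespec="minutes")
    with LOG.open("a") as f:
        f.write(f"{stamp}\t{entry}\n")

def sample(n: int = 100) -> list[str]:
    """Monte Carlo estimate of time spent: draw n random entries."""
    lines = LOG.read_text().splitlines()
    return [random.choice(lines).split("\t", 1)[-1] for _ in range(n)]

# Stream-of-consciousness usage, as in the comment above:
log("tried this, didn't work")
log("wonder if it could be due to xyz")
log("yep, that was it")
```

Counting the sampled entries by keyword is enough to build the pie chart mentioned above.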
I keep a log of "stuff I had to look up how to do". Mine isn't "stupid errors" so much as stuff like "what are the flags to get the compiler to show you the assembly code it produces".
The description of cognitive dissonance is wrong. It's not holding two things in your head at once[0], it's the feeling you get when you believe two contradictory things.
Yes, and it's also not as severe or "a sign of being stupid" as many think. Cognitive dissonance occurs in situations as simple as walking to a store and discovering the store is closed.
> Learn when you can't code anymore. Learn when you can't process things anymore. Don't push beyond that, it will just make things worse in the future.
I cannot agree more. (18 years developer here). On those days I just kept pushing, I mostly ended up correcting my poorly-created code the next day. When you know you're done, be done, or at least switch to the lowest mental energy task you can do. Sometimes staring into the screen is that lowest mental task.
> Sometimes staring into the screen is that lowest mental task
For the sake of your eyes (and perhaps your well-being), don't just stare into the screen without doing anything. If you have a free moment, take your eyes off the screen and look out the window (or at the nearest wall if you have none).
> People get pissed/annoyed about code/architecture because they care
> ... Yeah, you don't like that rushed solution 'cause you care" was one of the nicest things someone said about me.
This is very similar to a foundation that we repeat at work. Everyone is trying to make things better, we might not agree on the how, but we must agree on the intent. When you assume good intent, conversations go much, much better. It can be a great way to frame disagreements. "We are disagreeing here because we care."
> For a long time, I kept a simple programming rule: The language I'm playing at home should not be the same language I'm using at work. This allowed me to learn new things that later I applied in the work codebase.
Though I've found this helpful in expanding what I can do, or how well I can do it, what I find using a different language at home than at work does best is prevent burnout.
I can be reckless, and play with the language, because I don't have the tired patterns of my day trying to rigidly enforce best practices and cover all edge cases.
My mind can relax and do the wrong thing if I want, and I don't have the autoformatter or linter in mind. I don't have a dozen rules that will spit red errors if I step out of line.
Those things are all good for work. The application must be solid, and the user deserves stability.
But when I'm hacking together a game for myself that I never intend to release, I just want to experience the fun side of programming without the hard part.
Haven't read all of these yet, but a couple stood out as matching recent experiences...
"Future thinking is future trashing": we had a good engineer with a bad tendency to write complex generic code when simpler specific code would work. In one case, he took a data structure and wrote a graph implementation on top of it - it was only ever used once, by him, and I recently rewrote it in about 15 lines of structure-specific code. In another case, he wrote a query builder to make it easy to join data from a new service with our legacy DB, which makes the high-level code simple, but the full implementation much more complicated, and means incoming engineers can't obviously see what implementation to use. His last week at the company was... last week.
"Understand and stay away from cargo cult": Today, I was reviewing someone's code, when I saw a language construct I didn't recognize, and which made no sense to me. I asked him what it was meant to do, and if he could give me a link to somewhere which described the construct - he said no, he'd mostly just copied it from someone else's code. I asked 'someone else', and he pretty much said it was useless, he's worked in so many languages that he gets confused about language idioms/constructs sometimes, he writes his code like he'd write a letter to a friend, and it wasn't like anyone was reviewing his code at the time anyway. I said I wasn't going to take his stuff out of the existing codebase, but I wouldn't let anyone copy his useless code in future, and linked him to https://blog.codinghorror.com/coding-for-violent-psychopaths...
> You can be sure that if a language brings a testing framework -- even minimal -- in its standard library, the ecosystem around it will have better tests than a language that doesn't carry a testing framework, no matter how good the external testing frameworks for the language are.
D not only has it as a library, it's a core feature of the language. As incongruous as it sounds, builtin unittests have been utterly transformative to how D code is written. Built in documentation generation has had the same effect.
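Python is another data point for the claim: `unittest` ships in the standard library, so the first test costs one import. A small sketch (the `summarize` function is invented for illustration), runnable with `python -m unittest`:

```python
import unittest

def summarize(message: str, limit: int = 20) -> str:
    """Return the first `limit` characters, adding an ellipsis if truncated."""
    return message if len(message) <= limit else message[:limit] + "..."

class SummarizeTest(unittest.TestCase):
    def test_short_message_untouched(self):
        self.assertEqual(summarize("hi"), "hi")

    def test_long_message_truncated(self):
        self.assertEqual(summarize("x" * 30), "x" * 20 + "...")
```

No framework to choose, no dependency to install: that low activation energy is exactly the ecosystem effect the quoted lesson describes.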
This could make great bulletin board material in some other format, but there's just too much here for that.
I started my reply by cutting and pasting the gems with my little follow-ups, but quickly realized that would take all day. There's just so much good stuff here. So instead I narrowed it down to just one:
Code is for humans to read. ALWAYS. Optimization is what compilers do. So find a smarter way to explain what you're trying to do (in code) instead of using shorter words.
This really strikes a nerve because in 40 years of programming, most of my problems have not been with building code myself, but with trying to figure out what my predecessor had written. I bet I've wasted 20 years of my life refactoring the crap of others who would have benefited greatly by reading posts like this one first.
There's lots to think about. Lots to smile and agree with. And lots to disagree with (OP reminds me a little of myself, a caveman who often claims to know "the one right way" :-) )
If you don't have time to read this now, bookmark it and come back later. One good idea from this can make a big difference.
"This really strikes a nerve because in 40 years of programming, most of my problems have not been with building code myself, but with trying to figure out what my predecessor had written. I bet I've wasted 20 years of my life refactoring the crap of others who would have benefited greatly by reading posts like this one first."
I also curse a lot about code my predecessors have written but I bet my successors will curse at my code too :-)
> Learn to recognize toxic people; stay away from them
> You'll find people that, even if they don't small talk you, they will bad mouth everything else -- even some other people -- openly.
So true. It has happened to me so often - actually at most brown-field projects - that when I get onboarded, someone is telling me what a bad state this and that is in, even how bad this and that person is supposed to be. Although I tend to be optimistic when the project seems nicely challenging at first (not only from the code), I somehow adopt this negative attitude for the rest of the project.
I think people should be able to informally talk about projects, even gossip sometimes. But it has its limits; team/project leads especially should IMHO be really careful about taking negative stances on certain projects/people/teams. In fact, the line between that and mobbing is a fluid one.
For me this is the single biggest annoyance in any job.
> So true. It has happened to me so often - actually at most brown-field projects - that when I get onboarded, someone is telling me what a bad state this and that is in, even how bad this and that person is supposed to be.
I certainly don't want to encourage any sort of toxicity like this. I will say, however, that I recently started a new job. People haven't been particularly welcoming so far but I finally managed to bond with someone over a discussion of the skill gap that exists between some of the employees and the lack of quality code produced by a few of the stubborn veterans.
It wasn't just a bonding experience either. I was able to learn more about the nature of the organization and how I can help to improve things in the future. It also had a calming effect because I was originally pretty intimidated by some of these veterans but now I know they're just flawed humans like the rest of us.
I think the most difficult lesson is that good software is like cheese or wine. You have to let it age, to find the proper way to approach the problem; adding more developers will not make the process faster.
Something I learned: K.I.S.S. is the most important principle in technology, but keeping technology simple while still adding value and utility is really hard.
Example: how can we improve on the design of the bicycle, but keep it simple? You could add expensive composite materials, design a new gearing mechanism, create new complex geometries based on wind tunnel experiments... all of those are more advanced, but complex; simple is a tough nut to crack.
> When you're designing a function, you may be tempted to add a flag. Don't do this. Here, let me show you an example: Suppose you have a messaging system and you have a function that returns all the messages to a user, called getUserMessages. But there is a case where you need to return a summary of each message (say, the first paragraph) or the full message. So you add a flag/Boolean parameter called retrieveFullMessage. Again, don't do that. 'Cause anyone reading your code will see getUserMessage(userId, true) and wonder what the heck that true means.
What about in languages with named parameters? I think keyword arguments solve this issue pretty well.
Another option is to use an enum, so it looks like getUserMessage(userId, MessageResultOption.FullMessage). Even if there's only two options, it's more readable than a boolean.
I'd question this specific case in a different way, though: summary vs detail sounds like different object types, which means different return types and so there should be two different methods (eg: getUserMessageSummaries(userId) and getUserMessageDetail(userId) ). It's perhaps a bit more work up front, but when consuming this you don't end up with a bunch of properties that may or may not be null depending on how you called this (instead, they're just not there on the summary response), and in a typed language it will simply fail to compile (making your mistake very obvious).
Similarly: over time almost every boolean becomes an enum. I'm a bit surprised the article doesn't mention enum parameters as a possible alternative to booleans.
In this example the call-site could instead look like `getUserMessages(userId, MessageStyle.SUMMARY)`. Naming the enum is always harder than naming the boolean but that's kinda the point.
Perhaps another way to word it is: "Adding a boolean parameter is a strong indication that you're about to cross the single-responsibility threshold. Try to think of different ways to structure the code in which the bool isn't needed."
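To make the enum alternative concrete, here is a hedged Python sketch; the message store and all names are invented for illustration, not taken from the article:

```python
from enum import Enum

class MessageStyle(Enum):
    SUMMARY = "summary"
    FULL = "full"

# Hypothetical message store standing in for the real system.
MESSAGES = {42: ["First paragraph.\nRest of the message."]}

def get_user_messages(user_id: int, style: MessageStyle) -> list[str]:
    """The call site now reads get_user_messages(42, MessageStyle.SUMMARY):
    no mystery boolean to decode while reading calling code."""
    messages = MESSAGES.get(user_id, [])
    if style is MessageStyle.SUMMARY:
        # A summary is just the first paragraph, per the quoted example.
        return [m.split("\n", 1)[0] for m in messages]
    return messages
```

A nice side effect: when a third style inevitably appears, you extend the enum instead of adding a second boolean.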
This one's definitely a weakness I still have. I read that and thought "yeah, I do this in a lot of places and often end up regretting it."
I think named parameters don't actually help, because the underlying problem with the boolean (aside from the mystery about what it means when looking at code calling it) is that it implies a potential separate "mode" of operation for the function. That the single function might actually serve two different purposes. It doesn't always imply that I imagine, but it's pretty likely.
I'm guilty of this in my code, and I know that my code quality has suffered for it - for some reason, the way he put it in this article gave me a moment of introspection there, and it's something I'm going to try and take away from it and improve my own code with in the future.
I agree. The only problem I see with “boolfixes” (as an old colleague called them) is the call site readability issue that the author pointed out. If you have the purpose of the bool right there at the call site, as with mandatory keyword arguments in Python, that issue goes away.
Everyone that works on project A uses a properly configured IDE, so when you take a look at the code you'll actually see something like this: `getClient(liveMode: true)` instead of just `getClient(true)`.
Is there really a point in creating an Enum or writing more complex internal logic if you use a proper IDE?
The only benefit I can see is doing CRs via a GitHub-like system - but still, I don't believe in CRs without actually pulling, viewing, and testing the code on my own computer.
If the name is mandatory, as it is in Smalltalk, then this basically solves the issue.
If the name is optional, as it usually is in eg. Python, then there is a temptation for the writer to omit it, which means the reader won't know what "True" or "False" stands for.
Are there languages besides Smalltalk and its descendants with mandatorily-named parameters?
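Python is one partial answer: since PEP 3102, parameters declared after a bare `*` are keyword-only, so the caller is forced to spell out the name (Swift's mandatory argument labels, inherited from the Smalltalk lineage via Objective-C, are another example). A sketch with a hypothetical signature:

```python
def get_user_messages(user_id: int, *, full_message: bool = False) -> str:
    """Everything after the bare * must be passed by keyword."""
    return "full" if full_message else "summary"

# The flag's meaning is visible at the call site:
print(get_user_messages(1, full_message=True))

# get_user_messages(1, True) raises TypeError: the flag cannot be positional,
# so the reader never sees a bare mystery boolean.
```

This addresses the "temptation for the writer to omit it" problem above: the language removes the option rather than relying on discipline.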
> Data flows beat patterns
(This is personal opinion) When you understand how the data must flow in your code, you'll end up with better code than if you applied a bunch of design patterns.
This one is priceless. I had so many arguments in my past with cocky devs about this topic. Don't get me wrong, patterns are good, but they're only a small part of coding.
Also, patterns aren't there to be followed as if they were the law; bend them to your needs. One doesn't even necessarily need to be implemented 100% as it is in the "book".
So yes, I'm very fond of this part of the post; it's really wise.
I cringe whenever I hear someone talking about "what pattern to use here" (I cry if the answer is "singleton").
Patterns are a way of describing code. They are meant as a common language to use when explaining a codebase to another dev. If you are writing code with the express intention of "using X pattern", you're doing it backwards.
Most of them are awesome, but some are questionable.
> don't add flags/Boolean parameters to your functions.
Many languages have strongly-typed enums for this, which makes calling code readable, e.g. getUserMessages( user, eMessageFlags.Full ). Especially useful for more than one flag.
> ALWAYS use timezones with your dates
UTC usually does the trick?
> UTF8 won the encoding wars
UTF8 is used a lot on the web, but OP seems to be generalizing their experience over all programming, not just web development.
> when your code is in production, you can't run your favorite debugger.
No but I can run my second favorite, WinDBG. A crash dump .dmp file is often more useful for fixing the bug than any reasonable amount of logging.
> Optimization is for compilers
Compilers are not that smart. They never change RAM layout. I recently improved a C++ project's performance by an order of magnitude by replacing std::vector everywhere with std::array (the size was fixed, and known at compile time). Before I did, the profiler showed 35% of time spent in malloc/free, but the cost of the vectors was much higher because of RAM latency.
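On the timezone point above, the usual reconciliation of "always use timezones" and "UTC does the trick" is: store and compute in UTC, and attach a zone only at the presentation edge. A sketch with Python's stdlib `zoneinfo` (the timestamp is made up):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Store and compare in UTC...
stored = datetime(2024, 3, 1, 17, 30, tzinfo=timezone.utc)

# ...and convert to a local zone only when presenting to a user.
local = stored.astimezone(ZoneInfo("America/New_York"))
print(local.isoformat())  # 2024-03-01T12:30:00-05:00
```

The two values compare equal; only the presentation differs, which is exactly the "anything else is a presentation format" position.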
Type systems exist on a spectrum. A rich type system is better at documenting your code than comments are if the compiler and its type system are sound: the comments can be wrong, the types cannot.
A sufficiently expressive type system can check your logic for you. Push as much of your propositions and predicates into the types as possible.
Data types are better than exceptions, return codes, etc for the majority of unexceptional cases. Conditions and restarts are better than exceptions. Sometimes crashing is not an option when your computer is flying in the air with a human.
In addition to specifications -- often a bunch of boxes with arrows will not be sufficient. There are a range of tools to check your designs from automated model checking to interactive proofs. When the problem is difficult to understand on its own then the solution is probably even harder -- a good sign you should consider using one of these tools to make the problem clear and guide you to the right solution.
I agree with everything you said, except
>: the comments can be wrong, the types cannot.
If you're arguing comments are valuable but type constraints are a better way to express requirements, then I agree, and sorry for the long post. But I've seen a lot of comment hate on Hacker News and I feel like I have to defend comments.
I really think comments can be especially valuable. They're valuable when reviewing code because they tell the reader what the developer was attempting to do, not necessarily what they did.
Not everything is expressible in every type system. Even something as simple as "this reference can't be null" isn't expressible in many languages.
Comments are easier to read than type names; a sentence is much more expressive than code, especially code that is tricky due to performance or domain complexity.
Type and method names can rot too. If someone adds a little bit of updating logic to a method whose name starts with get, the name becomes misleading.
And comments can be kept up to date easily by just adding an item to your code review checklist "Ensure comments match code".
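Both sides of this exchange show up nicely in Python, where `Optional` makes "may be null" explicit in the signature but is enforced only by an external checker such as mypy, not at runtime. The functions here are invented for illustration:

```python
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    """The return type says 'may be absent'; no comment needed for that fact."""
    users = {1: "alice"}  # hypothetical lookup table
    return users.get(user_id)

def greet(name: str) -> str:
    """The parameter type says 'never None'; a checker such as mypy flags
    greet(find_user(2)) until the None case is handled explicitly."""
    return f"hello, {name}"

# The type forces the caller to acknowledge the missing-user case:
user = find_user(1)
greeting = greet(user) if user is not None else "hello, stranger"
```

The comment defenders' point survives too: the types say *that* the value may be absent, but only a comment (or name) can say *why*.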
Of course, if a compiler and its type system are sound, then the fact that certain errors are eliminated follows trivially; but, in practice, this does not prevent type implementations (and especially those of the rich type systems you are advocating) from having bugs. Take, for example, floating-point numbers in Excel.
Even something as fundamental as floating point types need documentation, which is why we have IEEE 754.
[+] [-] koonsolo|6 years ago|reply
Your throwaway prototype will be the codebase for the project.
In my career, I've seen multiple throwaway prototypes that were hacked together. None of them ended up in the waste bin.
"Just add X and Y and you're done". "But this is just a quick hack to show the interface, there is nothing behind it!" "Yes, and the interface is fine, so why start from scratch? Just add the rest".
Now I know: I never build throwaway prototypes ever again, it's always a base for the project to come.
[+] [-] hirundo|6 years ago|reply
Then I sat down and rewrote it in about a week, and it was much better the second time, and turned out to be the start of a code base that's still in daily use and evolving 25 years later.
The lesson I took from that is, don't write throwaway code, but given a chance, throw it away anyway.
[+] [-] trumbitta2|6 years ago|reply
6 months later, an angry customer wrote to my boss about the "Energy Saving Monitoring System" somebody sold to them stating that some kind of SSO wasn't working as expected.
My boss handed the case to me and I, flabbergasted as I was, proceeded to delve deeper into the strange report about that system we were sure we've never written, let alone sold to somebody.
Long story short, it was my quick demo of a content heavy portal, turned into that monster of a "Energy Saving Monitoring System" - whatever that is - by the partner we demoed it to, and sold to several customers as a finished product.
[+] [-] lhuser123|6 years ago|reply
"It's enough to do an experiment," he said. "The problem is the experiment never ended."
https://www.networkworld.com/article/2227543/software-why-ip...
[+] [-] lostmyoldone|6 years ago|reply
1) Make sure that I didn't have to handle some messy deployment when it inevitably would become rebranded to production code.
2) Make sure nobody in their right mind would try to extend the functionality and keep me responsible for fixing the inevitable wreck.
Obviously it was put into production, still is, and has not been extended with anything except the initial functionality.
I don't know if that counts as a success or a failure, but at least deployment was never an issue!
[+] [-] kstenerud|6 years ago|reply
Keep your development environment deterministic and instantly rebuildable.
You should ideally be able to take your hard drive out, put in a new one, install a fresh OS, run 1 script, and be completely up and running in less than an hour.
As an added bonus, this means that you can replicate your development environment on any machine very quickly and easily, which makes losing your laptop during travel only an inconvenience rather than a travesty.
I've hacked together some scripts for that here [1], although I really should convert them to ansible or something.
[1] https://github.com/kstenerud/virtual-builders
https://github.com/kstenerud/ubuntu-dev-installer
[+] [-] smilliken|6 years ago|reply
[1] https://nixos.org/nix/
[+] [-] _frkl|6 years ago|reply
I wrote a script for this too (actually using Ansible), but then everything got a bit out of hand and now it's a framework (as those things go): https://freckles.io
[+] [-] hinkley|6 years ago|reply
One of those, we got a bad batch of laptops. Over a summer five of us had to go through the same thing. The only rough part was the whole disk encryption. That took longer than the rest combined.
[+] [-] pmarreck|6 years ago|reply
THANK YOU!
This is not a property that is exploited nearly enough!
[+] [-] maxxxxx|6 years ago|reply
this is great if you can pull it off but if you have a lot of disjointed systems plus maybe custom hardware it gets really hard to automate the process. In general I agree though. I also have learned to just live with the defaults of my IDE(s) instead of setting them up to my taste every time.
[+] [-] wil421|6 years ago|reply
[+] [-] nunez|6 years ago|reply
[+] [-] fnord123|6 years ago|reply
This is bad advice. NEVER use timezones with your dates because there is only one timezone: UTC. Anything else is a presentation format.
Of course, if you are in a system that already committed the cardinal sin of storing datetimes with timezones then you have to deal with it. Alas.
[+] [-] maxxxxx|6 years ago|reply
[+] [-] physicles|6 years ago|reply
I’ve been doing this for the last three years, but electronically, and it’s amazing. In my home folder there’s a giant text file that’s like my personal log file. It gets a new line at least once an hour, when cron pops up a box and I type in what I’m working on. And for those times when I’m working on stuff that’s perplexing, I write to it as a stream of consciousness — “tried this, didn’t work” “wonder if it could be due to xyz” “yep, that was it.”
I’ve used it a few dozen times to either look up how I solved a weird problem, or why I decided to do something a certain way. I’ve even built pie charts of how I spent my time in the last quarter by sampling it Monte Carlo style.
[+] [-] AnimalMuppet|6 years ago|reply
[+] [-] unimpressive|6 years ago|reply
Which is a pretty well formed solution in this vein, unfortunately written in Perl.
[+] [-] stormthebeach|6 years ago|reply
[+] [-] strken|6 years ago|reply
[0] that's closer to https://en.wikipedia.org/wiki/Working_memory
[+] [-] simplify|6 years ago|reply
[+] [-] robodale|6 years ago|reply
I cannot agree more. (18 years developer here). On those days I just kept pushing, I mostly ended up correcting my poorly-created code the next day. When you know you're done, be done, or at least switch to the lowest mental energy task you can do. Sometimes staring into the screen is that lowest mental task.
[+] [-] diggan|6 years ago|reply
For the sake of your eyes (and perhaps well-being), don't just stare into the screen without doing anything. If you have a free moment, take your eyes off the screen and look out the window (or at nearest wall if you have none)
[+] [-] bobm_db|6 years ago|reply
> Documentation is a love letter to your future self
This seems like a brilliant way of transcending what feels like a drudge-job, making it into something really meaningful.
[+] [-] sethammons|6 years ago|reply
> ... Yeah, you don't like that hushed solution 'cause you care" was one of the nicest things someone told about myself.
This is very similar to a foundation that we repeat at work. Everyone is trying to make things better, we might not agree on the how, but we must agree on the intent. When you assume good intent, conversations go much, much better. It can be a great way to frame disagreements. "We are disagreeing here because we care."
[+] [-] shakna|6 years ago|reply
Though I've found this helpful in expanding what I can do, or how well I can do it, what I find using a different language at home than at work does best is prevent burnout.
I can be reckless, and play with the language, because I don't have the tired patterns of my day trying to rigidly enforce best practices and cover all edge cases.
My mind can relax and do the wrong thing if I want, and I don't have the autoformatter or linter in mind. I don't have a dozen rules that will spit red errors if I step out of line.
Those things are all good for work. The application must be solid, and the user deserves stability.
But when I'm hacking together a game for myself that I never intend to release, I just want to experience the fun side of programming without the hard part.
[+] [-] vmlinuz|6 years ago|reply
"Future thinking is future trashing": we had a good engineer with a bad tendency to write complex generic code when simpler specific code would work. In one case, he took a data structure and wrote a graph implementation on top of it - it was only ever used once, by him, and I recently rewrote it in about 15 lines of structure-specific code. In another case, he wrote a query builder to make it easy to join data from a new service with our legacy DB, which makes the high-level code simple, but the full implementation much more complicated, and means incoming engineers can't obviously see what implementation to use. His last week at the company was... last week.
"Understand and stay way of cargo cult": Today, I was reviewing someone's code, when I saw a language construct I didn't recognize, and which made no sense to me. I asked him what it was meant to do, and if he could give me a link to somewhere which described the construct - he said no, he'd mostly just copied it from someone else's code. I asked 'someone else', and he pretty much said it was useless, he's worked in so many languages that he gets confused about language idiom/construct sometimes, he writes his code like he'd write a letter to a friend, and it wasn't like anyone was reviewing his code at the time anyway. I said I wasn't going to take his stuff out of the existing codebase, but I wouldn't let anyone copy his useless code in future, and linked him to https://blog.codinghorror.com/coding-for-violent-psychopaths...
[+] [-] WalterBright|6 years ago|reply
D not only has it as a library, it's a core feature of the language. As incongruous as it sounds, builtin unittests have been utterly transformative to how D code is written. Built in documentation generation has had the same effect.
[+] [-] edw519|6 years ago|reply
This could make great bulletin board material in some other format, but there's just too much here for that.
I started my reply by cutting and pasting the gems with my little follow-ups, but quickly realized that would take all day. There's just so much good stuff here. So instead I narrowed it down to just one:
Code is for humans to read. ALWAYS. Optimization is what compilers do. So find a smarter way to explain what you're trying to do (in code) instead of using shorter words.
This really strikes a nerve because in 40 years of programming, most of my problems have not been with building code myself, but with trying to figure out what my predecessor had written. I bet I've wasted 20 years of my life refactoring the crap of others who would have benefited greatly by reading posts like this one first.
There's lots to think about. Lots to smile and agree with. And lots to disagree with (OP reminds me a little of myself, a caveman who often claims to know "the one right way" :-)
If you don't have time to read this now, bookmark it and come back later. One good idea from this can make a big difference.
[+] [-] maxxxxx|6 years ago|reply
I also curse a lot about code my predecessors have written but I bet my successors will curse at my code too :-)
[+] [-] blablabla123|6 years ago|reply
> You'll find people that, even if they don't small talk you, they will bad mouth everything else -- even some other people -- openly.
So true. It has happened to me so often - at most brown-field projects, actually - that when I get onboarded, someone tells me what a bad state this or that is in, and even how bad this or that person supposedly is. Although I tend to be optimistic when the project seems nicely challenging at first (and not only because of the code), I somehow adopt this negative attitude for the rest of the project.
I think people should be able to talk informally about projects, even gossip sometimes. But it has its limits; team/project leads especially should IMHO be really careful about taking negative stances on certain projects/people/teams. In fact, the line between that and mobbing is blurry.
For me this is the single biggest annoyance in any job.
[+] [-] ShamelessC|6 years ago|reply
I certainly don't want to encourage any sort of toxicity like this. I will say, however, that I recently started a new job. People haven't been particularly welcoming so far but I finally managed to bond with someone over a discussion of the skill gap that exists between some of the employees and the lack of quality code produced by a few of the stubborn veterans.
It wasn't just a bonding experience either. I was able to learn more about the nature of the organization and how I can help to improve things in the future. It also had a calming effect because I was originally pretty intimidated by some of these veterans but now I know they're just flawed humans like the rest of us.
[+] [-] peterwwillis|6 years ago|reply
Example: how can we improve on the design of the bicycle, but keep it simple? You could add expensive composite materials, design a new gearing mechanism, create new complex geometries based on wind tunnel experiments... all of those are more advanced, but complex; simple is a tough nut to crack.
[+] [-] chamilto|6 years ago|reply
What about in languages with named parameters? I think keyword arguments solve this issue pretty well.
[+] [-] gregmac|6 years ago|reply
I'd question this specific case in a different way, though: summary vs detail sounds like different object types, which means different return types and so there should be two different methods (eg: getUserMessageSummaries(userId) and getUserMessageDetail(userId) ). It's perhaps a bit more work up front, but when consuming this you don't end up with a bunch of properties that may or may not be null depending on how you called this (instead, they're just not there on the summary response), and in a typed language it will simply fail to compile (making your mistake very obvious).
[+] [-] ryanianian|6 years ago|reply
In this example the call-site could instead look like `getUserMessages(userId, MessageStyle.SUMMARY)`. Naming the enum is always harder than naming the boolean but that's kinda the point.
[+] [-] EdgarVerona|6 years ago|reply
I think named parameters don't actually help, because the underlying problem with the boolean (aside from the mystery of what it means when reading the calling code) is that it implies a potential separate "mode" of operation for the function - that the single function might actually serve two different purposes. It doesn't always imply that, I imagine, but it's pretty likely.
I'm guilty of this in my own code, and I know my code quality has suffered for it. For some reason, the way he put it in this article gave me a moment of introspection, and it's something I'm going to take away and use to improve my own code in the future.
[+] [-] m4r|6 years ago|reply
Is there really a point in creating an Enum or writing more complex internal logic if you use a proper IDE?
The only benefit I can see is doing CRs via a GitHub-like system - but still, I don't believe in code review without actually pulling, viewing, and testing the code on my own computer.
[+] [-] rntz|6 years ago|reply
If the name is optional, as it usually is in eg. Python, then there is a temptation for the writer to omit it, which means the reader won't know what "True" or "False" stands for.
Are there languages besides Smalltalk and its descendants with mandatorily-named parameters?
[+] [-] blinky1456|6 years ago|reply
Which is super useful and easy to read.
getUserMessages({ userId: 1234, retrieveFullMessage: true })
https://codeburst.io/es6-destructuring-the-complete-guide-7f...
[+] [-] therufa|6 years ago|reply
this one is priceless.. I had so many arguments in my past with cocky devs about this topic. Don't get me wrong, patterns are good, but they're only a small part of coding. Patterns also aren't there to be followed as if they were the law; bend them to your needs. A pattern doesn't even necessarily need to be implemented 100% as it is in the "book". So yes, I'm very fond of this part of the post; it's really wise.
[+] [-] cgrealy|6 years ago|reply
Patterns are a way of describing code. They are meant as a common language to use when explaining a codebase to another dev. If you are writing code with the express intention of "using X pattern", you're doing it backwards.
[+] [-] Const-me|6 years ago|reply
> don't add flags/Boolean parameters to your functions.
Many languages have strongly-typed enums for them, which make the calling code readable, e.g. getUserMessages( user, eMessageFlags.Full ). Especially useful for more than one flag.
> ALWAYS use timezones with your dates
UTC usually does the trick?
> UTF8 won the encoding wars
UTF-8 is used on the web a lot, but OP seems to be generalizing their experience over all programming, not just web development.
> when your code is in production, you can't run your favorite debugger.
No but I can run my second favorite, WinDBG. A crash dump .dmp file is often more useful for fixing the bug than any reasonable amount of logging.
> Optimization is for compilers
Compilers are not that smart. They never change RAM layout. I recently improved a C++ project's performance by an order of magnitude by replacing std::vector everywhere with std::array (the size was fixed and known at compile time). Before I did, the profiler showed 35% of time spent in malloc/free, but the cost of the vectors was much higher because of RAM latency.
[+] [-] agentultra|6 years ago|reply
A sufficiently expressive type system can check your logic for you. Push as much of your propositions and predicates into the types as possible.
Data types are better than exceptions, return codes, etc for the majority of unexceptional cases. Conditions and restarts are better than exceptions. Sometimes crashing is not an option when your computer is flying in the air with a human.
In addition to specifications -- often a bunch of boxes with arrows will not be sufficient. There are a range of tools to check your designs from automated model checking to interactive proofs. When the problem is difficult to understand on its own then the solution is probably even harder -- a good sign you should consider using one of these tools to make the problem clear and guide you to the right solution.
[+] [-] JamesBarney|6 years ago|reply
I really think comments can be especially valuable. They're valuable when reviewing code because they tell the reader what the developer was attempting to do, not necessarily what they did.
Not everything is expressible in every type system. Even something as simple as "this reference can't be null" isn't expressible in many languages.
Comments are easier to read than type names, a sentence is much more expressive than code. Especially code that is tricky due to performance or domain complexity.
Type and method names can rot too. If someone adds a little updating logic to a method whose name starts with get, the name becomes misleading.
And comments can be kept up to date easily by just adding an item to your code review checklist "Ensure comments match code".
[+] [-] mannykannot|6 years ago|reply
Even something as fundamental as floating point types needs documentation, which is why we have IEEE 754.