blakehaswell's comments

blakehaswell | 8 months ago | on: Checklists are hard, but still a good thing

One thing that might not be obvious about checklists is how they're used.

I used to think checklists were used by reading an item, then doing the thing. I literally thought of them as a recipe you would follow: complete a step, check the box, repeat. This is typically referred to as a "read, do" checklist. In aviation, this style is reserved for non-normal operations—procedures you wouldn't use often enough to commit to memory.

The other style of checklist is "do, confirm". In this style you complete a procedure from memory, then read through the checklist to make sure you didn't miss anything (no box-ticking; you just read the items and confirm to yourself that they're complete). In aviation this is the style used for normal operations, and for the initial action items in an emergency (which, although not commonly used, must be committed to memory because they are time-critical).

Because the procedure is expected to be completed from memory, a "do, confirm" checklist can be extremely brief. You don't need to describe each step in detail; a word or two to name the step is enough. They also impose an extremely low operational burden: reading through a "do, confirm" checklist takes a couple of seconds, but the upside of catching common errors is significant.

blakehaswell | 3 years ago | on: Musk ‘killed’ Twitter’s official checkmarks after only a few hours

I feel for everyone working there who is suffering whiplash from being pulled from one direction to the other.

What can you possibly learn about a new feature in a few hours apart from the gut reaction of a mob? If that's enough to change your mind, what evidence was there in the first place that the feature was a good idea? Probably none.

blakehaswell | 3 years ago | on: Will serving real HTML content make a website faster?

Citation needed. Whether the server is serving HTML or JSON, it still needs to serialise the data, so I don't think serialising JSON will be significantly faster than serialising HTML. The client then needs to deserialise that JSON before it can render HTML, so all of that (de)serialisation work simply doesn't happen if the server renders HTML. Not to mention the client-side work of parsing and evaluating the JS, which must happen before rendering can even start.

As for data across the wire, GZIP is a thing, so again I would want to see real-world performance numbers to back your claims.

blakehaswell | 4 years ago | on: The Only Unbreakable Law

I really enjoyed this. I think looking at the organisational structure through time is a good take that I haven't seen addressed so clearly before.

I would have liked to see an exploration of the "through time" lens on some of the more micro code-organisation structures he talked about at the end like class hierarchies. It's definitely a common problem in legacy code—there's some idea you want to express but the existing structures make that very difficult and so you end up twisting your idea to fit, further ossifying the existing structures.

I've also seen cases where the organisational structure was changed to effect some change, but the existing code structure made that so difficult that the software never actually changed to reflect the new structure. Instead, the new organisation was simply slowed down by coordination costs at the organisational level as well as different coordination costs at the technical level.

blakehaswell | 4 years ago | on: SPAs Were a Mistake

> did Facebook, Apple, Amazon, Netflix, Google, etc all make a terrible engineering mistake?

Is that so impossible? There are many other considerations that go into technology choices at these companies. There are trade-offs involved, and for companies with huge teams of developers the considerations need to be very different than for small–medium sized groups of developers.

I would argue that a smaller group of developers can focus much more on user experience and engineering efficiency, whereas a large company has organisational scaling issues and a significant bureaucracy to support. At a large company, engineering considerations come second to very many things. It would actually be surprising if the trade-offs and choices those companies made were correct for other, very different companies.

blakehaswell | 4 years ago | on: Starting with microservices

What exactly do you mean by multiple entry points? Do you have multiple processes which run independently but are co-located in the same repository or are you talking about something else?

blakehaswell | 4 years ago | on: Modern JavaScript has made the web worse (2020)

> great speed in iterating new features […] makes our customers happy

For me this is a leap. I can't think of many examples of software which I use where new features have actually made me happy. Normally it's just change which forces me to learn something new while I'm in the middle of trying to accomplish something actually productive.

I think our industry over-estimates the value of "new features"; in my experience 90% of new features provide neutral or negative value. If—instead of a new feature—the software I use released a performance improvement, that would actually make me more productive.

blakehaswell | 4 years ago | on: Why I Hate Frameworks (2005)

> Exactly what I did. After reaching top still somewhat technical position at the company I got so fed up and burned out that I quit. Went on my own and been this way ever since for some 20+ years.

I'd love to hear more about your journey. What did going out on your own look like? What type of work do you do? How long did it take you to figure out the kind of work that satisfied you and paid the bills?

blakehaswell | 4 years ago | on: The compiler will optimize that away

At the end of the day our software is going to run on real machines, sending data across real networks. I feel like we'll always be leaving performance on the table if we don't understand the characteristics of the hardware we're developing on.

I can build a cubby house in the backyard (a low-performance system) without understanding the physical characteristics of the materials I'm using, but if I want to build a skyscraper (a high-performance system) I need to understand the actual physical materials and how to use them effectively.

blakehaswell | 4 years ago | on: The compiler will optimize that away

C++ gives the programmer control over memory layout, which is something managed languages don't provide. So although C++ is an older language that wasn't designed for modern machines, it does give you the control to write performant programs for them.

blakehaswell | 5 years ago | on: HN was down

Me too. I was trying to browse HN on my phone earlier and my first instinct was that my WiFi was having a moment. It's a testament to how reliable HN is.

blakehaswell | 5 years ago | on: An incomplete list of complaints about real code

I was thinking "modern C++" is potentially a good fit for avoiding most of these complaints (i.e. if you solely use smart pointers). Given their other post about technology choices[0] I'd guess it's a safe assumption.

That said, I don't think any language is without trade-offs. As the author says:

> Hint: if you can't find something stunningly wrong with your "chosen one" language, you probably haven't been using it long enough...

[0]: http://rachelbythebay.com/w/2020/09/24/feedback/

blakehaswell | 5 years ago | on: Continuous delivery makes JVM JIT an anti-pattern

> If you actually don't need it (i. e., your hot methods are never overridden), then the JIT will trivially compile those "virtual" method calls as non-virtual ones.

But isn't that the thrust of this article? Of course the JIT can optimise a monomorphic call-site. The question is, in reality, what percentage of the time will it be optimised for your users?

blakehaswell | 5 years ago | on: Code Budgets

I agree. I don’t think lines of code is necessarily the right thing to budget, especially if you limit it to “lines of code written by our team”.

But there is something interesting in this idea. Our software is growing immeasurably complex. We’re piling on more and more layers of abstraction. Ostensibly the goal is to make our lives easier. But in reality we’ve got companies throwing hundreds or thousands of developers at the problem of building a wiki or a social network, and they end up building these Rube Goldberg machines where 99% of the effort is spent on things other than the actual problem users want the software to solve.

Moore’s law has been totally offset by Wirth’s law[0], and as an industry I don’t see us doing anything about it. Even if lines of code is the wrong constraint, we could do with learning how to do more with less.

[0]: https://en.wikipedia.org/wiki/Wirth%27s_law

blakehaswell | 5 years ago | on: We need a new media system

My concern with this is that it gives the "rain skeptics" a platform to spread their views, and presents "rain skeptics" as a legitimate alternative to the "rain scientists". The problem with this is that once people are given a platform and presented as legitimate, then they can convince people not based on the logic of their arguments but on things which have nothing to do with the matter under debate (e.g. "I like the way he speaks, not like that snooty scientist").

I think people I know have been convinced in this manner. I don't think people necessarily reflect on why they were convinced. They just know that they agree with the "rain skeptics".

I'm not convinced that giving "rain skeptics" a platform would be the right thing to do if they only make up a small percentage of the population. If they make up half the population, I have no idea.

EDIT: My experience above is probably not helped by weak moderation in debates. Moderators with a backbone would probably help in cases where "rain skeptics" are given a platform. Sadly that seems to be rare.
