mtzet's comments

mtzet | 6 months ago | on: The Framework Desktop is a beast

A normal desktop with non-soldered components is more repairable, cheaper, and can also run stock Linux?

The only selling point is the form factor and a massive amount of GPU memory, but a dGPU offers more raw compute.

mtzet | 1 year ago | on: Write Your Own Virtual Machine (2022)

I don’t get this take. Is it so hard to understand that a computer operates on a giant array of bytes?

I think the hard thing to understand is that C’s pointer syntax is backwards ("declaration follows usage" is a weird rule).

I also think understanding how arrays silently decay to pointers and how pointer arithmetic works in C is hard: ptr+1 is not address+1, but address+sizeof(*ptr)!

Pointers are not hard. C is just confusing, but happens to be the lingua franca for “high level” assembly.

mtzet | 1 year ago | on: FreeBSD: How Can We Make It More Attractive to New Users?

So let's focus on the case where I'm setting up a bunch of bare-metal hosts as servers. What's the value proposition of using FreeBSD over Debian/Ubuntu if we're not counting familiarity?

Either experience will be CLI first, so this is a tie.

ZFS integration is one point. If that's important to you, then you'd want to pick a distro like Ubuntu with first-class support. As far as I understand, all major development happens on the ZFS-on-Linux branch, so this should be okay.

As the original post points out, FreeBSD used to have unique features as selling points: zfs, dtrace, the network stack (before SMP became ubiquitous?), kqueue, jails. I'm sure there are others. But these days it seems Linux has caught up with developments like ebpf, cgroups, namespaces and io_uring.

I'm sure the fragmented nature of Linux means that some of these low-level techs are easier to use on FreeBSD. The counterpoint is that the higher-level stack is better supported on Linux. You may not have to care too much about the details of namespaces and cgroups if high-level docker/kubernetes/... tooling works for you.

What am I missing?

mtzet | 1 year ago | on: Garbage collection for systems programmers (2023)

> special case where all memory can be easily handled in arenas

That seems to be an unfair bar to set. If _most_ objects are easily allocated in an arena, then that still removes most of the need for GC.

I like Jai's thesis that there are four types of memory allocations, from most common to least common:

1. Extremely short lived. Can be allocated on the function stack.

2. Short lived + well-defined lifetime (per frame/request). Can be allocated in a memory arena.

3. Long lived + well-defined owner. Can be managed by a subsystem-specific pool.

4. Long lived + unclear owner. Needs a dynamic memory management approach.

If you want to make the claim that tracing GCs surpass manual memory management in general, you should compare against a system written with this in mind, not one that calls malloc/free all over the place. A fairer comparison might be tracing GC against modern C++/Rust practices.

I agree that for most systems, it's probably much more _practical_ to rely on tracing GC, but that's a very different statement.

mtzet | 2 years ago | on: Ask HN: How can I learn about performance optimization?

Most software in the industry is slow because it's doing a lot of stuff that it shouldn't. Often, additional "optimization" layers add caching, which makes getting to the root of the issue harder. The biggest win comes primarily from getting rid of things you don't need, and secondarily from operating on things in batch.

My playbook for optimizing in the real world is something like this:

1. Understand what you're actually trying to compute end-to-end. The bigger the chunk you're trying to optimize, the greater the potential for performance.

2. Sketch out what an optimal process would look like. What data do you need to fetch, what computation do you need to do on this, how often does this need to happen. Don't try to be clever and micro-optimize or cache computations. Just focus on only doing the things you need to do in a simple way. Use arrays a lot.

3. Understand what the current code is actually doing. How close to the sketch above are you? Are you doing a lot of I/O in the middle of the computation? Do you keep coming back to the same data?

If you want to understand the limits of how fast computers are, and what optimal performance looks like I'd recommend two talks that come with a very different perspective from what you usually hear:

1. Mike Acton's talk at cppcon 2014 https://www.youtube.com/watch?v=rX0ItVEVjHc

2. Casey Muratori's talk about optimizing a grass planting algorithm https://www.youtube.com/watch?v=Ge3aKEmZcqY

mtzet | 2 years ago | on: Arena allocator tips and tricks

Processors doing out-of-order execution doesn't change the semantics of the code. That's very different from the example where gcc just throws away the assignment.

The idea that he just needs to accommodate the compiler people is silly. Compilers exist to serve programmers, not the other way around. It's entirely reasonable to disagree with the compiler developers and use a flag to disable behaviour you don't want.

mtzet | 2 years ago | on: Gopher Wrangling: Effective error handling in Go

> I’m not a go developer. How does go document how a function can fail?

There's no magic to it. Errors are values, so it's part of the function signature that there's an error value to check. In C++, any function can throw an exception and there's no way of knowing that it won't.

It's true that go doesn't document what _kinds_ of errors a function can return, but at least I know there's something to check.

mtzet | 2 years ago | on: Gopher Wrangling: Effective error handling in Go

I'm not arguing that go has modern tech, but rather that it has modern sensibilities. This means not trying to force 90s-style OOP, preferring static linking for easier deployment, including a build system and package manager with the compiler, and preferring static types with type inference to dynamic types.

This differentiates go, rust, zig, odin etc., from languages like C++, Java, C#, Python etc. I think it makes sense to describe that difference as one of modern sensibilities.

mtzet | 2 years ago | on: Gopher Wrangling: Effective error handling in Go

I agree that go and rust target different areas, but that was less clear when they were getting started. Back then go was still trying to figure out what it meant by 'systems programming language', and rust had a similar threading model.

Another point is that they do share similarities, which we might now just describe as being 'modern': they're generally procedural -- you organize your code into modules (not classes) with structs and functions -- they generally prefer static linking, type inference gives greater ergonomics, the compiler includes the build system and a package manager, and there's a good formatter.

The above are points for both rust and go compared to C/C++, Python, Java, etc.

So why do I like go? I think mostly it's that it makes some strong engineering trade-offs, trying to get 80% for 20% of the price. That manifests itself in a number of ways.

It's not the fastest language, but neither is it slow.

I really dislike exceptions because there's no documentation for how a function can fail. For this reason I prefer go style errors, which are an improvement on the C error story. Yes it has warts, but it's 80% good enough.

It's a simple language with batteries included. You can generally follow the direction set and be happy. It lends itself to simple, getting-things-done kind of code, rather than being over-abstracted. Being simple also makes for great compile times.

mtzet | 2 years ago | on: Why I Left Rust

> as best as I understand it, because of the content of JeanHeyd's blog post on reflection in Rust.

I'm having trouble finding it. Can anyone link this post?

mtzet | 2 years ago | on: Unity to lay off 8% of its workforce

Off the charts? Compared to completely at-will employment, maybe. You generally have to pay 3 months of severance plus 1 month per 3 years of employment, capped at 6 months of severance after 9 years of employment. That's it.

mtzet | 2 years ago | on: Paul Graham on Twitter's anti-Substack measures: “It's a mistake.”

As someone who knows nothing at all about the success of PayPal, can you elaborate on why?

I'm also curious about his role in the success of Tesla and SpaceX. I personally find those to be two of the most interesting startups in a long while, and have been inclined to think that Musk's involvement in both is unlikely to be a coincidence.

mtzet | 3 years ago | on: Use GNU Emacs

I agree. Emacs out-of-the-box is terrible, but my Doom Emacs config is all of 50 lines of code. For that I get:

- IDE-like features via LSP

- The best git porcelain out there: magit. Even when I'm not using emacs, I come back to magit for code-browsing (recursive blame) and staging hunks.

- Emacs/vim's fantastic buffer/window concept, where open files are not owned by their windows. I miss this whenever I use anything else.

- project support to quickly grep across all files or jump to files

- Very mature vi keybindings, with their infinite composability

I still sometimes find that it's either too rigid or too manual at certain things, but I could say the same for CLion and VSCode. I still come back to CLion for its refactoring tool, the 3-way merge window and the debugger integration.

It is a bit messy though. It's very well done for what it is, but it's cobbled together from many disparate components. It seems that it should be possible to create the same type of experience from a simpler, more coherent system. I rarely update the base system, but when I do, I've occasionally had to google for some exotic elisp error and add a fix here or there.

mtzet | 3 years ago | on: Functional Core, Imperative Shell (2012)

On the other hand, I've found that a lot of the caching that happens organically from each component building up its own state of the world can lead to poor performance of its own. You often end up reading single values from random places in memory.

You can often optimize a single, large state transformation by exploiting the fact that it does a lot of similar work. You can also often get a big performance boost by batching up computations.

Half of all performance problems are solved by introducing a cache. The other half is solved by removing one.

mtzet | 3 years ago | on: Trunk-Based Development: Game Changers

I agree about being pragmatic and doing what the business needs at the moment!

But that's also why I argue for having less process up front and creating it as needed. The master branch exists solely for the aesthetic reason of having a linear history from version to version. This is obviously incompatible with a non-linear version history.

Simplify the initial process by getting rid of master and tagging directly on the release branch.

In fact, the merge from release branch to master is a bit suspicious. I'd expect the contents of the merge to be precisely the contents of the release branch, regardless of what was on master previously. If not, what am I releasing?

mtzet | 3 years ago | on: Trunk-Based Development: Game Changers

Gitflow merges everything to a linear 'master' branch, from which releases are tagged. That means you can't create new 1.x releases after 2.0 has been tagged.

In TBD you'd have most new development happening on trunk, but there's no reason you can't spin off a release/v1.x branch as well as a release/v2.x branch from the trunk. The point is just that these shouldn't merge back into trunk -- they're spun off forever.

You can cherry-pick patches onto release branches, but you need to figure out what that means on a case-by-case basis. There's no guarantee that the patch will apply cleanly, and there's no guarantee that it will make sense, even if it does apply cleanly. No branching strategy can fix this for you.

mtzet | 3 years ago | on: Trunk-Based Development: Game Changers

To me the argument makes more sense in reverse:

Gitflow solves a problem few teams actually have. Throwing away a working (simpler) process of having a single trunk when there's no real benefit doesn't seem like a great idea.

It seems to me that Gitflow just adds a lot of ceremony: keeping multiple branches somewhat in sync through merges left and right.

I'm fine with the argument of keeping existing processes because they work, whatever they are.

mtzet | 3 years ago | on: Testing my system code in /usr/ without modifying /usr/

> IOW, if I build a system with yocto for an embedded system, which is also a very useful candidate for a read-only system image installation, I cannot use this feature for this use-case because ...

I agree that this is precisely a nice use-case, but why is this a problem? I agree with Lennart's reasoning that it makes little sense if the files are arbitrarily split between / and /usr. In any case, Yocto has had support for usrmerge for a while now[1].

[1] https://docs.yoctoproject.org/ref-manual/features.html?highl...

mtzet | 3 years ago | on: Testing my system code in /usr/ without modifying /usr/

> Systemd has this problem

How does systemd have a problem? Systemd doesn't care about having a read-only rootfs at all, except that it supports it and now ships a little tool that's useful if you happen to use one. Fedora and Ubuntu don't ship a read-only rootfs(1).

(1) Well, Fedora Silverblue is an experimental Fedora variant that uses a read-only rootfs. But the point still stands.
