mtzet's comments
mtzet | 6 months ago | on: The Framework Desktop is a beast
The only selling point is the form factor and a massive amount of GPU memory, but a dGPU offers more raw compute.
mtzet | 1 year ago | on: Write Your Own Virtual Machine (2022)
I think the hard thing to understand is that C's pointer syntax is backwards ("declaration follows usage" is weird).
I also think understanding how arrays silently decay to pointers and how pointer arithmetic works in C is hard: ptr+1 is not address+1, but address+sizeof(*ptr)!
Pointers are not hard. C is just confusing, but it happens to be the lingua franca for "high-level" assembly.
mtzet | 1 year ago | on: FreeBSD: How Can We Make It More Attractive to New Users?
Either experience will be CLI first, so this is a tie.
ZFS integration is one point. If that's important to you, then you'd want to pick a distro like Ubuntu with first-class support. All major OpenZFS development happens on the ZFS-on-Linux side as far as I understand, so this should be okay.
As the original post points out, FreeBSD used to have unique features as selling points: zfs, dtrace, the network stack (before SMP became ubiquitous?), kqueue, jails. I'm sure there are others. But these days it seems Linux has caught up with developments like ebpf, cgroups, namespaces and io_uring.
I'm sure the fragmented nature of Linux means that some of these low-level techs are easier to use on FreeBSD. The counterpoint is that the higher-level stack is better supported on Linux. You may not have to care too much about the details of namespaces and cgroups if high-level docker/kubernetes/... tooling works for you.
What am I missing?
mtzet | 1 year ago | on: Garbage collection for systems programmers (2023)
I like Jai's thesis that there's four types of memory allocations, from most common to least common:
1. Extremely short lived. Can be allocated on the function stack.
2. Short lived + well-defined lifetime (per frame/request). Can be allocated in a memory arena.
3. Long lived + well-defined owner. Can be managed by a subsystem-specific pool.
4. Long lived + unclear owner. Needs a dynamic memory management approach.
If you want to claim that tracing GCs surpass manual memory management in general, you should compare against a system written with this model in mind, not one that calls malloc/free all over the place. A fairer comparison might be tracing GC versus modern C++/Rust practices.
I agree that for most systems, it's probably much more _practical_ to rely on tracing GC, but that's a very different statement.
mtzet | 2 years ago | on: Ask HN: How can I learn about performance optimization?
My playbook for optimizing in the real world is something like this:
1. Understand what you're actually trying to compute end-to-end. The bigger the chunk you're trying to optimize, the greater the potential for performance.
2. Sketch out what an optimal process would look like. What data do you need to fetch, what computation do you need to do on this, how often does this need to happen. Don't try to be clever and micro-optimize or cache computations. Just focus on only doing the things you need to do in a simple way. Use arrays a lot.
3. Understand what the current code is actually doing. How close to the sketch above are you? Are you doing a lot of I/O in the middle of the computation? Do you keep coming back to the same data?
If you want to understand the limits of how fast computers are, and what optimal performance looks like, I'd recommend two talks that come with a very different perspective from what you usually hear:
1. Mike Acton's talk at cppcon 2014 https://www.youtube.com/watch?v=rX0ItVEVjHc
2. Casey Muratori's talk about optimizing a grass planting algorithm https://www.youtube.com/watch?v=Ge3aKEmZcqY
mtzet | 2 years ago | on: Arena allocator tips and tricks
The idea that he just needs to accommodate the compiler people is silly. Compilers exist to serve programmers, not the other way around. It's entirely reasonable to disagree with the compiler developers and use a flag to disable behaviour you don't want.
mtzet | 2 years ago | on: Gopher Wrangling: Effective error handling in Go
There's no magic to it. Errors are values, so it's part of the function signature that there's an error value to check. In C++ any function can throw an exception and there's no way of knowing that it won't.
It's true that Go doesn't document what _kinds_ of errors a function can return, but at least I know there's something to check.
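For example (parsePort and its error messages are made up for illustration), the error is right there in the signature, so the caller can't miss that there's something to check:

```go
package main

import (
	"errors"
	"fmt"
)

// The error is part of the signature: callers can see there is
// something to check, even if the kinds of errors aren't documented.
func parsePort(s string) (int, error) {
	var p int
	if _, err := fmt.Sscanf(s, "%d", &p); err != nil {
		return 0, fmt.Errorf("parse port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, errors.New("port out of range")
	}
	return p, nil
}

func main() {
	if port, err := parsePort("8080"); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("port:", port) // prints "port: 8080"
	}
}
```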
mtzet | 2 years ago | on: Gopher Wrangling: Effective error handling in Go
This differentiates go, rust, zig, odin etc., from languages like C++, Java, C#, Python etc. I think it makes sense to describe that difference as one of modern sensibilities.
mtzet | 2 years ago | on: Gopher Wrangling: Effective error handling in Go
Another point is that they do share similarities, which we might now just describe as being 'modern': they're generally procedural -- you organize your code into modules (not classes) with structs and functions -- they generally prefer static linking, they have type inference for greater ergonomics, the compiler includes the build system and a package manager, and there's a good formatter.
The above are points for both rust and go compared to C/C++, Python, Java, etc.
So why do I like go? I think mostly it's that it makes some strong engineering trade-offs, trying to get 80% for 20% of the price. That manifests itself in a number of ways.
It's not the fastest language, but neither is it slow.
I really dislike exceptions because there's no documentation for how a function can fail. For this reason I prefer go style errors, which are an improvement on the C error story. Yes it has warts, but it's 80% good enough.
It's a simple language with batteries included. You can generally follow the direction set and be happy. It lends itself to simple, getting-things-done kind of code, rather than being over-abstracted. Being simple also makes for great compile times.
mtzet | 2 years ago | on: Why I Left Rust
I'm having trouble finding it. Can anyone link this post?
mtzet | 2 years ago | on: Unity to lay off 8% of its workforce
mtzet | 2 years ago | on: Paul Graham on Twitter's anti-Substack measures: “It's a mistake.”
I'm also curious about his role in the success of Tesla and SpaceX. I personally find those to be two of the most interesting startups in a long while, and I'm inclined to think that Musk being involved in both is unlikely to be a coincidence.
mtzet | 3 years ago | on: Use GNU Emacs
- IDE-like features via LSP
- The best git porcelain out there: magit. Even when I'm not using emacs, I come back to magit for code-browsing (recursive blame) and staging hunks.
- Emacs/vim's fantastic buffer/window concept, where open files are not owned by their windows. I miss this whenever I use anything else.
- project support to quickly grep across all files or jump to files
- Very mature vi keybindings, with their infinite composability
I still sometimes find that it's either too rigid or too manual at certain things, but I could say the same for CLion and VSCode. I still come back to CLion for its refactoring tool, the 3-way merge window and the debugger integration.
It is a bit messy though. It's very well done for what it is, but it's cobbled together from many disparate components. It seems that it should be possible to create the same type of experience from a simpler, more coherent system. I rarely update the base system, but when I do, I've occasionally had to google for some exotic elisp error and add a fix here or there.
mtzet | 3 years ago | on: Functional Core, Imperative Shell (2012)
You can often optimize a single, large state transformation by exploiting the fact that it does a lot of similar work. You can also often get a big performance boost by batching up computations.
Half of all performance problems are solved by introducing a cache. The other half is solved by removing one.
mtzet | 3 years ago | on: Trunk-Based Development: Game Changers
But that's also why I argue for having less process up front and creating it as needed. The master branch exists solely for the aesthetic reason of having a linear history from version to version. This is obviously incompatible with a non-linear version history.
Simplify the initial process by getting rid of master and tagging directly on the release branch.
In fact, the merge from release branch to master is a bit suspicious. I'd expect the contents of the merge to be precisely the contents of the release branch, regardless of what was on master previously. If not, what am I releasing?
mtzet | 3 years ago | on: Trunk-Based Development: Game Changers
In TBD you'd have most new development happening on trunk, but there's no reason you can't spin off a release/v1.x branch as well as a release/v2.x branch from the trunk. The point is just that these shouldn't merge back into trunk -- they're spun off forever.
You can cherry-pick patches onto release branches, but you need to figure out what that means on a case-by-case basis. There's no guarantee that the patch will apply cleanly, and there's no guarantee that it will make sense, even if it does apply cleanly. No branching strategy can fix this for you.
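A sketch of what that can look like with git (branch and tag names are just examples; run it in a throwaway directory):

```shell
set -e
git init demo && cd demo
git config user.email demo@example.com
git config user.name demo
echo v1 > app.txt && git add app.txt && git commit -m "initial"
git branch -M trunk                  # trunk is where new development happens
git checkout -b release/v1.x         # spun off forever; never merges back
git tag v1.0.0                       # tag directly on the release branch
git checkout trunk
echo fix >> app.txt && git commit -am "fix: important bugfix"
FIX=$(git rev-parse trunk)
git checkout release/v1.x
git cherry-pick "$FIX"               # may conflict on real histories
git tag v1.0.1
```

The cherry-pick here happens to apply cleanly because the branches haven't diverged much; on a real history you'd resolve conflicts and re-test on the release branch.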
mtzet | 3 years ago | on: Trunk-Based Development: Game Changers
Gitflow solves a problem few teams actually have. Throwing away a working (simpler) process of having a single trunk when there's no real benefit doesn't seem like a great idea.
It seems to me that Gitflow just adds a lot of ceremony of keeping multiple branches somewhat in sync through merges left and right.
I'm fine with the argument of keeping existing processes because they work, whatever they are.
mtzet | 3 years ago | on: Testing my system code in /usr/ without modifying /usr/
I agree that this is precisely a nice use case, but why is this a problem? I agree with Lennart's reasoning that it makes little sense if the files are arbitrarily split between / and /usr. In any case, Yocto has had support for usrmerge for a while now[1].
[1] https://docs.yoctoproject.org/ref-manual/features.html?highl...
mtzet | 3 years ago | on: Testing my system code in /usr/ without modifying /usr/
How does systemd have a problem? Systemd doesn't care about having a read-only rootfs at all, except that it supports it and now ships a little tool that's useful if you happen to use one. Fedora and Ubuntu don't ship a read-only rootfs(1).
(1) Well, Fedora Silverblue is an experimental Fedora variant that uses a read-only rootfs. But the point still stands.