mindajar's comments

mindajar | 3 years ago | on: Ubuntu stops shipping Flatpak by default

bzr lost because it was poorly architected, infuriatingly slow, and kept changing its repository storage format trying (and failing) to narrow the gap with Mercurial and Git performance. Or, at least that's why I gave up on it, well before GitHub really took off and crushed the remaining competition.

For my own sanity I began avoiding Canonical's software years ago, but even before that, they always seemed to build stuff with shiny UI that demoed well, while performance was a distant afterthought. Their software aspirations always seemed much larger than the engineering resources/chops/time they were willing to invest.

mindajar | 4 years ago | on: M1 MacBook Pros Speakers Have a Crackling Problem

I don't think it's a hardware problem. After a few days/weeks of uptime, coreaudiod sometimes gets into a state where, regardless of output volume, it glitches and pops periodically until you force-quit it. It's as if the daemon's internal state degrades until it's only just barely able to feed audio buffers to the hardware fast enough, and then all bets are off as to whether other system load will tip it over into glitching.

Things seem worse if you have multiple audio devices and switch between them. I filed a Radar on this (or a very similar bug) many OS releases ago; it's still open with no feedback. But "restart your computer" also solves the problem, whether or not you follow the other placebo troubleshooting nonsense.

mindajar | 5 years ago | on: Apple’s M1 processor and the full 128-bit integer product

Yeah. To me it looks like macOS goes so deep into sleep it disconnects the external display. On wake, the system rediscovers the external and resizes the desktop across both displays. With a bunch of apps/windows open, half your apps simultaneously resizing all their windows can peg all CPU cores for a number of seconds.

(It's still way faster than the same set of apps on an Intel Mac laptop, where it could sometimes take on the order of 30 seconds to get to a usable desktop after a long sleep. On Intel Macs it seemed more obvious that the GPU was the bottleneck.)

mindajar | 5 years ago | on: M1 Macs Review

The original Rosetta was also a licensed technology that (presumably) cost Apple a pretty penny. If Rosetta 2 is all in-house tech, that probably bodes well for sticking around longer than the original.

mindajar | 6 years ago | on: Apple's long processor journey

It already does, in the T2. It seems to me like future T* chips will run more and more of the system, leaving the x86 to become something like the MBP’s discrete GPU option. And then, like discrete GPUs, eventually an x86 CPU is only available in top-end models.

mindajar | 6 years ago | on: Is Catalina a Good Upgrade Yet?

If TM detects an I/O error while writing to a networked backup volume, it aborts the backup and forces a filesystem check on the remote disk image at the start of the next backup. This fails, because the filesystem actually is corrupt and has been for some time. You had no idea, of course, because TM only reports four-horsemen level errors in the UI, and silently eats the rest. You even had a good run of "successful" backups afterwards, so long as you managed to avoid changing any files in directories with damaged metadata in your backup.

Anyway, as your digital life flashes before your eyes, TM helpfully offers to delete all your backups and start with a fresh disk image now, or delete all your backups and start with a fresh disk image later. And you can't blame it, really, because the filesystem has had some unknown, nonzero amount of corruption for an unknown, nonzero amount of time. For all anyone knows or can prove, the whole backup is random garbage.

This fractal of failure is specific to backups over WiFi. Backups to USB-connected disks aren't that reliable either, because HFS+, but USB backups have all the nines of reliability compared to network backups.

mindajar | 6 years ago | on: Why Discord is switching from Go to Rust

If there's an example of getting great game performance with a GC language, Unity isn't it. Lots of Unity games get stuttery, and even when they don't, they seem to use a lot of RAM relative to game complexity. At one point Kerbal Space Program's release notes even mentioned a new garbage collector helping with frame-rate stuttering.

I started up KSP just now, and the process was at 5.57GB of RAM before I even got to the main menu. To be fair, I hadn't launched it recently, so it was installing its updates or whatever. OK, I launched it again, and at the main menu it's sitting on 5.46GB. (This is on a Mac.) At Mission Control, I'm not even playing the game yet, and the process is using 6.3GB.

I think a better takeaway is that you can get away with GC even in games now, because it sucks and is inefficient but it's ... good enough. We're all conditioned to put up with inefficient software everywhere, so it doesn't even hurt that much anymore when it totally sucks.

mindajar | 6 years ago | on: The few remaining uses of the name “Macintosh”

APFS is “optimized for SSD”, so performance is miserable on hard disks. For a while Apple held off converting the system disk to APFS if it was an HDD, but that’s no longer an option if you want to run the latest OS.

mindajar | 6 years ago | on: Apple’s iPhone Software Shakeup After Buggy iOS 13 Debut

That makes it suck slightly less, but TM can pretty much never be "fast" the way it works now, with the number of files we all have these days. In the best case, if a folder doesn't change, TM can hardlink to the previous backup of that folder. If even one file has changed, it has to copy the changed files and then hardlink every unchanged file in the folder, so TM chokes on any large folder where even a single file changes often. If you watch with `fs_usage -f pathname backupd`, you can see TM take dozens of minutes backing up the Mail/Messages/Calendars folders, or cache folders with ridiculous numbers of files like Slack's. Any folder with thousands or tens of thousands of files is putting TM into its worst-case performance corner.
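A toy sketch of that hardlink scheme (illustrative shell only, not TM's actual implementation; all file names here are made up). The point is that every unchanged file in a touched folder still costs a per-file hardlink, which is why huge folders with one frequently-changing file are the worst case:

```shell
# Toy model of per-file hardlink incremental backups (not Time Machine itself).
mkdir -p src snap1 snap2
printf 'v1\n'   > src/changed.txt
printf 'same\n' > src/unchanged.txt

# Backup 1: copy everything.
cp src/changed.txt src/unchanged.txt snap1/

# One file changes before backup 2.
printf 'v2\n' > src/changed.txt

# Backup 2: copy the changed file, then hardlink each unchanged file
# individually -- one operation per file, even though nothing changed.
cp src/changed.txt snap2/changed.txt
ln snap1/unchanged.txt snap2/unchanged.txt

# The unchanged file shares one inode across both snapshots.
ls -i snap1 snap2
```

With tens of thousands of files per folder, that one-hardlink-per-unchanged-file loop is where the time goes.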

Personally, I find TM bordering on unusable for backups these days. Maybe I'd feel differently if I still had a desktop Mac, but with a bunch of files on a laptop that moves around and sleeps, having to tether to a USB disk for several hours to do a backup is ridiculous. And network backups are even slower and frequently self-corrupt, especially over WiFi. The glimmer of hope is that APFS replication has some really cool new features in 10.15. It's clearly still very much a work in progress, and some big parts are missing, but the stuff that's released seems to clearly point toward a future where we can do block-based copies of filesystems or even deltas between snapshots. Which is much better for performance on spinning hard disks than the eternal disk seeking and grinding that file-based backup solutions all do.

mindajar | 9 years ago | on: A ZFS developer’s analysis of Apple’s new APFS file system

I think the point is that if Apple had to remove DTrace for legal reasons, the impact to Apple's business and customers would be minimal. If a hypothetical AppleZFS shipped on a billion devices and then Apple lost a patent lawsuit, Apple would be totally screwed. A filesystem can't just be pulled out of the OS without shipping a new one and converting every device. Apple could potentially be in a position where they couldn't sell any new devices at all, and ZFS support in other OSes wouldn't help at all.

mindajar | 9 years ago | on: A ZFS developer’s analysis of Apple’s new APFS file system

Today you can verify backups on OS X with "tmutil verifychecksums", at least on 10.11. The UI to this could be improved, but user data checksums don't necessarily need to be a filesystem feature. On a single-disk device, the FS doesn't have enough information to do anything useful about corrupt files anyway.

mindajar | 9 years ago | on: APFS in Detail

It was a stealthy feature addition that went totally unannounced, but as of 10.11, Time Machine stores file checksums in the backup. See 'tmutil verifychecksums'.

mindajar | 9 years ago | on: APFS in Detail

Do checksums actually need to be in the filesystem, though? It does seem like an important feature, but couldn't they be done at a higher level, like the way Spotlight indexing works on the Mac today?

mindajar | 9 years ago | on: APFS in Detail

If you assume Apple cares about having a disk format in common with other platforms, sure, I'd agree that's probably possible. But I don't think they do; they seem to care a lot more about things like a unified codebase across their platforms, the energy-efficiency initiatives they've been pushing for a few years, owning the key tech in the products, etc.

One slide in the WWDC talk deck showed a bunch of divergent Apple storage technologies across all their platforms that are being replaced by APFS. If ZFS has to fork into weird variants to run well on the phone or watch, that seems less appealing than a single codebase optimized for just the stuff Apple products do.

mindajar | 9 years ago | on: APFS in Detail

Is moving data between computers that way a thing that non-technical people do often? FAT-formatted USB sticks seem to be good enough for that, but e-mail/Dropbox/file sharing/cloud sharing/AirDrop have much better UX for the average person.