maaarghk's comments

maaarghk | 2 months ago | on: Making Google Sans Flex

Hmm, my first reaction was the same as yours. But I have quite bad eyesight, and looking at the "regular 400 at 16px" example on the page reminded me that I definitely sometimes find myself squinting to work out whether a character is a parenthesis or a brace (Droid Sans Mono). So I suppose it'd probably be quite helpful to have a brace that's visually distinct from a parenthesis, even if it's not particularly pretty on its own.

maaarghk | 1 year ago | on: Stories from the Internet

oh man! every so often for the past decade I've tried to remember "rinkworks". I recognised it immediately from your post. I remember this being one of the first websites I would read as a kid, 20 odd years ago. cheers for the nostalgia buzz!

maaarghk | 1 year ago | on: Dotnet9x: Backport of .NET 2.0 – 3.5 to Windows 9x

I think the title really undersells it; the video is worth a watch. It's a really impressive effort to get .NET 3.5 code running on Windows 95, seemingly just to be able to say he could.

HN is usually good for this kind of thing: it looks like NDP is an internal name for dotnet; does anyone here remember what it stood for?

maaarghk | 2 years ago | on: Making small games, which is fun in itself

I played this for hours last night, I'm going to need to block it :) Set myself a target of 2048 for nostalgic reasons, turns out it takes a long time (for me anyway).

You're right about the dictionary; the whole time, I kept wondering how annoying it must have been to choose a dictionary for this game. Even though avoiding accidentally making common words is part of the challenge, accidentally making obscure words is just annoying!

Maybe I just don't know enough words - but looking through my game log, I was annoyed by "cony", "smit", "huic", "yipe", "nome", "torii", "agon", "mairs", "imido" and "sial", some of which don't display a definition when you click them, but all of which appear in all the scrabble dictionaries referenced on the website you just linked. Meanwhile I was sad to discover vape is so far only in one scrabble dictionary :) And annoyed to discover "oxalic", which is also in all the dictionaries on that site, was not accepted.

I guess there's a spectrum between "advanced scrabble player level vocabulary" and "fun word game", because I imagine (and suspect you've had feedback along these lines) that _not_ allowing a word which is obscure but still unambiguously used in the modern era would be worse UX overall — the sort that's more likely to make you rage-quit.

I can see why you'd try to get a bit of Wordle-esque shareability out of the daily mode, even though I like the classic mode more myself. But I think the tutorial popup isn't as comprehensive as it needs to be for someone's first game to be fun. The first time I clicked the link I did an abysmal job at the daily challenge; I think it wasn't obvious that swapped tiles didn't need to be neighbouring like in the given example. Something that might work better is an interactive tutorial for first-time visitors: come up with a 5x5 board that is quickly solved and demonstrates several strategies, then walk the player through clearing it. I also think having the help popup one click away would be useful.

I would also have liked the help popup to let me know that progress is saved if you close the page, I ended up checking in an incognito window because I had no time to keep playing but wanted to come back and try to reach the target I'd set myself another time!

Anyway - criticism and suggestions aside - well done, it is a fun game and concept!

maaarghk | 2 years ago | on: Windows NT 3.1 on Dec Alpha AXP

IIRC, he mentioned that someone _else_ had a DEC machine and actually used it as their dev box. That dev developed the kernel panic code, aka the blue screen of death, and blue was chosen because it's the default screen colour when the DEC box is turned on; the idea was to reset the colour to the default before printing the kernel panic message.

So while DEC NT is sort of a footnote, it did have this pretty profound influence : )

maaarghk | 2 years ago | on: The Microcontroller That Just Won’t Die

Haha, I had a feeling from the title it would be about the 8051. I've only recently learned about it myself. I ended up with a bunch of TTL / LVDS / eDP display panel driver boards on hand. They're based around a family of Realtek chips which are hugely powerful for what is in some cases a single-digit dollar cost: the silicon has a whole bunch of peripherals, like analogue video decoders, HDMI/VGA/DisplayPort decoders, colour processing, an OSD generator and signal muxer, a DDC/CI interface, an IrDA demodulator, PWM / DAC for audio, and many, many more things, all with parameters configurable by an embedded 8051-compatible processor (memory-mapped peripherals, essentially). As someone with next to no serious experience in embedded software, trying to write software to target these devices has been quite the departure and an eye-opener in many ways.

The development tools just feel so antiquated. The Keil compiler mentioned in the article has a per-seat license cost in the thousands and runs on Windows, and it feels like it has not received any serious upgrades since the mid-2000s. It runs fine on Wine (with a free trial license, of course), but has basically unusable UX on a hidpi screen. Of course I can pretty much get away with coding in vscode and writing a Makefile which calls `wine ~/.wine/drive_c/Keil/BIN/C52.exe` but it's not ideal, plus, my trial license will of course expire, and this is a hobby project.
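For the curious, the Makefile shim is nothing fancy. A rough sketch follows; the install path and file names match my setup and are purely illustrative, and the Keil command-line controls have to be quoted so the shell doesn't eat the parentheses:

```make
# Compile 8051 sources with the Windows-only Keil compiler via Wine.
# KEIL path and compiler name match my Wine prefix; adjust for yours.
KEIL := $(HOME)/.wine/drive_c/Keil/BIN
SRCS := main.c osd.c
OBJS := $(SRCS:.c=.obj)

all: $(OBJS)

%.obj: %.c
	wine $(KEIL)/C52.exe $< "OBJECT($@)"
```

It works, but every compiler diagnostic comes back with Windows paths and CRLF line endings, which is about as pleasant as it sounds.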

I tried switching to SDCC. To preface this: my honest opinion is that the small handful of people who maintain it are doing a wonderful job and have been for years; it's a thankless task for a small audience. But for serious features a modern-day user might expect, like code banking, the implementation is inflexible, supports half of the implementation methods that Keil does, and generates larger code. And of course there are currently very few people capable of making contributions to improve the situation. The documentation is extensive but split across PDF files for the compiler, TXT files for the linker, unstyled HTML files for the simulator, and various README and errata "NOTES" files for other components.

Meanwhile, the only copies of the original Intel documentation for the 8051 I could find were scanned images of a printed book. A lot of random entry-level tutorials are dotted around the net, on websites like http://8052mcu.com/ or in YouTube video lectures uploaded by universities in non-English-speaking countries, but high-quality written reference guides seem to be difficult to find. Maybe it's not as bad as that and it just wasn't easy for me to grok; I realised in hindsight that I have the assumptions of von Neumann architecture more or less internalised, so it took a while to get my head around having three separate address spaces (one for code, another for internal RAM, another for external RAM).

I would not be surprised if this were the case for an equally old but now-niche chip, like the Z80 and its derivatives. But given the neighbouring comments estimating just how widespread this MCU is (billions of units per year?!) it does seem kinda surprising that modern open source embedded development tools of the kind available for platforms like the RP2040, STM32, ESP8266, etc, just haven't reached the 8051 platform. (edit to add: I don't think it's necessarily bad if development tools are simply "old", fwiw, and I do think software can be finished. But in this case there is something of a gulf between the open source solution and the paid solution, and progress to close this has only slowed, not accelerated.)

My only guess as to why (as a layperson) is that the Harvard architecture plus 8-bit stack address space makes it difficult to target with modern compiler tools or something. Of course the modern derivatives being heavily burdened by IP rights also can't help; I suppose the only people who have access to datasheets detailed enough to implement simulators / advanced compiler features, have a day job which affords them access to the "good enough for enterprise" Keil solution : )

maaarghk | 2 years ago | on: Man spends entire career mastering crappy codebase

The web stack might give you more transferable skills but how did you calculate the trade off vs the 50% pay cut? Was the salary not actually very high and at a low ceiling compared to web work - i.e. you expect your salary to soon exceed what you made working in HFT? Or was there a serious risk of layoffs happening far in advance of your retirement? If the goal is to support yourself after retirement, are you saying in certain circumstances halving your salary 10 years into a tech career ultimately optimises your entire-career-earnings?

maaarghk | 2 years ago | on: Tax prep firms shared ‘extraordinarily sensitive’ data about taxpayers with Meta

I don't understand why you'd be confused after reading the original source[1]. The authors explain at length why they consider it to be Meta's problem, and it's not hard to understand - Meta make misleading claims about their own ability to detect and filter personal information. It also appears the detail sent was a lot less obfuscated than you indicate here.

If you only got as far as the press release[2] then I can understand your view:

> * Tax prep companies shared extraordinarily sensitive personal and financial information with Meta, which used the data for diverse advertising purposes

> TaxAct, H&R Block, and TaxSlayer each revealed, in response to this Congressional inquiry, that they shared taxpayer data via their use of the Meta Pixel and Google’s tools. Although the tax prep companies and Big Tech firms claimed that all shared data was anonymous, the FTC and experts have indicated that the data could easily be used to identify individuals, or to create a dossier on them that could be used for targeted advertising or other purposes.

This paragraph is woolly and does not appear to support the claim in the bullet point. But the full report has much stronger wording on page 2: "Meta also confirmed that it used the data to target ads to taxpayers, including for companies other than the tax prep companies themselves, and to train Meta's own AI algorithms".

The logic of this claim, via page 19, appears to be: Meta says if their sensitive information filtering algorithm detected personal information, the information would not have been used for advertising, and they'd have sent a notification to the tax prep firms. They also confirmed the negative case: if no notification was received by the tax prep firm, then no filtering of their data took place. Meta was asked to provide copies of notifications they had sent to the tax prep firms and they did not do so. So the assumption is that none were sent, therefore no filtering took place, and the data were used as a signal in the advertising algorithm.

I don't find it to be an unequivocal confirmation, but the sources don't support your claim that this article is misleading or your claim that there's no reason to consider it a problem of the tech companies involved.

[1] https://www.warren.senate.gov/imo/media/doc/Attacks%20on%20T...

[2] https://www.warren.senate.gov/oversight/reports/in-new-repor...

maaarghk | 3 years ago | on: Rails on Docker

I'm not gonna lie, I didn't read all that, but the node example alone proves you either didn't read the guy you replied to or haven't been coding long enough to grok the problem.

What if you want to start a new project using the latest postgres version because postgres has a new feature that will be handy, but you already maintain another project that uses a postgres feature or relies on behaviour that was removed/changed in the latest version? You're going to set up a whole new VM on the internet to be a staging environment and instead of setting up a testing and deployment pipeline you're going to just FTP / remote-ssh into it and change live code?

You define an app's entire chain of dependencies, including external services, in a compose file / set of kube manifests / Terraform config for ECS. Then in the container definition itself you lock down things like C library and distro versions: maybe you use a specially patched ImageMagick on one project or a PDF generator on another, and fontconfig defaults were updated between distro releases in a way that changed how aliasing works, and now your fonts are all fugly in generated exports... Stick all those definitions in a Dockerfile, deploy onto any Linux distro / kernel, and it'll look identical to how it does locally.
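Going back to the postgres example, the per-project pinning is a one-liner in a compose file (service names and image tags here are illustrative):

```yaml
# New project: pin the latest postgres for the new feature.
services:
  db:
    image: postgres:16

# Meanwhile the legacy project's own compose file stays pinned
# to the version whose behaviour it depends on:
#   services:
#     db:
#       image: postgres:9.6
```

Both run side by side on the same machine with zero conflict, which is exactly the thing "just install postgres locally" can't give you.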

Never mind all this; check out this thread to destroy your illusion that simply having node installed locally will make your next project super future-proof: https://github.com/webpack/webpack/issues/14532. Note that some of the packages still referencing this old issue in newly opened bug reports are very popular!

If you respond, please do not open with "yeah but rust"; I can still compile Fortran code too.

maaarghk | 3 years ago | on: Rails on Docker

The cache is small, but if you have a `docker buildx build --cache-from --push` type command it will always pull the image at the end and try to push it again (although it'll get "layer already exists" responses). For ~250mb images on GitLab I find this do-nothing job takes about 2.5 mins in total (vs a 10 min build if the entire cache were invalidated by a new base image version). I'd very much like to be able to say "if the entire build was fully cached, don't bother pulling it at the end"; maybe driving buildkit directly is the tool for that job.
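For context, the CI job in question runs something along these lines (the registry ref is a placeholder, and the inline-cache build-arg is what makes the pushed image usable as a cache source on the next run):

```shell
# Reuse the previously pushed image as a layer cache, then push the result.
docker buildx build \
  --cache-from registry.example.com/app:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --push \
  -t registry.example.com/app:latest \
  .
```

Even on a 100% cache hit, that final push step still round-trips every layer digest to the registry, which is where the 2.5 minutes go.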

maaarghk | 5 years ago | on: RethinkDB: why we failed (2017)

On the one hand, I usually love to drop in a bit of Spolsky wisdom myself, and that quote stands well on its own. But the article itself is unfortunately marred by the fact that the Microsoft products mentioned were just unsuccessful early attempts at iCloud and Google Accounts, both of which have since seen considerable "killer app" level success. I guess it's easy with hindsight to say "ah yes, but smartphones".