maaarghk | 2 months ago | on: Making Google Sans Flex
maaarghk's comments
maaarghk | 3 months ago | on: Show HN: KiDoom – Running DOOM on PCB Traces
maaarghk | 11 months ago | on: The 2005 Sony Bravia ad
maaarghk | 1 year ago | on: Stories from the Internet
maaarghk | 1 year ago | on: The tiny chip that powers Montreal subway tickets
maaarghk | 1 year ago | on: Dotnet9x: Backport of .NET 2.0 – 3.5 to Windows 9x
HN is usually good for this kind of thing: it looks like NDP is an internal name for dotnet - does anyone here remember what it stood for?
maaarghk | 2 years ago | on: Making small games, which is fun in itself
You're right about the dictionary - actually, the whole time I kept wondering how annoying it must have been to choose a dictionary for this game. Even though part of the challenge is to not accidentally make non-obscure words without noticing, accidentally making obscure words is annoying!
Maybe I just don't know enough words - but looking through my game log, I was annoyed by "cony", "smit", "huic", "yipe", "nome", "torii", "agon", "mairs", "imido" and "sial", some of which don't display a definition when you click them, but all of which appear in all the scrabble dictionaries referenced on the website you just linked. Meanwhile I was sad to discover vape is so far only in one scrabble dictionary :) And annoyed to discover "oxalic", which is also in all the dictionaries on that site, was not accepted.
I guess there's a spectrum between "advanced scrabble player level vocabulary" and "fun word game", because I imagine (and suspect you have probably had feedback along these lines) _not_ allowing a word which is obscure but still unambiguously used in the modern era would be worse UX overall - the sort that's more likely to make you rage-quit.
I can see why you'd try to get a bit of wordle-esque shareability out of the daily mode, even though I like the classic mode more myself. But I think the tutorial popup isn't as comprehensive as it needs to be for someone's first game to be fun. The first time I clicked the link I did an abysmal job at the daily challenge; I think it wasn't obvious from the given example that swaps didn't need to be between neighbouring tiles. It might be better to make an interactive tutorial for first-time visitors: come up with a 5x5 board that is quickly solved and demonstrates several strategies, then walk the player through clearing it. I also think having the help popup one click away would be useful.
I would also have liked the help popup to let me know that progress is saved if you close the page - I ended up checking in an incognito window, because I had no time to keep playing but wanted to come back another time and try to reach the target I'd set myself!
Anyway - criticism and suggestions aside - well done, it is a fun game and concept!
maaarghk | 2 years ago | on: Bit banging a 3.5" floppy drive
maaarghk | 2 years ago | on: Windows NT 3.1 on Dec Alpha AXP
So while DEC NT is sort of a footnote, it did have this pretty profound influence : )
maaarghk | 2 years ago | on: A deep dive into Single Pair Ethernet
maaarghk | 2 years ago | on: The world in which IPv6 was a good design (2017)
https://djangocas.dev/blog/huge-improve-network-performance-...
https://atoonk.medium.com/tcp-bbr-exploring-tcp-congestion-c...
maaarghk | 2 years ago | on: The Microcontroller That Just Won’t Die
The development tools just feel so antiquated. The Keil compiler mentioned in the article has a per-seat license cost in the thousands, runs only on Windows, and feels like it has not received any serious upgrades since the mid-2000s. It runs fine under Wine (with a free trial license, of course), but its UX is basically unusable on a hidpi screen. I can pretty much get away with coding in vscode and writing a Makefile which calls `wine ~/.wine/drive_c/Keil/BIN/C52.exe`, but it's not ideal - plus my trial license will of course expire, and this is a hobby project.
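The Makefile workaround might look roughly like this (a sketch only - the install path follows the comment above, and the Keil-style command-line controls are illustrative rather than copied from its docs):

```make
# Sketch: driving the Windows-only Keil toolchain from Linux via wine.
# KEIL path matches the wine prefix mentioned above; the OBJECT(...)
# control follows Keil's command-line syntax and is illustrative here.
KEIL := $(HOME)/.wine/drive_c/Keil/BIN
CC51 := wine $(KEIL)/C52.exe

SRCS := main.c uart.c
OBJS := $(SRCS:.c=.obj)

all: $(OBJS)

%.obj: %.c
	$(CC51) $< OBJECT($@)

clean:
	rm -f $(OBJS)
```

Not pretty, but it keeps the edit/compile loop in a normal editor and shell while the licensed compiler does the actual code generation.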
I tried switching to SDCC. To preface: my honest opinion is that the small handful of people who maintain it are doing a wonderful job, and have been for years - it's a thankless task for a small audience. But for serious features that a modern-day user might expect, like code banking, the implementation is inflexible, supports half of the methods that Keil does, and generates larger code. And there are currently very few people capable of making contributions to improve the situation. The documentation is extensive but split across PDF files for the compiler, TXT files for the linker, unstyled HTML files for the simulator, and various README and errata "NOTES" files for other components.
Meanwhile, the only copies of the original Intel documentation for the 8051 I could find were scanned images of a printed book. Random entry-level tutorials for beginners are dotted around the net, on websites like http://8052mcu.com/ or in YouTube video lectures uploaded by universities in non-English-speaking countries, but high-quality written reference guides seem to be difficult to find. Maybe it's not as bad as that and it just wasn't easy for me to grok; I realised in hindsight that I have the assumptions of a von Neumann architecture more or less internalised, so it took a while to get my head around having three separate address spaces (one for code, another for internal RAM, another for external RAM).
I would not be surprised if this were the case for an equally old but now-niche chip, like the Z80 and its derivatives. But given the neighbouring comments estimating just how widespread this MCU is (billions of units per year?!) it does seem kinda surprising that modern open source embedded development tools of the kind available for platforms like the RP2040, STM32, ESP8266, etc, just haven't reached the 8051 platform. (edit to add: I don't think it's necessarily bad if development tools are simply "old", fwiw, and I do think software can be finished. But in this case there is something of a gulf between the open source solution and the paid solution, and progress to close this has only slowed, not accelerated.)
My only guess as to why (as a layperson) is that the Harvard architecture plus the 8-bit stack address space makes it difficult to target with modern compiler tools. The modern derivatives being heavily burdened by IP rights can't help either; I suppose the only people with access to datasheets detailed enough to implement simulators or advanced compiler features have a day job which affords them the "good enough for enterprise" Keil solution : )
maaarghk | 2 years ago | on: Man spends entire career mastering crappy codebase
maaarghk | 2 years ago | on: Sigh, this is what browsing the web in the EU looks like nowadays (2021)
maaarghk | 2 years ago | on: Tax prep firms shared ‘extraordinarily sensitive’ data about taxpayers with Meta
If you only got as far as the press release[2] then I can understand your view:
> * Tax prep companies shared extraordinarily sensitive personal and financial information with Meta, which used the data for diverse advertising purposes
> TaxAct, H&R Block, and TaxSlayer each revealed, in response to this Congressional inquiry, that they shared taxpayer data via their use of the Meta Pixel and Google’s tools. Although the tax prep companies and Big Tech firms claimed that all shared data was anonymous, the FTC and experts have indicated that the data could easily be used to identify individuals, or to create a dossier on them that could be used for targeted advertising or other purposes.
This paragraph is woolly and does not appear to support the claim in the bullet point. But the full report has much stronger wording on page 2: "Meta also confirmed that it used the data to target ads to taxpayers, including for companies other than the tax prep companies themselves, and to train Meta's own AI algorithms".
The logic of this claim, via page 19, appears to be: Meta says that if their sensitive-information filtering algorithm detected personal information, the information would not have been used for advertising, and they'd have sent a notification to the tax prep firms. They also confirmed the negative case: if no notification was received by a tax prep firm, then no filtering of its data took place. Meta was asked to provide copies of notifications sent to the tax prep firms and did not do so. So the assumption is that none were sent, therefore no filtering took place, and the data were used as a signal in the advertising algorithm.
I don't find it to be an unequivocal confirmation, but the sources don't support your claim that this article is misleading or your claim that there's no reason to consider it a problem of the tech companies involved.
[1] https://www.warren.senate.gov/imo/media/doc/Attacks%20on%20T...
[2] https://www.warren.senate.gov/oversight/reports/in-new-repor...
maaarghk | 3 years ago | on: Rails on Docker
What if you want to start a new project on the latest postgres version because it has a new feature that will be handy, but you already maintain another project that relies on a postgres feature or behaviour that was removed or changed in the latest version? Are you going to set up a whole new VM on the internet as a staging environment, and instead of setting up a testing and deployment pipeline just FTP / remote-ssh into it and change live code?
You define an app's entire chain of dependencies, including external services, in a compose file / set of kube manifests / terraform config for ECS. Then in the container definition itself you lock down things like C-library and distro versions: maybe you use a specially patched imagemagick on one project or a PDF generator on another, and fontconfig defaults were updated between distro releases and changed how anti-aliasing works, and now your fonts are all fugly in generated exports... Stick all those definitions in a Dockerfile and deploy onto any Linux distro / kernel and it'll look identical to how it does locally.
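Concretely, the two-postgres-versions scenario is just two compose files with different pinned tags - a minimal sketch, with hypothetical service names, tags, and credentials:

```yaml
# New project's compose file; the older project keeps its own file
# pinning e.g. postgres:12 so both run side by side on one machine.
services:
  app:
    build: .              # Dockerfile pins distro, C libs, imagemagick, fonts
    depends_on: [db]
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
  db:
    image: postgres:16    # pinned: this project wants the new features
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
```

Each project carries its own pinned world, so upgrading one never forces a migration on the other.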
Never mind all that - check out this thread to destroy any illusion that simply having node installed locally will make your next project super future-proof: https://github.com/webpack/webpack/issues/14532 - and note that some of the packages referencing this old issue in newly opened bug reports are very popular!
If you respond, please do not open with "yeah but rust" - I can still compile Fortran code too.
maaarghk | 3 years ago | on: Rails on Docker
maaarghk | 3 years ago | on: Fred Brooks has died
maaarghk | 5 years ago | on: Unsplash is being acquired by Getty Images
maaarghk | 5 years ago | on: RethinkDB: why we failed (2017)