It makes me deeply happy to hear success stories like this for a project that's moving in the opposite direction to the rest of the world, and the correct one at that.
Engildification. Of which there should be more!
My soul was also satisfied by the Sleeping At Night post which, along with the recent "Lie Still in Bed" article, offers some very simple options for tackling sleep (discipline) issues.
It's a function of scale: the larger the team/company behind the product, the greater its enshittification factor/potential.
The author recently went full time on their Marginalia search engine; AFAIK the team size is 1, so it's about as far from any enshittification risk as you can get. Au contraire, like you say: it's at these sizes that you make jewels, where creativity, ingenuity and vision shine.
This comment is sponsored by the "quit your desk job and go work for yourself" gang.
When I stay up late and go on a walk after morning coffee, I just walk like a zombie and then fall asleep for the rest of the day. I think it's rationalization: maybe something else changed for the better, and then you'd both want to go for a walk and also sleep better.
I'm wondering if humans are mostly incapable of producing great things without (artificial) restrictions.
In this case, Marginalia is (ridiculously) efficient because Victor (the creator) is intentionally restricting what hardware it runs on and how much RAM it has.
If he just caved in and added another 32GiB it would work for a while, but the inefficient design would persist, the problem would just rear its head later, and by then there would be more complexity built around that design and it might not be as easy to fix.
If the original thesis is correct, then I think it explains why most software is so bad (bloated, slow, buggy) nowadays. It's because very few individual pieces of software nowadays are hitting any limits (in isolation). So each individual piece is terribly inefficient, but with the latest M2 Pro and a gigabit connection you can just keep ahead of the curve where it becomes a problem.
Anyway, this turned into a rant; but the conclusion might be to limit yourself, and you (and everyone else) will be better off long term.
For most applications it simply does not make any sense to spend this much time on relatively small optimizations. If you can choose to either buy 32GiB of RAM for your server for less than $50 or spend probably over 40 hours of developer time at at least $20 / hour, it is quite obvious which one makes more sense from a business perspective. Not to mention that the website was offline for an entire week - that alone would've killed most businesses!
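The back-of-the-envelope math above can be sketched out; note that the $50 RAM price and the $20/hour rate are the commenter's assumptions, not measured figures:

```python
# Rough cost comparison: buy RAM vs. spend developer time optimizing.
# All numbers are the commenter's assumptions, not real quotes.
ram_cost = 50                    # 32 GiB of server RAM, USD
dev_hours = 40                   # estimated optimization effort
dev_rate = 20                    # USD per hour, a deliberate lower bound
dev_cost = dev_hours * dev_rate

print(f"RAM upgrade:    ${ram_cost}")
print(f"Developer time: ${dev_cost}")
print(f"Optimizing costs {dev_cost / ram_cost:.0f}x more upfront")
# -> Optimizing costs 16x more upfront
```

And that ratio only grows with a realistic hourly rate, which is the commenter's point.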
A lot of tech people really like doing such deep dives and would happily spend years micro-optimizing even the most trivial code, but endless "yak shaving" isn't going to pay any bills. When the code runs on a trivial number of machines, it probably just isn't worth it. Not to mention that such optimizations often end up in code which is more difficult to maintain.
In my opinion, a lot of "software bloat" we see these days for apps running on user machines comes from a mismatch between the developer machine and the user machine. The developer is often equipped with a high-end workstation as they simply need those resources to do their job, but they end up using the same machine to do basic testing. On the other hand, the user is running it on a five-year-old machine which was at best mid-range when they bought it.
You can't really sell "we can save 150MB of memory" to your manager, but you can sell "saving 150MB of memory will make our app's performance go from terrible to borderline for 10% of users".
Oh thank you. I have been doing a hobby project on search engines, and I kept searching for variations of "Magnolia" for some reason. "Marginalia" is, at least for me, hard to remember. Currently, I am trying to figure my way around Searx.
Does Marginalia support "time filters" for search, like past day, past week, etc.? According to the special keywords, the only search params accepted are based on years:
year>2005 (beta): the document was ostensibly published in or after 2005
year=2005 (beta): the document was ostensibly published in 2005
year<2005 (beta): the document was ostensibly published in or before 2005
The search index isn't updated more than once every month, so no such filters. The year-filter is pretty rough too. It's very hard to accurately date most webpages.
> In brief, every time an SSD updates a single byte anywhere on disk, it needs to erase and re-write that entire page.
Is that actually true for SSDs? For raw flash it’s not, provided you are overwriting “empty” all-ones values or otherwise only changing 1s to 0s. Writing is orders of magnitude slower than reading, but still a couple orders of magnitude faster than erasing (resetting back to “empty”), and only erases count against your wear budget. It sounds like an own goal for an SSD controller to not take advantage of that, although if the actual guts of it are log-structured then I could imagine it not being able to.
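The raw-flash constraint described above can be sketched in a few lines: programming can only flip bits from 1 to 0, so an overwrite without an erase behaves like a bitwise AND, and only an erase resets a unit back to all-ones. (This is a conceptual model of NAND/NOR cell behavior, not any particular chip's interface.)

```python
ERASED = 0xFF  # flash cells read as all-ones after an erase

def program(old: int, new: int) -> int:
    """Programming can only clear bits (1 -> 0), never set them.
    The cell ends up holding old AND new, regardless of what you asked for."""
    return old & new

def erase() -> int:
    """Erase is the only way to get bits back to 1, and it consumes wear budget."""
    return ERASED

cell = erase()                        # 0xFF: all bits set
cell = program(cell, 0b10110101)      # fine: only 1 -> 0 transitions needed
assert cell == 0b10110101
cell = program(cell, 0b11111101)      # tries to set bit 3 back to 1...
assert cell == 0b10110101             # ...but the AND masks it: value unchanged
```

This is why append-style writes into erased space are cheap, while true in-place updates force an erase cycle.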
>> In brief, every time an SSD updates a single byte anywhere on disk, it needs to erase and re-write that entire page.
> Is that actually true for SSDs? For raw flash it’s not, provided you are overwriting “empty” all-ones values or otherwise only changing 1s to 0s.
Maybe it depends. I wrote the driver for more than one popular flash chip (I don't remember which ones now, but that employer had a policy of never using components that were not mainstream and available from multiple suppliers), and all the chips I dealt with did reads and writes exclusively via fixed-size pages.
Since SSDs are collections of chips, I'd expect each chip on the SSD to only support fixed-size paged IO.
In this scenario I was basically rewriting the entire drive in a completely random order, which is the worst-case scenario for an SSD.
Normally the controller will use a whole bunch of tricks (e.g. overprovisioning, buffering and reordering of writes) to avoid this type of worst case pattern, but that only goes so far.
1. Writable Unit: The smallest unit you can write to in an SSD is a page.
2. Erasable Unit: The smallest unit you can erase in an SSD is a block, which consists of multiple pages.
So if a write operation impacts only 1 byte within a page, the SSD cannot erase just that byte. However, it does not need to erase the entire block either.
The SSD can perform a "read-modify-write" type of operation:
- Read the full page containing the byte that needs to change into the SSD's cache buffer.
- Modify just the byte that needs updating in the page cache.
- Obtain a fresh, previously erased block (erasing stale blocks happens later, during garbage collection).
- Write the modified page from cache to the new block.
- Update the FTL mapping tables to point to the updated page in the new block.
So, a page does need to be rewritten even if just 1 byte changes. Whole-block erasure is deferred until most pages within a block are stale and it gets garbage-collected.
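The read-modify-write flow above can be sketched as a toy flash-translation layer. The page/block sizes, the dict-based mapping table, and the list-based free pool are all illustrative simplifications, not how any particular controller works:

```python
PAGE_SIZE = 4096        # smallest writable unit (illustrative size)
PAGES_PER_BLOCK = 64    # a block, the smallest erasable unit, holds many pages

class ToyFTL:
    """Toy flash-translation layer illustrating read-modify-write."""
    def __init__(self, num_blocks: int):
        self.flash = [[None] * PAGES_PER_BLOCK for _ in range(num_blocks)]
        self.free = [(b, p) for b in range(num_blocks)
                            for p in range(PAGES_PER_BLOCK)]
        self.map = {}       # logical page number -> (block, page)
        self.stale = set()  # invalidated page copies awaiting garbage collection

    def write_byte(self, addr: int, value: int) -> None:
        lpn, offset = divmod(addr, PAGE_SIZE)
        # 1. Read the full page into a buffer (unwritten pages read as 0xFF).
        if lpn in self.map:
            old = self.map[lpn]
            buf = bytearray(self.flash[old[0]][old[1]])
            self.stale.add(old)  # old copy is now invalid; erased later, in bulk
        else:
            buf = bytearray(b"\xff" * PAGE_SIZE)
        # 2. Modify just the one byte in the cached copy.
        buf[offset] = value
        # 3. Write the whole page to a fresh, already-erased location.
        block, page = self.free.pop(0)
        self.flash[block][page] = bytes(buf)
        # 4. Update the mapping table to point at the new physical page.
        self.map[lpn] = (block, page)

ftl = ToyFTL(num_blocks=4)
ftl.write_byte(10, 0xAB)
ftl.write_byte(10, 0xCD)        # a 1-byte update still rewrites a full 4 KiB page
assert ftl.map[0] == (0, 1)     # the logical page moved to a new physical page
assert ftl.stale == {(0, 0)}    # the old copy waits for a block-level erase
```

Note how the second one-byte write still costs a full-page write plus a stale page that garbage collection must eventually erase.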
Not precisely. The logical view of a page living at some address of flash is not the reality. Pages get moved around the physical device as writes happen. The drive itself maintains a map of what addresses are used for what purpose, their health and so on. It’s a sparse storage scheme.
There’s even maintenance ops and garbage collection that happens occasionally or on command (like a TRIM).
In reality a “write” to a non-full drive is:
1. Figure out which page the data goes to.
2. Figure out if there’s data there or not. Read / modify / write if needed.
3. Figure out where to write the data.
4. Write the data. It might not go back where it started. In fact it probably won’t because of wear leveling.
You’re right that the controller does a far more complex set of steps for performance. That’s why an empty / new drive performs better for a while (page cache aside) and then slows down, compared to a “full” drive that’s old, with no spare pages.
Source: I was chief engineer for a cache-coherent memory-mapped flash accelerator. We let a user map the drive very efficiently in user-space Linux, but eventually caved to the “easier” programming model of just being another hard drive.
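Step 4's wear-leveling point can be sketched as a toy allocator that always hands out the least-worn free block. The min-heap policy and per-block erase counters are a simplification; real controllers are far more elaborate:

```python
import heapq

class WearLeveler:
    """Toy allocator: hand out the free block that has been erased the
    fewest times, so wear spreads evenly across the device."""
    def __init__(self, num_blocks: int):
        self.erase_count = [0] * num_blocks
        # Min-heap of (erase_count, block): least-worn block pops first.
        self.free = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.free)

    def allocate(self) -> int:
        _, block = heapq.heappop(self.free)
        return block

    def recycle(self, block: int) -> None:
        # Erasing a block before reuse is what consumes the wear budget.
        self.erase_count[block] += 1
        heapq.heappush(self.free, (self.erase_count[block], block))

wl = WearLeveler(num_blocks=3)
first = wl.allocate()     # block 0: never erased
wl.recycle(first)         # erased once, goes back with count 1
second = wl.allocate()    # block 1, not block 0: least-worn wins
assert first == 0 and second == 1
```

This is also why data "probably won't go back where it started": the least-worn destination is rarely the page you just invalidated.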
Just a shout out to my boss at Mojeek who presumably has a very similar path to this (the post resonates a lot with past conversations). Mojeek started back in 2004 and for the most part has been a single developer who built the bones of it, and in that, pretty much all of the IR and infrastructure.
Limitations of finance and hardware, making decisions about 32 vs 64 bit ids, sharding, speed of updating all sound very familiar.
Reminds me of Google way back when and their 'Google dance' that updated results once a month; nowadays it's a daily flux. It's all an evolution, and great to see Marginalia offering another viewpoint into the web beyond big tech.
Lots of people treat optimization as some deep-black-magic thing[1], but most of the time, it's actually easier than fixing a typical bug; all you have to do is treat excessive resource usage exactly as you would treat a bug.
I'm going to make an assertion: most bugs that you can easily reproduce don't require wizardry to fix. If you can poke at a bug, then you can usually categorize it. Even the rare bugs that reveal a design flaw tend to do so readily once you can reproduce it.
Software that nobody has taken a critical eye to performance on is like software with 100s of easily reproducible bugs that nobody has ever debugged. You can chip away at them for quite a while until you run into anything that is hard.
1: I think this attitude is a bit of a hold-out from when people would do things like set their branch targets so that the drum head would reach the target at the same time the CPU wanted the instruction, and when resources were so constrained that everything was hand-written assembly with global memory-locations having different semantics depending on the stage the program was in. In that case, really smart people had already taken a critical eye to performance, so you need to find things they haven't found yet. This is rarely true of modern code.
I also like how they decided to mix in sqlite alongside the existing MariaDB database because it gets the job done, and "a foolish consistency is the hobgoblin of little minds".
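A quick illustration of why a single-file database is handy alongside a server database: "deployment" is just a file copy. The paths and schema below are made up for the example, not Marginalia's actual setup:

```python
import os
import shutil
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
db_path = os.path.join(workdir, "links.db")

# Build a small database: the whole thing lives in one ordinary file.
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE links (url TEXT, title TEXT)")
con.execute("INSERT INTO links VALUES (?, ?)",
            ("https://example.com", "Example"))
con.commit()
con.close()

# Sharing or backing up the database is a plain file copy.
copy_path = os.path.join(workdir, "links-snapshot.db")
shutil.copy(db_path, copy_path)

# The copy is a fully working, independent database.
con = sqlite3.connect(copy_path)
rows = con.execute("SELECT title FROM links").fetchall()
assert rows == [("Example",)]
con.close()
```

(One caveat worth knowing: copying a live database mid-write isn't safe; SQLite's backup API exists for that case.)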
"I wish I knew what happened, or how to replicate it. It’s involved more upfront design than I normally do, by necessity. I like feeling my way forward in general, but there are problems where the approach just doesn’t work"
Yes, immediate (or soon enough) gratification feels good... To me, and maybe it's because I am an old fart, this is the difference between programming and engineering.
I took a start script from 90 seconds to 30 seconds yesterday, by finding a poorly named timeout value. Now I'm working on a graceful fallback from itimer to alarm instead of outdated c directives.
I enjoyed reading this, but I also fundamentally don't get it at a basic level. Like... why re-implement stuff that has already been done by entire teams? There are so many bigger and productionised search and retrieval systems. Why invest the human capital in doing it all again yourself? I just don't get it.
How does one learn new things if not first by understanding them, and then looking to evolve them?
Sure, we can shell out to libraries and to other people's work, but at some point, you will have to understand the thing that you've abstracted away if you want to evolve it.
Most of what exists doesn't work for my application. It either assumes an unbounded resource budget, or makes different priorities that don't scale by e.g. permitting arbitrary realtime updates.
I'm building stuff myself because it's the only way I'm aware of to run a search engine capable of indexing quarter of a billion documents on a PC.
I want to do my own version of something like this to have a personally curated search function. The "it's mine" factor is enticing, if it does something unexpected, then I know all the dependent, interacting parts so I can trace the problem and fix it.
But I'm a privacy and self-hosting nut, which is probably just another way of saying the same thing.
(I will probably never actually do it, but that doesn't stop it being on the list).
Independent efforts are as important, if not more important, than larger corporate efforts, because independents have different priorities and are often driven by satisfying curiosity rather than some bottom line.
Read any articles about how amazing Google's search is lately? Me neither.
[+] [-] BLKNSLVR|2 years ago|reply
[+] [-] sph|2 years ago|reply
[+] [-] keyle|2 years ago|reply
- cut his resource burn in half,
- is more productive with a smaller screen than before, and
- sleeps like a log at night
(his last 3 blog posts!)
[+] [-] brutusborn|2 years ago|reply
I’ve been struggling with sleep this year and finding out what works for others is very useful. I wouldn’t have found it if not for your comment.
Link for others interested: https://www.marginalia.nu/log/86-sleep/
[+] [-] noman-land|2 years ago|reply
[+] [-] not_your_vase|2 years ago|reply
[+] [-] throwaway290|2 years ago|reply
[+] [-] unknown|2 years ago|reply
[deleted]
[+] [-] flexagoon|2 years ago|reply
https://help.kagi.com/kagi/search-details/search-sources.htm...
If you use the "non-commercial" lens, those results, along with results from Kagi's own index and a few other independent sources, will be prioritized.
[+] [-] gnyman|2 years ago|reply
[+] [-] crote|2 years ago|reply
[+] [-] nicbou|2 years ago|reply
[+] [-] marginalia_nu|2 years ago|reply
[+] [-] anyfactor|2 years ago|reply
[+] [-] marginalia_nu|2 years ago|reply
[+] [-] meithecatte|2 years ago|reply
[+] [-] mananaysiempre|2 years ago|reply
[+] [-] lelanthran|2 years ago|reply
[+] [-] marginalia_nu|2 years ago|reply
[+] [-] gavinray|2 years ago|reply
[+] [-] mikehollinger|2 years ago|reply
[+] [-] Filligree|2 years ago|reply
It's completely false. Even the most primitive SSD controllers would make some attempt at mitigating this.
[+] [-] ricardo81|2 years ago|reply
[+] [-] aidenn0|2 years ago|reply
[+] [-] newman123|2 years ago|reply
[+] [-] marginalia_nu|2 years ago|reply
SQLite has the benefit that it's a single file though, and you can do cool things with that. Such as copy it, share it, etc.
[+] [-] janvdberg|2 years ago|reply
[+] [-] fouc|2 years ago|reply
[+] [-] gary_0|2 years ago|reply
[+] [-] unknown|2 years ago|reply
[deleted]
[+] [-] jorgeleo|2 years ago|reply
[+] [-] csours|2 years ago|reply
[+] [-] 38|2 years ago|reply
https://search.marginalia.nu/search?query=encoding%2Fjson
but NOT what I am looking for. If I try again with Google:
https://google.com/search?q=encoding%2Fjson
first result is exactly what I want.
[+] [-] _madmax_|2 years ago|reply
[+] [-] alberth|2 years ago|reply
[+] [-] donotsay|2 years ago|reply
[+] [-] paulcole|2 years ago|reply
[deleted]
[+] [-] catchnear4321|2 years ago|reply
[+] [-] bomewish|2 years ago|reply
[+] [-] lbotos|2 years ago|reply
Or as the kids say, let him cook: https://knowyourmeme.com/memes/let-him-cook-let-that-boy-coo...
[+] [-] marginalia_nu|2 years ago|reply
[+] [-] EdwardDiego|2 years ago|reply
Because you're able to offer a differentiated product in the market.
Because you can.
It's not like they're implementing their own DB to get to a MVP of their product "Tinder, but like for dogs".
[+] [-] BLKNSLVR|2 years ago|reply
[+] [-] catchnear4321|2 years ago|reply
plenty of folks dabble in art, many of them quite poorly, when good and even great artwork can be purchased for not all that much.
some paint the walls of their house rather than call a professional painter.
there are countless reasons, including but not limited to the easiest, most flippant, and possibly the most human response to your question.
why not?
[+] [-] yjftsjthsd-h|2 years ago|reply
Because by not being a huge team you can do it better.
[+] [-] ftxbro|2 years ago|reply
[+] [-] fjfuvucucuc|2 years ago|reply
[+] [-] MrVandemar|2 years ago|reply
[+] [-] fouc|2 years ago|reply
anybody or any team that sets out to build something will prioritize different things and end up with radically divergent implementations.
[+] [-] mrkeen|2 years ago|reply