fool1471's comments
fool1471 | 1 year ago | on: Neofetch developer archives all his repositories: "Have taken up farming"
fool1471 | 2 years ago | on: JITX – The Fastest Way to Design Circuit Boards
The major issue is that there are a lot more stakeholders in a schematic design than just the person who draws it.
Often the person who draws the schematic is not the person who lays out the PCB, so the schematic needs to encode information about how to lay the PCB out. For example, you may have several components in parallel in a filter circuit; the order in which they are placed on the schematic can be used to communicate the order in which they should be placed on the board (this does not affect the electrical correctness of the schematic at all), and this in turn helps to convey the function of these components.
Even if the whole PCB design is being done by one engineer, other engineers need to be able to review it. If a schematic is electrically correct but otherwise a mess, it makes reviewing it a lot more difficult.
Then there will be third parties who need to read the schematic and understand the design - for hobbyist products this could be the end-users, and in consumer products it might be service engineers. A well-drawn schematic makes a PCB easier to understand and therefore easier to debug, repair, and modify.
In the analogy of software vs schematics, schematics are both code and documentation at the same time.
fool1471 | 2 years ago | on: JITX – The Fastest Way to Design Circuit Boards
For sure! PCB layout is one of the best and most enjoyable parts of the job.
My major concern with these products is always the quality of the schematics they produce. It's not enough for a schematic to have all the nets connected correctly - a schematic is a piece of documentation that, when drawn well, should convey a lot more information than simply what pin is connected to which others.
fool1471 | 2 years ago | on: Ask HN: How should organize and back up 23 TiB of personal files?
Since you are not bothered about huge data throughput, software RAID (rather than hardware RAID) would be the cheaper way to go in general. A lot of the discussion of the pros/cons of different RAID levels that you can find online gives a lot of attention to how they affect aggregate read/write speed; for a single-user data archive, this is not hugely important compared to the basic ratio of usable to redundant disk space.
You can manually set up software RAID on most Linux distros for any filesystem you like, or if you want something that does most of it for you then I can recommend unRAID (https://unraid.net/).
I have an unRAID server with 8x 3TB HDDs and 2x 1TB SSDs in which the HDDs are in a parity RAID array (I can never remember which RAID-level number that is), meaning I get 18TB of usable space with two disks of redundancy.
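The arithmetic, for anyone pricing up their own array (a quick sketch; the drive counts and sizes are just my setup's):

```shell
# unRAID-style parity: usable space = (data drives) x (drive size);
# every parity drive subtracts one drive's worth of capacity.
total_drives=8
parity_drives=2   # dual parity: survives two simultaneous drive failures
drive_tb=3

usable_tb=$(( (total_drives - parity_drives) * drive_tb ))
echo "${usable_tb}TB usable"
```

Note this only holds when the parity drives are at least as large as the largest data drive, which unRAID requires anyway.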
The two SSDs then act as a write-cache (in mirrored RAID) so the HDDs don't need to be spun up when you add new data. This makes the whole thing very low power, as the HDDs spend 99% of their time spun down. I think my server uses about 42W on average, and that's with a bunch of web services going on as well.
unRAID provides a lot of useful utilities for managing files, some native and some via plugins. These include Discord/email/Telegram integrations (so your server can notify you when a disk starts to fail) as well as things like file integrity monitoring, fan control, scheduled backup, etc.
LUKS encryption is supported if you want extra security.
Re point 3: if your unRAID OS keels over for whatever reason then the data on the drives is stored in the filesystem of your choice so you are not bound to using unRAID to recover that data.
Re point 4: you can test your system by pulling drives - unRAID should automatically emulate the data on the missing drive while you find a replacement. I have had multiple drives fail (due to a faulty HBA) and have not lost any data at all.
Re point 2: an unRAID server is accessible on your local network, and you can choose to enable Samba and/or NFS for different "shares." e.g. you could have your music share readable by everyone but writable only by you, and simultaneously have your personal-files share accessible to only one user.
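In plain Samba terms, that permission scheme looks roughly like this (a hand-written smb.conf sketch, not copied from unRAID's generated config; the share names and the user "alice" are made up):

```ini
[music]
   ; everyone can browse and read, but only alice can write
   path = /mnt/user/music
   read only = yes
   write list = alice

[personal]
   ; nobody but alice can access this share at all
   path = /mnt/user/personal
   valid users = alice
```

unRAID exposes the same idea through its web UI, so you rarely need to touch the file directly.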
What filesystem to use is a whole can of worms. I use XFS and it is thoroughly okay - I'm not enough of a power user for the choice of file system to make a difference to daily life and I suspect this is the case for you too.
If you want more redundancy than the standard parity array offers, then you can set up "pools" in the OS that have different RAID levels.
For your 6TB of data, an array of 4x 3TB HDDs would be a fine start, giving you 9TB of usable space with single-disk redundancy. An SSD cache pool can be added later to lower initial setup costs. With just four to six devices, chances are your motherboard will have enough SATA ports for you to not need any kind of PCI HBA or expander cards. 3TB per disk is a good trade-off point between capacity and the cost of a failed drive, IMO.
You won't need a lot of RAM - 8GB would be plenty if you don't plan on using it for hosting any web-services.
For a processor, look for a good low-power option such as an Intel Xeon E3-1220L. With a canny enough choice of components, you should be able to keep your power consumption well below 30W (while the drives are spun down). If you really only need to access this data very occasionally then there is no reason not to power the server down when not in use.
Chassis choice is also near-infinite. I have a UNAS (https://www.u-nas.com/xcart/cart.php?target=category&categor...) which is lovely, but any old PC chassis will do if you aren't fussy.
A good tip with multi-drive systems in general is to deliberately unbalance your drive-use. If you use all the drives equally, you will wear them all out at the same rate, raising the chances that multiple drives will fail within a short space of time. I tend to separate my drives by use - music on one, films on others, etc. This also keeps power consumption down as you only need to spin up one drive to access one group of files (rather than songs in an album being potentially split over multiple drives).
In terms of strategies for performing the actual backup/organisation, I would find a way of mounting the existing drives to the new system one by one (such as by using an eSATA PCI card). Using fresh, blank drives for your NAS means you retain the ability to start again if you mess up or change your mind about what you want to keep, as you won't need to modify any of the data on your existing collection while you create the new one. Generally speaking, I would avoid trying to organise the data in situ.
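The copy itself is basically a one-liner per drive; here's a sketch using throwaway directories standing in for the real mount points (substitute your own):

```shell
# Stand-ins for the old drive (source) and the new array (destination).
src=$(mktemp -d)
dst=$(mktemp -d)
echo "demo" > "$src/song.flac"

# --archive preserves permissions and timestamps; add --dry-run first to
# preview. The source is only ever read, so you can always start over.
rsync --archive "$src/" "$dst/"

ls "$dst"
```

The trailing slash on the source matters: "$src/" copies the contents of the directory, while "$src" would create a nested directory inside the destination.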
It is worth re-iterating that you can achieve very similar results with a server running Ubuntu using open-source software RAID drivers or even a cheap second-hand hardware RAID controller PCI card. I am just a big fan of how low-maintenance my unRAID setup is compared to when I used to do it all manually.
It is also worth mentioning that you can get a pretty good off-the-shelf solution for this sort of thing from companies like QNAP and Synology.
fool1471 | 1 year ago | on: Ask HN: Which Manual Work to Pivot To?
Flipping burgers for a company you don't care about is very different to marshaling your mind into doing work that requires you to care about a company you don't care about.
That said, flipping burgers for a company you don't care about also sucks for many other reasons that should not be downplayed. Chiefly, the lack of cash makes it hard-to-impossible to even attempt to find your fulfillment in activities outside of work.
My recommendation would be to do what I've done and find a small company staffed by engineers who aren't sycophants. Look for meaningful benefits (e.g. more PTO, an office in a nice location, four-day week) and try your very best to relax: it's just a job. (Then take up yoga /s).