There are a few people currently working on NAS boards based on the CM4, with a few different approaches in terms of chipsets and performance targets.
I'm trying to track them in this GitHub issue [1] but a couple people have remained more or less anonymous as they don't want to attract attention too early.
The single PCIe lane is the biggest limitation if you're looking for raw speed (350 MB/sec is kind of the upper real-world sustained transfer limit), though since the gigabit Ethernet port is on a different interface, you can still expect to get 80-100 MB/sec network transfer speeds.
Something like this, with the right case and OMV or other adequate software, would be a relatively competitive replacement for low-end NASes.
I'm also exploring building a 2.5G NAS with a CM4, but the PCIe bus speed limitation is what kinda hamstrings that. Hopefully the next Pi revision has at least an x4 link, like the RockPro64.
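Those ceilings are easy to sanity-check with back-of-envelope math. A minimal sketch, assuming the CM4 exposes a single PCIe Gen 2 lane (8b/10b encoding at 5 GT/s) and typical protocol efficiencies; the efficiency figures are assumptions, not measurements:

```python
# Back-of-envelope bandwidth ceilings for a CM4-based NAS.
# Assumed: PCIe Gen 2 runs at 5 GT/s per lane with 8b/10b encoding,
# and real-world transfers reach roughly 75% of the raw link rate.

def pcie_gen2_mb_s(lanes=1, efficiency=0.75):
    """Approximate usable MB/s for a PCIe Gen 2 link."""
    raw_gbit = 5.0 * lanes * 8 / 10      # 8b/10b -> 4 Gbit/s per lane
    return raw_gbit / 8 * 1000 * efficiency

def gige_mb_s(efficiency=0.85):
    """Approximate usable MB/s over gigabit Ethernet after TCP/SMB overhead."""
    return 1000 / 8 * efficiency

print(f"PCIe Gen 2 x1: ~{pcie_gen2_mb_s():.0f} MB/s")  # ~375, near the 350 observed
print(f"GbE:           ~{gige_mb_s():.0f} MB/s")       # ~106, matching 80-100 observed
```

The same arithmetic shows why an x4 link (roughly 1.5 GB/s usable) would comfortably feed a 2.5G or even 10G NIC.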
"There are a few people currently working on NAS boards based on the CM4."
I continue to appreciate the time and attention you are paying to these new boards on your blog and in your comments here at HN - thank you.
I am having trouble sourcing maxed-out CM4 parts - that is, 8 GB RAM and 32 GB onboard storage[1]. They are either sold out until August or October (or something silly like that), or only available in 200+ quantity.
Do you have any suggestions as to where I could source those?
80MB/sec is actually quite fine for most NAS applications.
Anyone trying to get 10G speeds out of a high end NAS won’t be looking to Raspberry Pi solutions right now, anyway.
The real advantage, IMO, is that this helps kick off the popularity of DIY NAS solutions based on ARM hardware. It’s not the first solution in this space, but Raspberry Pi is great for taking things mainstream.
I imagine that a few years from now we’ll have an even faster Raspberry Pi to build a NAS around. These current-gen solutions might be just what we need to get the software sorted out before the powerful hardware arrives.
It's sorta hard to justify those at the moment, since the XHCI interface on the base Pi4B is more than capable of saturating the PCIe x1 interface. Basically, you can run a NAS at the capabilities of the machine with random USB3 JBODs.
So: a $100 5+ bay USB3 JBOD, an RPi4B run over the 1G NIC, and a 64-bit arm64 distro (because Raspbian will die with modern SATA disks in a larger-capacity JBOD), and you have a fairly reasonable low-end NAS for basically the price of the disks.
(or just plug in 4 USB3 easystores/etc and save on the enclosure).
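If you go the USB3 JBOD route, a quick sequential-read timing is one way to check that the enclosure can actually keep up with the 1G NIC. A minimal sketch; the mount path is a hypothetical placeholder, and for a meaningful result you'd read a file larger than RAM (or drop the page cache first):

```python
import time

def sequential_read_mb_s(path, block_size=4 * 1024 * 1024,
                         max_bytes=512 * 1024 * 1024):
    """Read up to max_bytes sequentially from path and return MB/s."""
    done = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while done < max_bytes:
            chunk = f.read(block_size)
            if not chunk:
                break
            done += len(chunk)
    elapsed = time.perf_counter() - start
    return done / elapsed / 1e6

# Hypothetical usage against a file on the JBOD:
# print(sequential_read_mb_s("/mnt/jbod/bigfile"))
```

Anything comfortably above ~110 MB/s means the enclosure isn't the bottleneck; the gigabit link is.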
Have you come across anyone producing a PoE board for the CM4? Just looking for a network port, USB, and perhaps an SD card slot (similar to a Ubiquiti Cloud Key).
(the Gumstix camera board is overkill for my needs)
Nothing against cool RPi projects, but be aware that a motherboard with a NAS-grade Celeron, like the ASRock J4105M, sells for $85 or so (but needs RAM, roughly $20 for 4GB). At $105 total, that leaves around $35 of budget for this carrier board, if you can get the RPi CM4 4GB for $70. Plus all the doubts about reliability mentioned above.
I love the pi and have many of them. But their beauty seems to come out at the edge where things are low power and they replace nonsense IOT devices.
To replace a server or desktop is a stretch. That's where using a pi is death by 10,000 papercuts. The super-competitive low end PC market gives you so many things for free, like good power supplies, a wide variety of cases and silly things like power switches and a clock with a battery.
Last year I built a NAS system around the ASRock J4105 ITX board, using a flashed Dell PERC H200 as an HBA, in a Fractal Node 304.
I had looked at ARM options, including the Kobol 64 and boards from Hardkernel, but decided that paying a bit more for x86 would make things a lot easier.
It's working well so far: low power draw, near silent, proper ZFS-on-Linux support, hardware-accelerated transcoding for Jellyfin streams, and a bit over half the cost of a 6-bay x86 Synology.
I've noticed this regularly with RPi projects. They are interesting, but the costs are often at least as high as an x86 equivalent that is often more powerful.
Do not buy the ROCK Pi SATA HAT. I own one, and after a few months of light usage with 4 SSDs, one channel (two SSD connectors) just died. I had installed OpenMediaVault and configured LVM across 2 TB, 2x 1 TB and 0.5 TB SSDs, to use it as a home media server.
I recently tried to initiate the warranty process (I bought it directly from shop.allnetchina.cn with shipping to Central Europe) but got no reply.
What warranty process? You are in Central Europe; the seller is in China, with no distributor in between. Basically you have no rights. Buy it cheap and forget it. Warranty, lol. Your best option is to open a case with your credit card vendor or PayPal. But you got what you paid for. As a maker in Europe I don't feel bad for you, because I must sell everything at an insane price with a 2-year warranty included, while Chinese sellers don't have to.
That looks a lot like the mainboard from the Western Digital Sharespace I had about 10 years ago.
I was recently wanting a NAS again, but came to the conclusion that the best bang for your buck was just grabbing a cheap mini-ITX board and processor, as the price of dedicated NAS boxes was not much different and they were far less flexible. Then you get full-fat SATA, even M.2 for a boot drive, whatever RAM you want, etc.
My memory is hazy now, but there was something weird/screwy about the previous-gen MicroServer... like, you couldn't use all four SATA drives at the same time as booting from the onboard storage? Or something?
How is the Gen10? Can I use all four SATA drives for my RAID array but still have an SSD/M.2 boot drive?
I would still defer to something with ECC RAM and ZFS for keeping my data, but I could see myself using something like this for unimportant data and/or serving important data as a read-only mirror. Either way, I love that an RPi system will soon come in a sweet form-factor optimized for NAS use.
I wouldn't trust my data to this kind of device. There's nothing wrong per se, but I have my doubts about how easily you can find replacement parts if something breaks in, say, a year or two.
If you really depend on your data and want it to be safe, I'd recommend spending the extra money and either getting a proper NAS (Synology/QNAP) or going the proper DIY way (aka an x86 box and TrueNAS/Unraid).
You really don't want to be in the position where you absolutely need your data but the replacement parts are two weeks away because they're travelling via snail mail or, worse, you're relying on 2nd-hand spare parts off eBay.
edit: not to mention, the gigabit Ethernet port is a bottleneck. You would probably be hitting it even with four rotational disks.
If you aren't using hardware RAID or another disk-management tool that locks you in to one manufacturer (or worse, one unique controller), you should be able to take your disks to any other machine and rebuild your array.
At least, that’s how you should be constructing a home NAS if that’s what you’re doing.
The same situation can arise with a popular/managed solution like qnap/syno.
I have a DS918 and really love it (it’s one of those set it and forget it machines) but I don’t totally know how it works. It’s Linux of course, but it’s sorta a black box.
So I think there is a lot to be said for DIY as long as you are aware of the drawbacks and engineer around them accordingly.
As a household backup device, rather than a NAS, this is actually pretty good.
Use mdadm RAID-10 across four spinning disks, or RAID-1 across a pair and have two slots free. If the board fails, you can plug them into any other Linux box with SATA ports available.
Most people have no better than 1Gb/s ethernet available in their house anyway. One expects a backup or restore to take a while, but also not to be interactive.
It's probably cheaper to repurpose an old desktop, but this will probably use less power, even when both are asleep.
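For sizing, the two layouts mentioned above and the 1Gb/s restore time can be sketched with simple arithmetic (the disk sizes and the ~100 MB/s sustained link rate are assumptions):

```python
def usable_tb(disk_tb, layout):
    """Usable capacity for the two mdadm layouts mentioned above."""
    if layout == "raid10":      # four disks: mirrored pairs, striped
        return 2 * disk_tb
    if layout == "raid1":       # two disks, mirrored
        return disk_tb
    raise ValueError(layout)

def transfer_hours(tb, link_mb_s=100):
    """Hours to push `tb` terabytes at a sustained link_mb_s."""
    return tb * 1e6 / link_mb_s / 3600

# Hypothetical 4x 4 TB array:
print(usable_tb(4, "raid10"))            # 8 TB usable
print(f"{transfer_hours(8):.1f} hours")  # ~22 hours for a full restore over GbE
```

Which is exactly the "takes a while, but not interactive" regime the parent describes.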
If you're concerned about availability of parts, why not just keep some cold spares around? It's not like we're talking about expensive and specialized hardware here.
Here's what I want, but haven't been able to find:
I already have a beefy server at home (it's actually a refurbished enterprise workstation, with 24 cores, but read on).
However, it lacks drives. What I ideally want is a dumb drive bay I could buy and then connect (somehow) to my existing server, so I could use its CPU and RAM.
I don't want this drive bay to have its own CPU; I just want it to hold data and transfer it over some wire. Ideally I would achieve close-to-gigabit speeds requesting data off this server/drive bay.
Some options I've explored are those dumb quad storage bays that connect over USB 3. But I was worried about running ZFS over a USB interface, as well as how parallel reads and writes would fare given the quality of the USB controller.
Not sure what your workstation offers, but I bought myself an Icy Dock Black Vortex (MB074SP-B) [1]. It's just a case (internal 5.25", but that airflow! :D) which fits 4x 3.5" drives with a 120mm fan in front. Slapped some noise-damping feet under it. I then bought two low-profile eSATAp-to-internal-SATA slot panels [2] and 4 corresponding cables [3]. Works great so far!
And yes, you need eSATA, because the distance between cage and machine might be enough to introduce read/write errors. That's what happened to me, because I used normal SATA cables before.
Hm, rack-mounted Fibre Channel drive enclosures exist (~$100 used on fleabay, usually sans disk trays), but that'd require a Fibre Channel card in your server, direct-attach copper (or fiber) cabling, and driver finagling.
Using an RPi for a NAS is really a stretch. Just buy one of those low-power x86 mini-ITX boards with 4GB memory, 4 SATA ports and gigabit Ethernet; they're around $150.
Note that the smaller size of the RPi etc. makes no sense when you're going to host 4 hard drives anyway, which take quite some space on their own; you also need a decent PSU for the drives, a solid case, etc.
Just buy one of those ASRock mini-ITX boards at Newegg or somewhere and let the RPi do what it's best at.
A quick Google search shows eMMC flash memory for the Compute Module... do you need to mindfully control writes to flash to keep something like this from nuking the boot device? Could be my choice of storage (whatever $4 buys at Microcenter), but the failure mode on my RPi weather station was SD card death. I'd hate to rely on that for a NAS.
You can also boot the pi 4 via kind-of-special-sauce PXE. I wish they had just adopted standard PXE (why do we need a magic string? What the hell is going on with the three spaces in that magic string... Why can't we use the standard PXE way to specify the file to load?), but it's better than nothing. See https://www.raspberrypi.org/documentation/hardware/raspberry...
The 12V/5A power supply seems a little undersized for spinning up 4x 3.5" HDDs. I suspect it's fine if the drives are started sequentially, but the drives will probably want to pull more than 5A in total if they all start simultaneously. Possibly something to watch out for.
5A should be plenty for normal post-start operation.
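The spin-up concern checks out on paper. A rough 12V current budget, using typical 3.5" drive figures (the ~2A spin-up peak and ~0.7A running draw are assumptions from common datasheets, not measurements of this board):

```python
SPINUP_A = 2.0   # assumed 12V peak per 3.5" drive during spin-up
RUNNING_A = 0.7  # assumed 12V draw per drive once spinning

def peak_12v_amps(drives, staggered=False):
    """Worst-case 12V draw: all drives spinning up at once, versus
    staggered spin-up (one starting while the rest are already running)."""
    if staggered:
        return SPINUP_A + (drives - 1) * RUNNING_A
    return drives * SPINUP_A

print(peak_12v_amps(4))                  # 8.0 A -> exceeds a 5 A supply
print(peak_12v_amps(4, staggered=True))  # ~4.1 A -> fits within 5 A
```

So simultaneous spin-up of four drives would roughly 60% overdraw a 5 A supply, while staggered spin-up stays inside it.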
geerlingguy|5 years ago
[1] https://github.com/geerlingguy/raspberry-pi-pcie-devices/iss...
rsync|5 years ago
[1] CM4008032, I think ...
PragmaticPulp|5 years ago
StillBored|5 years ago
youngtaff|5 years ago
smarx007|5 years ago
88|5 years ago
Also, it only has two SATA ports, so it would need a PCIe expansion card to match the four ports offered by the Pi board?
I suspect the Pi is also significantly more power efficient?
m463|5 years ago
eightails|5 years ago
_ea1k|5 years ago
gitowiec|5 years ago
lnsru|5 years ago
Nursie|5 years ago
I know it's not a very exciting solution...
the-dude|5 years ago
End of the article: "There's close to no information about the software right now, and the hardware is not available yet."
gorgoiler|5 years ago
It supports ECC and has 4 cable-free swappable bays plus space for an SSD system drive in the top. The bolts-as-caddy system is also a great idea.
rsync|5 years ago
unknown|5 years ago
[deleted]
EwanToo|5 years ago
If you do want something like this, I think the ODROID-HC4[1] is probably a better option.
1 - https://ameridroid.com/products/odroid-hc4
magicalhippo|5 years ago
AnIdiotOnTheNet|5 years ago
1MachineElf|5 years ago
KaiserPro|5 years ago
I don't think ZFS is stable on the Pi just yet[1], though, so perhaps 4 SATA drives is overkill.
I would be interested in a single sata/cm4 carrier board though.
[1]citation needed, I've not really looked...
znpy|5 years ago
whalesalad|5 years ago
dsr_|5 years ago
m3at|5 years ago
pmiller2|5 years ago
njharman|5 years ago
Naac|5 years ago
MrGilbert|5 years ago
[1]: https://www.icydock.com/goods.php?id=166
[2]: https://www.delock.de/produkte/G_61725/merkmale.html?setLang...
[3]: https://www.delock.de/produkte/G_84402/merkmale.html?setLang...
KaiserPro|5 years ago
Get a decent[1] external SAS adaptor, a SAS cable, and a second hand enclosure and bob's your noisy uncle!
something like this: https://www.bargainhardware.co.uk/dell-powervault-md1220-sto...
https://www.bargainhardware.co.uk/hp-z800-z820-external-mini...
Your workstation might even have a SAS controller on it already.
This might be overkill, but even with spinny disks you'll be able to get fast random I/O[2].
[1] subjective. If yours is an enterprise workstation you'll most likely be able to get an official SAS controller for it.
[2] well, about 10 iops per drive.
francis_t_catte|5 years ago
njharman|5 years ago
ausjke|5 years ago
Damogran6|5 years ago
manuel_w|5 years ago
cure|5 years ago
[edit: fixed spelling]
entangledqubit|5 years ago