Built a NAS last winter using the same case. HDD temps used to be in the mid-50s °C with no fan and about 40 °C with the stock fan. The case-native backplane thingamajig doesn't provide any sort of PWM control if the fan is plugged into it, so it's either full blast or nothing. I swapped the fan for a Thermalright TL-B12 and the HDDs are now happily chugging along at about 37 °C with the fan barely perceptible. hddfancontrol ramps it up based on the output of smartctl.
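For anyone curious, the idea is simple enough to sketch. This is a toy illustration of the same loop (poll smartctl, map drive temperature to a PWM duty cycle), not hddfancontrol's actual code; the drive path, hwmon path, and temperature-to-duty mapping are all made up for the example:

```python
import subprocess, time

DRIVE = "/dev/sda"                          # hypothetical drive
PWM_PATH = "/sys/class/hwmon/hwmon2/pwm1"   # hypothetical PWM node for the case fan
                                            # (assumes pwm1_enable is already set to manual)

def drive_temp_c(dev):
    """Scrape Temperature_Celsius from `smartctl -A` (attribute layout varies by drive)."""
    out = subprocess.run(["smartctl", "-A", dev], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            return int(line.split()[9])     # RAW_VALUE column
    return None

while True:
    temp = drive_temp_c(DRIVE)
    # Map roughly 30-45 °C onto PWM 60-255; run full blast if the read fails.
    duty = 255 if temp is None else max(60, min(255, 60 + (temp - 30) * 13))
    with open(PWM_PATH, "w") as f:
        f.write(str(duty))
    time.sleep(60)
```

The real tool also handles multiple drives, spin-down detection, and hysteresis, which a sketch like this ignores.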
The case can actually fit a low-profile discrete GPU; there's about a half-height card's worth of space.
I would have chosen the i3-N305 version of that motherboard because it has In-Band ECC (IBECC) support - great for ZFS. IBECC is a very underrated feature that doesn't get talked about enough. It may be available on the N150/N355, but I have never seen confirmation.
Can you explain why ECC is great for ZFS in particular as opposed to any other filesystem?
And if the data leaves the NAS to be modified by a regular desktop computer then you lose the ECC assurance anyway, don't you?
Obligatory copypasta:
"16GB of RAM is mandatory, no ifs or buts. ECC is not mandatory, but ZFS is designed for it. If data is read and something somehow gets corrupted in RAM, an actually intact file on disk could be "corrected" into an error. So yes to ECC. The problem with ECC isn't the ECC memory itself, which costs only a little more than conventional memory; it's the motherboards that support it. Watch out with AMD: boards often claim ECC support, but what's meant is that ECC memory will run while the ECC function goes unused. LOL. Most boards with ECC are server boards. If you don't mind used hardware, you can get a bargain with, say, an old socket 1155 Xeon on an Asus board. Otherwise the ASRock Rack line is recommended. Expensive, but power-efficient. A general downside of server boards: boot time takes forever. Consumer boards spoil you with short boot times; servers often need a good two minutes before the actual boot process even begins. So Bernd's server consists of an old Xeon, an Asus board, 16GB of 1333MHz ECC RAM, and 6x 2TB drives in a RAIDZ2 (RAID6). 6TB is usable net. I somehow like old hardware; I enjoy pushing it until it absolutely can't go any further. The drives are already 5 years old but aren't giving any trouble. Speed is great, 80-100MB/s over Samba and FTP. By the way, I don't leave the server running; I switch it off when I don't need it. What else? Compression is great. Even though I mostly store data that can't be compressed any further (music, videos), the built-in compression saved me 1% of storage. With 4TB that's about 40GB saved. The Xeon still gets a bit bored. As a test I tried gzip-9 compression; that did make it sweat."
Wait. You build a new one every -year-?! How does one establish the reliability of the hardware (particularly the AliExpress motherboard), not to mention data retention, if its maximum life expectancy is 365 days?
Looks like they built a new NAS but kept using the same drives, which, given the number of drive bays in the NAS, probably make up a large majority of the overall cost in something like this.
Edit: reading comprehension fail - they bought the drives earlier, at an unspecified price, but they weren't from the old NAS. I agree: when drive lifetimes are measured in decades and huge amounts of TBW, it seems pretty silly to buy new ones every time.
I would like to point people to the Odroid H4 series of boards. N97 or N355, 2× 2.5GbE, 4× SATA, 2 W at idle. It also has extension boards to turn it into a router, for example.
The developer, Hardkernel, also publishes all the relevant info such as board schematics.
I also have an older Odroid HC4; it's been running smoothly for years. Not only can't I justify $1000 for a NAS as the current post implied, but the power consumption seems crazy to me for mere disk-over-network usage (a 500W power supply).
I like the extensive benchmarks from Hardkernel; the only issue is that any ARM-based product is very tricky to boot, and the only savior is Armbian.
I am not at all an expert, I can only share my anecdotal unscientific observations!
I'm running a TrueNAS box with 3x cheap shucked Seagate drives.*
The TrueNAS box has 48GB RAM, is using ZFS and is sharing the drives as a Time Machine destination to a couple of Macs in my office.
I can un-confidently say that it feels like the fastest TM device I've ever used!
TrueNAS with ZFS feels faster than Open Media Vault(OMV) did on the same hardware.
I originally set up OMV on this old gaming PC, as OMV is easy. OMV was reliable, but felt slow compared to how I remembered TrueNAS and ZFS feeling the last time I set up a NAS.
So I scrubbed OMV and installed TrueNAS, and purely based on seat-of-pants metrics, ZFS felt faster.
And I can confirm that it soaks up most of the 48GB of RAM!
TrueNAS reports ZFS Cache currently at 36.4 GiB.
I don't know why or how it works, and it's only a Time Machine destination, but there we are: those are my metrics and that's what I know LOL
* I don't recommend this.
They seem unreliable and report errors all the time.
But it's just what I had sitting around :-)
I'd hoped by now to be able to afford to stick 3x 4TB/8TB SSDs of some sort in the case, but prices are tracking up on SSDs...
I do like to deduplicate my BitTorrent downloads/seeding directory with my media directories so I can edit metadata to my heart's content while still seeding forever without having to incur 2x storage usage. I tune the `recordsize` to 1MiB so it has vastly fewer blocks to keep track of compared to the default 128K, at the cost of any modification wasting very slightly more space. Really not a big deal though when talking about multi-gibibyte media containers, multi-megapixel art embeds, etc.
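Back-of-the-envelope, for a hypothetical 40 GiB container the record-count difference looks roughly like this:

```python
GIB = 1024 ** 3
file_size = 40 * GIB                       # hypothetical multi-gibibyte media file
for recordsize in (128 * 1024, 1024 ** 2):
    records = -(-file_size // recordsize)  # ceiling division
    print(f"recordsize {recordsize // 1024:>4} KiB -> {records:,} records")
# 128 KiB -> 327,680 records; 1024 KiB -> 40,960 records (8x less block metadata to track)
```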
Haven't used them myself yet, but it seems like a nice use case for things like minor metadata changes to media files. The bulk of the file is shared and only the delta between the two is saved.
ZFS also uses RAM as a read cache, aka the ARC (Adaptive Replacement Cache).
However, I'm not sure how noticeable the effect from increased RAM would be - I assume it mostly benefits read patterns with high data reuse, which aren't that common.
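If you want to see what the ARC is actually doing rather than guessing, the OpenZFS kstats expose its size and hit counters. A minimal sketch, assuming Linux with the zfs module loaded (so /proc/spl/kstat/zfs/arcstats exists):

```python
def arc_stats(path="/proc/spl/kstat/zfs/arcstats"):
    """Parse the OpenZFS ARC kstat file into a name -> value dict."""
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:      # skip the two header lines
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats

s = arc_stats()
hits, misses = s["hits"], s["misses"]
print(f"ARC size: {s['size'] / 2**30:.1f} GiB (target max {s['c_max'] / 2**30:.1f} GiB)")
print(f"hit rate so far: {hits / (hits + misses):.1%}")
```

A high hit rate on your real workload is the signal that more RAM would help; a low one suggests the extra gigabytes mostly sit idle for that access pattern.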
Yes. Parent's comment matches everything I've heard. 32GB is a common recommendation for home lab setups. I run 32 in my TrueNAS builds (36TB and 60TB).
You can run it with much less. I don't recall the bare minimum but with a bit of tweaking 2GB should be plenty[1].
I recall reading about someone running it on a 512MB system, but that was a while ago, so I'm not sure you can still go that low.
Performance can suffer though; for example, low memory will limit the size of the transaction groups. So for decent performance you will want 8GB or more, depending on workloads.
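For context on that: OpenZFS caps the dirty data it will buffer in RAM (and therefore how much a transaction group can accumulate) with the `zfs_dirty_data_max` module parameter, which defaults to a percentage of physical memory. A quick sketch to check the ceiling on a Linux box, assuming the parameter is exposed under /sys/module/zfs/parameters:

```python
# Print the dirty-data ceiling that effectively bounds transaction group size.
with open("/sys/module/zfs/parameters/zfs_dirty_data_max") as f:
    print(f"zfs_dirty_data_max = {int(f.read()) / 2**20:.0f} MiB")
```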
Depends on the network speed. At 1Gbps a single HDD can easily saturate the network with sequential reads. A pair of HDDs could do the same at 2.5Gbps. At 10Gbps or more, you would definitely see the benefits of caching in memory.
Not as much as expected. I have several toy ZFS pools made of ancient 3TB WD Reds, and anything remotely home-grade (striped mirrors, 4/6/8-wide raidz1/2) saturates the disks before it saturates 10-gig networking. As long as it's sequential, 8GB or 128GB of RAM doesn't matter.
Makes sense. I didn't know if the FS used RAM for this purpose without some specialized software. PikachuEXE and Mewse mentioned ZFS. Looks like it has native support for caching frequent reads [0]. Good to know.
As others said already, if you have more RAM you can have more cache.
Honestly it's not that needed, but if you really use 10Gbit+ networking, then one second of transfer is ~1.25GB. So depending on your usage you might never see even 15% utilization, or you might use almost all of it if you're constantly running something on it, e.g. torrents, or using it as a SAN/NAS for VMs on some other machine.
But for rare, occasional home usage, neither 32GB nor this monstrosity and complexity makes sense - just buy a 1-2 bay Synology and forget about it.
Very sad that HDDs, SSDs, and RAM are all increasing in price now, but I just made a 4 x 24 TB ZFS pool with Seagate Barracudas on sale at $10/TB [1]. This seems like a pretty decent price, even though the Barracudas are rated for only 2400 power-on hours per year [2]; then again, that's the same spec the refurbished Exos drives are rated for.
By the way, it's interesting that OP has no qualms about buying cheap Chinese motherboards, but splurged on an expensive Noctua fan when the Thermalright TL-B12 performs just as well for a lot less (although the Thermalright could be slightly louder and perhaps have a slightly more annoying noise spectrum).
Also, it is mildly sad that there aren't many cheap low-power (< 500 W) power supplies in the SFX form factor. The SilverStone SX500-G 500W SFX that was mentioned retails for the same price as 750 W and 850 W SFX PSUs on Amazon! I've heard good things about getting Delta Flex ATX 400 W PSUs from Chinese websites --- some companies (e.g. YTC) mod them to be fully modular, and they are supposedly quite efficient (80 Plus Gold/Platinum) and quiet, but I haven't tested them myself yet. On Taobao, those are like $30.
The Jonsbo N3 case, which holds 8x 3.5" drives, has a smaller footprint than this one, which might be better for most folks. It needs an SFX PSU though, which is kind of annoying.
If you get an enterprise-grade ITX board with an x16 PCIe slot that can be bifurcated into 4 M.2 form factor PCIe x4 connections, it really opens up options for storage:
* A 6x SATA card in M.2 form factor from ASMedia or others will let you fill all the drive slots even if the logic board only has 2/4/6 ports on it.
* The other ports can be used for conventional M.2 NVMe drives.
That's what I built! It's a great case; the only components I didn't already have lying around were the motherboard and PSU.
It's very well made, not as tight on space as I expected either.
The only issue is, as you noted, you have to be really careful with your motherboard choice if you want to use all 8 bays for a storage array.
Another gotcha was making sure to get a CPU with integrated graphics; otherwise you have to waste your PCIe slot on a graphics card and have no room for the extra SATA ports.
I recently got a used QNAP TS-131P for cheap, that holds one 3.5" drive for offsite backup at a friend's house. It's compact and runs off a common 12V 3A power supply.
There is no third-party firmware available, but at least it runs Linux, so I wrote an autorun.sh script that kills 99% of the processes and phones home using ssh+rsync instead of depending on QNAP's cloud: https://github.com/pmarks-net/qnap-minlin
Are there any NAS solutions for 3.5" drives, homebrew or purchased, that are slim enough to stash away in a wall enclosure? (This sort of thing: https://www.legrand.us/audio-visual/racks-and-enclosures/in-... , though not that particular model or height.) I'd like to really stash something away and forget about it. Height is the major constraint: whatever goes in there can only be ~3.5" tall. And before anyone says anything about 19" rack stuff, don't bother. It's close but just doesn't go, especially if it's not the only thing in the enclosure.
> And before anyone says anything about 19" rack stuff, don't bother. It's close but just doesn't go, especially if it's not the only thing in the enclosure.
Do you have to use that particular wall enclosure thing? A 1U chassis at 1.7” of height fits 4 drives (and a 2U at ~3.45” fits 12), and something like a QNAP is low-enough power to not need to worry about cooling too much. If you’re willing to DIY it would not be hard at all to rig up a mounting mechanism to a stud, and then it’s just a matter of designing some kind of nice-looking cover panel (wood? glass in a laser-cut metal door? lots of possibilities).
I guess my main question is, what/who is this for? I can't picture any environment where you have literally zero available space to put a NAS other than inside a wall. A 2-bay Synology/QNAP/etc. is small enough to sit underneath a router/AP combo, for instance.
> Do you have to use that particular wall enclosure thing?
It's already there in the wall. All the Cat5e cabling in the house terminates there, so all the network equipment lives in there, which makes me kind of want to also put the NAS in there.
I researched a bunch of cases recently and the Jonsbo, while it looked good, came up as having a ton of issues with airflow to cool the drives. Because of this, I ended up buying the Fractal Node 804 case, which seemed to have a better overall quality level and didn't require digging around AliExpress for a vendor.
lol same. All my parts arrived except the 804. The supply chain for these cases appears to be imploding where I live (Hungary). The day after I ordered, it either went out of stock or went up by 50% in all the reputable webshops here.
I'm still a bit torn on whether getting the 804 was the right call or whether the 304 would have been enough, with a significantly smaller footprint and 2 fewer bays. Hard to tell without seeing them in person lol.
Are you satisfied with it? Any issues that came up since building?
I too was in the market recently for a NAS, downgrading from a 12-bay server because of YAGNI - it's far too big, too loud, runs hot, and uses way too much energy. I was also tempted by the Jonsbo (it's a very nice case), but prices being what they are, it was actually better to get a premade 4-bay model for under $500 (batteries included, HDDs are not). It's small, quiet, power-efficient, and didn't break the bank in the process. Historically DIY has always been cheaper, but that's no longer the case (no pun intended).
I have built two NASes that borrow ideas from his blog posts. One uses the Silverstone CS382 case (6x 6TB SAS) and the other uses a Topton N5105 Mini-ITX board (6x 10TB SATA). I'm quite happy with both.
Obligatory comment every time one of these threads comes up: with Synology, sure, the hardware is a bit dated, but… as far as set-and-forget goes:
I’ve run multiple Synology NAS at home, business, etc. and you can literally forget that it’s not someone else’s cloud. It auto updates on Sundays, always comes online again, and you can go for years (in one case, nearly a decade) without even logging into the admin and it just hums along and works.
I wonder how many consumer-level HDDs in RAID5 it will take to saturate a 10Gbps connection. My napkin math says that from 1,250 MB/s we can achieve around 1,150 MB/s after network overhead, so it means about 5 Red Pro/IronWolf Pro drives (reading at about 250–260 MB/s each) in RAID5 to saturate the connection.
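Roughly the same napkin math in code form; the 0.92 overhead factor and 250 MB/s per drive are assumptions taken from the numbers above, and real RAID5 sequential reads won't scale perfectly linearly:

```python
import math

link_gbps = 10
usable_mb_s = link_gbps * 1000 / 8 * 0.92   # ~1150 MB/s after protocol overhead
per_drive_mb_s = 250                        # sequential read, Red Pro / IronWolf Pro class

drives = math.ceil(usable_mb_s / per_drive_mb_s)
print(f"need ~{drives} drives streaming at {per_drive_mb_s} MB/s "
      f"to cover {usable_mb_s:.0f} MB/s")   # -> 5
```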
I'm not sure what the benefit would be since all it's doing is moving information from the drives over to the network.
https://superuser.com/a/993019
[1]: https://openzfs.github.io/openzfs-docs/Project%20and%20Commu...
[0]: https://www.truenas.com/docs/references/l2arc/
[1] https://www.newegg.com/seagate-barracuda-st24000dm001-24tb-f...
[2] https://www.seagate.com/content/dam/seagate/en/content-fragm...
That's a remarkably good price. If I had $1.5k handy I'd be sorely tempted (even tho it's Seagate).
ref: https://blog.briancmoses.com/2024/07/migrating-my-diy-nas-in...