Dec 20, 2015

Taranis RAID-6 logo

And here's another minor update after [part 4½] of my RAID array progress log. Since I was thinking that weekly RAID verifications would be too much for an array this size (because I thought it would take too long), I set the Areca controller to scrub and error-check my disks at an interval of four weeks. It's just a shame that the thing doesn't feature a proper scheduler with a calendar and configurable starting times; all you can tell it is to "check every n weeks". In any case, the verify completed this night, running for a total of 29:07:29 (so: roughly 29 hours) across those 12 × 6TB HGST Ultrastar disks, luckily with zero bad blocks detected. It would've been a bit early for unrecoverable read errors to occur anyway. ;)

So this amounts to a scrub speed just shy of 550MiB/s (the array's net RAID-6 capacity of 60TB divided by the 29-hour runtime), which isn't blazingly fast for an array like this, but acceptable, I think. The background process priority during the operation was set to "Low (20%)", and there were roughly 150GiB of concurrent user I/O during the disk scrubbing. Most of that I/O was concentrated in one fast Blu-ray demux, but some video encoders were also running, reading and writing small amounts of data all the time. I guess I can live with that result.
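If you want to check that number yourself, here's the back-of-the-envelope math as a tiny Python sketch. It assumes the verify pass effectively covers the net capacity only, i.e. 10 data disks × 6TB (decimal terabytes, as marketed):

```python
# Back-of-the-envelope scrub throughput for the Taranis array.
# Assumption: the verify effectively covers the net RAID-6 capacity,
# i.e. (12 - 2) data disks * 6 TB = 60 TB.

DATA_DISKS = 12 - 2                  # RAID-6: two disks' worth of parity
DISK_SIZE  = 6 * 10**12              # 6 TB per disk (decimal, as marketed)
RUNTIME_S  = 29 * 3600 + 7 * 60 + 29 # the 29:07:29 verify duration

net_bytes = DATA_DISKS * DISK_SIZE
mib_per_s = net_bytes / RUNTIME_S / 2**20

print(f"{mib_per_s:.0f} MiB/s")      # ~546 MiB/s, "just shy of 550"
```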

Ah yeah, I should also show you the missing benchmarks, but before that, here's a more normal photograph of the final system (where "normal" means "not a night shot"; it does NOT mean "in a proper colorspace", because the light sources were heavily mixed, so the colors suck once again! ;)):

The Taranis system during daytime

The “Taranis” RAID-6 system during daytime

And here are the missing benchmarks on the finalized array in a normal state. Once again, this is an Areca ARC-1883ix-12 with 12 × HGST Ultrastar 7K6000 6TB SAS disks in RAID-6 at an aligned stripe block size of 64kiB. The controller is equipped with 2GiB of FBM-backed (flash backup module) registered ECC DDR-III/1866 write-back cache, and each individual drive features 128MiB of cache running in write-through mode (I have no UPS unit for this machine, which is why the drive caches themselves aren't configured for write-back). The controller is set to read & discard parity data to reduce seeks and is thus tuned for maximum sequential read performance. The benchmarking software was HDTune 2.55 as well as HDTune Pro 5.00:
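As a quick aside, here's a minimal sketch of the resulting stripe geometry, assuming the usual RAID-6 layout of 10 data blocks plus 2 parity blocks per stripe. The disk count and the 64kiB stripe block size are taken from the configuration above; the rest is generic RAID math, not anything Areca-specific:

```python
# RAID-6 stripe geometry for this array: why the aligned 64kiB
# stripe block size matters for sequential I/O.

DISKS        = 12
PARITY       = 2             # RAID-6 tolerates two failed disks
STRIPE_BLOCK = 64 * 1024     # 64 kiB per disk, as configured

data_disks  = DISKS - PARITY
full_stripe = STRIPE_BLOCK * data_disks

print(f"full stripe width: {full_stripe // 1024} kiB")  # 640 kiB
# Writes in multiples of 640 kiB, aligned to stripe boundaries, can be
# committed without a read-modify-write cycle on the parity blocks.
```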

With those modern Ultrastars instead of the old Seagate Cheetah 15k drives, the only thing that turned out to be worse is the seek time. Given that it's 3.5″ 7200rpm platters vs. 2.5″ 15000rpm platters, that's only natural though. Sequential throughput is a different story: At large enough block sizes we get more than 1GiB/s almost consistently, for both reads and writes. Again, I'd have loved to try 4k writes as well, but HDTune Pro would just crash when picking that block size, same as with the Cheetah drives. Anyhow, 4k read performance is nice as well. I'd give you some AS SSD numbers too, but that tool fails to even see the array at all.

What I've seen in some other reviews holds true here too: The Ultrastars do fluctuate a bit when it comes to performance. We can see that for the 64kiB reads as well as the 512kiB and 1MiB writes. On average though, raw read and write performance is absolutely stellar, just like the earlier ATTO, HDTach and Everest/AIDA64 tests suggested. That 1.2GHz IBM [PowerPC 476] dual-core chip is truly a monster compared to what I've seen on older RAID-6 controllers.

I've compared this to my old 3ware 9650SE-8LPML (AMCC [PowerPC 405CR] @ 266MHz) and to an Adaptec-built ICP Vortex 5085BR (Intel [XScale IOP333] @ 800MHz), both with 8 × 7200rpm SATA disks, and even to a Hewlett Packard MSA2312fc SAN with 12 × 15000rpm SAS Cheetahs (AMD [Turion 64 MT-32] @ 1800MHz). All of them are simply blown out of the water in every way imaginable: performance, manageability, and, if I were to count the MSA2312fc as a serious contender as well (it isn't exactly meant to be a simple local block device), stability too. I can't even tell you how often the freaking management controllers on that thing crash and have to be rebooted via SSH…

So this thing has been up for about four weeks now. Still looking good so far…

Summer will be interesting, with some massive heat and all. We'll see if that'll trigger the temperature alarms of the HDD bays…