Jun 05 2015

After [part 1] we now get to the second part of the Taranis RAID-6 array and its host machine. This time we'll focus on the Areca controller fan modification – or as I say, "Noctuafication" – the real power supply instead of the dead mockup shown before, plus a modification of it (not the electronics, I won't touch PSU electronics!), and the new CPU cooler, designed by a company from my home country, Austria: Noctua's most massive CPU cooler produced to this date, the NH-D15. Also, we'll see some new filters applied to the side part of the case, and we'll take a look at the cable management, which was a much nastier job than it looks.

Now, let’s get to it and have a look at what was done to the Areca controller:

So as you can see above, the stock heatsink & fan unit was removed. Reason being that it emits a very high-pitched, loud noise, which just doesn't fit into the new machine, which produces more of a low-pitched "wind" sound. In my old box, which features a total of 19 40×40mm fans, you wouldn't hear the card, but now it's becoming a disturbance.

Note that when doing this, the Areca's fan alarm needs to be disabled. Lacking an rpm signal cable, the controller measures the fan's "speed" by measuring its power consumption. The original fan is a 12V DC 0.09A unit, whereas the Noctua only needs 0.06A, thus triggering the controller's audible alarm. In my case that's not too troublesome: even if the fan were to fail – which is highly unlikely for a Noctua in its first 10 years or so – there are still the two 120mm side fans.

Cooling efficiency is slightly lower now, with the temperature of the dual-core 1.2GHz PowerPC 476FP CPU going from ~60°C to ~65°C, but that’s still very much ok. The noise? Pretty much gone!

Now, to the continued build:

So there it is, although not yet with final hardware all around. In any case, even with all that storage goodness sitting in there, the massive Noctua NH-D15 simply steals the show here. That Xeon X5690 will most definitely never encounter any thermal issues! And while the NH-D15 doesn't come with any S1366 mounting kit, Noctua will send you an NM-I3 for free if you email them your mainboard or CPU receipt as well as the NH-D15 receipt to prove you own the hardware. Pretty nice!

In total we can see that cooler, the ASUS P6T Deluxe mainboard, the 6GB of RAM that's just there for testing, the Areca ARC-1883ix-12, a Creative Sound Blaster X-Fi XtremeMusic, and one of my old EVGA GTX580 3GB Classified cards. In the top right of the first shot you can also spot the slightly misaligned Areca flash backup module again.

While all my previous machines were in absolute chaos, I wanted to have this ONE clean build in my life, so there it is. Very little of what's inside in terms of cables can actually be seen. Consider: 12 SAS lanes, 4 SATA cables, tons of power cables and extensions, USB+FireWire cables, fan cables, an FDD cable, and 12 LED cathode wires bundled into 4 cables for the RAID status/error LEDs, and I don't know what else. Also, all the internal headers are used up: 4 × USB for the front panel, one for the combo drive's card reader and one for the Corsair Link USB dongle of the power supply, plus an additional mini-FireWire connector at the rear.

Talking about the cabling, I found it nearly impossible to even close the rear lid of the tower, because the Great Cthulhu was literally sitting back there. It may not look like it, but it took me many hours to get it under some control:

Cable chaos under control

That's a ton of cables. The thingy in the lower right is a Corsair Link dongle bridging the PSU's I²C header to USBXPress, so you can monitor the power supply in MS Windows.

Now it can at least be closed without much force! Lots of self-adhesive cable clips and some pads were used here, but that's simply necessary to tie everything down; otherwise it just won't work at all. Two fan cables and resistors sit there unused, as the fans were re-routed to the mainboard headers instead, but everything else you can see here is actually necessary and in use.

Now, let's talk about the power supply. You may have noticed it already in the pictures above, but this Corsair AX1200i doesn't look like it should. Indeed, as said, I modified it with an unneeded fan grill taken out of the top of the Lian Li case. The reason is that this way you can't accidentally drop any screws into the PSU when working on the machine – and that can happen very quickly. If you miss just one, you're in for one nasty surprise when turning the machine on! Thanks fly out to [CryptonNite], who gave me that idea. Of course you could just turn the PSU around and let it suck in air from the floor (the Lian Li PC-A79B supports this), but I don't want to have to tend to the bottom dust filter all the time. So here's what it looks like:

A modified Corsair Professional Series Platinum AX1200i. Screws are no danger anymore!

With 150W of power at +5V, this unit should also be good enough for driving all those HDDs' electronics. Many powerful PSUs largely ignore that rail and only deliver a lot at +12V for CPUs, graphics cards etc. Fact is, for hard drives you still need a considerable amount of 5V power! Looking at Seagate's detailed specifications for some of the newer enterprise drives, you can see a peak current of 1.45A @ 5V in a random write scenario, which means 1.45A × 5V = 7.25W per disk, or 12 × 7.25W = 87W total for 12 drives. That, plus USB requiring +5V and some other stuff. So with 150W I should be good – exactly the power that my beloved old Tagan PipeRock 1300W PSU also provided on that rail.
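Just to double-check that arithmetic, here's the same back-of-the-envelope calculation as a tiny Python sketch (figures taken straight from the paragraph above; the rail budget is the AX1200i's +5V spec):

```python
# +5V rail budget check for the 12-drive array.
DRIVES = 12
PEAK_CURRENT_A = 1.45   # per drive @ 5V, random-write peak (Seagate spec)
RAIL_VOLTAGE = 5.0
RAIL_BUDGET_W = 150.0   # what the AX1200i offers on +5V

per_disk_w = PEAK_CURRENT_A * RAIL_VOLTAGE       # 7.25 W per disk
total_w = per_disk_w * DRIVES                    # 87 W for all 12
headroom_w = RAIL_BUDGET_W - total_w             # left over for USB & co.

print(f"{per_disk_w:.2f} W/disk, {total_w:.1f} W total, {headroom_w:.1f} W headroom")
```

Some 60W of headroom on the rail, so the dozen drives really shouldn't be a problem.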

Now, as for the side panels:

And one more, an idea I got from an old friend of mine, [Umlüx]. Since I might end up with a low-pressure case with more air being blown out than sucked in, dust may also enter through every other unobstructed hole, and I can't have that! So we shut everything out using duct tape and paper inlets (a part of which you may have seen on the power supply closeup already):

The white parts are all paper with duct tape behind it. The paper is there so that the sticky side of the tape doesn't attract dust, which would otherwise give the rear a very ugly look. As you can see, everything is shut tight, even the holes of the controller card. No entry for dust here!

That’s it for now, and probably for a longer time. The next thing is really going to be the disks, and since I’m going for 6TB 4Kn enterprise disks, it’s going to be terribly expensive. And that alone is not the only problem.

First there's the weak Euro, which is starting to hurt disk drive imports, and then there is this crazy storage "tax" (a literal translation would be "blank media compensation") that we're getting in October after years of debate in my country. The tax is basically supposed to cover artists' monetary losses due to legal private recordings from radio or TV stations to storage media. It will affect every device that features any kind of storage, whether mechanical/magnetic, optical or flash. That means individual disks, SSDs, blank DVDs/BDs, USB pendrives, laptops, desktop machines, cellphones and tablets – pretty much everything, including enterprise-class SAS drives.

Yeah, talk about some crazy and stupid “punish everybody across the board for what only a few do”! Thanks fly out to the Austro Mechana (“AUME”, something like “GEMA” in Germany) and their fat-ass friends for that. Collecting societies… legal, institutionalized, large-scale crime if you ask me.

But that means I'm caught between a rock and a hard place. What I need to do is find the sweet spot between the idiot tax and the Euro's exchange rate, taking natural price decline into account as well. So it's going to be very hard to pick the right time to buy those drives. And as I am unwilling to step down to much cheaper 512e consumer – or god forbid, shingled magnetic recording – drives with read error rates as high as <1 in 10^14 bits, we're talking ~6000€ here at current prices. Since it's 12 drives, even a small price drop will already have a great effect.
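To illustrate why I insist on the better error-rate spec: with a naive independent-bit-error model (a simplification, but good enough for a gut feeling), the chance of hitting at least one unrecoverable read error while reading a single 6TB drive end-to-end works out roughly like this:

```python
import math

def ure_probability(ber, bytes_read):
    """P(at least one unrecoverable read error) when reading `bytes_read`
    bytes at a bit error rate of `ber`, assuming independent bit errors."""
    bits = bytes_read * 8
    return 1.0 - math.exp(bits * math.log1p(-ber))  # stable (1 - ber)**bits

DRIVE_BYTES = 6e12  # one 6TB drive, read completely (e.g. during a rebuild)

print(f"consumer   (<1 in 10^14): {ure_probability(1e-14, DRIVE_BYTES):.1%}")
print(f"enterprise (<1 in 10^15): {ure_probability(1e-15, DRIVE_BYTES):.1%}")
```

Roughly 38% vs. roughly 5% per full drive read – quite the difference once a degraded array forces you to read everything.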

We'll see whether I'll manage to make a good decision on that front. Also, free space on my current array is shrinking by the week, which is yet another thing I need to keep my eyes on.

Edit: [Part 3 is now ready]!

May 28 2015

Today's post shall be about storage – my new storage array, actually. I wanted to make this post episodic, with multiple small posts forming a sort of build log, but since I'm so damn lazy, I never did that. So by now I have quite some material piled up, which you're all getting in one shot here. This is still not finished however, so don't expect any benchmarks or even disks – yet! Some parts will be published in the near future, in the episodic manner I had actually intended. So…

I've been into parity RAID (redundant array of independent/inexpensive disks) since the days of PATA/IDE with the Promise SuperTrak SX6000, which I got in the beginning of 2003. At first with six 120GB Western Digital disks in RAID-5 (~558GiB of usable capacity), then upgraded to six 300GB Maxtor MaxLine II disks (~1.4TiB, the first to break the TiB barrier for me). It was very stable, but by the end so horribly slow and fragmented that playback of larger video files – think HDTV; Blu-rays were hitting the market around that time – became impossible, and the space was filled up again by the end of 2005 anyway.

2006, that was when I got the controller I’m still using today, the 3ware 9650SE-8LPML. Typically, I’d say that each upgrade has to give me double capacity at the very least. Below that I wouldn’t even bother with replacing either disks or a whole subsystem, given the significant costs. The gain has to be large enough to make it worthwhile.

The 3ware had its disks upgraded once too, going from a RAID-6 array consisting of 8×1TB Hitachi Deskstars (~5.45TiB usable) to 8×2TB Hitachi Ultrastars (~10.91TiB usable), which is where I'm sitting right now. All of this – my whole workstation – is installed in an ancient EYE-2020 server tower from the 90s, which so far has housed everything from my old Pentium II 300MHz with a Voodoo² SLI setup all the way up to my current Core i7 980X hexacore with an nVidia SLI subsystem. Talk about some long-lasting hardware right there. So here's what the "Helios" RAID-6 array and that ugly piece of steel look like today. Please forgive me for not providing any pictures of the actual RAID controller or its battery backup unit – I don't have any nice photos of them, so I have to point you to a web search for the 3ware 9650SE-8LPML. As always, please CTRL+click to enlarge:

As you can see, that makes 16 × 40mm fans. It's not server-class super noisy, but it sure ain't silent either. It's quite amazing that the Y.S. Tech fans in there have survived running 24/7 from 2003 to 2015 – that's a whopping 12 years! They are noisier now, and every few weeks one of the bearings will go into saw-blade mode for a brief moment, but what can you expect? None have died so far, so that's a win in my book for any consumer hardware (which the HDCS was).

Thing is, I have two of those 3ware RAID controllers now, but each one has issues. One wouldn't properly synchronize on the PCIe bus, negotiating only a single PCIe lane – and that thing is PCIe v1.1 even, which means a 250MiB/s limit in that crippled mode. The second one syncs properly, but has a more pressing issue: whenever there are sharp environmental temperature changes (opening the window for 5 minutes when it's cool outside is enough), the controller randomly starts dropping drives from the array. It took me a LONG time to figure that out, as you can probably imagine. Must be some bad solder joints on the board or something, but I couldn't really identify any.

Plus, capacity is running out again. Now, the latest 3ware firmware would enable me to upgrade this to at least 8 × 6TB, but with 4K video coming up and with my desire to build something very long-lasting, I decided to retire “Helios”. Ah, yes. The name…

Consider me childish here, but naming is something very important to me when it comes to machines and disks or arrays. ;) I had decided to give each controller's array its own name; for disk upgrades, it simply gets a new number. So there was the IDE one, "Polaris". Then "Polaris 2", then "Helios" and "Helios 2".

The next one shall be called “Taranis”, named after an iconic vessel a player could fly in the game [EVE Online], and its own namesake, an ancient Celtic [god of thunder].

Supposedly, a famous Taranis pilot once said this:

“The taranis is a ship for angry men or people who prefer to deal in absolutes. None of that cissy boy, ‘we danced around a bit, shot some ammo then ran away LOL’, or, ‘I couldn’t break his tank so I left’, crap. It goes like this:

You fly Taranis. A fight starts. Someone dies.”

I flew on the wing of a Taranis pilot only a single time. A lot of people died that night, including our entire wing! ;)

In any case, I wanted to 1-up this a bit. From certain enterprise storage solutions I of course knew the concept of hot-swapping and, more importantly, of error-reporting LEDs on the front of a storage enclosure. Since that's extremely useful, I wanted both for my new array, in a DIY way. I also wanted to get rid of the Antec HDCS, which had served me for 12 years now, and ultimately also semi-retire my case, after realizing it was just too cramped for this. A case that had served me for 17 years, 24/7.

Holy shit. That’s a long time!

So I had to come up with a good solution. The first part was: I needed hot-swap bays that could do error reporting in a way supported by at least some RAID controllers. I found only ONE aftermarket bay that would fully satisfy my requirements. The controller could come later, I would just pick it from a pool of controllers supporting the error LEDs of the cages.

It was the Chieftec SST-2131SAS ([link 1], [link 2]), the oldest of Chieftec's SAS/SATA bays. It had to be the old one, because the newer TLB and CBP series no longer have any hard disk error reporting capability built in, for whatever reason. On top of that, the older SST series shows much less plastic – just steel and what I think is magnesium alloy. Feels awesome:

So there is no fancy digital I²C bus for error reporting on the bays, just some plain LED connectors that require the whole system to have a common electrical ground to close the circuit, as we only get cathode pins. I got myself four such bays, which makes for a total of 12 possible drives. As you may already be guessing, I'm going for more than just twice the capacity this time.

For a fast, well-maintainable controller, I went for the Areca [ARC-1883ix-12], which was released just at the end of 2014. It supports both I²C as well as the old “just an error LED” solution my bays have, pretty nice!

Areca (and I can confirm this first-hand) is very well known for their excellent support, which means a lot of points have to go to them for that. Sure, the Taiwanese Areca guys don't speak perfect English, but given their technical competence, I can easily overlook that. And then they support a ton of operating systems, including XP x64, even after its [supposed] demise (the system shall run with a mirror of my current XP x64 setup at first, and either some Linux or FreeBSD UNIX later). This thing comes with a dual-core ROC (RAID-on-Chip) running at 1.2GHz, +20% compared to its predecessor. Plus, you get 2GiB of cache, which is Reg. ECC DDR-III/1866. Let me just show you a few pictures before going into detail:

So there are several things to notice here:

  1. It’s got an always-full-power fan and a big cooler, so it’s not going to run cool. Like, ever.
  2. It requires PCIe power! Why? Because all non-PEG devices sucking more than 35W have to, by PCIe specification. This one eats up to 37.2W (PEG meaning the "PCI Express Graphics" device class; graphics cards get 75W from the slot itself).
  3. It has Ethernet. Why? Because you need no management software. The management software runs completely *ON* the card itself!

The really interesting part of course is the Ethernet plug. In essence, the card runs a complete embedded operating system, including a web server to enable the administrator to manage it in an out-of-band way.

That means that a.) it can be managed on all operating systems even without a driver, and b.) it can even be managed when the host operating system has crashed fatally, or while the machine sits in the system BIOS or in DOS. Awesome!
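Since the management interface is just a web server running on the card, anything that can open a TCP connection can check whether it's alive – no vendor tools needed. A minimal sketch (the IP address below is a made-up placeholder; the real one is whatever you configured in the card's setup or got via DHCP):

```python
import socket

def reachable(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the card's out-of-band web UI (placeholder address):
print("management web UI up:", reachable("192.168.1.100", 80))
```

Handy for a monitoring cron job: if the card's web UI stops answering, something is very wrong well before the host OS notices.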

Ok, but then, there is heat. The system mockup build I'm going to show you further below was still built with the "let's plug it in the top PCIe x4 slot" idea in mind. That would have included my EVGA GeForce GTX580 3GB Classified Ultra SLI system still being there, meaning that the controller would have to sit right above an extremely hot GPU.

By now, I've abandoned this idea for a thermally more viable solution, replacing the SLI setup with a GeForce GTX Titan Black I got for an acceptable price. In the former setup, the controller's many thermal probes reported temperatures reaching 90°C during testing – and that's without the GPUs even doing much, so yeah.

But before we get to the mockup system build, there is one more thing, and that's the write cache backup for the RAID controller in case of power failures. Typically, lithium-ion batteries are used for that, but I'm already a bit fed up with my 3ware batteries having gone belly-up every 2 years, so I wanted to ditch that. There are such battery backup units ("BBUs") for the Areca, but it may also be combined with a so-called flash backup module ("FBM"). Typically, a BBU would keep the DRAM and its write cache alive on the controller during power outages for maybe 24-48 hours, waiting for the main AC power to return. Then, the controller would flush the cached data to the disks to restore a consistent state.

An FBM does it differently: it uses capacitors instead, plus a small on-board SSD. It keeps the memory alive for just seconds – just long enough to copy the data off the DRAM and onto its local SSD. Then it powers off entirely. The data gets fetched back after any arbitrary amount of downtime upon power-up of the system, and is flushed out to the RAID disks. The hope here is that the supercapacitors used by such modules can survive for much longer than the Li-ion batteries.

There is one additional issue though: Capacity (both in terms of electrical capacitance and SSD capacity) is limited by price and physical dimensions. So the FBM can only cover 2GiB of cache, but not the larger sizes of 4GiB or 8GiB.

That's where Areca support came into play, readily helping with any pre-purchase question. I talked to a guy there and described my workload profile to him, which boils down to highly sequential I/O with relatively few parallel streams (~40% read + ~60% write), and very little random R/W. He told me that based on that use case, more cache doesn't make sense, as it'd be useful only for highly random I/O profiles with a very high workload and high parallelism – think busy web or mail servers. For me, 4GiB or the maximum of 8GiB of cache wouldn't do more than what the stock 2GiB does.

As such, I forgot about the cache upgrade idea and went with the flash backup module instead of a conventional BBU. That FBM is called the ARC-1883-CAP:

So, let’s put all we have for now together, and look at some build pictures:

Let me tell you one thing: yes, the Lian Li PC-A79B is nice, because it's so manageable. Even the floors in the HDD cages can be removed, so that any HDD bay can fit, with no metal noses in the way in the wrong places. It's deep, long and generally reasonably spacious.

But – there is always a but – when you're coming from an ancient steel monster like I did, the aluminium just feels like thin paper or maybe tin foil. The EYE-2020 could carry the weight of a whole man standing on top of it. With an aluminium tower, you'll have to be careful not to bend anything when just pulling out the mainboard tray. The HDD cage feels as if you could very easily rip it out entirely with just one hand.

Aluminium is really soft and weak for a case material, so that’s a big minus. But I can have a ton of drives, a much better cooling concept and a much, much, MUCH cleaner setup, hiding a lot of cables from the viewer and leaving room for air to move around. Because that part was already quite terrible in my old EYE.

Please note that the above pictures do not show the actual system as it's supposed to look in the end. The RAID controller has already moved down one slot, away from the 4 PCIe lanes coming from the ICH10R ("southbridge"), which in turn is connected to the IOH ("northbridge") only via a 2GiB/s DMI v1 bus. So it went onto the PCIe/PEG x16 slot which is connected to the X58 chipset's IOH directly. This should take care of any potential bandwidth problems, given that the ICH10R also has to route all my USB 2.0 ports, the LAN ports, all Intel SATA ports including my system SSD and the BD drives, one Marvell eSATA controller and one Marvell SAS controller to the IOH – and with it ultimately to the CPU & RAM – all via a bus that might've become a bit overcrowded when using a lot of those subsystems at once.
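To put rough numbers on that DMI squeeze – the per-device peaks below are my own ballpark assumptions, not measurements, but they show why moving the controller off the southbridge makes sense:

```python
DMI_LIMIT_MIB_S = 2048  # ~2GiB/s between ICH10R and IOH (DMI v1)

# Ballpark worst-case peak demand of everything hanging off the ICH10R:
devices = {
    "Areca in ICH-attached x4 slot": 1000,  # ~PCIe 1.1 x4 practical ceiling
    "Intel SATA: system SSD": 270,
    "Intel SATA: BD drives": 55,
    "2x Gigabit LAN": 250,
    "USB 2.0 ports": 60,
    "Marvell SAS controller": 300,
    "Marvell eSATA": 150,
}

total = sum(devices.values())
print(f"worst case: {total} MiB/s vs. {DMI_LIMIT_MIB_S} MiB/s DMI limit")

# After moving the RAID controller to a PEG slot wired directly to the IOH,
# the biggest consumer leaves the DMI link entirely:
total_after = total - devices["Areca in ICH-attached x4 slot"]
print(f"after the move: {total_after} MiB/s")
```

An unlikely everything-at-once scenario, sure, but it's exactly the kind of corner case a 12-drive rebuild plus normal desktop use could hit.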

Also, this tiny Intel cooler isn’t gonna stay there, it just came “for free” with the second ASUS P6T Deluxe I bought, together with a Core i7 930. Well, as a matter of fact, that board… umm… let’s just say it had a little accident and had to be replaced *again*, but that’s a story for the next episode. ;) A Noctua NH-D15 monster and the free S1366 mounting kit that Noctua sends you if you need one, plus a proper power supply all have already arrived, so there might be a new post soon enough, with even more Noctuafication also being on the way! Well, as soon as I get out of my chair to actually get something done at least. ;)

And for those asking the obvious question – "what drives are you gonna buy for this?" – the answer (or at least the current plan) is either the 6TB Seagate Enterprise Capacity 3.5 in its 4Kn version, the [ST6000NM0014], or the 6TB Hitachi Ultrastar 7K6000, also in its 4Kn version, that'd be the [HUS726060AL4210]. Given that I want drives with a read error rate of <1 error in 10^15 bits read instead of <1 in 10^14, as it is for consumer drives, those would be my primary drives of choice. Seagate's cheap [SMR] (shingled magnetic recording) disks are completely unacceptable for me anyway, and from what I've heard so far, I can't really trust Hitachi's helium technology to be reliable either, so it all boils down to 6TB enterprise-class drives with conventional air filling for now. That's if there aren't any dramatic changes in the next few months, of course.
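For completeness, here's what those 12 drives would yield in RAID-6, accounting for the decimal-TB vs. binary-TiB gap – the same math behind the ~10.91TiB figure of the current array:

```python
def raid6_usable_tib(drives, drive_tb):
    """Usable RAID-6 capacity: (n - 2) data drives. Vendors sell decimal
    terabytes (10^12 bytes), while the OS reports binary tebibytes (2^40)."""
    assert drives >= 4, "RAID-6 needs at least 4 drives"
    data_bytes = (drives - 2) * drive_tb * 1000**4
    return data_bytes / 1024**4

print(f"Helios 2 (8 x 2TB):  {raid6_usable_tib(8, 2):.2f} TiB")
print(f"Taranis (12 x 6TB): {raid6_usable_tib(12, 6):.2f} TiB")
```

Roughly 54.6TiB usable – comfortably more than the "at least double capacity" rule demands.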

Those disks are all non-encrypting drives by the way, as encryption will likely be handled by the Areca controller's own AES-256 ASIC and/or TrueCrypt or VeraCrypt.

Ah, I almost forgot, I'm not even done here yet. As I may end up with a low-air-pressure system, with less air intake than exhaust, potentially sucking dust in everywhere, I'm going to filter or block dust wherever I possibly can. And the one big minus of the Chieftec bays is that they have no dust filters. The machine sits in an environment with quite a lot of dust, so every hole has to be filtered or blocked, especially those that air gets sucked through directly, like the HDD bays.

For that I got myself a large 1×1 meter stainless steel filter mesh roll off eBay. This filter has a tiny 0.2mm mesh aperture and 0.12mm wire diameter, so it's very, very fine. I think it was originally meant to filter water rather than air, but that doesn't mean it can't do the job. With that, I could get those bays properly modified – I don't want them to eventually become dust containers, after all.
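How restrictive is such a mesh? For a plain square weave, the open-area ratio follows directly from aperture and wire diameter – a quick sketch using the numbers above:

```python
def open_area_fraction(aperture_mm, wire_mm):
    """Open-area ratio of a square-weave mesh: (a / (a + d))**2,
    where a is the aperture width and d the wire diameter."""
    return (aperture_mm / (aperture_mm + wire_mm)) ** 2

frac = open_area_fraction(0.2, 0.12)
print(f"open area: {frac:.1%}")
```

So only about 39% of the filter surface actually passes air – fine enough to catch dust, but a real airflow restriction to keep in mind for the fan setup.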

See here:

Steel filter with 0.2mm mesh aperture, coins for size comparison (10 Austrian schillings and 1 Euro).

I went for steel to have something easy enough to work with, yet still stable. Now, it took me an entire week to get this done properly, and that’s because it’s some really nasty work. First, let’s look at one of the trays that need filtering, so you can see why it’s troublesome:

So as you can see, I had to cut out many tiny pieces that would then be glued into the tray fronts from the inside, for function as well as neat looks. This took more than ten man-hours for all 4 bays (12 trays), believe it or not. This is what it looks like:

Now that still leaves the other hexagonal holes in the bay frame that air may get sucked through and into the bay's inside. Naturally, we'll have to handle them as well:

And here is our final product, I gotta say, it looks reaaal nice! And all you’d have to do every now and then is to go over the front with your vacuum cleaner, and you’re done:

A completed SST-2131SAS, fully filtered by pure steel.

So yeah, that’s it for now, more to follow, including the new power supply, more dust filtering and blocking measures, all bays installed in the tower and so on and so forth…

Edit: [Part 2 is now ready]!