It’s not like a lot of people are actively commenting on this weblog, but there are at least two posts which do have quite a few replies (well, by my standards): the one about using [48GB of RAM on an X58 chipset mainboard], and the one about [SSD TRIM, exFAT, EXT3/4, ASPI and UDF 2.5 on Windows XP / XP x64]. To let users interact and discuss in a better fashion, I had enabled nested comments. However, the nesting took too much horizontal space away, limiting the usable depth of nesting levels. The last (10th) comment would end up only ~2cm wide and extremely hard to read.

Finally, I thought I should really fix that. I found the corresponding CSS code for nested comments in my theme’s subfolder, in the file wp-content/themes/<theme name>/style.css. By inspecting my website with the [Vivaldi] web browser (based on Chromium), I found that the CSS class ul.children was likely to blame, as the comments are actually a combination of ordered and unordered HTML lists, see here:

ul.children { padding-left: 20px; }

On top of that, <ul> gets a wide margin-left set by default as well, worsening the situation. This resulted in child comments indenting by something like 40 pixels. For nothing. So I fixed that:

ul.children {
        padding-left: 0px;
        margin-left: 5px;
}

That way it looks much, much more compact and much nicer. This gave me enough space to make the nesting levels even deeper without sacrificing readability (like it was before at 10 levels), so I decided to go for 16 levels. Better than nothing.

The WordPress guys however – in their infinite wisdom – limited the depth to 10 levels, so I was already at the maximum?! Back to inspecting with Vivaldi – the comment settings page this time. That way I found out that the limit is set in wp-admin/options-discussion.php. This is the corresponding code:

<?php
/**
 * Filter the maximum depth of threaded/nested comments.
 *
 * @since 2.7.0.
 *
 * @param int $max_depth The maximum depth of threaded comments. Default 10.
 */
$maxdeep = (int) apply_filters( 'thread_comments_depth_max', 10 );

$thread_comments_depth = '</label><label for="thread_comments_depth"><select name="thread_comments_depth" id="thread_comments_depth">';
for ( $i = 2; $i <= $maxdeep; $i++ ) {
    $thread_comments_depth .= "<option value='" . esc_attr($i) . "'";
    if ( get_option('thread_comments_depth') == $i ) $thread_comments_depth .= " selected='selected'";
    $thread_comments_depth .= ">$i</option>";
}
$thread_comments_depth .= '</select>';

Yeah, ok. So I just changed the $maxdeep assignment to this:

$maxdeep = (int) apply_filters( 'thread_comments_depth_max', 16 );

And with that we’re done, as everything seems to be working properly:

Nesting more than 10 WordPress comments without relying on some additional CPU-hungry plugin.

Of course, I now need to keep track of my changes, because updating either WordPress itself or my theme will revert them. For the depth limit specifically, hooking the thread_comments_depth_max filter from the theme’s functions.php (or a small plugin) would at least survive WordPress core updates. But since I haven’t modified my WordPress installation that much so far, I can still live with patching everything back in manually after an upgrade.
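One way to make the manual re-patching less error-prone is to record the edits as a patch file once and re-apply it after each update. A minimal sketch of that workflow with made-up stand-in files (all names and contents below are placeholders, not the real WordPress files – the real targets would be style.css and options-discussion.php):

```shell
# simulate: keep a pristine copy, record the tweak as a patch, and
# re-apply it after an "update" reverts the file (placeholder names)
cd "$(mktemp -d)"
printf 'max depth: 10\n' > options.php.orig        # pristine copy
printf 'max depth: 16\n' > options.php             # my tweaked version
diff -u options.php.orig options.php > my-tweaks.patch || true  # diff exits 1 on changes
cp options.php.orig options.php                    # an update reverts the file
patch -s options.php < my-tweaks.patch             # re-apply the tweak
cat options.php                                    # → max depth: 16
```

The same diff/patch pair works for the real files; keeping a *.orig copy of each patched file is all the bookkeeping required.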

Update: I just noticed that sometimes very long “words” (like HTTP or FTP links without any spaces in them) would overflow the boundaries of the comment divs they were sitting in. So, while I was restoring my modifications after a theme update, I fixed the word wrapping as well. The relevant CSS sits in wp-content/themes/<theme name>/style.css again. I looked for the following part…

.comment-body p { line-height: 1.5em; }

…and change it to this:

.comment-body p {
        line-height: 1.5em;
        word-break: break-word;
}

Now even very long words like links will be wrapped properly! With word-break: break-word; only words without spaces will be broken where necessary, so normal sentences with whitespaces for delimiters will still be broken at the spaces, like it should be!

Just yesterday I showed you how to modify and compile the [HakuNeko] Manga Ripper so it can work on FreeBSD 10 UNIX, [see here] for some background info. I also mentioned that I couldn’t get it to build on CentOS 6 Linux, something I chose to investigate today. After flying blind for a while trying to fix include paths and other things, I finally got to the core of the problem, which lies in HakuNeko’s src/CurlRequest.cpp, where the code uses CURLOPT_ACCEPT_ENCODING from cURL’s typecheck-gcc.h. This is critical stuff, as [cURL] is the software library needed for actually connecting to websites and downloading the image files of Manga/Comics.

It turns out that CURLOPT_ACCEPT_ENCODING wasn’t always called that. It only got that name with cURL version 7.21.6, when the former CURLOPT_ENCODING was renamed, as you can read [here]. And well, CentOS 6 ships with cURL 7.19.7…
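Whether a given box needs this edit at all can be checked against the installed cURL headers first. Here’s a sketch of the whole fix as a script; the header path /usr/include/curl/ and the vanilla 1.3.12 source layout are assumptions on my part:

```shell
# if the installed headers already know CURLOPT_ACCEPT_ENCODING (cURL >= 7.21.6),
# nothing needs to change; otherwise rename the live call, leaving the
# commented-out line in SetCompression() untouched (run from the source tree):
if grep -rqs CURLOPT_ACCEPT_ENCODING /usr/include/curl/; then
  echo "cURL headers are new enough, no edit needed"
else
  echo "old cURL, renaming the option"
  if [ -f src/CurlRequest.cpp ]; then
    sed -i.bak '\|^[[:space:]]*//|!s/CURLOPT_ACCEPT_ENCODING/CURLOPT_ENCODING/' \
        src/CurlRequest.cpp
  fi
fi
```

The address \|^[[:space:]]*//|! restricts the substitution to non-comment lines, so the commented-out curl_easy_setopt() variant in SetCompression() keeps its original name.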

When running $ ./configure && make from the unpacked HakuNeko source tree without fixing anything, you’ll run into this problem:

g++ -c -Wall -O2 -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread -o obj/CurlRequest.cpp.o src/CurlRequest.cpp
src/CurlRequest.cpp: In member function ‘void CurlRequest::SetCompression(wxString)’:
src/CurlRequest.cpp:122: error: ‘CURLOPT_ACCEPT_ENCODING’ was not declared in this scope
make: *** [obj/CurlRequest.cpp.o] Error 1

So you’ll have to fix the call in src/CurlRequest.cpp! Look for this part:

void CurlRequest::SetCompression(wxString Compression)
{
    if(curl)
    {
        curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)Compression.mb_str(wxConvUTF8));
        //curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)memcpy(new wxByte[Compression.Len()], Compression.mb_str(wxConvUTF8).data(), Compression.Len()));
    }
}

Change CURLOPT_ACCEPT_ENCODING to CURLOPT_ENCODING in the live curl_easy_setopt() call. The rest can stay the same, as the name is all that has really changed here – it’s functionally identical as far as I can tell. So it should look like this:

void CurlRequest::SetCompression(wxString Compression)
{
    if(curl)
    {
        curl_easy_setopt(curl, CURLOPT_ENCODING, (const char*)Compression.mb_str(wxConvUTF8));
        //curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)memcpy(new wxByte[Compression.Len()], Compression.mb_str(wxConvUTF8).data(), Compression.Len()));
    }
}

Save the file, go back to the main source tree and you can do:

• $ ./configure && make
• # make install

And done! Works like a charm:

HakuNeko fetching Haiyore! Nyaruko-san on CentOS 6.7 Linux!

And now, for your convenience I fixed up the Makefile and rpm/SPECS/specfile.spec a little bit to build proper rpm packages as well. I can provide them for CentOS 6.x Linux in both 32-bit as well as 64-bit x86 flavors:

You need to unzip these first, because I was too lazy to allow the rpm file type in my blogging software.

The naked rpms have also been submitted to the HakuNeko developers as a comment to their [More Linux Packages] support ticket, which you’re supposed to use for that purpose, so you can get them from there as well. Not sure whether the developers will add the files to the project’s official downloads.

This build of HakuNeko has been linked against the wxWidgets 2.8.12 GUI libraries, which come from the official CentOS 6.7 package repositories. So you’ll need to install wxGTK to be able to use the white kitty:

• # yum install wxGTK

After that you can install the .rpm package of your choice. For a 64-bit system for instance, enter the folder where the hakuneko_1.3.12_el6_x86_64.rpm file is, run # yum localinstall ./hakuneko_1.3.12_el6_x86_64.rpm and confirm the installation.

Now it’s time to have fun using HakuNeko on your Enterprise Linux system! Totally what Red Hat intended you to use it for!

Since I’ve started using FreeBSD as a Linux and Windows replacement, I’ve naturally always been looking at porting my “known good” software over to the UNIX OS, or at replacing it with something that gets the job done without getting on my nerves too much in the process. For most parts other than TrueCrypt, that was quite achievable, even though I had to endure varying degrees of pain getting there. Now, my favorite Manga / Comic ripper on Windows, [HakuNeko], was the next piece of software on the list. It’s basically just a more advanced Manga website parser and downloader built on stuff like [cURL], [OpenSSL] and the [wxWidgets] GUI libraries.

I didn’t even know this until recently (shame on me for never looking closely), but HakuNeko is actually free software licensed under the MIT license. Unfortunately, the source code and build system are quite Linux- and Windows-centric, and neither packages nor ports of it exist for FreeBSD UNIX. Actually, the code doesn’t even build on my CentOS 6.7 Linux right now (I have yet to figure out the exact problem), but I managed to fix it up so it can compile and work on FreeBSD! And here’s how, step by step:

1.) Prerequisites

Please note that from here on, terminal commands are shown in this form: $ command or # command. Commands starting with a $ are to be executed as a regular user, while those starting with a # have to be executed as the superuser root.

Ok, this has been done on FreeBSD 10.2 x86_32 using HakuNeko 1.3.12, both current at the time of writing. I guess it might work on older and future releases of FreeBSD with different releases of HakuNeko as well, but hey, who knows?! That having been said, you’ll need the following software on top of FreeBSD for the build system to work (I may have missed something here; if so, just install the missing stuff as shown below):

• cURL
• GNU sed
• GNU find
• bash
• OpenSSL
• wxWidgets 2.8.x

Stuff that’s not on your machine can be fetched and installed as root from the official package repository, e.g.:

• # pkg install gsed findutils bash wx28-gtk2 wx28-gtk2-common wx28-gtk2-contrib wx28-gtk2-contrib-common

Of course you’ll need the HakuNeko source code as well. You can get it from official sources (see the link in first paragraph) or download it directly from here in the version that I’ve used successfully. If you take my version, you need 7zip for FreeBSD as well: # pkg install p7zip.

Unpack it:

• $ 7z x hakuneko_1.3.12_src.7z (my version)
• $ tar -xzvf hakuneko_1.3.12_src.tar.gz (official version)

The contents of my archive are vanilla as well, however, so you’ll still need to do all the modifications yourself.

2.) Replace the shebang lines in all scripts which require it

Enter the unpacked source directory of HakuNeko and open the following scripts in your favorite text editor, then replace the leading shebang lines #!/bin/bash with #!/usr/local/bin/bash:

• ./configure
• ./config_clang.sh
• ./config_default.sh
• ./res/parsers/kissanime.sh

It’s always the first line in each of those scripts, see config_clang.sh for example:

#!/bin/bash

# import setings from config-default
. ./config_default.sh

# overwrite settings from config-default

CC="clang++"
LD="clang++"

This would have to turn into the following (I also fixed that comment typo while I was at it):

#!/usr/local/bin/bash

# import settings from config-default
. ./config_default.sh

# overwrite settings from config-default

CC="clang++"
LD="clang++"
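If you’d rather not open four files in an editor, the same edit can be scripted. Here’s a sketch of what the substitution does, shown on a bare shebang line (applied in place, that would be sed -i.bak '…' configure and so on for the other three files; the -i.bak form works with both GNU and BSD sed):

```shell
# rewrite only line 1, and only if it is exactly the bash shebang:
old='#!/bin/bash'
new="$(printf '%s\n' "$old" | sed '1s|^#!/bin/bash$|#!/usr/local/bin/bash|')"
printf '%s\n' "$new"    # → #!/usr/local/bin/bash
```

Anchoring the pattern with ^ and $ makes sure nothing else in the scripts gets touched.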

3.) Replace all sed invocations with gsed invocations in all scripts which call sed

This is needed because FreeBSD’s sed and Linux’s GNU sed aren’t exactly compatible in how they’re invoked – different options and all.

In the text editor vi, the expression :%s/sed /gsed /g can do this globally over an entire file (mind the whitespace, don’t omit it!). Or just use a convenient graphical text editor like gedit or leafpad to search and replace all occurrences. The following files need sed replaced with gsed:

• ./configure
• ./res/parsers/kissanime.sh
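The same substitution can also be done non-interactively; here’s a quick sketch of what the pattern does to a typical line (the line content is made up for the demo):

```shell
# every standalone `sed ` invocation becomes `gsed `:
line='VERSION=$(sed -n "1p" version.txt)'
out="$(printf '%s\n' "$line" | sed 's/sed /gsed /g')"
printf '%s\n' "$out"    # → VERSION=$(gsed -n "1p" version.txt)
```

Note the pattern also matches words that merely end in “sed ”, so it pays to skim the result afterwards.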

4.) Replace all find invocations with gfind invocations in ./configure

Same situation as above with GNU find, like :%s/find /gfind /g or so, but only in one file:

• ./configure

5.) Fix the make check

This is rather cosmetic in nature, as $ ./configure won’t die if this test fails, but you may still wish to fix it. Just replace the string make --version with gmake --version (there is only one occurrence) in:

• ./configure

6.) Fix the DIST variable’s content

I don’t think that this is really necessary either, but while we’re at it… Change the DIST=linux default to DIST=FreeBSD in:

• ./configure

Again, only one occurrence.

7.) Run ./configure to create the Makefile

Enough with that, let’s run the first part of the build tools:

• $ ./configure --config-clang

Notice the --config-clang option? We could use GCC as well, but since clang is FreeBSD’s new default platform compiler, you should stick with it whenever feasible. It works for HakuNeko, so we’re gonna use the default compiler, which also means you don’t need to install the entire GCC just for this.

There will be error messages looking quite intimidating, like the basic linker test failing, but you can safely ignore those. It has something to do with different function name prefixes in FreeBSD’s libc (or whatever, I don’t really get it), but it doesn’t matter.

However, there is one detail that the script will get wrong, and that’s a part of our include path. So let’s handle that:

8.) Fix the includes in the CFLAGS in the Makefile

Find the line containing the string

CFLAGS = -c -Wall -O2 -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread

or similar in the newly created ./Makefile. After the option -O2, add -I/usr/local/include, so it looks like this:

CFLAGS = -c -Wall -O2 -I/usr/local/include -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread

That’s it for the Makefile.
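This too can be scripted; a sketch on a shortened stand-in of the flags line (the real line carries more -I and -D options, but the splice point after -O2 is the same):

```shell
# splice -I/usr/local/include in right after -O2:
cflags='-c -Wall -O2 -I/usr/include/wx-2.8 -pthread'
cflags="$(printf '%s\n' "$cflags" | sed 's|-O2 |-O2 -I/usr/local/include |')"
printf '%s\n' "$cflags"    # → -c -Wall -O2 -I/usr/local/include -I/usr/include/wx-2.8 -pthread
```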

9.) Fix the Linux-specific conditionals across the C++ source code

And now the real work starts, because we need to fix up portions of the C++ code itself as well. While the code would build and run fine on FreeBSD, those relevant parts are hidden behind some C++ preprocessor macros/conditionals looking for Linux instead. Thus, important parts of the code can’t even compile on FreeBSD, because the code only knows Linux and Windows. Fixing that isn’t extremely hard though, just a bit of copy, paste and/or modify. First of all, the following files need to be fixed:

• ./src/MangaConnector.cpp
• ./src/Logger.cpp

Now, what you should look for are all conditional blocks which look like #ifdef __LINUX__. Each will end with an #endif line. Naturally, there are also #ifdef __WINDOWS__ blocks, but those don’t concern us, as we’re going to use the “Linux-specific” code, if you can call it that. Let me give you an example right out of MangaConnector.cpp, starting at line #20:

#ifdef __LINUX__
wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
#endif

Now, given that the Linux code builds just fine on FreeBSD, the easiest and most elegant way is to alter all those #ifdef conditionals into inclusive #if defined ORs, so that they trigger for both Linux and FreeBSD. If you do this, the block from above would need to change to this:

#if defined __LINUX__ || defined __FreeBSD__
wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
#endif

Should you ever want to create different code paths for Linux and FreeBSD, you can also just duplicate it. That way you could later make changes for just Linux or just FreeBSD separately:

#ifdef __LINUX__
wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
#endif
#ifdef __FreeBSD__
wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
#endif

Whichever way you choose, you’ll need to find and update every single one of those conditional blocks. There are three in Logger.cpp, three in MangaConnector.cpp, two in MangaDownloaderConfiguration.cpp and again three in MangaDownloaderMain.cpp. Some are more than 10 lines long, so make sure to not make any mistakes if duplicating them.

Note that you can maybe extend compatibility even further with additional directives like __OpenBSD__ or __NetBSD__ for additional BSDs or __unix__ for a wide range of UNIX systems like AIX or HP-UX. None of which has been tested by me of course.
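If you’re unsure which of these macros a compiler predefines on a given system, you can just ask it; cc -dM -E is a standard GCC/Clang invocation. Note that the lowercase __linux__/__FreeBSD__ come from the compiler itself, while the uppercase __LINUX__ used in HakuNeko’s code is, as far as I can tell, set up by the wxWidgets build configuration:

```shell
# dump all predefined macros and filter for the OS-identifying ones;
# on FreeBSD this lists __FreeBSD__ (plus __unix__), on Linux __linux__:
defs="$(cc -dM -E - </dev/null 2>/dev/null || true)"
printf '%s\n' "$defs" | grep -E '__(linux|FreeBSD|unix)__' || true
```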

When all of that is done, it’s compile and install time:

10.) Compile and install

You can compile as a regular user, but the installation needs root by default. I’ll assume you’ll want to install HakuNeko system-wide, so, we’ll leave the installation target directories on their defaults below /usr/local/. While sitting in the unpacked source directory, run:

• $ gmake
• # gmake install

If nothing starts to crash and burn, this should compile and install the code. clang will show some warnings during compilation, but you can safely ignore those.

11.) Start up the white kitty

The installation procedure will also conveniently update your window manager, if you’re using panels/menus. Here it’s Xfce4:

HakuNeko (“White Cat”) is showing up as an “Internet” tool. Makes sense.

With the modifications done properly it should fire up just fine after initializing its Manga connectors:

HakuNeko with the awesomeness that is “Gakkou Gurashi!” being selected from the HTTP source [MangaReader].

Recently the developers have also added [DynastyScans] as a source, which provides access to partially “rather juicy” Yuri Dōjinshi (self-published amateur and sometimes semi-professional works) of well-known Manga/Anime, if you’re into that. Yuri, that is (“girls’ love”). Mind you, not all, but a lot of the stuff on DynastyScans can be considered NSFW and likely 18+, just as a word of warning:

HakuNeko fetching a Yuru Yuri Dōjinshi called “Secret Flowers” from DynastyScans, bypassing their download limits by not fetching packaged ZIPs – it works perfectly!

Together with a good comic book reader that can read both plain JPEG-filled folders and stuff like packaged .cbz files, HakuNeko makes FreeBSD a capable comic book / Manga reading system. My personal choice for a reader to accompany HakuNeko would be [QComicBook], which you can easily get on FreeBSD. There are others in the official package repository as well, though. Final result:

HakuNeko and QComicBook make a good team on FreeBSD UNIX – I like the reader even more than ComicRack on Windows.

And, at the very end, one more thing, even though you’re likely aware of this already: Just like Anime fansubs, fan-translated Manga or even Dōjinshi sit in a legal grey zone as long as the book in question hasn’t been licensed in your country. It’s being tolerated, but if it does get licensed, ownership of a fan-translated version will likely become illegal, which means you should actually buy the stuff at that point in time. Just wanted to have that said as well. Should you have trouble building HakuNeko on FreeBSD 10 UNIX (maybe because I missed something), please let me know in the comments!

When I had set XIN’s web chat up back in 2014, I thought I’d found the holy grail of free IRC web frontends, but that wasn’t quite the case. While it worked, it wasn’t overly stable, and its GUI was a pretty crappy, high-load HTML5/JavaScript affair that didn’t work in a lot of browsers. It was based on the “kind of pre-alpha” [webchat2], a project which was dropped somewhere in the middle of the development process. The biggest issue however was that when a user was idle for like 5-10 minutes, webchat2 would drop his IRC connection in the backend without telling him. So while the user kept thinking “oh, nobody is saying anything”, people might have continued to talk without him seeing it. The error became apparent only when the affected user started to write something again, which is when the “connection lost”-or-something message appeared.

webchat2 – it looks nice, but it doesn’t really work that well.

It seems that software was bad at maintaining persistent connections for extended periods of time. Back then I had tried several other alternatives, but most are based on [node.js], which my ancient Windows 2000 server (yeah yeah, I know) cannot run. I did stumble over the Python-based [qWebIRC] back then, but for some reason I had probably failed to install it properly. That piece was developed by the [QuakeNet] guys, who’re running it on their own site as well. Yesterday I decided to give it another shot, and well…

The minimalistic qWebIRC login screen.
“LunaticNet” isn’t really an IRC network though, it’s just the XIN.at IRC server by itself… I wanted it perfect as well, so I aimed at fulfilling all the dependencies, which are:

• Some IRC server (Duh! I won’t cover that part in detail here, but I’m running UnrealIRCd).
• Python 2.5.x, 2.6.x or 2.7.x (obviously, and keep in mind that it won’t work with any Python 3.x).
• zope.interface (a contract-based programming interface required by Twisted).
• Twisted (for event-driven networking, something IRC needs to push stuff happening on the IRC server to the web frontend).
• pyWin32 (to enable Python to interface with the Win32 APIs).
• simplejson (optional; preferably a version including its C extensions, provides a performance boost).
• pyOpenSSL (optional; required if you wish to connect to IRC+SSL servers and/or to host the web chat via HTTPS instead of HTTP).
• Java (optional; used for JavaScript minification at compile time. Makes the JS much smaller to save bandwidth).
• Mercurial (optional; fast versioning system, provides a qWebIRC performance boost for some reason I don’t quite get yet).
• instsrv & srvany (optional; used to create a Windows system service for qWebIRC).

Now that’s quite something, and given that I’m doing this on Windows 2000, there have to be compromises. While the latest Python 2.7.11 can work on Win2k, the installer will fail; 2.7.3 is the last which works “cleanly”. You can still install 2.7.11 on a modern Windows box and then just copy it over, but then you won’t have it registered in the OS. In any case, I decided to go with the much older Python 2.5.4, also because some of the modules listed above which include machine code were nowhere to be found for Python 2.7.x in a pre-compiled state. So, some software is brand-new (from 2016 even), and other parts not so much. I tried to use the newest possible software without having to compile any machine code myself (like the C extensions of simplejson), because that would’ve been a lot of work.
I packaged everything I picked for this into one archive for you to use. What you get are the following versions:

• qWebIRC #516de557ddc7
• Python v2.5.4
• zope.interface v3.8.0
• Twisted v12.1.0
• pyWin32 v220
• simplejson v2.1.1 with C extensions
• pyOpenSSL v0.13.12 built by egenix
• Sun Java Runtime Environment v1.6u31
• Mercurial v3.4.2

And that’s what it looks like when it’s up and running:

What qWebIRC looks like for a user logged into the XIN.at IRC server.

Now how do you install this? Simply follow these step-by-step instructions:

1. Install Python 2.5.4. Make sure python.exe is in your system’s search path. If it isn’t, add it.
2. Copy the zope\ folder from zope.interface 3.8.0 to the Lib\ subdirectory of your Python 2.5 installation, so that it looks like C:\Program Files\Python25\Lib\zope\. Make sure the user who will run qWebIRC has sufficient permissions on the folder.
3. Install Twisted 12.1.0.
4. Install pyWin32 220.
5. Install simplejson 2.1.1.
6. Install egenix’ pyOpenSSL 0.13.12.
7. Install Java 1.6u31. Make sure to disable auto-updates in the system control panel and disable the browser plugins for security reasons. Java is only needed for JavaScript code compression when compiling qWebIRC and for nothing else!
8. Install Mercurial 3.4.2.
9. Copy qWebIRC to a target directory, copy config.py.example to config.py and configure qWebIRC to your liking by editing config.py.
10. When done, open a cmd.exe shell, cd to your qWebIRC installation directory and run python .\compile.py (this will take a few seconds). To test it, run python .\run.py, which will launch qWebIRC on the default port 9090. You can terminate it cleanly by pressing CTRL+C twice in a row.
11. Optional, if you want qWebIRC as a system service: Copy instsrv.exe and srvany.exe to %WINDIR%\system32\. Then run instsrv qWebIRC %WINDIR%\system32\srvany.exe. Actual service configuration is discussed below.
12. Optional, if you want SSL: Create a certificate and a private key in PEM format using OpenSSL. If you don’t know how to do that, get OpenSSL [from here] and [read this] for a quick and simple solution. Create a subfolder SSL\ in your qWebIRC installation directory and put the certificate and key files in there. When run as a background service, the passphrase has to be removed from the key! Make sure to keep your key file safe from theft!

After that, you’ll have compiled Python byte code and compressed JavaScript code for the static part of the web frontend. If you chose to create the service stub as well, you’ll need to configure the service first, otherwise it won’t really do anything. Find the service in your registry by running regedit. It should be in HKLM\SYSTEM\CurrentControlSet\Services\, called qWebIRC. Here:

A qWebIRC service, configured to run the XIN.at chat with SSL on port 8080. My Windows 2000 Server is German, but I guess it’s still understandable.

The values are all REG_SZ / strings. Set the following three:

1. AppDirectory (the working directory, should be the installation dir of qWebIRC).
2. Application (the application to be launched by the service, so python.exe).
3. AppParameters (the parameters to be passed to Python for launching qWebIRC’s run.py. Here, I’m specifying a port to run on, as well as SSL certificate and key files to load, so qWebIRC can automatically switch to HTTPS).

Now, go to your system control panel, create a simple, restricted user to run qWebIRC as (if you don’t have a suitable one already) and make sure that user has permissions to read & execute the qWebIRC and Python 2.5 installations. For the qWebIRC\ directory the user also needs write access. Then, go to the Administrative Tools in the system control panel and configure the service qWebIRC to run as that restricted user. Start the service and you should be done.
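For reference, a passphrase-free key plus self-signed certificate can be produced with a single OpenSSL command. The CN below is a placeholder, and -nodes is what leaves the key unencrypted, so srvany can start qWebIRC without anyone typing a passphrase:

```shell
# creates cert.pem + key.pem in the current directory, valid for one year;
# replace the placeholder CN with your own hostname:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj '/CN=chat.example.org' \
    -keyout key.pem -out cert.pem 2>/dev/null
```

Both files then go into the SSL\ subfolder mentioned above.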
Of course, you can always just run a shell and launch it interactively from the command prompt as well, which is very useful for debugging, by the way. If you click on the web chat on the top right on this page, you can try it out for yourself! It may not look as fancy as webchat2, but it works a lot faster and is far more stable! Ah, you’d have to accept the self-signed certificate of course; your web browser will likely warn you about it. And that’s that. Now visitors not only have easy access to my IRC chat server, but also one that works properly and doesn’t consume a ton of resources.

Besides watching Anime like ALL the time during my Christmas / end-of-the-year vacation, I had actually decided to do some useful stuff as well. You see, the IBM PC Server 704 which hosts all my services – mail server, web server including this blog, IRC server and a lot more – isn’t just an ancient quad Pentium Pro 1MB 200MHz machine, it has also been running for almost 10 years straight in my pretty dusty flat. While that machine has some insanely powerful fans, dust is still a serious issue, given that this machine has no filters whatsoever installed. This has made me pretty uneasy ever since I peeked inside like a year ago. I’m going to spare you the details (and photos) of what it looked like inside, also to keep the downtime as short as possible. Let me just say: It looked ugly. Really, really ugly. In parts, dust had accumulated to form real carpets inside the machine, covering the tower’s floor mostly, but also the PCI cards and, to a degree, the CPU heat sinks, capacitors and memory as well. This isn’t healthy. I had wanted that gone for quite some time, mostly to prevent the hardware from overheating, especially the electrolytic capacitors on the CPU boards, which have aged considerably already anyway. Every extra °C added on top unnecessarily is one too many for capacitors this old.
Now, I don’t have pictures of the AWFUL inside, but to give you a feeling for the setup, here are some old pics (as always: click to enlarge):

Ok, these are really old… The photos are pretty bad, and the server isn’t even equipped with much yet there. But you can see how it’s divided into multiple thermal zones, the air being pushed through by high static pressure & high throughput fans.

Dust collected mainly on the floor of the box, but given the cover of the memory and second CPU riser board, I couldn’t clean that part properly with the machine still running, even when using pressurized air cans. And since the CPU riser boards and the memory riser board are the most critical parts, I decided to shut the box down entirely and do the cleaning properly.

But what good is any of this if dust gets sucked in again in the future? When I equipped the SAS HDD bays of my modern RAID-6 machine with ultra-fine steel wire dust filters ([see here]), there was still a lot of material left over. So I thought I’d use this and cover the entire front of the server with it. This is what the server looks like by default, before the modification:

IBM PC Server 704, front (another very old photo)

And this is what it looks like now:

The PC Server 704 that is “Zenit/XIN.at”, equipped with ultra-fine steel wire dust filters

I stuck those filters onto the server using double-sided adhesive power tape – the stuff you use to put mirrors on the wall without having to drill any holes. Yeah, this ain’t exactly pretty, that much is obvious. But it should do the trick. Cleaning those filtered fronts should be much less of a pain than having to deal with actual downtime just to get tons of dirt out of the machine. Also, airflow has decreased less than expected. I guess those fat (=deep), high-rpm fans in there boast some serious static pressure. Ah, and those fans are the originals, by the way. Proudly running since 1995!
Now, there will be one more downtime this vacation, I’m thinking, to create a fresh full system backup image. And then I should probably be fine letting it run for another 1-2 years straight. Or maybe 10? Who knows.

And here we have another NFC-enabled banking card for contactless payment, this time a VISA credit card issued by [Card Complete Austria]. Might I add, this is one of the few companies which still leisurely print the full credit card number on their invoices, which are then sent out via regular postal mail, with the envelope clearly showing what those letters are. Good job, Card Complete, especially your reply regarding the matter, telling me that “it’s okay, no need to change that”!

In any case, I’ve written about this whole NFC thing before in much greater detail, [see here]! That article will show you the risks involved when working with potentially exploitable NFC cards, which mostly pose a data leaking problem. But you may wish to read the comments there as well, especially [this one here]. It shows that VISA cards tend(ed?) to be extremely vulnerable, up to the point where a person walking by close enough to your wallet could actually draw money from your credit card without you noticing. I have no idea whether this security hole still exists in today’s NFC-enabled VISA cards, but I’m no longer gonna take that risk. Especially not since I don’t even need the NFC tech in that (or any other) banking card.

Once again, we’re just gonna physically destroy the induction coil that powers the NFC chip in the card and also operates as its antenna. Since I’m lazy, I didn’t do the “poor man’s x-ray” photos this time (you can see how to do that in the first NFC post), just before and after pictures. So, before:

As you can see, there are already signs of use on this card. Yeah, I’ve been lazy, running around with an NFC-enabled VISA for too long already.
Time to deal with it then: Finding a good spot for my slightly oversized drill head wasn’t so easy on this card, because the card number, signature field, magnetic strip and other parts like the hologram were in the way. But in the end I managed to drill a hole in about the right spot. Because of the heavy image editing you can’t see it, but I did slightly damage the first digit of my credit card’s number – just a bit of paint that came off – so it should still be okay. So, once again: You’re not welcome in my wallet, so bye bye, NFC!

And here’s another minor update after [part 4½] of my RAID array progress log. Since I was thinking that weekly RAID verifications would be too much for an array this size (because I thought it would take too long), I set the Areca controller to scrub and error-check my disks at an interval of four weeks. Just a shame that the thing doesn’t feature a proper scheduler with a calendar and configurable starting times for this; all you can tell it is to “check every n weeks”. In any case, the verify completed this night, running for a total of 29:07:29 (so: 29 hours) across those 12 × 6TB HGST Ultrastar disks, luckily with zero bad blocks detected. It would’ve been a bit early for unrecoverable read errors to occur anyway. This amounts to a scrub speed just shy of 550MiB/s, which isn’t blazingly fast for this array, but acceptable I think. The background process priority during this operation was set to “Low (20%)”, and there were roughly 150GiB of I/O during the disk scrubbing. Most of that I/O was concentrated in one fast Blu-ray demux, but some video encoders were also running, reading and writing small amounts of data all the time. I guess I can live with that result.

Ah yeah, I should also show you the missing benchmarks, but before that, here’s a more normal photograph of the final system (where “normal” means “not a nightshot”.
It does NOT mean “in a proper colorspace”, because the light sources were heavily mixed, so the colors suck once again!):

The “Taranis” RAID-6 system during daytime

And here are the missing benchmarks on the finalized array in a normal state. Once again, this is an Areca ARC-1883ix-12 with 12 × HGST Ultrastar 7K6000 6TB SAS disks in RAID-6 at an aligned stripe block size of 64kiB. The controller is equipped with 2GiB of FBM-backed Reg. ECC DDR-III/1866 write-back cache, and each individual drive features 128MiB of write-through cache (I have no UPS unit for this machine, which is why the drive caches themselves aren’t configured for write-back). The controller is configured to read & discard parity data to reduce seeks and is thus tuned for maximum sequential read performance. The benchmarking software was HDTune 2.55 as well as HDTune Pro 5.00:

With those modern Ultrastars instead of the old Seagate Cheetah 15k drives, the only thing that turned out worse is the seek time. Given that it’s 3.5″ 7200rpm platters vs. 2.5″ 15000rpm platters, that’s only natural. Sequential throughput is a different story though: At large enough block sizes we get more than 1GiB/s almost consistently, for both reads and writes. Again, I’d have loved to try 4k writes as well, but HDTune Pro would just crash when picking that block size, same as with the Cheetah drives. Anyhow, 4k performance is nice as well. I’d give you some AS SSD numbers, but it fails to even see the array at all.

What I’ve seen in some other reviews holds true here too though: The Ultrastars do seem to fluctuate a bit when it comes to performance. We can see that for the 64kiB reads as well as the 512kiB and 1MiB writes. On average though, raw read and write performance is absolutely stellar, just like the ATTO, HDTach and Everest/AIDA64 tests have suggested before. That IBM 1.2GHz [PowerPC 476] dual-core chip is truly a monster in comparison to what I’ve seen on older RAID-6 controllers.
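By the way, that scrub figure from the verify run is easy to sanity-check: 29:07:29 over the array’s ~54.5TiB of usable capacity does land just shy of 550MiB/s. A quick check in shell arithmetic:

```shell
# Scrub speed sanity check: verify runtime vs. usable array capacity.
secs=$(( 29*3600 + 7*60 + 29 ))    # 29:07:29 -> 104849 seconds
mib=$(( 109 * 1024 * 1024 / 2 ))   # 54.5 TiB usable = 57147392 MiB
echo "$(( mib / secs )) MiB/s"     # integer division is accurate enough here
```

This prints 545 MiB/s, matching the “just shy of 550MiB/s” estimate above.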
I’ve compared this to my old 3ware 9650SE-8LPML (AMCC [PowerPC 405CR] @ 266MHz), to an Adaptec-built ICP Vortex 5085BR (Intel [XScale IOP333] @ 800MHz), both with 8 × 7200rpm SATA disks, and even to a Hewlett Packard MSA2312fc SAN with 12 × 15000rpm SAS Cheetahs (AMD [Turion 64 MT-32] 1800MHz). All of them are simply blown out of the water in every way thinkable: Performance, manageability, and – if I were to consider the MSA2312fc as a serious contender as well (it isn’t exactly meant as a simple local block device) – stability too. I can’t even tell you how often those freaking management controllers on that thing crash and have to be rebooted via SSH… So this thing has been up for about 4 weeks now. Still looking good so far… Summer will be interesting, with some massive heat and all. We’ll see if that’ll trigger the temperature alarms of the HDD bays…

Ok, enough technology and storage stuff already, time for more Anime nonsense! Soon I will seriously need to call myself a figure collector if this keeps up. So here’s a new one, Miss Tokisaki Kurumi from the harem Anime [Date A Live]. While I was growing increasingly bored with the formulaic nature of such shows (it’s always a flock of girls swarming around a male center character and falling for him in the end, after all), they’re usually fun enough to watch now and then. The same is pretty much true for Date A Live – if it wasn’t for the psychotic, strong-willed Kurumi, who always follows her own agenda and never actually falls for our stereotypical male center character. Not even at the very end of the series. I kind of liked that, because I’d never encountered such a character in this type of Anime before – intelligent, highly manipulative, psychotic and insane, and definitely superior to the center character in almost every sense, be it willpower, intellect or raw force. A nice breeze of change.
Also, when she shows up, the series changes from sugary sweetness to a literal bloody mess, which was quite intense! In essence – she makes the series.

[1] An interpretation of Kurumi – the clock dial in her eye shows her remaining lifetime. It can be rewound though – at the price of the life force of those surrounding her, potentially all the way to their demise.

Besides all the slightly lewd subplots, the rest of the Anime is basically supernatural action with lots of nonsensical battles and destruction, where our male character – Itsuka Shidō – “disables” newly arrived “spirits” (=the girls) by sealing their highly dangerous and uncontrollable powers so that they can’t cause their accidental spatial anomalies anymore. Those could otherwise annihilate entire cities or even countries. He seals those powers by romancing/kissing them. Sometimes mid-battle. Yeah, it’s about as stupid as it sounds, at least until Kurumi hits the stage and blood starts spilling.

In any case, I’d been thinking about getting the [AlphaMax] 1/7 scale figure of her for some time, but it’s expensive and I wasn’t quite sure whether I should get her. Did she impress me enough for that? But now, months after watching the show, I was still eyeballing the figure, so yeah. Recently, I saw something called the [clear dress] version on eBay, which is a 130pcs limited edition made for the [Wonder Festival 2015] only. Not knowing about its existence prior to that, I asked whether this was the “AlphaMax preorder limited” edition I had recently learned about. Well, it wasn’t, but to my great surprise the Japanese seller – a guy who specializes in rare items – looked for and managed to find one for me after my inquiry. Here it is:

Sculpted by MOON for AlphaMax – from the outside it looks just like the regular version – for a moment I thought I was duped… but nah, it’s the exclusive limited edition alright!
Now, this limited version doesn’t feature the weird clear dress which creates that contrast between the upper and lower parts of her garments that I don’t like – it also doesn’t have the golden guns as bonus parts, which I don’t really like that much either. Instead you get a different version of the clockwork base plate, which is rather iconic for Kurumi and really makes the whole figure much more flashy! So, the normal base plate looks like this:

[2] The regular version comes with this stand.

The clockwork can also be seen appearing in the series whenever Kurumi unleashes her time manipulation powers, so this is really fitting. However, the one I got with this 200-piece limited edition – which was sold exclusively by the manufacturer AlphaMax on their own webshop and only within Japan – is this one:

The metallic/golden base plate of the AlphaMax preorder limited edition. I should’ve probably hidden the paper support better, but meh…

So as you can see, that’s quite the eye-catcher! It’s still made of cheap plastic, but the metallic finish makes it reflect light in really nice ways when illuminated. The next part are her guns, a vintage pistol and rifle:

Those guns can shoot normal bullets just fine – but when infused with Kurumi’s supernatural powers they can do a lot more damage.

Ok, time to show her off in her full glory, gothic lolita “battle dress” and all:

Her garments do look awesome alright. The whole dynamic feeling comes across really well in her pose, with the dress flowing around like that. And it’s hard to beat a red/orange+black gothic lolita dress in the first place. Let’s get a bit closer, so you can see some of the details better:

Her ribbon’s well done as well. The chains could’ve looked better with some more love for detail, but they’re still ok. And the back:

The laces across her otherwise exposed back and the black apron flowing with her implied movement all look awesome. The sculpting is perfect here, just look at the shoulder blades!
And more gunnery:

And now to her face, which is important because of the clock dial eye:

One interesting part is the actual scale; at 1/7, Kurumi should be quite a bit larger than the [1/8 Ho-kago Tea Time figures] made by Alter. The dimensions are almost the same however, with Kurumi’s features being only insignificantly larger. Given that Wave’s/Dreamtech’s 1/8 Elfen Lied scale figures are smaller than Ho-kago Tea Time, this may indicate that Alter is slightly oversizing their figures? Or is it just that nobody stays 100% true to the scale factor to begin with?! Ah, in any case, time to move in:

She’s living here now. Just give her a wide berth if you’re feeling unlucky today. ;)

Besides the [supersized Kirino], which simply can’t be beaten in how she dominates the whole cabinet due to sheer size (and by being awesome), Kurumi is definitely the most flashy figure I own now. The gothloli dress and her very dynamic pose alone make sure of that, but the limited edition’s golden base plate really lifts the whole figure to a different level. An eye-catcher and a beauty to look at! I’d almost feel sad for the other girls from the show for not even considering them for a second, but sorry: you simply can’t even hold a candle to Kurumi!

Let’s hope I’ll stop buying more for a while. There are two more scale figures and at least one Nendoroid on my wishlist, but I’ll have to think about the space issues here. That cabinet may be full height, but it’s starting to get a bit crowded in there… At least the Nendoroids don’t need to stand in the cabinet. The two I own for now – I haven’t posted them here yet, [Miyauchi Renge] from Non Non Biyori and [Nee-san] from Plastic Nee-san – are happily standing around elsewhere. Ah well, we’ll see, we’ll see… Kurumi says bye bye to you!

[1] The original creator of this image is unknown to me. If you are the creator and wish to object to the publication of this image here, please post in the comment section!
[2] Original image is © 2014 橘公司・つなこ/KADOKAWA 富士見書房刊/「デート・ア・ライブⅡ」製作委員会, published by AlphaMax.

And another continuation after the “final” [part 4]: Unbelievable but true, I actually took the time to re-write and polish the previously broken web reporting for my new RAID-6 array. Originally it was relying on the 3ware command line tool tw_cli.exe and SmartMonTools’ smartctl.exe (both tools also exist on Linux/UNIX by the way) in conjunction with Windows Batch and some Perl. Rest assured, it was ugly. But in all my glorious stupidity I decided to write the new version in Windows Batch again, this time taking the help of a different set of UNIX-style tools, namely grep.exe, sed.exe, tr.exe and cut.exe. I have all of this stuff on my Windows box anyway, partly from [GnuWin32] and partly out of [CygWin x64], so yeah. Ah yes, and Areca’s own cli.exe of course. Why wouldn’t I use a proper scripting language then? Pffh, I guess I just have some masochistic tendencies…

In any case, results first: you can see the reporting website [by clicking here]. It’ll show you the RAID controller status, some information about the RAID volume and some S.M.A.R.T. info about the individual disks. It’s not real-time, as that would consume too many resources, so it’s only updated on a daily basis. Quite nice that SmartMonTools can use Areca’s API via things like e.g. smartctl -x -d areca,n/2 /dev/arcmsr0 to get info from disk n (replace with a number)! Here’s a screenshot as well:

Taranis RAID-6 web report

And the script itself? Well, get ready for some extremely ugly fuckshit, here it comes (source code hereby released under the GNU GPLv3):

@ECHO OFF

SETLOCAL ENABLEDELAYEDEXPANSION
SET totalreadvolume=0
SET totalwritevolume=0
SET totalreadTiB=0
SET totalwriteTiB=0
SET volindex=0
FOR %%I IN (1,2) DO SET "volume[%%I]=0"

FOR /L %%I IN (1 1 12) DO (
 FOR /F "usebackq" %%V IN (`smartctl -x -d areca^,%%I/2 /dev/arcmsr0 ^| grep -e "read:" -e "write:" ^| tr -s " " ^| cut -d " " -f7 ^| cut -d "," -f1`) DO (
 SET /A volindex += 1
 SET "volume[!volindex!]=%%V"
 )
 SET /A totalreadvolume += !volume[1]!
 SET /A totalwritevolume += !volume[2]!
 SET volindex=0
)
SET /A totalreadTiB = "%totalreadvolume% / 1024"
SET /A totalwriteTiB = "%totalwritevolume% / 1024"
SET /A parityreadTiB = "%totalreadTiB% / 6"
SET /A paritywriteTiB = "%totalwriteTiB% / 6"
SET /A totaluserreadTiB = "%totalreadTiB% - %parityreadTiB%"
SET /A totaluserwriteTiB = "%totalwriteTiB% - %paritywriteTiB%"


ECHO ^<b^>Areca ARC-1883ix-12 RAID controller status:^<br^> > "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

ECHO ==========================================^</b^>^<br^>^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
"C:\Program Files (x86)\system\ArcCLI\cli.exe" sys info | grep -e "Main Processor" -e CPU -e "System Memory" -e "Controller Name" | sed -e "s/1200MHz/PowerPC 476 dual-core, 1200MHz/g" -e "s/SCache\sSize\s\s/L2 Cache Size/g" -e "s/\s/\&nbsp;/g" -e "s/^/\&nbsp;\&nbsp;/g" -e "s/$/<br>/g" >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

"C:\Program Files (x86)\system\ArcCLI\cli.exe" hw info | grep -e "CPU Temperature" -e "Controller Temp" -e "CPU Fan" -e "12V" -e "5V" -e "3\.3V" -e "IO Voltage" -e "DDR3" -e "CPU VCore" -e "Ethernet" -e "Battery Status" -e "Chip Temp" | sed -e "s/\sC$/\&deg;C/g" -e "s/\sV$/V/g" -e "s/\sRPM$/rpm/g" -e "s/\x25/F/g" -e "s/^\s\s//g" -e "s/\s:\s/ : /g" -e "s/Chip\sTemp\s\s\s\s\s\s\s\s\s/SAS Expander Temp\./g" -e "s/\s/\&nbsp;/g" -e "s/^/\&nbsp;\&nbsp;/g" -e "s/$/<br>/g" >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

ECHO ^<br^>^<br^>^<br^>^<br^>^<b^>RAID ^&amp; volume set status:^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ========================^</b^>^<br^>^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

ECHO ^&nbsp;^&nbsp;RAID set:^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^&nbsp;^&nbsp;--------^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^&nbsp;^&nbsp;^&nbsp;^&nbsp;#^&nbsp;Name^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;Disks^&nbsp;TotalCap^&nbsp;^&nbsp;^&nbsp;FreeCap^&nbsp;MinDiskCap^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;State^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

"C:\Program Files (x86)\system\ArcCLI\cli.exe" rsf info | grep -e Taranis | sed -e "s/\sTaranis\sRAID-6\sC/Taranis RAID-6 CryptoArray/g" -e "s/^\s/  /g" -e "s/\s/\&nbsp;/g" -e "s/^/\&nbsp;\&nbsp;/g" -e "s/$/<br>/g" >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

ECHO ^<br^>^&nbsp;^&nbsp;Volume set:^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^&nbsp;^&nbsp;----------^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^&nbsp;^&nbsp;^&nbsp;^&nbsp;#^&nbsp;Name^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;^&nbsp;UsableCap^&nbsp;Ch/Id/Lun^&nbsp;^&nbsp;State^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

"C:\Program Files (x86)\system\ArcCLI\cli.exe" vsf info | grep -e Taranis | sed -e "s/Taranis\sRAID-6\sC\sTaranis\sRAID-6\sCRaid6/Taranis RAID-6 CryptoArray/g" -e "s/\s/\&nbsp;/g" -e "s/^/\&nbsp;\&nbsp;/g" -e "s/$/<br>/g" >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

ECHO ^<br^>^<br^>^<br^>^<b^>Global array S.M.A.R.T. information:^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ===================================^</b^>^<br^>^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^&nbsp;^&nbsp;Total data read from array (raw, with N+P parity data) : ~%totalreadTiB% TiB^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^&nbsp;^&nbsp;Total data written to array (raw, with N+P parity data): ~%totalwriteTiB% TiB^<br^>^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^&nbsp;^&nbsp;Total data read from array (user data) : ~%totaluserreadTiB% TiB^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^&nbsp;^&nbsp;Total data written to array (user data): ~%totaluserwriteTiB% TiB^<br^>^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^&nbsp;^&nbsp;^<em^>^<span style="font-size: 8pt;"^>Note: Parity data is included in raw total reads, because the controller is^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^&nbsp;^&nbsp;configured to read ^&amp; discard parity data to minimize re-seeks and optimize^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^&nbsp;^&nbsp;for sequential reading performance.^</span^>^</em^>^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

ECHO ^<br^>^<br^>^<br^>^<b^>Per-Disk S.M.A.R.T. information:^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ===============================^</b^>^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

FOR /L %%M IN (1 1 12) DO (
  ECHO ^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
  ECHO ^&nbsp;^&nbsp;Disk %%M:^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
  ECHO ^&nbsp;^&nbsp;-------^<br^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
  smartctl -a -d areca,%%M/2 /dev/arcmsr0 | grep -e "Vendor[:space:]" -e Product -e "User Capacity" -e "Rotation Rate" -e "Transport Protocol" -e "Current Drive Temperature" -e "Accumulated start-stop cycles" -e "Accumulated load-unload cycles" -e "Elements in grown defect list" | sed -e "s/\sC$/\&deg;C/g" -e "s/Vendor:\s\s\s\s\s\s\s\s\s\s\s\s\s\s\s/Vendor : /g" -e "s/Product:\s\s\s\s\s\s\s\s\s\s\s\s\s\s/Product : /g" -e "s/User\sCapacity:\s\s\s\s\s\s\s\s/User capacity : /g" -e "s/Rotation\sRate:\s\s\s\s\s\s\s\s/Rotation rate : /g" -e "s/Current\sDrive\sTemperature:\s\s\s\s\s/Current drive temperature : /g" -e "s/start-stop\scycles:\s\s/start-stop cycles : /g" -e "s/grown\sdefect\slist:\s/grown defect list : /g" -e "s/load-unload\scycles:\s\s/load-unload cycles: /g" -e "s/\s/\&nbsp;/g" -e "s/^/\&nbsp;\&nbsp;\&nbsp;\&nbsp;/g" -e "s/$/<br>/g" >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

  smartctl -H -d areca,%%M/2 /dev/arcmsr0 | grep Health | sed -e "s/Status:/Status           :/g" -e "s/\s/\&nbsp;/g" -e "s/^/\&nbsp;\&nbsp;\&nbsp;\&nbsp;/g" -e "s/$/<br>/g" >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
)

ECHO ^<br^>^<br^>^<br^>^<br^>^<em^>Last update: >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
DATE /T >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO (DD.MM.YYYY),^  >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
TIME /T >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"
ECHO ^</em^> >> "Z:\web\xin\raid6stats\raid6stats_temp.txt.html"

COPY /b /Y "Z:\web\xin\raid6stats\header.html" + "Z:\web\xin\raid6stats\raid6stats_temp.txt.html" + "Z:\web\xin\raid6stats\footer.html" "Z:\web\xin\raid6stats\raid6stats.html"

ENDLOCAL

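One detail worth spelling out: the script divides the raw traffic by 6 to estimate parity overhead, because in a 12-disk RAID-6 two disks’ worth of every full stripe is parity, i.e. 2/12 = 1/6 of all raw I/O. With a made-up figure:

```shell
# RAID-6 parity accounting with 12 disks: parity is 2/12 = 1/6 of raw I/O.
raw=300                  # TiB of raw traffic (made-up number)
parity=$(( raw / 6 ))    # 2 parity disks out of 12
user=$(( raw - parity ))
echo "user=${user}TiB parity=${parity}TiB"   # -> user=250TiB parity=50TiB
```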
Guess you can’t even read that at all? Yeah, because it’s shit! Also, Batch is one hell of an obscure language, seriously. Assignments inside nested loops (or any loops, actually) will silently break your variable expansion in the loop body (and thus: your entire program) in no time, unless you remember to enable delayed expansion and switch the affected variables over to !exclamation! syntax. Arithmetic looks ugly as hell. And pushing command output into a variable absolutely requires a FOR /F loop, there just is no other way at all. Backticks? Yeah, keep dreaming.

The whole syntax is just plain painful. It really hurts writing that code, and that’s just a simple program. I don’t even wanna see any seriously complex Batch code, ever.

But well, a load of GNU tools are helping Batch walk through this left and right, and it works, so I’m just gonna leave it alone. When a migration to a UNIX-style OS comes, I’ll just have to rewrite it using Bash or something.
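For what it’s worth, the volume-summing core of that thing shrinks to almost nothing in Bash. This is only a sketch: fake_smartctl is a stand-in faking two disks’ worth of output (the real call would be smartctl -x -d areca,$i/2 /dev/arcmsr0, and the field positions are assumed to match what the Batch version greps out):

```shell
#!/usr/bin/env bash
# Bash sketch of the per-disk read/write volume summing. fake_smartctl
# stands in for: smartctl -x -d "areca,$i/2" /dev/arcmsr0
fake_smartctl() {
  echo "read: 0 0 0 0 0 100 0"    # field 7 = volume read (made-up number)
  echo "write: 0 0 0 0 0 40 0"    # field 7 = volume written (made-up number)
}
total_read=0 total_write=0
for i in 1 2; do                  # two fake disks instead of the real twelve
  r=$(fake_smartctl "$i" | grep '^read:'  | tr -s ' ' | cut -d ' ' -f7)
  w=$(fake_smartctl "$i" | grep '^write:' | tr -s ' ' | cut -d ' ' -f7)
  (( total_read += r, total_write += w ))
done
echo "read=${total_read} write=${total_write}"   # -> read=200 write=80
```

Command substitution, real arrays, arithmetic that doesn’t hurt to look at – the rewrite should be painless.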

Well, so far so good I guess…

Edit: Seems I forgot to post some of the benchmarks here. In the meantime, we have a completed verify process with information about how long it takes on this 54.5TiB array, as well as those missing sequential r/w benchmarks and another photo of the system, all in [part 4¾] (it’s just a minor update)!

While there has been quite some trouble with the build of my new storage array, as you can see in the last [part 3½], everything seems to have been resolved now. As far as tests have shown, the instability issues with my drives were indeed caused by the older Y-cables used to feed all eight 4P Molex plugs of my Chieftec 2131SAS drive bays. This was necessary, as all plugs on the Corsair AX1200i power supply had been used up, partly to feed the old RAID-6 array’s 8 × SATA power plugs as well.

To fix it, I just ripped out half of the Y-cables, more specifically those connected to the bays which showed trouble, and hooked the affected bays up to a dedicated ATX power supply. The no-name 400W PSU used for this wasn’t stable with zero load on the ATX cable however, so just shorting the green PS_ON wire to a black ground wire on the ATX plug didn’t work. That happens with a lot of ATX PSUs, so I hooked another ASUS P6T Deluxe up to it, which stabilized all voltage rails.

After that: a full encryption of the (aligned) GPT partition created on the device, rsync for 3 days, then a full diff for a bit more than 2 days, and yep, everything worked just as planned. All 10.5TiB of my data were synced over to the new array correctly and without any inconsistencies. After that I ripped out the old array and did the cabling properly, and well – still no problems at all!

With everything having been copied over, that little blue triangle still has ways to go trying to eat up Taranis!

I do have to apologize for not giving you pictures of the 12 drives though. While completing everything, I was just in too much of a rush to get it all done, so no ripping out of disks for photos. Besides some additional benchmarks I can give you a few nightshots of the machine though. This is with my old 3ware 9650SE-8LPML card and all of its drives removed already. Everything has been cleaned one last time, the flash backup module reconnected to the Areca ARC-1883ix-12, the controller’s management interface hooked up to my LAN and made accessible via an SSH tunnel, and all status-/error-LED headers hooked up in the correct order.

For the first one of these images, the error LEDs have been lit manually via Areca’s “identify enclosure” function applied to the whole SAS expander chip on the card:

The drive bays’ power LEDs are truly insanely bright. The two red error LEDs that each bay has – one for fan failure, one for overheating – are off here. What you can see are the 12 drive bays’ activity and status LEDs as well as the machine’s power LED. The red system SSD LED and the three BD-RW drive LEDs are off. It’s still a nice Christmas tree.

The two side intakes – Noctua 120mm fans in this case, filtered by Silverstone ultra-fine dust filters – let some green light through. This wasn’t planned; it’s caused by the green LEDs of the GeForce GTX Titan Black inside. It’s quite dim though. The fans are life savers by the way, as they keep the Areca RAID controller’s dual-core 1.2GHz PowerPC 476 processor at temperatures <=70°C instead of something close to 90°C. The SAS expander chip sits at around 60°C with the board temperature at 38°C, and the flash backup module temperature is at ~40°C. All of this at an ambient testing temperature of 28°C after 4 hours of runtime. So that part’s perfectly fine.

The only problem is the drives, which can still reach temperatures as high as 49-53°C. While the trip temperature of the drives is 85°C, anything approaching 60°C should already be quite unhealthy. We’ll see how well that goes, but hopefully it’ll be fine. My old 2TB A7K2000 Ultrastars ran for what is probably a full accumulated year at ~45°C without issues. Hm…

In any case, some more benchmarks:

The Taranis RAID-6 running ATTO disk benchmark v2.47, 12 × Ultrastar 7K6000 SAS @ ARC-1883ix-12 in RAID-6, results are kiB/s

In contrast to some really nice theoretical results, practical tests with [dd] and [mkvextract+mkvmerge] show that the transfer rate on the final, encrypted and formatted volume sits somewhere between 500-1000MiB/s for very large sequential transfers with large block sizes, which is what I’m interested in. The performance loss seems significant, but taking the proper partition-to-stripe-width alignment and the multi-threaded, AES-NI-boosted encryption into account, it’s still nothing to be ashamed of at all. In the end, this is several times faster than the old array, which delivered roughly 200-250MiB/s – or rather less towards the end, with severe fragmentation beginning to hurt the file system significantly.
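For reference, this is the kind of [dd] run those practical numbers came from – long sequential streams with a large block size. Here it’s pointed at a tiny scratch file instead of the real array, just to show the shape of the test (dd reports its transfer rate on stderr, suppressed here):

```shell
# Sequential-throughput smoke test with dd and large blocks. On the real
# array the target would be a huge file on the encrypted volume, not a
# 16MiB temp file.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=16 2>/dev/null   # sequential write pass
dd if="$f" of=/dev/null bs=1M 2>/dev/null            # sequential read-back
rm -f "$f"
```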

Ah yes, one more thing that might be interesting: Power consumption of the final system! To measure this, I’m gonna rely on the built-in monitoring and management system of my Corsair AX1200i power supply again. But first, a list of the devices hooked up to the PSU:

• ASUS P6T Deluxe mainboard, X58 Tylersburg chipset
• 3 × 8 = 24GB DDR-III/1066 CL8 SDRAM (currently for testing, would otherwise be 48GB)
• Intel Xeon X5690 3.46GHz hexcore processor, not overclocked, idle during testing
• nVidia GeForce GTX Titan Black, power target at 106%, not overclocked, idle during testing
• Areca ARC-1883ix-12 controller + ARC-1883-CAP flash backup module
• Auzentech X-Fi Prelude 7.1
• 1 × Intel 320 SSD 600GB, idle during testing
• 3 × LG HL-DT-ST BH16NS40 BD-RW drives, idle during testing
• 1 × Teac FD-CR8 combo drive (card reader + FDD), idle during testing
• 12 × Hitachi Global Storage Ultrastar 7K6000 6TB SAS/12Gbps, sequential transfer during testing
• 4 × Chieftec 2131SAS HDD bays
• 2 × Noctua NF-A15 140mm fans
• 2 × Noctua NF-A14 PWM 140mm fans
• 3 × Noctua NF-F12 PWM 120mm fans
• 4 × Noctua NF-A8 FLX 80mm fans (in the drive bays)
• 1 × Noctua NF-A4x10 40mm fan
• 1 × unspecified 140mm PWM fan in the power supply

Full system load with the new Taranis RAID-6 array

So we’re still under the 300W mark, which I had originally expected to crack, since the old system was in the same ballpark when it comes to power consumption. But then, the old system had an overclocked i7 980X instead of this seriously cool-running Xeon (it has a low VID, so it runs cooler even at stock settings).

Now all that’s missing is the adaptation of my old scripts checking the RAID controller and drive status periodically. For this I was originally using 3ware’s tw_cli tool and the SmartMonTools. I’ll continue to use the SmartMonTools of course, as they’ve been adapted to make use of Areca’s API as well, and are thus able to fetch S.M.A.R.T. data from all individual drives in the array. The tw_cli part will have to be replaced with Areca’s own command line tool though, including a lot of post-processing with Perl to publish this in a nice HTML form again. When it’s done, the stats will be reachable [here].
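The SmartMonTools side of that boils down to a loop over the disk addressing scheme smartmontools uses for Areca controllers (areca,N/2 – slot number N on enclosure 2, behind /dev/arcmsr0). Sketched here with echo instead of actually executing, since there’s obviously no controller around on your end:

```shell
# Per-disk health poll via smartmontools' Areca support; echoed rather than
# executed, so it runs without the actual controller present.
for n in $(seq 1 12); do
  echo smartctl -H -d "areca,${n}/2" /dev/arcmsr0
done
```

Drop the echo and pipe each run through grep/sed and you’re most of the way to a status page.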

Depending on how extremely my laziness and my severe Anime addiction bog me down, this may take a few days. Or weeks.

Edit: Ah, actually, I was motivated enough to do it, cost me several hours, inflicted quite some pain due to the weirdness of Microsoft Batch, but it’s done, the RAID-6 web status reporting script is back online! More (including the source code) in [part 4½]!