Nov 19 2016
 

Recently, after finding out that the old Intel GMA950 profits greatly from added memory bandwidth (see [here]), I wondered if the overclocking mechanism applied by the Windows tool [here] had leaked into the public after all this time. The developer of said tool refused to open-source the software even after it turned into abandonware – announced support for the GMA X3100 and X4500 as well as MacOS X and Linux never came to be. He also never said how he managed to overclock the GMA950 in the first place.

Some hackers disassembled the code of GMABooster however, and found out that all that's needed is a simple PCI register modification, which you could probably apply yourself on Microsoft Windows using H.Oda!'s [WPCREdit].

Tools for PCI register modification do exist on Linux and UNIX as well of course, so I wondered whether I could apply this knowledge on FreeBSD UNIX too. Of course, I'm a few years late to the party, because people already solved this back in 2011! But just in case the scripts and commands disappear from the web, I wanted this to be documented here as well. First, let's see whether we even have a GMA950 (of course I do, but still). It should be PCI device 0:0:2:0; you can use FreeBSD's own pciconf utility or the lspci command from Linux:

# lspci | grep "00:02.0"
00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
 
# pciconf -lv pci0:0:2:0
vgapci0@pci0:0:2:0:    class=0x030000 card=0x30aa103c chip=0x27a28086 rev=0x03 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller'
    class      = display
    subclass   = VGA

Ok, to alter the GMA950s’ render clock speed (we are not going to touch it’s 2D “desktop” speed), we have to write certain values into some PCI registers of that chip at 0xF0hex and 0xF1hex. There are three different values regulating clockspeed. Since we’re going to use setpci, you’ll need to install the sysutils/pciutils package on your machine via # pkg install pciutils. I tried to do it with FreeBSDs’ native pciconf tool, but all I managed was to crash the machine a lot! Couldn’t get it solved that way (just me being too stupid I guess), so we’ll rely on a Linux tool for this. Here is my version of the script, which I call gmaboost.sh. I placed that in /usr/local/sbin/ for global execution:

#!/bin/sh

case "$1" in
  200) clockStep=34 ;;
  250) clockStep=31 ;;
  400) clockStep=33 ;;
  *)
    echo "Wrong or no argument specified! You need to specify a GMA clock speed!" >&2
    echo "Usage: $0 [200|250|400]" >&2
    exit 1
  ;;
esac

setpci -s 02.0 F0.B=00,60
setpci -s 02.0 F0.B=$clockStep,05

echo "Clockspeed set to "$1"MHz"

Now you can do something like this: # gmaboost.sh 200 or # gmaboost.sh 400, etc. Interestingly, FreeBSD's i915_kms graphics driver seems to have set the 3D render clock speed of my GMA950 to 400MHz already, so there was nothing to be gained for me in terms of performance. I can still clock it down to conserve energy though. A quick performance comparison using a crappy custom-recorded ioquake3 demo shows the following results:

  • 200MHz: 30.6fps
  • 250MHz: 35.8fps
  • 400MHz: 42.6fps

Hardware was a Core 2 Duo T7600 and the GPU was making use of two DDR-II/667 4-4-4 memory modules in dual channel configuration. Resolution was 1400×1050 with quite a few changes in the Quake III configuration to achieve more performance, so your results won’t be comparable, even when running ioquake3 on identical hardware. I’d post my ~/.ioquake3/baseq3/q3config.cfg here, but in my stupidity I just managed to freaking wipe the file out. Now I have to redo all the tuning, pfh.

But in any case, this really works!

Unfortunately, it only applies to the GMA950. And I still wonder what it was that was so wrong with # pciconf -w -h pci0:0:2:0 0xF0 0060 && pciconf -w -h pci0:0:2:0 0xF0 3405 and the like. I tried a few combinations just in case my byte order was messed up or in case I really had to write single bytes instead of half-words, but either the change wouldn't apply at all, or the machine would just lock up. Would be nice to do this with only BSD tools on actual FreeBSD UNIX, but I guess I'm just too stupid for pciconf…
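
At least reading registers back is harmless, so if you want to poke at this yourself, a minimal verification sketch (assuming the GMA really does sit at pci0:0:2:0 like above) would be:

# pciconf -r -h pci0:0:2:0 0xF0

That reads the data at 0xF0 back in halfword granularity, so you can at least inspect what setpci – or a failed pciconf -w attempt – actually left in there.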

Aug 03 2016
 

Yeah, I'm still using the good old xmms v1 on Linux and UNIX. Now, the boxes I used to play music on were all 32-bit x86 so far. Just some old hardware. Recently I tried to play some of my [AAC-LC] files on my 64-bit Linux workstation, and to my surprise, it failed to play the file, giving me the error Pulse coding not allowed in short blocks when started from the shell (just so I could read the error output).

I thought this had to have something to do with my file and not the player, so I tried .aac files with ADTS headers instead of audio packed into an mp4/m4a container. The result was the same. Also, the encoding (SBR/VBR or even HE-AAC, which sucks for music anyway) didn’t make a difference. Then I found [this post] on the Gentoo forums, showing that the problem was really the architecture. 32-bit builds wouldn’t fail, but the source code of the [libfaad2] decoding library of xmms’ mp4 plugin wasn’t ready for 64-bit.

xmms’ xmms_mp4Plugin-0.4 comes with a pretty old libfaad2, 2.0 or something. I will show you how to upgrade that to version 2.7, which is fixed for x86_64 (Readily fixed source code is also provided at the bottom). We’ll also fix up other parts of that plugin so it can compile using more modern versions of GCC and clang, and my test platform for this would be a rather conservative CentOS 6.8 Linux. First, get the source code:

I’m assuming you already have xmms installed, otherwise obtain version 1.2.11 from [here]!

Step 1, libfaad2:

Unpack both archives from above, then enter the mp4 plugin's source directory. You'll find a libfaad2/ directory in there. Delete or move it and all of its contents. From the faad2 source tree, copy the directory libfaad/ to libfaad2/ in the plugin's directory, replacing the deleted one. Now that's the source, but we also need the updated headers so that the xmms plugin can link against the newer libfaad2. To do that, copy the two files include/faad.h and include/neaacdec.h from the faad2 2.7 source tree to the directory include/ in the plugin source tree. Overwrite faad.h if prompted.
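
In shell terms, the whole file shuffling looks roughly like this – a sketch only, and the unpacked directory names xmms_mp4Plugin-0.4/ and faad2-2.7/ are assumptions, so adjust them to whatever your archives actually produce:

$ cd xmms_mp4Plugin-0.4
$ mv libfaad2 libfaad2.old                # get the ancient decoder out of the way
$ cp -R ../faad2-2.7/libfaad ./libfaad2   # drop in the faad2 2.7 decoder source
$ cp ../faad2-2.7/include/faad.h ../faad2-2.7/include/neaacdec.h ./include/   # updated headers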

Step 2, libmp4v2:

Also, the bundled libmp4v2 of the mp4 plugin is very old and broken on modern systems due to code issues. Let’s fix them, so back to the mp4 plugin source tree. We need to fix some invalid pure specifiers in the following files: libmp4v2/mp4property.h, libmp4v2/mp4property.cpp, libmp4v2/rtphint.h and libmp4v2/rtphint.cpp.

To do so, open them in a text editor and search and replace all occurrences of = NULL with = 0. This will fix all assignments and comparisons that would otherwise break. When using vi or vim, you can do :%s/=\sNULL/= 0/g for this.
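
The same replacement can also be done non-interactively for all four files in one go, assuming GNU sed with its -i (in-place) switch:

$ sed -i 's/= NULL/= 0/g' libmp4v2/mp4property.h libmp4v2/mp4property.cpp libmp4v2/rtphint.h libmp4v2/rtphint.cpp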

On top of that, we need to fix an invalid const char* to char* conversion in libmp4v2/rtphint.cpp as well. Open it and hop to line 325; you'll see this:

char* pSlash = strchr(pRtpMap, '/');

Replace it with this:

const char* pSlash = strchr(pRtpMap, '/');

And we’re done!

Now, in the mp4 plugins’ source root directory, run $ ./bootstrap && ./configure. Hopefully, no errors will occur. If all is ok, run $ make. Given you have xmms 1.2.11 installed, it should compile and link fine now. Last step: # make install.

This has been tested with my current GCC 4.4.7 platform compiler as well as GCC 4.9.3 on CentOS 6.8 Linux. Also, it has been tested with clang 3.4.1 on FreeBSD 10.3 UNIX, also x86_64. Please note that FreeBSD 10.3 needs extensive modifications to its build tools as well, so I can’t provide fixed source for this. However, packages on FreeBSD have already been fixed in that regard, so you can just # pkg install xmms-faad2 and it’s done anyway. This is likely the case for several modern Linux distros as well.

Let’s play:

xmms playing AAC-LC on FreeBSD 10.3 UNIX on x86_64

Yeah, it works. In this case on FreeBSD.

And the shell would say:

2-MPEG-4 AAC Low Complexity profile
MP4 - 2 channels @ 44100 Hz

Perfect! And here is the [fixed source code] upgraded with libfaad2 2.7, so you don’t have to do anything by yourself other than $ ./bootstrap && ./configure && make and # make install! Ah, actually I’m not so sure whether xmms itself builds fine on CentOS 6.8 from source these days… Maybe I’ll check that out another day. ;)

Jan 27 2016
 

Just yesterday I showed you how to modify and compile the [HakuNeko] Manga Ripper so it can work on FreeBSD 10 UNIX, [see here] for some background info. I also mentioned that I couldn't get it to build on CentOS 6 Linux, something I chose to investigate today. After flying blind for a while trying to fix include paths and other things, I finally got to the core of the problem, which lies in HakuNeko's src/CurlRequest.cpp, where the code uses CURLOPT_ACCEPT_ENCODING from cURL's typecheck-gcc.h. This is critical stuff, as [cURL] is the software library needed for actually connecting to websites and downloading the image files of Manga/Comics.

It turns out that CURLOPT_ACCEPT_ENCODING wasn't always called that. With cURL version 7.21.6, it was renamed from the former CURLOPT_ENCODING, as you can read [here]. And well, CentOS 6 ships with cURL 7.19.7…

When running $ ./configure && make from the unpacked HakuNeko source tree without fixing anything, you’ll run into this problem:

g++ -c -Wall -O2 -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread -o obj/CurlRequest.cpp.o src/CurlRequest.cpp
src/CurlRequest.cpp: In member function ‘void CurlRequest::SetCompression(wxString)’:
src/CurlRequest.cpp:122: error: ‘CURLOPT_ACCEPT_ENCODING’ was not declared in this scope
make: *** [obj/CurlRequest.cpp.o] Error 1

So you’ll have to fix the call in src/CurlRequest.cpp! Look for this part:

void CurlRequest::SetCompression(wxString Compression)
{
    if(curl)
    {
        curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)Compression.mb_str(wxConvUTF8));
        //curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)memcpy(new wxByte[Compression.Len()], Compression.mb_str(wxConvUTF8).data(), Compression.Len()));
    }
}

Change CURLOPT_ACCEPT_ENCODING to CURLOPT_ENCODING. The rest can stay the same, as the name is all that has really changed here. It’s functionally identical as far as I can tell. So it should look like this:

void CurlRequest::SetCompression(wxString Compression)
{
    if(curl)
    {
        curl_easy_setopt(curl, CURLOPT_ENCODING, (const char*)Compression.mb_str(wxConvUTF8));
        //curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)memcpy(new wxByte[Compression.Len()], Compression.mb_str(wxConvUTF8).data(), Compression.Len()));
    }
}
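
Alternatively, if you don't feel like opening an editor at all, the rename is a one-liner with GNU sed – a sketch; it also rewrites the commented-out line, which does no harm:

$ sed -i 's/CURLOPT_ACCEPT_ENCODING/CURLOPT_ENCODING/g' src/CurlRequest.cpp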

Save the file, go back to the main source tree and you can do:

  • $ ./configure && make
  • # make install

And done! Works like a charm:

HakuNeko fetching Haiyore! Nyaruko-san on CentOS 6.7 Linux

HakuNeko fetching Haiyore! Nyaruko-san on CentOS 6.7 Linux!

And now, for your convenience, I fixed up the Makefile and rpm/SPECS/specfile.spec a little bit to build proper rpm packages as well. I can provide them for CentOS 6.x Linux in both 32-bit and 64-bit x86 flavors:

You need to unzip these first, because I was too lazy to allow the rpm file type in my blogging software.

The naked rpms have also been submitted to the HakuNeko developers as a comment to their [More Linux Packages] support ticket, which you're supposed to use for that purpose, so you can get them from there as well. Not sure if the developers will add the files to the project's official downloads.

This build of HakuNeko has been linked against the wxWidgets 2.8.12 GUI libraries, which come from the official CentOS 6.7 package repositories. So you’ll need to install wxGTK to be able to use the white kitty:

  • # yum install wxGTK

After that you can install the .rpm package of your choice. For a 64-bit system for instance, enter the folder where the hakuneko_1.3.12_el6_x86_64.rpm file is, run # yum localinstall ./hakuneko_1.3.12_el6_x86_64.rpm and confirm the installation.

Now it’s time to have fun using HakoNeko on your Enterprise Linux system! Totally what RedHat intended you to use it for! ;) :roll:

Jan 26 2016
 

HakuNeko logoSince I’ve started using FreeBSD as a Linux and Windows replacement, I’ve naturally always been looking at porting my “known good” software over to the UNIX OS, or at replacing it by something that gets the job done without getting on my nerves too much at the same time. For most parts other than TrueCrypt, that was quite achievable, even though I had to endure varying degrees of pain getting there. Now, my favorite Manga / Comic ripper on Windows, [HakuNeko] was the next piece of software on the list. It’s basically just a more advanced Manga website parser and downloader based on stuff like [cURL], [OpenSSL] or the [wxWidgets] GUI libraries.

I didn’t even know this until recently (shame on me for never looking closely), but HakuNeko is actually free software licensed under the MIT license. Unfortunately, the source code and build system are quite Linux- and Windows-centric, and there exist neither packages nor ports of it for FreeBSD UNIX. Actually, the code doesn’t even build on my CentOS 6.7 Linux right now (I have yet to figure out the exact problem), but I managed to fix it up so it can compile and work on FreeBSD! And here’s how, step by step:

1.) Prerequisites

Please note that from here on, terminal commands are shown in this form: $ command or # command. Commands starting with a $ are to be executed as a regular user, and those starting with # have to be executed as the superuser root.

OK, this has been done on FreeBSD 10.2 x86_32 using HakuNeko 1.3.12, both current at the time of writing. I guess it might work on older and future releases of FreeBSD with different releases of HakuNeko as well, but hey, who knows?! That having been said, you'll need the following software on top of FreeBSD for the build system to work (I may have missed something here; if so, just install the missing stuff like shown below):

  • cURL
  • GNU sed
  • GNU find
  • bash
  • OpenSSL
  • wxWidgets 2.8.x

Stuff that’s not on your machine can be fetched and installed as root from the official package repository, Like e.g.: # pkg install gsed findutils bash wx28-gtk2 wx28-gtk2-common wx28-gtk2-contrib wx28-gtk2-contrib-common

Of course you’ll need the HakuNeko source code as well. You can get it from official sources (see the link in first paragraph) or download it directly from here in the version that I’ve used successfully. If you take my version, you need 7zip for FreeBSD as well: # pkg install p7zip.

Unpack it:

  • $ 7z x hakuneko_1.3.12_src.7z (My version)
  • $ tar -xzvf hakuneko_1.3.12_src.tar.gz (Official version)

The insides of my archive are just vanilla as well however, so you’ll still need to do all the modifications by yourself.

2.) Replace the shebang lines in all scripts which require it

Enter the unpacked source directory of HakuNeko and open the following scripts in your favorite text editor, then replace the leading shebang lines #!/bin/bash with #!/usr/local/bin/bash:

  • ./configure
  • ./config_clang.sh
  • ./config_default.sh
  • ./res/parsers/kissanime.sh

It’s always the first line in each of those scripts, see config_clang.sh for example:

#!/bin/bash

# import setings from config-default
. ./config_default.sh

# overwrite settings from config-default

CC="clang++"
LD="clang++"

This would have to turn into the following (I also fixed that comment typo while I was at it):

#!/usr/local/bin/bash

# import settings from config-default
. ./config_default.sh

# overwrite settings from config-default

CC="clang++"
LD="clang++"

3.) Replace all sed invocations with gsed invocations in all scripts which call sed

This is needed because FreeBSD's sed and Linux's GNU sed aren't exactly compatible in how they're being called – different options and all.

In the text editor vi, the expression :%s/sed /gsed /g can do this globally over an entire file (mind the whitespace, don't omit it!). Or just use a convenient graphical text editor like gedit or leafpad for searching and replacing all occurrences. The following files need sed replaced with gsed:

  • ./configure
  • ./res/parsers/kissanime.sh

4.) Replace all find invocations with gfind invocations in ./configure

Same situation as above with GNU find, like :%s/find /gfind /g or so, but only in one file:

  • ./configure

5.) Fix the make check

This is rather cosmetic in nature as $ ./configure won’t die if this test fails, but you may still wish to fix this. Just replace the string make --version with gmake --version (there is only one occurrence) in:

  • ./configure

6.) Fix the DIST variables’ content

I don’t think that this is really necessary either, but while we’re at it… Change the DIST=linux default to DIST=FreeBSD in:

  • ./configure

Again, only one occurrence.
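
Since steps 3 through 6 all target ./configure, they can also be rolled into a single gsed call – a sketch; ./res/parsers/kissanime.sh still needs its separate sed→gsed treatment from step 3, and the \b word boundaries merely keep the replacements from matching gsed/gfind again on a second run:

$ gsed -i -e 's/\bsed /gsed /g' -e 's/\bfind /gfind /g' -e 's/make --version/gmake --version/' -e 's/DIST=linux/DIST=FreeBSD/' configure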

7.) Run ./configure to create the Makefile

Enough with that, let’s run the first part of the build tools:

  • $ ./configure --config-clang

Notice the --config-clang option? We could use GCC as well, but since clang is FreeBSD's new default platform compiler, you should stick with it whenever feasible. It works for HakuNeko, so we're going to use the default compiler, which means you don't need to install the entire GCC for just this.

There will be error messages looking quite intimidating, like the basic linker test failing, but you can safely ignore those. It has something to do with different function name prefixes in FreeBSD's libc (or whatever, I don't really get it), but it doesn't matter.

However, there is one detail that the script will get wrong, and that’s a part of our include path. So let’s handle that:

8.) Fix the includes in the CFLAGS in the Makefile

Find the line containing the string CFLAGS = -c -Wall -O2 -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread or similar in the newly created ./Makefile. After the option -O2 add the following: -I/usr/local/include. So it looks like this: CFLAGS = -c -Wall -O2 -I/usr/local/include -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread. That’s it for the Makefile.
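
This edit can be scripted as well, but keep in mind that re-running $ ./configure regenerates the Makefile and throws the change away again – a sketch:

$ gsed -i 's|-O2 |-O2 -I/usr/local/include |' Makefile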

9.) Fix the Linux-specific conditionals across the C++ source code

And now the real work starts, because we need to fix up portions of the C++ code itself as well. While the code would build and run fine on FreeBSD, those relevant parts are hidden behind some C++ preprocessor macros/conditionals looking for Linux instead. Thus, important parts of the code can’t even compile on FreeBSD, because the code only knows Linux and Windows. Fixing that isn’t extremely hard though, just a bit of copy, paste and/or modify. First of all, the following files need to be fixed:

  • ./src/MangaConnector.cpp
  • ./src/Logger.cpp
  • ./src/MangaDownloaderMain.cpp
  • ./src/MangaDownloaderConfiguration.cpp

Now, what you should look for are all conditional blocks which look like #ifdef __LINUX__. Each will end with an #endif line. Naturally, there are also #ifdef __WINDOWS__ blocks, but those don’t concern us, as we’re going to use the “Linux-specific” code, if you can call it that. Let me give you an example right out of MangaConnector.cpp, starting at line #20:

#ifdef __LINUX__
wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
#endif

Now given that the Linux code builds just fine on FreeBSD, the most elegant and easiest way would be to just alter all those #ifdef conditionals into inclusive #if defined() ORs, so that they trigger for both Linux and FreeBSD. If you do this, the block from above would need to change to this:

#if defined(__LINUX__) || defined(__FreeBSD__)
wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
#endif

Should you ever want to create different code paths for Linux and FreeBSD, you can also just duplicate it. That way you could later make changes for just Linux or just FreeBSD separately:

#ifdef __LINUX__
wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
#endif
#ifdef __FreeBSD__
wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
#endif

Whichever way you choose, you’ll need to find and update every single one of those conditional blocks. There are three in Logger.cpp, three in MangaConnector.cpp, two in MangaDownloaderConfiguration.cpp and again three in MangaDownloaderMain.cpp. Some are more than 10 lines long, so make sure to not make any mistakes if duplicating them.
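
A quick grep lists every spot that needs touching, so nothing gets missed:

$ grep -n '#ifdef __LINUX__' src/Logger.cpp src/MangaConnector.cpp src/MangaDownloaderConfiguration.cpp src/MangaDownloaderMain.cpp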

Note that you can maybe extend compatibility even further with additional directives like __OpenBSD__ or __NetBSD__ for other BSDs, or __unix__ for a wide range of UNIX systems like AIX or HP-UX. None of that has been tested by me, of course.

When all of that is done, it’s compile and install time:

10.) Compile and install

You can compile as a regular user, but the installation needs root by default. I’ll assume you’ll want to install HakuNeko system-wide, so, we’ll leave the installation target directories on their defaults below /usr/local/. While sitting in the unpacked source directory, run:

  • $ gmake
  • # gmake install

If nothing starts to crash and burn, this should compile and install the code. clang will show some warnings during compilation, but you can safely ignore those.

11.) Start up the white kitty

The installation procedure will conveniently update your window manager's menus as well, if you're using panels/menus. Here it's Xfce4:

HakuNeko is showing up as an "Internet" tool

HakuNeko (“White Cat”) is showing up as an “Internet” tool. Makes sense.

With the modifications done properly it should fire up just fine after initializing its Manga connectors:

HakuNeko with the awesomeness that is "Gakkou Gurashi!" being selected from the HTTP source "MangaReader"

HakuNeko with the awesomeness that is “Gakkou Gurashi!” being selected from the HTTP source [MangaReader].

Recently the developers have also added [DynastyScans] as a source, which provides access to partially “rather juicy” Yuri Dōjinshi (self-published amateur and sometimes semi-professional works) of well-known Manga/Anime, if you’re into that. Yuri, that is (“girls love”). Mind you, not all, but a lot of the stuff on DynastyScans can be considered NSFW and likely 18+, just as a word of warning:

HakuNeko fetching a Yuru Yuri Dōjinshi from DynastyScans, bypassing their download limits by not fetching packaged ZIPs - it works perfectly!

HakuNeko fetching a Yuru Yuri Dōjinshi called “Secret Flowers” from DynastyScans, bypassing their download limits by not fetching packaged ZIPs – it works perfectly!

Together with a good comic book reader that can read both plain JPEG-filled folders and stuff like packaged .cbz files, HakuNeko makes FreeBSD a capable comic book / Manga reading system. My personal choice for a reader to accompany HakuNeko would be [QComicBook], which you can easily get on FreeBSD. There are others you can fetch from the official package repository as well though.

Final result:

HakuNeko and QComicBook make a good team on FreeBSD UNIX

HakuNeko and QComicBook make a good team on FreeBSD UNIX – I like the reader even more than ComicRack on Windows.

And, at the very end, one more thing, even though you’re likely going to be aware of this already: Just like Anime fansubs, fan-translated Manga or even Dōjinshi are sitting in a legal grey zone, as long as the book in question hasn’t been licensed in your country. It’s being tolerated, but if it does get licensed, ownership of a fan-translated version will likely become illegal, which means you should actually buy the stuff at that point in time.

Just wanted to have that said as well.

Should you have trouble building HakuNeko on FreeBSD 10 UNIX (maybe because I missed something), please let me know in the comments!

Sep 02 2015
 

Recently I've been doing a lot of audio/video transcoding, and for that I'm using tools like mkvextract, mkvmerge, x264, avconv and fghaacenc on the command line. Mostly this is to get rid of old files that are either very inefficiently encoded – like DivX3 or MPEG-2 – or are just too large/high-bitrate for what the content's worth. Since I do have large batches of relatively similar files, I wrote myself a few nice loops that walk through an entire stack of videos and process all of it automatically. Because I want to see at first glance how far through a given video stack a script is, I wanted to output some colored notifications on the shell after each major processing step, telling me the current video number. That way, I could easily see how far the job had progressed. We're talking processes here that take multiple days, so it makes sense.

Turns out that this was harder than expected. In the old DOS days, you would just load ANSI.SYS for that, and get ANSI color codes to work with. The same codes still work in modern UNIX terminals – but not within the Windows cmd, for whatever reason. Now there is an interface for that kind of stuff, but seemingly no tools to use it; even the echo command simply can't make use of it.

Now there is a project porting ANSI codes back to the cmd, and it's called [ANSICON]. Basically it's just a DLL hooked into the cmd. Turns out I can't use that, because I'm already using [clink] to make my cmd more UNIX-like and actually usable, which also is additional code hooked into the cmd, and the two conflict with each other. More information about that [here].

So how the hell can I do this then? Turns out there is a batch script that can do it, but it's a very outlandish hack that doesn't seem to want to play nice with my shells if I do @echo off instead of echo off. I guess it's clink's work breaking stuff here, but I'm not sure. Still, here is the code, as found by [Umlüx]:

ECHO OFF
SETLOCAL EnableDelayedExpansion

FOR /F "tokens=1,2 delims=#" %%A IN ('"PROMPT #$H#$E# & ECHO ON & FOR %%b IN (1) DO REM"') DO (
  SET "DEL=%%A"
)

ECHO say the name of the colors, dont read

CALL :ColorText 0a "blue"
CALL :ColorText 0C "green"
CALL :ColorText 0b "red"
ECHO.
CALL :ColorText 19 "yellow"
CALL :ColorText 2F "black"
CALL :ColorText 4e "white"

GOTO :eof

:ColorText

ECHO OFF

<nul SET /p ".=%DEL%" > "%~2"
findstr /v /a:%1 /R "^$" "%~2" NUL
DEL "%~2" > NUL 2>&1

GOTO :eof

So since those two solutions didn’t work for me, what else could I try?

Then I had a crazy idea: since the interface is there, why not write a little C program (it actually ended up being C++ instead) that uses it? Since I was on Linux at the time, I tried to write it there and attempt my first cross-compile for Windows using a pre-built [MinGW-w64] compiler, which luckily just played along nicely on my CentOS 6.6 Linux. You can get pre-built binaries for 32-bit Windows targets [here], and for 64-bit Windows targets [here]. Thing is, I know zero C++, so that took some time and it's quite crappy, but here's the code:

#include <iostream>
#include <cstdlib>   // for atoi()
#include "windows.h"

// Set NT commandline color:
void SetColor(int value) {
    SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), value);
}

int main(int argc, char *argv[]) {
  SetColor(atoi(argv[2]));
  std::cout << "\r\n" << argv[1] << "\r\n";
  SetColor(7);
  return 0;
}

Update 2016-06-22: Since I wanted to work with wide character strings in recent days (so, non-ANSI Unicode characters), my colorecho command failed, because it couldn’t handle them at all. There was no wide character support. I decided to update the program to also work with that using Windows-typical UCS-2le/UTF-16, here’s the code:

#include <iostream>
#include <cstdlib>   // for _wtoi()
#include <io.h>
#include <fcntl.h>
#include "windows.h"

// Set NT commandline color:
void SetColor(int value) {
    SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), value);
}

// Unicode main
int wmain(int argc, wchar_t *argv[]) {
  SetColor(_wtoi(argv[2]));
  _setmode(_fileno(stdout), _O_U16TEXT);      // Set stdout to UCS-2le/UTF-16
  std::wcout << "\r\n" << argv[1] << "\r\n";  // Write unicode
  SetColor(7);                                // Set color back to default
  return 0;
}

I also updated the download section and the screenshots further below. End of update.

This builds nicely using MinGW on Linux for 64-bit Windows targets, like so: while sitting in the MinGW directory (I put my code there as echo.cpp as well, for simplicity's sake), run: ./bin/x86_64-w64-mingw32-g++ echo.cpp -o colorecho-x86_64.exe. If you want to target 32-bit Windows instead, you'd need to use the proper MinGW for that (it comes in a separate archive): ./bin/i686-w64-mingw32-g++ echo.cpp -o colorecho-x86_32.exe

When building like that, you'd also need to deploy libstdc++-6.dll and libgcc DLLs like libgcc_s_seh-1.dll along with the EXE files though. The DLLs can be obtained from your local MinGW installation directory, subfolder ./x86_64-w64-mingw32/lib/ for the 64-bit or ./i686-w64-mingw32/lib/ for the 32-bit compiler. If you don't want to do that and would rather rely on Microsoft's own C++ redistributables, you can also compile it with Microsoft VisualStudio Express, using its pre-prepared command line. You can find that here, if you have Microsoft's VisualStudio installed – version 2010 in my case:

VC2010 command lines

The command lines of Visual Studio (click to enlarge)

Here, the “Visual Studio Command Prompt” is for a 32-bit build target, and “Visual Studio x64 Win64 Command Prompt” is for building 64-bit command line programs. Choose the appropriate one, then change into a directory where you have echo.cpp and run the following command: cl /EHsc /W4 /nologo /Fecolorecho.exe echo.cpp, giving you the executable colorecho.exe.

Alternatively, you can just download my pre-compiled versions here:

Update 2016-06-22: And here are the Unicode versions:

End of update.

Note that this tool does exactly what I need it to do, but it'll likely not do exactly what you'd need it to do – like e.g. the line breaks it adds before and after its output. That's actually a job for the shell's echo command, not some command line tool. But I just don't care. So that's why it's basically a piece of crap for general use. The syntax is as follows, as shown for the 64-bit VC10 build:

colorecho-x86_64-vc10.exe "I am yellow!" 14

When run, it looks like this:

Colorecho running

colorecho invoked on the Windows cmd

So the first parameter is the string to be echoed, the second one is the color number. That number is a console attribute value in the range 0–255 and can affect both the foreground and the background. Here are the first 16 of them, which are foreground only:

  • 0: Black
  • 1: Dark blue
  • 2: Dark green
  • 3: Dark cyan
  • 4: Dark red
  • 5: Dark purple
  • 6: Dark yellow
  • 7: Light grey
  • 8: Dark grey
  • 9: Light blue
  • 10: Light green
  • 11: Light cyan
  • 12: Light red
  • 13: Light purple
  • 14: Light yellow
  • 15: White

If you go higher than that, you’ll also start changing the background colors and you’ll get different combinations of foreground and background colorization.

The background colors actually follow the same order, black, dark blue, dark green, etc. Numbers from 0..15 are on black background, numbers from 16..31 are on a dark blue background and so on. This makes up pretty much the same list:

  • 0..15: 16 foreground colors as listed above on the default black background, and
  • 16..31: on a dark blue background
  • 32..47: on a dark green background
  • 48..63: on a dark cyan background
  • 64..79: on a dark red background
  • 80..95: on a dark purple background
  • 96..111: on a dark yellow background
  • 112..127: on a light grey background
  • 128..143: on a dark grey background
  • 144..159: on a light blue background
  • 160..175: on a light green background
  • 176..191: on a light cyan background
  • 192..207: on a light red background
  • 208..223: on a light purple background
  • 224..239: on a light yellow background
  • 240..255: on a white background

Going over 255 will simply result in an overflow causing the system to start from the beginning, so 256 is equal to 0, 257 to 1, 260 to 4 and so on.
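
Put differently, the full attribute value is just simple arithmetic: value = 16 × background + foreground, with both colors taken from the 16-entry table above and everything wrapping around modulo 256. Light yellow (14) on a dark blue background (1), for instance, gives 16 × 1 + 14 = 30.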

Update 2016-06-22: With the Unicode version, you can now even do stupid stuff like this:

colorecho unicode with DejaVu Sans Mono font on Windows cmd

It could be useful for some people? Maybe?

Note that this has been rendered using the DejaVu Sans Mono font, you can get it from [here]. Also, you need to tell the Windows cmd which fonts to allow (only monospaced TTFs work); you can read how to do that [here], it's a relatively simple registry hack. I have yet to find a font that would work and also render CJK characters though. With the fonts I have available at the moment, Asian stuff won't work, you'll just get some boxes instead:

colorecho unicode CJK fail

Only characters which have proper glyphs for all desired codepoints in the font in use can be rendered, at least on Windows XP.

Note that this is not a limitation of colorecho, it does output the characters correctly, “ですね” in this case. It’s just that DejaVu Sans Mono has no font glyphs to actually draw them on screen. If you ever come across a neat cmd-terminal-compatible font that can render most symbols and CJK, please tell me in the comments, I’d be quite happy about that! End of update.

If somebody really wants to use this (which I doubt, but hey), but wishes it to do things in a more sane way, just request it in the comments. Or, if you can, just take the code and change and recompile it by yourself. You can get Microsoft's VisualStudio Express – in the meantime called [VisualStudio Community] – free of charge anyway, and MinGW is free software to begin with.

Nov 14 2014
 

As much as I am a WinAmp 2 lover on Microsoft Windows, I am an xmms lover on Linux and UNIX. And by that I mean xmms1, not 2. And not Audacious. Just good old xmms. Not only does it look like a WinAmp 2 clone, it even loads WinAmp 2/5 skins and supports a variety of rather obscure plugins, which I partially need and love (like for Commodore 64 SID tunes via libsidplay or libsidplay2 with awesome reSID engine support).

Recently I started playing around with a free laptop I got, which I wanted to be my operating systems test bed. Currently, I am evaluating – and probably keeping – FreeBSD 10 UNIX. Now it ain't perfect, but it works pretty well after a bit of work. Always using Linux or XP is just a bit boring by now. ;)

One of the problems I had was with xmms. While the package is there, its built-in tracker module file (mod, 669, it, s3m etc.) support is broken. Play one of those and its xmms-mikmod plugin will cause a segmentation fault immediately when talking to a newer version of libmikmod. Also, recompiling [multimedia/xmms] from the ports tree produced a binary with the same fault. Then I found this guy [Jakob Steltner] posting about the problem on the ArchLinux bugtracker, [see here].

Based on his work, I created a ports-tree-compatible patch, patch-drv_xmms.c; here is the source code:

--- Input/mikmod/drv_xmms.c.orig        2003-05-19 23:22:06.000000000 +0200
+++ Input/mikmod/drv_xmms.c     2012-11-16 18:52:41.264644767 +0100
@@ -117,6 +117,10 @@
        return VC_Init();
 }
 
+static void xmms_CommandLine(CHAR * commandLine)
+{
+}
+
 MDRIVER drv_xmms =
 {
        NULL,
@@ -126,7 +130,8 @@
 #if (LIBMIKMOD_VERSION > 0x030106)
         "xmms",
         NULL,
-#endif
+#endif
+       xmms_CommandLine, // Was missing
        xmms_IsThere,
        VC_SampleLoad,
        VC_SampleUnload

So that means recompiling xmms from source by yourself. But fear not, it’s relatively straightforward on FreeBSD 10.

Assuming that you unpacked your ports tree as root by running portsnap fetch and portsnap extract without altering anything, you will need to put the file into /usr/ports/multimedia/xmms/files/ and then cd to /usr/ports/multimedia/xmms/ in a terminal. Now run make && make install as root. The patch will be applied automatically.
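
Condensed into commands, the whole procedure looks like this – the portsnap line is only needed if you have no ports tree yet, and patch-drv_xmms.c is assumed to sit in your current working directory:

# portsnap fetch extract
# cp patch-drv_xmms.c /usr/ports/multimedia/xmms/files/
# cd /usr/ports/multimedia/xmms && make && make install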

Now one can once again run xmms and module files just work as they’re supposed to:

A patched xmms playing a MOD file on FreeBSD 10

xmms playing a MOD file on FreeBSD 10 via a patched xmms-mikmod (click to enlarge)

The sad thing is, soon xmms will no longer be with us, at least on FreeBSD. It's now considered abandonware, as all development has ceased. The xmms port on FreeBSD doesn't even have a maintainer anymore, and it's [scheduled for deletion] from the ports tree together with all its plugins from [ports/audio] and [ports/multimedia]. I just wish I could speak C to some degree and work on that myself. But well…

Seems my favorite audio player is finally dying, but as with all the other old software I like and consider superior to modern alternatives, I’m gonna keep it alive for as long as possible!

You can download Jakobs’ patch that I adapted for the FreeBSD ports tree right here:

Have fun! :)

Edit: xmms once again has a maintainer for its port on FreeBSD, so we can rejoice! I have [submitted the patch to FreeBSD's bugzilla] as suggested by [SirDice] on the [FreeBSD Forums], and Naddy, the port's maintainer, integrated the patch promptly! It's been committed since [revision 375356]! So now you don't need to patch it by hand and recompile it from source code anymore – at least not on x86 machines. You can either build it from ports as-is or just pull a fully working binary version from the package repository on FreeBSD 10.x by running pkg install xmms.

I tested this on FreeBSD 10.0 and 10.1 and it works like a charm, so thanks fly out to Jakob and Naddy! Even though my involvement here was absolutely minimal, it feels awesome to do something good, and help improving a free software project, even if only marginally so. :)

Sep 23 2014
 

[1] At work I usually have to burn a ton of heavily modified Knoppix CDs for our lectures every year or so. The Knoppix distribution itself is being built by me and a colleague to get a highly secure read-only, server-controlled environment for exams and lectures. Now, usually I'm burning on both a Windows box with Ahead [Nero], and on Linux with the KDE tool [K3B] (despite being a Gnome 2 user), both GUI tools. My Windows box had two burners, my Linux box one. To speed things up and increase disc quality at the same time, the idea was to plug more burners into the machines and burn each individual disc slower, but parallelized.

I was shocked to learn that K3B can actually not burn to multiple burners at once! I thought I was just being blind, stumbling through the GUI like an idiot, but it’s actually really not there. Nero on the other hand managed to do this for what I believe is already the better part of a decade!

True disc burning stations are just too expensive, like 500€ for the smaller ones instead of the 80-120€ I had to spend on a bunch of drives, so what now? Was I building this for nothing?

Poor man's disc station

Poor man's disc station. Also a shitty photograph, my apologies for that, but I had no real camera available at work.

Well, where there is a shell, there's a way, right? Being the lazy ass that I am, I was always reluctant to actually use the backend tools of K3B on the command line myself. CD/DVD burning was something I had just always done with a GUI. But now was the time to script that stuff myself, and for simplicity's sake I just used the bash. In addition to the shell, the following core tools were used:

  • cut
  • grep
  • mount
  • sudo (For a dismount operation, might require editing /etc/sudoers)

Also, the following additional tools were used (most Linux distributions should have them, conservative RedHat derivatives like CentOS can get the stuff from [EPEL]):

  • [eject] (eject and retract drive trays)
  • [sdparm] (read SATA device information)
  • sha512sum (produce and compare high-quality checksums)
  • wodim (burn optical discs)

I know there are already scripts for this purpose, but I just wanted to do this myself. Might not be perfect, or even good, but here we go. The work(-in-progress) is divided into three scripts. The first one is just a helper script generating a set of checksum files from a master source (image file or disc) that you want to burn to multiple discs later on, I call it create-checksumfiles.sh. We need one file for each burner device node later, because sha512sum needs that to verify freshly burned discs, so that’s why this exists:

#!/bin/bash

wrongpath=1 # Path for the source/master image is set to invalid in the
            # beginning.

# Getting path to the master CD or image file from the user. This will be
# used to generate the checksum for later use by multiburn.sh
until [ $wrongpath -eq 0 ]
do
  echo -e "Please enter the file name of the master image or device"
  echo -e "(if it's a physical disc) to create our checksum. Please"
  echo -e 'provide a full path always!'
  echo -e "e.g.: /home/myuser/isos/master.iso"
  echo -e "or"
  echo -e "/dev/sr0\n"
  read -p "> " -e master

  if [ -b $master -o -f $master ] && [ -n "$master" ]; then
    wrongpath=0 # If device or file exists, all ok: Break this loop.
  else
    echo -e "\nI can find neither a file nor a device called $master.\n"
  fi
done

echo -e "\nComputing SHA512 checksum (may take a few minutes)...\n"

checksum=`sha512sum $master | cut -d' ' -f1` # Computing checksum.

# Getting device node name prefix of the users' CD/DVD burners from the
# user.
echo -e "Now please enter the device node prefix of your disc burners."
echo -e "e.g.: \"/dev/sr\" if you have burners called /dev/sr1, /dev/sr2,"
echo -e "etc."
read -p "> " -e devnode

# Getting number of burners in the system from the user.
echo -e "\nNow enter the total number of attached physical CD/DVD burners."
read -p "> " -e burners

((burners--)) # Decrementing by 1. E.g. 5 burners means 0..4, not 1..5!

echo -e "\nDone, creating the following files with the following contents"
echo -e "for later use by the multiburner for disc verification:"

# Creating the per-burner checksum files for later use by multiburn.sh.
for ((i=0;i<=$burners;i++))
do
  echo -e " * sum$i.txt: $checksum $devnode$i"
  echo "$checksum $devnode$i" > sum$i.txt
done

echo -e ""

As you can see it’s getting its information from the user interactively on the shell. It’s asking the user where the master medium to checksum is to be found, what the users burner / optical drive devices are called, and how many of them there are in the system. When done, it’ll generate a checksum file for each burner device, called e.g. sum0.txt, sum1.txt, … sum<n>.txt.

Now, to burn and verify media in a parallel fashion, I'm using an old concept I have used before. There are two more scripts: one is the controller/launcher, which will spawn an arbitrary number of instances of the second script, which I call a worker. First the controller script, here called multiburn.sh:

#!/bin/bash

if [ $# -eq 0 ]; then
  echo -e "\nPlease specify the number of rounds you want to use for burning."
  echo -e "Each round produces a set of CDs determined by the number of"
  echo -e "burners specified in $0."
  echo -e "\ne.g.: ./multiburn.sh 3\n"
  exit
fi

#@========================@
#| User-configurable part:|
#@========================@

# Path that the image resides in.
prefix="/home/knoppix/"

# Image to burn to discs.
image="knoppix-2014-09.iso"

# Number of rounds are specified via command line parameter.
copies=$1

# Number of available /dev/sr* devices to be used, starting
# with and including /dev/sr0 always.
burners=3

# Device node name used on your Linux system, like "/dev/sr" for burners
# called /dev/sr0, /dev/sr1, etc.
devnode="/dev/sr"

# Number of blocks per complete disc. You NEED to specify this properly!
# Failing to do so will break the script. You can read the block count
# from a burnt master disc by running e.g.
# `sdparm --command=capacity /dev/sr*` on it.
blocks=340000

# Burning speed in factors. For CDs, 1 = 150KiB/s, 48x = 7.2MiB/s, etc.
speed=32

#@===========================@
#|NON user-configurable part:|
#@===========================@

# Checking whether all required tools are present first:
# Checking for eject:
if [ ! `which eject 2>/dev/null` ]; then
  echo -e "\e[0;33meject not found. $0 cannot operate without eject, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sdparm:
if [ ! `which sdparm 2>/dev/null` ]; then
  echo -e "\e[0;33msdparm not found. $0 cannot operate without sdparm, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sha512sum:
if [ ! `which sha512sum 2>/dev/null` ]; then
  echo -e "\e[0;33msha512sum not found. $0 cannot operate without sha512sum, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sudo:
if [ ! `which sudo 2>/dev/null` ]; then
  echo -e "\e[0;33msudo not found. $0 cannot operate without sudo, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for wodim:
if [ ! `which wodim 2>/dev/null` ]; then
  echo -e "\e[0;33mwodim not found. $0 cannot operate without wodim, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m\n"
  exit
fi

((burners--)) # Reducing number of burners by one as we also have a burner "0".

# Initial burner ejection:
echo -e "\nEjecting trays of all burners...\n"
for ((g=0;g<=$burners;g++))
do
  eject $devnode$g &
done
wait

# Ask user for confirmation to start the burning session.
echo -e "Burner trays ejected, please insert the discs and"
echo -e "press any key to start.\n"
read -n1 -s # Wait for key press.

# Retract trays on first round. Waiting for disc will be done in
# the worker script afterwards.
for ((l=0;l<=$burners;l++))
do
  eject -t $devnode$l &
done

for ((i=1;i<=$copies;i++)) # Iterating through burning rounds.
do
  for ((h=0;h<=$burners;h++)) # Iterating through all burners per round.
  do
    echo -e "Burning to $devnode$h, round $i."
    # Burn image to burners in the background:
    ./burn-and-check-worker.sh $h $prefix$image $blocks $i $speed $devnode &
  done
  wait # Wait for background processes to terminate.
  ((j=$i+1));
  if [ $j -le $copies ]; then
    # Ask user for confirmation to start next round:
    echo -e "\nRemove discs and place new discs in the drives, then"
    echo -e "press a key for the next round #$j."
    read -n1 -s # Wait for key press.
    for ((k=0;k<=$burners;k++))
    do
      eject -t $devnode$k &
    done
    wait
  else
    # Ask user for confirmation to terminate script after last round.
    echo -e "\n$i rounds done, remove discs and press a key for termination."
    echo -e "Trays will close automatically."
    read -n1 -s # Wait for key press.
    for ((k=0;k<=$burners;k++))
    do
      eject -t $devnode$k & # Pull remaining empty trays back in.
    done
    wait
  fi
done

This one will take one parameter on the command line, which defines the number of "rounds". Since I have to burn a lot of identical discs this makes my life easier. If you have 5 burners and you ask the script to go for 5 rounds, that would mean you get 5 × 5 = 25 discs, if all goes well. It also needs to know the size of the medium in blocks for a later phase. For now you have to specify that within the script. The documentation inside shows you how to get that number, basically by checking a physical master disc with sdparm --command=capacity.
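
For instance, with a finished master disc sitting in /dev/sr0, the relevant line can be fished out like this – the output contains a "blocks:" line, and that count is what goes into the blocks variable:

# sdparm --command=capacity /dev/sr0 | grep blocks: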

Other things you need to specify are the path to the image, the image file's name, the device node name prefix, and the burning speed in factor notation. Also, of course, the number of physical burners available in the system. When run, it'll eject all trays, prompt the user to put in discs, and launch the burning & checksumming workers in parallel.

The controller script will wait for all background workers within a round to terminate, and only then prompt the user to remove and replace all discs with new blank media. If this is the last round already, it'll prompt the user to remove the last media set, and will then retract all trays by itself at the press of any key. All tray ejection and retraction is done automatically: with all your drive trays still empty and closed, you launch the script, it ejects all drive trays for you, and retracts them after a keypress signaling the script that all trays have been loaded by the user, and so on.

Let’s take a look at the worker script, which is actually doing the burning & verifying, I call this burn-and-check-worker.sh:

#!/bin/bash

burner=$1   # Burner number for this process.
image=$2    # Image file to burn.
blocks=$3   # Image size in blocks.
round=$4    # Current round (purely to show the info to the user).
speed=$5    # Burning speed.
devnode=$6  # Device node prefix (devnode+burner = burner device).
bwait=0     # Timeout variable for "blank media ready?" waiting loop.
mwait=0     # Timeout variable for automount waiting loop.
swait=0     # Timeout variable for "disc ready?" waiting loop.
m=0         # Boolean indicating automount failure.

echo -e "Now burning $image to $devnode$burner, round $round."

# The following code will check whether the drive has a blank medium
# loaded ready for writing. Otherwise, the burning might be started too
# early when using drives with slow disc access.
until [ "`sdparm --command=capacity $devnode$burner | grep blocks:\ 1`" ]
do
  ((bwait++))
  if [ $bwait -gt 30 ]; then # Abort if blank disc cannot be detected for 30 seconds.
    echo -e "\n\e[0;31mFAILURE, blank media did not become ready. Ejecting and aborting this thread..."
    echo -e "(Was trying to burn to $devnode$burner in round $round,"
    echo -e "failed to detect any blank medium in the drive.)\e[0m"
    eject $devnode$burner
    exit
  fi
  sleep 1 # Sleep 1 second before next check.
done

wodim -dao speed=$speed dev=$devnode$burner $image # Burning image.

# Notify user if burning failed.
if [[ $? != 0 ]]; then
  echo -e "\n\e[0;31mFAILURE while burning $image to $devnode$burner, burning process ran into trouble."
  echo -e "Ejecting and aborting this thread.\e[0m\n"
  eject $devnode$burner
  exit
fi

# The following code will eject and reload the disc to clear the device
# status and then wait for the drive to become ready and its disc to
# become readable (checking the discs block count as output by sdparm).
eject $devnode$burner && eject -t $devnode$burner
until [ "`sdparm --command=capacity $devnode$burner | grep $blocks`" = "blocks: $blocks" ]
do
  ((swait++))
  if [ $swait -gt 30 ]; then # Abort if disc cannot be redetected for 30 seconds.
    echo -e "\n\e[0;31mFAILURE, device failed to become ready. Aborting this thread..."
    echo -e "(Was trying to access $devnode$burner in round $round,"
    echo -e "failed to re-read medium for 30 seconds after retraction.)\e[0m\n."
    exit
  fi
  sleep 1 # Sleep 1 second before next check to avoid unnecessary load.
done

# The next part is only necessary if your system auto-mounts optical media.
# This is usually the case, but if your system doesn't do this, you need to
# comment the next block out. This will otherwise wait for the disc to
# become mounted. We need to dismount afterwards for proper checksumming.
until [ -n "`mount | grep $devnode$burner`" ]
do
  ((mwait++))
  if [ $mwait -gt 30 ]; then # Warn user that disc was not automounted.
    echo -e "\n\e[0;33mWARNING, disc did not automount as expected."
    echo -e "Attempting to carry on..."
    echo -e "(Was waiting for disc on $devnode$burner to automount in"
    echo -e "round $round for 30 seconds.)\e[0m\n."
    m=1
    break
  fi
  sleep 1 # Sleep 1 second before next check to avoid unnecessary load.
done
if [ ! $m = 1 ]; then # Only need to dismount if disc was automounted.
  sleep 1 # Give the mounter a bit of time to lose the "busy" state.
  sudo umount $devnode$burner # Dismount burner as root/superuser.
fi

# On to the checksumming.
echo -e "Now comparing checksums for $devnode$burner, round $round."
sha512sum -c sum$burner.txt # Comparing checksums.
if [[ $? != 0 ]]; then # If checksumming produced errors, notify user.
  echo -e "\n\e[0;31mFAILURE while burning $image to $devnode$burner, checksum mismatch.\e[0m\n"
fi

eject $devnode$burner # Ejecting disc after completion.

So as you can probably see, this is not very polished – the scripts aren't using configuration files yet (that would be nice to have), and it's still a bit chaotic when it comes to actual usability and smoothness. It does work quite well however, with the device/disc readiness checking as well as the anti-automount workaround having been the major challenges (now I know why K3B ejects the disc before starting its checksumming – it's simply impossible to read from the disc right after burning finishes).

When run, it looks like this (user names have been removed and paths altered for the screenshot):

Multiburner

"multiburn.sh" at work. I was lucky enough to hit a bad disc, so we can see the checksumming at work here. The disc actually became unreadable near its end. Verification is really important for reliable disc deployment.

When using a poor man's disc burning station like this, I would actually recommend putting stickers on the trays like I did. That way, you'll immediately know which disc to throw into the garbage bin.

This could still use a lot of polishing, and it's quite sad that the "big" GUI tools can't do parallel burning, but I think I can now make do. Oh, and I actually also tried Gnome's "brasero" burning tool, and that one is far too minimalistic and can also not burn to multiple devices at the same time. There may be other GUI fatsos that can do it, but I didn't want to try and get any of those installed on my older CentOS 6 Linux, so I just did it the UNIX way, even if not very elegantly. ;)

Maybe this can help someone out there, even though there might be better scripts than mine to get the job done. Otherwise, it's just documentation for myself again. :)

Edit: Updated the scripts to implement proper blank media detection, to avoid the burning process starting prematurely in rare cases. In addition to that, I added some code to detect burning errors (where the burning process itself would fail) and notify the user about it. Also applied some cosmetic changes.

Edit 2: Added tool detection to multiburn.sh, and removed redundant color codes in warning & error messages in burn-and-check-worker.sh.

[1] Article logo based on the works of Lorian and Marcin Sochacki, “DVD.png” licensed under the CC BY-SA 3.0.

Jul 112014
 

TK-IP101 logoAttention please: This article contains some pretty negative connotations about the software shipped with the TRENDnet TK-IP101 KVM-over-IP product. While I will not remove what I have written, I have to say that TRENDnet went to great lengths to support me in getting things done, including allowing me to decompile and re-release their Java software in a fixed form under the free GNU General Public License. Please [read this] to learn more. This is extremely nice, and so it shall be stated before you read anything bad about this product, so you can see things in perspective! And no, TRENDnet has not asked me to post this paragraph; those are my own words entirely.

I thought that being able to manage my server out-of-band would be a good idea. It does sound good, right? Being able to remotely control it even if the kernel has crashed, and being able to remotely access everything down to the BIOS level. A job for a KVM-over-IP switch. So I got this slightly old [TK-IP101] from TRENDnet. Turns out that wasn't the smartest move, and it's actually a 400€ piece of hardware. The box itself seems pretty ok at first, connecting to your KVM switch fabric or a single server via PS/2, USB and VGA. Plus, you can hook up a local PS/2 keyboard and mouse too. It was supposed to offer highly secure SSL PKI authentication via server and client certificates, so that only clients with the proper certificate may connect, plus a web interface. This sounded really good!

TRENDnet TK-IP101

TRENDnet TK-IP101

It all breaks down when it comes to the software though. First of all, the guide for certificate creation that is supposed to be found on the CD that comes with the box is just not there. The XCA software TRENDnet suggests one should use was missing as well. Not good. Luckily, the software is open source and can be downloaded from the [XCA SourceForge project]. It's basically a graphical OpenSSL front end. You create a PEM-encoded root certificate, a PEM-encoded server certificate and a PKCS#12 client certificate, the latter two signed by the root certificate. So much for that. Oh, and I uploaded that TRENDnet XCA guide for you in case it's missing on your CD too. It's a bit different for the newer version of XCA, but just keep in mind to create the keys beforehand and to use certificate requests instead of certificates. You then need to sign the requests with the root certificate. With that information plus the guide you should be able to manage certificate creation:

But it doesn’t end there. First I tried the Windows-based viewer utility that comes with its own certificate import tool. Import works, but the tool will not do client+server authentication. What it WILL do before terminating itself is this:

TK-IP101 IPViewer.exe bug

TK-IP101 IPViewer.exe bug

I really tried to fix this. I even ran it on Linux with Wine just to do an strace on it, looking for failing open() calls. Nothing. So I thought… why not try the second option, the Java viewer that goes by the name of KViewer.jar? Usually I don’t install Java, but why not try it out with Oracle’s Java 1.7u60, eh? Well:

So yeah. What the hell happened there? It took me days to determine the exact cause, but I’ll cut to the chase: with Java 1.6u29, Oracle introduced multiple changes to the way SSL/TLS works, partly due to the disclosure of the BEAST vulnerability. When testing, I found that the software would work fine when run with JRE 1.6u27, but not with later versions. Since Java code is pretty easily decompiled (thanks fly out to Martin A. for pointing that out) and the viewer just came as a JAR file, I thought I’d embark on the adventure of decompiling Java code using the [Java Decompiler]:

Java Decompiler decompiling KViewer.jar's Classes

Java Decompiler decompiling KViewer.jar’s Classes

This results in surprisingly readable code. That is, if you’re into Java. Which I am not. But yeah. The Java Decompiler is pretty convenient as it allows you to decompile all classes within a JAR and to extract all other resources along with the generated *.java files. And those I imported into a Java development environment I knew, Eclipse Luna.

Eclipse Luna

Eclipse Luna

Eclipse Luna (using a JDK 7u60) immediately complained about 15 or 16 errors and about 60 warnings. Mostly those were missing primitive declarations and other smaller things that even I managed to fix; I even got rid of the warnings. But the SSL bug persisted in my Java 7 build just as it had before. See the following two SSL/handshake debug traces, one working fine on JRE 1.6u27, and one broken on JRE 1.7u60:

So first I got some ideas from stuff [posted here at Oracle], and added the following two system properties in varying combinations directly in the Main class of KViewer.java:

public static void main(String[] paramArrayOfString)
{
  /* Added by the GAT from http://wp.xin.at                        */
  /* This configures TLS renegotiation (CVE-2009-3555) in          */
  /* interoperable mode: unsafe renegotiation stays disabled,      */
  /* but legacy hello messages are allowed.                        */
  java.lang.System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "false");
  java.lang.System.setProperty("sun.security.ssl.allowLegacyHelloMessages", "true");
  /* ------------------------------------------------------------- */
  KViewer localKViewer = new KViewer();
  localKViewer.mainArgs = paramArrayOfString;
  localKViewer.init();
}

This didn’t really do any good though, especially since “interoperable” mode should work anyway and is the default to begin with. But today I found [this information on an IBM site]!

It seems that Oracle fixed the BEAST vulnerability in Java 1.6u29 amongst other things. They seem to have done this by disallowing renegotiations for affected implementations of CBC (cipher-block chaining) ciphers. Now, this KVM switch can negotiate only a single cipher suite: SSL_RSA_WITH_3DES_EDE_CBC_SHA. See that “CBC” in there? Yeah, right. It got blocked, because the implementation in that aged KVM box is no longer considered safe. Since you can’t just switch it to a stream-based RC4 cipher, Java has no choice but to drop the connection! Unless… you do this:

public static void main(String[] paramArrayOfString)
{
  /* Added by the GAT from http://wp.xin.at                             */
  /* This disables CBC protection, thus re-opening the connections'     */
  /* BEAST vulnerability. No way around this due to a highly restricted */
  /* KLE ciphersuite. Without this fix, TLS connections with client     */
  /* certificates and PKI authentication will fail!                     */
  java.lang.System.setProperty("jsse.enableCBCProtection", "false");
  /* ------------------------------------------------------------------ */
  /* Added by the GAT from http://wp.xin.at                        */
  /* This configures TLS renegotiation (CVE-2009-3555) in          */
  /* interoperable mode: unsafe renegotiation stays disabled,      */
  /* but legacy hello messages are allowed.                        */
  java.lang.System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "false");
  java.lang.System.setProperty("sun.security.ssl.allowLegacyHelloMessages", "true");
  /* ------------------------------------------------------------- */
  KViewer localKViewer = new KViewer();
  localKViewer.mainArgs = paramArrayOfString;
  localKViewer.init();
}

Setting the jsse.enableCBCProtection property to false before the negotiation / handshake will make your code tolerate CBC ciphers vulnerable to BEAST attacks. Recompiling KViewer with all the code fixes including this one makes it work fine with 2-way PKI authentication using a client certificate on both Java 1.7u60 and even Java 1.8u5. I have tested this using the 64-bit x86 VMs on CentOS 6.5 Linux as well as on Windows XP Professional x64 Edition and Windows 7 Professional SP1 x64.

De-/Recompiled & "fixed" KViewer connecting to a machine much older even than its own crappy code

De-/Recompiled & “fixed” KViewer.jar connecting to a machine much older even than its own crappy code

I fear I cannot give you the modified source code, as TRENDnet would probably hunt me down, but I’ll give you at least the compiled byte code, the JAR file, so you can use it yourself. If you wanna check out the code, you can just decompile it yourself, losing only my added comments: [KViewer.jar]. (ZIPped, fixed / modified to work on Java 1.7+)

Both the modified code and the byte code “binary” JAR have been returned to TRENDnet in the context of my open support ticket there. I hope they’ll welcome it with open arms instead of suing me for decompiling their Java viewer.

In reality, even this solution is nowhere near perfect. While it does at least allow you to run modern Java runtime environments instead of highly insecure older ones, plus pretty secure PKI auth, it still doesn’t fix the man-in-the-middle issues at hand. TRENDnet should fix their KVM firmware, enable it to run the TLSv1.2 protocol with AES-256 Galois/Counter Mode (GCM) ciphers, and fix the many, many problems in their viewer clients. The TK-IP101 being an end-of-life product means that this is likely never gonna happen though.

It does say a lot when the consumer has to hack up the software of a supposedly high-security 400€ piece of networking hardware by himself just to make it work properly.

I do still hope that TRENDnet will react positively to this, as they do not offer a modern replacement product to supersede the TK-IP101.

May 292014
 

Truecrypt LogoJust recently, I was happily hacking away at the Truecrypt 7.1a source code to enhance its abilities under Linux, and everybody was eagerly awaiting the next version of the open source disk encryption software, since the developers had told me they were working on “UEFI+GPT booting”. And now BOOM. Truecrypt website gone, forum gone, all former versions’ downloads gone. Replaced by a redirection to Truecrypt’s SourceForge site, showing a very primitive page telling users to migrate to Bitlocker on Windows and Filevault on Mac OS X. And told to just “install some crypto stuff on Linux and follow the documentation”.

Seriously, what the fuck?

Just look at this shit (a snippet from the OSX part):

The Truecrypt website now

The Truecrypt website now

Further up they’re saying the same thing, warning the user that it is not secure, with the following addition: “as it may contain unfixed security issues”

There is also a new Truecrypt version 7.2, stripped of most of its functionality. It can only be used to decrypt and mount volumes anymore, so this is their “migration version”. Funny thing is, the GPG signatures and keys seem to check out. It’s truly the Truecrypt developers’ keys that were used for signing the binaries.

Trying to get you a screenshot of the old web site for comparison from the WayBackMachine, you get this:

Can't fetch http://www.truecrypt.org from the WayBackMachine

Can’t fetch http://www.truecrypt.org from the WayBackMachine. Access denied.

Now, before I give you the related links, let me sum up the current theories as to what might have occurred here:

  • http://www.truecrypt.org has been attacked and compromised, along with the SourceForge Account (denied by SourceForge administrators atm) and the signing keys.
  • A 3-letter agency has put pressure on the Truecrypt foundation, forcing them to implement a back door. The devs burn the project instead.
  • The Truecrypt developers had enough of the pretty lacking donation support from the community and just let it die.
  • The crowdfunded Truecrypt Audit project found something very nasty (seems not to be the case according to auditors).
  • Truecrypt was an NSA project all along, and maintenance has become tedious. So they tell people to migrate to NSA-compromised solutions that are less work, as they don’t have to write the code themselves (Bitlocker, Filevault). Or, maybe an unannounced NSA backdoor was discovered after all. Of course, any compromise of commercial products stands unproven.

Here are some links from around the world, including statements by cryptographers who are members of the Truecrypt audit project:

If this is legit, it’s really, really, really bad. One of the worst things that could’ve happened. Ever. I pray that this is just a hack/deface and nothing more, but it sure as hell ain’t looking good!

There is no real cross-platform alternative, Bitlocker is not available to all Windows users, and we may be left with nothing but a big question mark over our heads. I hope that more official statements will come, but given the clandestine nature of the TC developers, this might never happen…

Update: This starts to look more and more legit. So if this is truly the end, I will dearly miss the Truecrypt forum. Such a great community with good, capable people. I learned a lot there. So Dan, nkro, xtxfw, catBot/booBot, BeardedBlunder and all you many others whose nicks my failing brain can not remember: I will likely never find you guys again on the web, but thanks for all your contributions!

Update 2: Recently, a man called Steve Barnhart, who had been in contact with Truecrypt auditor Matthew Green, said that a Truecrypt developer named “David” had told him via email that whichever developers were still left had lost interest in the project. The conversation can be read [here]!

I once got a reply from a Truecrypt developer in early 2013, when asking about the state of UEFI+GPT bootloader code too…

I just dug up that email from my archive, and the address contained the full name of the sender. And yes, it was a “David”. This could very well be the nail in the coffin. Sounds as if it was truly not the NSA this time around.

Apr 282014
 

Truecrypt LogoJust recently a user from the Truecrypt forums reported a problem with Truecrypt on Linux that puzzled me at first. It took quite some digging to get to the bottom of it, which is why I am posting this here. See [this thread] (note: the original TrueCrypt project is dead by now, and its forums are offline) and my reply to it. Basically, it was all about not being able to mount more than 8 Truecrypt volumes at once.

So, Truecrypt uses the device mapper and loop device nodes to mount its encrypted volumes, after which the file systems inside said volumes are mounted. The user iapx86 from the Truecrypt forums did just that successfully, until he hit a limit with the 9th volume he attempted to mount; he had quite a few containers lying around that he wanted mounted simultaneously.

At first I thought this was some weird Truecrypt bug. Then I searched for the culprit within the Linux device mapper, when it finally hit me: it had to be the loop device driver. Actually, Truecrypt had even told me so already, see here:

TC loop error

Truecrypt reporting an error related to a loop device

After just a bit more research it became clear that the loop driver has this limit set. So now, the first step is to find out whether you even have a loop module, as on Arch Linux for instance. Other distributions like Red Hat Enterprise Linux and its descendants (CentOS among them), SuSE and so on have it compiled into the Linux kernel directly. First, see whether you have the module:

lsmod | grep loop

If your system is very modern, the loop.ko module might not have been loaded automatically. In that case please try the following as root or with sudo:

modprobe loop

In case this works (more below if it doesn’t), you may adjust the loop driver’s options at runtime, one of which is the limit imposed on the number of loop device nodes there may be at once. In my example, we’ll raise this from the default of 8 to a nice value of 128. Run this command as root or with sudo:

sysctl loop.max_loop=128

Now retry, and you’ll most likely find this working. To make the change permanent, you can either add loop.max_loop=128 as a kernel option to your boot loader’s kernel line (in the file /boot/grub/grub.conf for grub v1, for instance), or you can edit /etc/sysctl.conf, adding a line saying loop.max_loop=128. In both cases the change should be applied at boot time.

If you cannot find any loop module: in case modprobe loop only tells you something like “FATAL: Module loop not found.”, the loop driver will likely have been compiled directly into your Linux kernel. As mentioned above, this is common for several Linux distributions, and it means changing the loop driver’s options at runtime is impossible. In such a case you have to take the kernel parameter road. Just add the following to the kernel line in your boot loader’s configuration file, and mind the missing loop. in front: we’re no longer dealing with a kernel module option here, but with a kernel option:

max_loop=128

You can add this right after all other options in the kernel line, in case there are any. On a classic grub v1 setup on CentOS 6 Linux, this may look somewhat like this in /boot/grub/grub.conf just to give you an example:

title CentOS (2.6.32-431.11.2.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.11.2.el6.x86_64 ro root=/dev/sda1 LANG=en_US.UTF-8 crashkernel=128M rhgb quiet max_loop=128
        initrd /initramfs-2.6.32-431.11.2.el6.x86_64.img

You will need to scroll horizontally above to see all the options given to the kernel, max_loop=128 being the last one. After this, reboot and your machine will be able to use up to 128 loop devices. In case you need even more, you can raise the limit further, to 256 or whatever you need. There may be some upper hard limit, but I haven’t experimented enough with this. People have reported at least 256 to work, and I guess very few people would need even more on single-user workstations with Truecrypt installed.

In any case, the final result should look like this. Here we can see 10 devices mounted, which is just the beginning of what is now possible when it comes to the number of volumes you may use simultaneously:

TC without loop error

With the loop driver reconfigured, Truecrypt no longer complains about running out of loop devices

And that’s that!

Update: Ah, I totally forgot that iapx86 on the Truecrypt forums also wanted more slots available, because Truecrypt limits the mounting slots to 64. Of course, if your loop driver can give you 128 or 256 or however many devices, we wouldn’t want Truecrypt to impose yet another limit on us, right? This one is tricky however, as we need to modify the Truecrypt source code and recompile it ourselves. The code we need to modify sits in Core/CoreBase.h. Look for this part:

virtual VolumeSlotNumber GetFirstFreeSlotNumber (VolumeSlotNumber startFrom = 0) const;
virtual VolumeSlotNumber GetFirstSlotNumber () const { return 1; }
virtual VolumeSlotNumber GetLastSlotNumber () const { return 64; }

What we need to alter is the constant returned by the C++ virtual function GetLastSlotNumber(). In my case I just tried 256, so I edited the code to look like this:

virtual VolumeSlotNumber GetFirstFreeSlotNumber (VolumeSlotNumber startFrom = 0) const;
virtual VolumeSlotNumber GetFirstSlotNumber () const { return 1; }
virtual VolumeSlotNumber GetLastSlotNumber () const { return 256; }

Now I recompiled Truecrypt from source with make. Keep in mind that you need to download the PKCS #11 headers first, right into the directory where the Makefile also sits:

wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11.h
wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11f.h
wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11t.h

Also, please pay attention to the dependencies listed in the Linux section of the Readme.txt that you get after unpacking the source code, just to make sure you have everything Truecrypt needs to link against. When compiled and linked successfully by running make in the directory where the Makefile is, the resulting binary will be Main/truecrypt. You can just copy that over your existing binary, which may for instance be called /usr/local/bin/truecrypt, and you should get something like this:

More mounting slots for Truecrypt on Linux

More than 64 mounting slots for Truecrypt on Linux

Bingo! Looking good. Now I won’t say this is perfectly safe or that I’ve tested it inside and out, so do this strictly at your own risk! For me it seems to work fine. Your mileage may vary.

Update 2: Since this kinda fits in here, I decided to show you how I tested all of this. Creating enough volumes to check whether the modifications worked required an automated process; after all, more than 64 volumes needed to be created and mounted! Since Truecrypt does not allow for automated volume creation without manual user interaction, I used the Tcl-based expect system to interact with the Truecrypt dialogs. At first I would feed Truecrypt a static string for its required 320 characters of entropy data, but thanks to the help of some capable coders on [StackOverflow], I can now present one of their solutions (I’m going to pick the first one provided).

I split this up into 3 scripts, which is not overly elegant, but works for me. The first script, which I shall call tc-create-master.sh, just runs a loop; the user can specify the number of iterations in the script, and that number represents the number of volumes to be created. In the loop, a Tcl expect script is called over and over to create the actual volumes. That expect script I called tc-create-worker.sh. Master first. As you can see, I started with 128 volumes, which is twice as many as the vanilla Truecrypt binary would allow you to use, or 16 times as many loop devices as the Linux kernel would usually let you create at the same time:

#!/bin/bash
# Define number of volumes to create here:
n=128
 
# Note: bash does not expand variables inside {1..$n},
# so seq is used to build the iteration list instead.
for i in $(seq 1 $n)
do
  ./tc-create-worker.sh $i
done

And this is the worker, written in Tcl/expect and enhanced thanks to the StackOverflow users. It’s platform-independent too, unlike my crappy original that you can also see at StackOverflow and in the Truecrypt forums:

#!/usr/bin/expect
 
proc random_str {n} {
  for {set i 1} {$i <= $n} {incr i} {
    append data [format %c [expr {33+int(94*rand())}]]
  }
  return $data
}
 
# Pick the volume number off the argument list explicitly:
set num [lindex $argv 0]
set entropyfeed [random_str 320]
puts $entropyfeed
spawn truecrypt -t --create /home/user/vol$num.tc --encryption=AES --filesystem=none --hash=SHA-512 --password=test --size=33554432 --volume-type=normal
expect "\r\nEnter keyfile path" {
  send "\r"
}
expect "Please type at least 320 randomly chosen characters and then press Enter:" {
  send "$entropyfeed\r"
  exp_continue
}

This part first generates a random floating-point number between 0.0 and 1.0. Then it multiplies that by 94, yielding something in the range 0..93.999…. It then adds 33 and makes an integer out of it by truncating everything behind the decimal point, resulting in random numbers in the range of 33..126. Now how does this make any sense? We want to generate random printable characters to feed to Truecrypt as entropy data!? Well, in the 7-bit ASCII table, the range of printable characters starts at number 33 and ends at number 126. And that’s what format %c does with those numbers, converting them according to their address in the ASCII table. Cool, eh? I thought that was pretty cool.

The rest is just waiting for Truecrypt’s dialogs and feeding the proper input to them: first just an <ENTER> keypress (\r), and second the entropy data plus another <ENTER>. I chose not to let Truecrypt format the containers with an actual file system, to save some time and make the process faster. Now, mounting is considerably easier:

#!/bin/bash
# Define number of volumes to mount here:
n=128
 
# seq again, as {1..$n} would not be expanded by bash.
for i in $(seq 1 $n)
do
  truecrypt -t -k "" --protect-hidden=no --password=test --slot=$i --filesystem=none vol$i.tc
done

Unfortunately, I lost more speed here than I had gained by not formatting the volumes, because I chose the expensive and strong SHA-512 hash algorithm for my containers; RIPEMD-160 would’ve been a lot faster. As it is, mounting took several minutes for 128 volumes, but oh well, still not too excessive. Unmounting all of them at once is a lot faster and can be achieved by running the simple command truecrypt -t -d. Since no screenshot that I can make can prove that I can truly mount 128 volumes at once (unless I forge one by concatenation), it’s better to let the command line show it:

[thrawn@linuxbox ~]$ echo && mount | grep truecrypt | wc -l
 
128

Nah, forget that, I’m gonna give you that screenshot anyway; stitched or not, it’s too hilarious to skip. Oh, and don’t mind the strange window decorations, that’s because I opened Truecrypt on Linux via remote SSH+X11 on a Windows XP x64 machine with a GDI UI:

Mounting a lot more volumes than the Linux Kernel and/or Truecrypt would usually let us

Mounting a lot more volumes than the Linux Kernel and/or Truecrypt would usually let us!

Oh man… I guess the sky’s the limit. I just wonder why some of those limits (a maximum of 8 loop devices in the Linux kernel, 64 mounting slots in Truecrypt) are in place by default…

Edit: Actually, the limit is now confirmed to be 256 volumes, due to a hard-coded limit in the loop.ko driver. Well, still better than the 8 we started with, eh?