Nov 19 2016
 

Recently, after finding out that the old Intel GMA950 profits greatly from added memory bandwidth (see [here]), I wondered if the overclocking mechanism applied by the Windows tool [here] had leaked to the public after all this time. The developer of said tool refused to open source the software even after it turned into abandonware – announced support for GMA X3100 and X4500 as well as MacOS X and Linux never came to be. Also, he did not say how he managed to overclock the GMA950 in the first place.

Some hackers disassembled the code of the GMABooster however, and found out that all that’s needed is a simple PCI register modification that you could probably apply by yourself on Microsoft Windows by using H.Oda!’s [WPCREdit].

Tools for PCI register modification do exist on Linux and UNIX as well of course, so I wondered whether I could apply this knowledge on FreeBSD UNIX too. Of course, I’m a few years late to the party, because people have already solved this back in 2011! But just in case the scripts and commands disappear from the web, I wanted this to be documented here as well. First, let’s see whether we even have a GMA950 (of course I do, but still). It should be PCI device 0:0:2:0; you can use FreeBSD’s own pciconf utility or the lspci command from Linux:

# lspci | grep "00:02.0"
00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
 
# pciconf -lv pci0:0:2:0
vgapci0@pci0:0:2:0:    class=0x030000 card=0x30aa103c chip=0x27a28086 rev=0x03 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller'
    class      = display
    subclass   = VGA

Ok, to alter the GMA950’s render clock speed (we are not going to touch its 2D “desktop” speed), we have to write certain values into some PCI registers of that chip at 0xF0 and 0xF1. There are three different values regulating clock speed. Since we’re going to use setpci, you’ll need to install the sysutils/pciutils package on your machine via # pkg install pciutils. I tried to do it with FreeBSD’s native pciconf tool, but all I managed was to crash the machine a lot! Couldn’t get it solved that way (just me being too stupid I guess), so we’ll rely on a Linux tool for this. Here is my version of the script, which I call gmaboost.sh. I placed that in /usr/local/sbin/ for global execution:

#!/bin/sh
 
case "$1" in
  200) clockStep=34 ;;
  250) clockStep=31 ;;
  400) clockStep=33 ;;
  *)
    echo "Wrong or no argument specified! You need to specify a GMA clock speed!" >&2
    echo "Usage: $0 [200|250|400]" >&2
    exit 1
  ;;
esac
 
setpci -s 02.0 F0.B=00,60
setpci -s 02.0 F0.B=$clockStep,05
 
echo "Clockspeed set to "$1"MHz"

Now you can do something like this: # gmaboost.sh 200 or # gmaboost.sh 400, etc. Interestingly, FreeBSD’s i915_kms graphics driver seems to have set the 3D render clock speed of my GMA950 to 400MHz already, so there was nothing to be gained for me in terms of performance. I can still clock it down to conserve energy though. A quick performance comparison using a crappy custom-recorded ioquake3 demo shows the following results:

  • 200MHz: 30.6fps
  • 250MHz: 35.8fps
  • 400MHz: 42.6fps

Hardware was a Core 2 Duo T7600 and the GPU was making use of two DDR-II/667 4-4-4 memory modules in dual channel configuration. Resolution was 1400×1050 with quite a few changes in the Quake III configuration to achieve more performance, so your results won’t be comparable, even when running ioquake3 on identical hardware. I’d post my ~/.ioquake3/baseq3/q3config.cfg here, but in my stupidity I just managed to freaking wipe the file out. Now I have to redo all the tuning, pfh.

But in any case, this really works!

Unfortunately, it only applies to the GMA950. And I still wonder what it was that was so wrong with # pciconf -w -h pci0:0:2:0 0xF0 0060 && pciconf -w -h pci0:0:2:0 0xF0 3405 and the like. I tried a few combinations just in case my byte order was messed up or in case I really had to write single bytes instead of half-words, but either the change wouldn’t apply at all, or the machine would just lock up. Would be nice to do this with only BSD tools on actual FreeBSD UNIX, but I guess I’m just too stupid for pciconf.
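At least reading the registers back works fine with either tool, which is handy for checking what setpci actually wrote. A minimal sketch, assuming the same device address as used above (the output formats of the two tools differ a bit):

# setpci -s 02.0 F0.W
# pciconf -r pci0:0:2:0 0xF0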

Sep 07 2016
 

I’m not exactly a big fan of TeamViewer, since you’ll never know what’s going to happen with that traffic of yours, so I prefer VNC over SSH instead. A few weeks ago, however, I got TeamViewer access to a remote workstation machine for the purpose of processing A/V files. Basically, it was about video and audio transcoding on said machine.

Since the stream meta data (like the language of an audio stream) wasn’t always there, I wanted to check it by playing back the files remotely in foobar2000 or MPC-HC. TeamViewer does offer a feature to relay the audio from a remote machine to your local box, as long as the remote server has some kind of soundcard / sound chip installed. I was using TeamViewer 11 – the newest version at the time of writing – to connect from CentOS 6.8 Linux to a Windows 7 Professional machine. Playing back audio yielded nothing but silence though.

Now, TeamViewer is actually not native Linux software. Both its Linux and MacOS X versions come with a bundled Wine 1.6 distribution preconfigured to run the 32-bit TeamViewer Windows binary. It was thus logical to assume that the configuration of TeamViewer’s built-in Wine was broken. This may happen in cases where you upgrade TeamViewer from previous releases (which is what I had done, 7 -> 8 -> 9 -> 11).

There are a multitude of proposed solutions to fix this, and since none of them worked for me as-is, I’d like to add my own to the mix. The first useful hint came from [here]. You absolutely need a working system-wide Wine setup for this. I already had one that I needed for work anyway, namely Wine 1.8.6 from the [EPEL] repository, configured using [winetricks]. We’re going to take some files from that installation and essentially replace TeamViewer’s own Wine with the one distributed by EPEL.

So I had TeamViewer 11 installed in /opt/teamviewer/ and some important configuration files for it in ~/.local/share/teamviewer11/ and ~/.config/teamviewer/. First, we back up TeamViewer’s Wine files and replace them with the platform’s own (the paths may vary depending on your Linux distribution, but the file names should not):

# mv /opt/teamviewer/tv_bin/wine/bin/wine /opt/teamviewer/tv_bin/wine/bin/wine.BACKUP
# mv /opt/teamviewer/tv_bin/wine/bin/wineserver /opt/teamviewer/tv_bin/wine/bin/wineserver.BACKUP
# mv /opt/teamviewer/tv_bin/wine/bin/wine-preloader /opt/teamviewer/tv_bin/wine/bin/wine-preloader.BACKUP
# mv /opt/teamviewer/tv_bin/wine/lib/libwine.so.1.0 /opt/teamviewer/tv_bin/wine/lib/libwine.so.1.0.BACKUP
# mv /opt/teamviewer/tv_bin/wine/lib/wine/ /opt/teamviewer/tv_bin/wine/lib/wine.BACKUP/
# cp /usr/bin/wine /usr/bin/wineserver /usr/bin/wine-preloader /opt/teamviewer/tv_bin/wine/bin/
# cp /usr/lib/libwine.so.1.0 /opt/teamviewer/tv_bin/wine/lib/
# cp -r /usr/lib/wine/ /opt/teamviewer/tv_bin/wine/lib/

This will replace all the binaries and libraries, in my case shoving Wine 1.8.6 underneath TeamViewer. This isn’t all that’s needed however. We’ll also need the system registry hive of your working Wine installation (with sound). That should be stored in ~/.wine/system.reg! Let’s replace TeamViewer’s own hive with this one:

$ mv ~/.local/share/teamviewer11/system.reg ~/.local/share/teamviewer11/system.reg.BACKUP
$ cp ~/.wine/system.reg ~/.local/share/teamviewer11/
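Just as a quick sanity check (assuming the copied binary runs standalone on your system), you can ask the replaced wine for its version; it should now report the EPEL build (1.8.6 in my case) instead of the bundled 1.6:

$ /opt/teamviewer/tv_bin/wine/bin/wine --version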

Ok, and the final part is adding the proper Linux audio backend to this Wine’s configuration. That part is stored in ~/.wine/user.reg. Replacing the whole file didn’t work for me though, as TeamViewer would crash upon launch, probably missing some keys from its own user.reg. So, let’s just edit its file instead: open ~/.local/share/teamviewer11/user.reg with your favorite text editor and add the following line in a proper location (it’s sorted alphabetically):

[Software\\Wine\\Drivers\\winepulse.drv] 1473239241

The corresponding file should be found within TeamViewer’s replaced Wine distribution now by the way, in my case it’s /opt/teamviewer/tv_bin/wine/lib/wine/fakedlls/winepulse.drv.

Now, run the TeamViewer profile updater (some people say it’s required to make this work; it wasn’t for me, but it didn’t hurt either): $ /opt/teamviewer/tv_bin/TeamViewer --update-profile and then its Wine configuration: $ /opt/teamviewer/tv_bin/TeamViewer --winecfg. After that, you should be greeted with this:

TeamViewer 11 running its own winecfg

TeamViewer 11 running its own copy of winecfg.

Before the modifications, the configuration window would show “None” as the driver, without any way to change it. So no audio, whereas we have Pulseaudio now. Press “Test Sound” if you want to check whether it truly works. I haven’t tested the ALSA backend by the way. In my case, as soon as the registry was fixed, Wine just autoselected Pulseaudio, which is fine for me.

Now launch TeamViewer and check out the audio options in this submenu:

TeamViewer preferences

The TeamViewer 11 preferences can be found here.

It should look like this:

TeamViewers audio options

Make sure “Play computer sounds and music” is checked! (click to enlarge)

Now, after having connected and logged in, you may also wish to verify the conference audio settings in TeamViewer’s top menu:

TeamViewer 11 conference audio settings

TeamViewer 11 conference audio settings, make sure “Computer sound” is checked!

When you play a sound file on the remote computer, you should hear it on your local one as well. With that, I can finally test the audio files I’m supposed to use on that remote machine for their actual language (which is a rather important detail) where meta data isn’t available.

This seems to be a problem of TeamViewer’s installation/update procedure which hasn’t been addressed for several major releases now. I presume just removing all traces of TeamViewer and installing it from scratch might also do the trick, but I didn’t try it for myself.

Ah, and one more thing: If you can’t launch TeamViewer on CentOS 6.x because you’re getting the following error…

teamviewerd error

TeamViewer Daemon not running…

…forget about the solutions on the web on top of what this message is telling you. TeamViewer 11 uses a systemd-style script for launching its daemon on Linux now, and that won’t do on SysV init systems. Just become root and launch the crap manually: # /opt/teamviewer/tv_bin/teamviewerd &, then press <CTRL>+<d> and it works!

Let’s hope that daemon isn’t doing anything evil while running as root. :roll:

Jul 27 2016
 

Since I’ve been doing a bit of Anime batch video transcoding with x264 and x265 in the last few months, I thought I’d document this for myself here. My goal was to loop over an arbitrary amount of episodes and just batch-transcode them all at once. And that on three different operating systems: Windows (XP x64), Linux (CentOS 6.8 x86_64) and FreeBSD 10.3 UNIX, x86_64. Since I’ve started to split the work across multiple machines, I lost track of what was where and which machine finished what, and when.

So I thought, why not let the loop send me a small notification email upon completion? And that’s what I did. On Linux and UNIX this relies on the bash shell and the mailx command. Please note that I’m talking about [Heirloom mailx], not some other mail program by the same name! I’m mentioning this because FreeBSD ships a different default mailx that won’t work for this. That’s why I put alias mailx='/usr/local/bin/mailx' in my ~/.bash_profile on that OS after installing the right program, to make it the default for my user.

On Windows, my loops depend on my own [colorecho] command (you can replace that with cmd’s ECHO if you want) as well as the command line mailer [blat]. Note that, if you need to use SSL/TLS encryption when mailing, blat can’t do that. A suitable replacement could be [mailsend]. Please note that mailsend does not work on Windows XP, however.

In the x265 case, avconv (from the [libav] package) is required on all platforms. You can get my build for Windows [here]. If you don’t like it, the wide-spread [ffmpeg] can be a suitable drop-in replacement.

Now, when setting up blat on Windows, make sure to run blat -help first, and learn the details about blat -install. You need to run that with certain parameters to set it up for your SMTP mail server. For whatever reason, blat reads some of that data from the registry (ew…), and blat -install will set that up for you.
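For reference, the setup call is something along these lines, with the mail server and sender address being placeholders of course; depending on your blat version there may be additional parameters for port and authentication, so consult blat -help for the exact syntax:

blat -install smtp.mailhost.com myself@mailhost.com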

Typically, when I transcode, I do so on the elementary streams rather than .mkv files directly. So I’d loop through some source files and extract the needed streams. Let’s say we have “A series – episode 01.mkv” and some more, all the way up to “A series – episode 13.mkv”, then, assuming track #0 is the video stream…

On Windows:

FOR %I IN (01,02,03,04,05,06,07,08,09,10,11,12,13) DO mkvextract tracks "A series - episode %I.mkv" ^
 0:%I\video.h264

On Linux/UNIX:

for i in {01..13}; do mkvextract tracks "A series - episode $i.mkv" 0:$i/video.h264; done

mkvextract will create the non-existing subfolder for us, and a x264 transcoding loop would then look like this on Windows:

cmd /V /C "ECHO OFF & SET MACHINE=NOVASTORM& SET EPNUM=13& SET SERIES="AnimeX"& (FOR %I IN ^
 (01,02,03,04,05,06,07,08,09,10,11,12,13) DO "c:\Program Files\VFX\x264cli\x264-10b.exe" --fps ^
 24000/1001 --preset veryslow --tune animation --open-gop -b 16 --b-adapt 2 --b-pyramid normal -f ^
 -2:0 --bitrate 2500 --aq-mode 1 -p 1 --slow-firstpass --stats %I\v.stats -t 2 --no-fast-pskip ^
 --cqm flat --non-deterministic --demuxer lavf %I\video.h264 -o %I\pass1.264 & colorecho "Pass 1 ^
 done for Episode %I/"!EPNUM!" of "!SERIES!"" 10 & ECHO. & ^
 "c:\Program Files\VFX\x264cli\x264-10b.exe" --fps 24000/1001 --preset veryslow --tune animation ^
 --open-gop -b 16 --b-adapt 2 --b-pyramid normal -f -2:0 --bitrate 2500 --aq-mode 1 -p 2 --stats ^
 %I\v.stats -t 2 --no-fast-pskip --cqm flat --non-deterministic --demuxer lavf %I\video.h264 -o ^
 %I\pass2.264 & colorecho "Pass 2 done for Episode %I/"!EPNUM!" of "!SERIES!"" 10) & echo !SERIES! ^
 transcoding complete | blat - -t "myself@another.mailhost.com" -c "myself@mailhost.com" -s "x264 ^
 notification from !MACHINE!" & SET MACHINE= & SET EPNUM= & SET SERIES="

Note that I always write all the iterations out in full here. That’s because cmd can’t do loops with leading zeroes in the iterator, and those source files usually have them in their lower episode numbers. If it wasn’t 01,02, … ,12,13, but 1,2, … ,12,13 instead, you could do FOR /L %I IN (1,1,13) DO. But this isn’t possible in my case. Even if elements need alphanumeric names like here, FOR %I IN (01,02,03,special1,special2,ova1,ova2) DO, you still won’t need that syntax on Linux/UNIX, because the bash can have iterator groups like for i in {{01..13},special1,special2,ova1,ova2}; do. Makes me despise the cmd once more. ;)

Edit:

Ah, according to [this], you can actually do something like cmd /V /C "FOR /L %I IN (1,1,13) DO (SET "fI=00%I" & echo "!fI:~-2!")", holy shit. It actually works and gives you leading zeroes. :~-2 for 2 digits, :~-3 for three. Pad fI with more zeroes for more digits in this example. I mean, what is this even? Some number formatting magic? I probably don’t even wanna know… Couldn’t find any way of having several groups for the iterator however. Meh. Still don’t like it.
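And for completeness’ sake: should your bash be too old for zero-padded brace expansion (that came with bash 4.0 if I remember correctly), seq -w produces equal-width, zero-padded numbers as well, for example:

for i in $(seq -w 1 13); do echo "episode $i"; done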

So, well, it’s like this on Linux/UNIX:

(export MACHINE=BEAST EPNUM=13 SERIES='AnimeX'; for i in {01..13}; do nice -n19 x264 --fps \
24000/1001 --preset veryslow --tune animation --open-gop -b 16 --b-adapt 2 --b-pyramid normal -f \
-2:0 --bitrate 2500 --aq-mode 1 -p 1 --slow-firstpass --stats $i/v.stats -t 2 --no-fast-pskip \
--cqm flat --non-deterministic --demuxer lavf $i/video.h264 -o $i/pass1.264 && echo && echo -e \
"\e[1;31m`date +%H:%M`, pass 1 done for episode $i/$EPNUM of $SERIES\e[0m" && echo && nice -n19 \
x264 --fps 24000/1001 --preset veryslow --tune animation --open-gop -b 16 --b-adapt 2 --b-pyramid \
normal -f -2:0 --bitrate 2500 --aq-mode 1 -p 2 --stats $i/v.stats -t 2 --no-fast-pskip --cqm flat \
--non-deterministic --demuxer lavf $i/video.h264 -o $i/pass2.264 && echo && echo -e \
"\e[1;31m`date +%H:%M`, pass 2 done for episode $i/$EPNUM of $SERIES\e[0m" && echo; done && echo \
"$SERIES transcoding complete" | mailx -s "x264 notification from $MACHINE" -r \
"myself@mailhost.com" -c "myself@another.mailhost.com" -S smtp-auth="login" -S \
smtp="smtp.mailhost.com" -S smtp-auth-user="myuser" -S smtp-auth-password="mysecurepassword" \
myself@mailhost.com)

The variable $MACHINE or %MACHINE%/!MACHINE! specifies the machine’s host name. This will be noted in the email, so I know which machine just completed something. $EPNUM – or %EPNUM%/!EPNUM! on Windows – is used for periodic updates on the shell. The output would be like “Pass 1 done for Episode 07/13 of AnimeX” in green on Windows and bold red on Linux/UNIX (just change the color to your liking).

Finally, $SERIES aka %SERIES%/!SERIES! would be the series’ name. So say, the UNIX machine named “BEAST” above is done with this loop. The email would come with the subject line “x264 notification from BEAST” and would read “AnimeX transcoding complete” in plain text. That’s all.
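By the way, before committing to a transcode that runs for days, it can’t hurt to test the mail part in isolation. A quick test using the same Heirloom mailx options as in the loops above would look roughly like this:

echo "mailx test" | mailx -s "test notification" -r "myself@mailhost.com" -S smtp-auth="login" -S \
smtp="smtp.mailhost.com" -S smtp-auth-user="myuser" -S smtp-auth-password="mysecurepassword" \
myself@mailhost.com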

Please note that cmd batch on Windows is extremely creepy. Every whitespace character (especially the leading ones when splitting a command across multiple lines like this for display) needs to be exactly where it is. The same goes for double quotes where you might think they aren’t needed. They are! Also, this needs delayed variable expansion once again, which is why we see variables like !EPNUM! instead of %EPNUM% and why it’s called in a subshell by running cmd /V /C.

On Linux/UNIX we don’t need to rely on a specific API like the Win32 SetConsoleTextAttribute() call to print colors, as most terminals understand ANSI color codes.

And this is what it looks like for x265:

Windows:

cmd /V /C "ECHO OFF & SET MACHINE=NOVASTORM& SET EPNUM=13& SET SERIES="AnimeX"& (FOR %I IN ^
 (01,02,03,04,05,06,07,08,09,10,11,12,13) DO avconv -r 24000/1001 -i %I\video.h264 -f yuv4mpegpipe ^
 -pix_fmt yuv420p -r 24000/1001 - 2>NUL | "C:\Program Files\VFX\x265cli-mb\x265.exe" - --y4m -D 10 ^
 --fps 24000/1001 -p veryslow --pmode --pme --open-gop --ref 6 --bframes 16 --b-pyramid --bitrate ^
 2500 --rect --amp --aq-mode 3 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd 1.6 ^
 --psy-rdoq 5.0 --rdoq-level 1 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 ^
 --pass 1 --slow-firstpass --stats %I\v.stats --sar 1 --range full -o %I\pass1.h265 & colorecho ^
 "Pass 1 done for Episode %I/"!EPNUM!" of "!SERIES!"" 10 & ECHO. & avconv -r 24000/1001 -i ^
 %I\video.h264 -f yuv4mpegpipe -pix_fmt yuv420p -r 24000/1001 - 2>NUL | ^
 "C:\Program Files\VFX\x265cli-mb\x265.exe" - --y4m -D 10 --fps 24000/1001 -p veryslow --pmode ^
 --pme --open-gop --ref 6 --bframes 16 --b-pyramid --bitrate 2500 --rect --amp --aq-mode 3 ^
 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd 1.6 --psy-rdoq 5.0 --rdoq-level 1 ^
 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 --pass 2 --stats %I\v.stats --sar ^
 1 --range full -o %I\pass2.h265 & colorecho "Pass 2 done for Episode %I/"!EPNUM!" of "!SERIES!"" ^
 10) & echo !SERIES! transcoding complete | blat - -t "myself@another.mailhost.com" -c ^
 "myself@mailhost.com" -s "x265 notification from !MACHINE!" & SET MACHINE= & SET EPNUM= & SET ^
 SERIES="

Linux/UNIX:

(export MACHINE=BEAST EPNUM=13 SERIES='AnimeX'; for i in {01..13}; do avconv -r 24000/1001 -i \
$i/video.h264 -f yuv4mpegpipe -pix_fmt yuv420p -r 24000/1001 - 2>/dev/null | nice -19 x265 - --y4m \
-D 10 --fps 24000/1001 -p veryslow --open-gop --ref 6 --bframes 16 --b-pyramid --bitrate 2500 \
--rect --amp --aq-mode 3 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd 1.6 --psy-rdoq \
5.0 --rdoq-level 1 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 --pass 1 \
--slow-firstpass --stats $i/v.stats --sar 1 --range full -o $i/pass1.h265 && echo && echo -e \
"\e[1;31m`date +%H:%M`, pass 1 done for episode $i/$EPNUM of $SERIES\e[0m" && echo && avconv -r \
24000/1001 -i $i/video.h264 -f yuv4mpegpipe -pix_fmt yuv420p -r 24000/1001 - 2>/dev/null | nice \
-19 x265 - --y4m -D 10 --fps 24000/1001 -p veryslow --open-gop --ref 6 --bframes 16 --b-pyramid \
--bitrate 2500 --rect --amp --aq-mode 3 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd \
1.6 --psy-rdoq 5.0 --rdoq-level 1 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 \
--pass 2 --stats $i/v.stats --sar 1 --range full -o $i/pass2.h265 && echo && echo -e \
"\e[1;31m`date +%H:%M`, pass 2 done for episode $i/$EPNUM of $SERIES\e[0m" && echo; done && echo \
"$SERIES transcoding complete" | mailx -s "x265 notification from $MACHINE" -r \
"myself@mailhost.com" -c "myself@another.mailhost.com" -S smtp-auth="login" -S \
smtp="smtp.mailhost.com" -S smtp-auth-user="myuser" -S smtp-auth-password="mysecurepassword" \
myself@mailhost.com)

And that’s it. The loops for audio transcoding are simpler, as that part is so fast, it doesn’t need email notifications. Runs for minutes rather than days. When all is done, I’d usually fire up the MKVToolnix GUI, and prepare a mux for the first episode. There is a nice “copy command line to clipboard” function there when you click on “Muxing” after everything is set up. With that I can build another loop that muxes everything to final .mkv files. On Windows that part is more complicated if you want Unicode support, so I needed to create input files by using a generator I wrote in Perl for that, but that’s for another day… :)

Oh, and if you wanna ssh into your Linux or UNIX boxes from afar to check on your transcoders, consider launching them in a [GNU screen] session. It’s immensely useful! Too bad it won’t work on the Windows cmd. :(
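The basics are quickly learned: start a named session and launch your loop inside it, detach with <CTRL>+<a>,<d>, and reattach later from any other SSH login, e.g.:

$ screen -S transcode
$ screen -r transcode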

Jan 27 2016
 

Just yesterday I showed you how to modify and compile the [HakuNeko] Manga Ripper so it can work on FreeBSD 10 UNIX, [see here] for some background info. I also mentioned that I couldn’t get it to build on CentOS 6 Linux, something I chose to investigate today. After flying blind for a while trying to fix include paths and other things, I finally got to the core of the problem, which lies in HakuNeko’s src/CurlRequest.cpp, where the code uses CURLOPT_ACCEPT_ENCODING from cURL’s typecheck-gcc.h. This is critical stuff, as [cURL] is the software library needed for actually connecting to websites and downloading the image files of Manga/Comics.

It turns out that CURLOPT_ACCEPT_ENCODING wasn’t always called that. With cURL version 7.21.6, it was renamed to that from the former CURLOPT_ENCODING as you can read [here]. And well, CentOS 6 ships with cURL 7.19.7…

When running $ ./configure && make from the unpacked HakuNeko source tree without fixing anything, you’ll run into this problem:

g++ -c -Wall -O2 -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread -o obj/CurlRequest.cpp.o src/CurlRequest.cpp
src/CurlRequest.cpp: In member function ‘void CurlRequest::SetCompression(wxString)’:
src/CurlRequest.cpp:122: error: ‘CURLOPT_ACCEPT_ENCODING’ was not declared in this scope
make: *** [obj/CurlRequest.cpp.o] Error 1

So you’ll have to fix the call in src/CurlRequest.cpp! Look for this part:

void CurlRequest::SetCompression(wxString Compression)
{
    if(curl)
    {
        curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)Compression.mb_str(wxConvUTF8));
        //curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)memcpy(new wxByte[Compression.Len()], Compression.mb_str(wxConvUTF8).data(), Compression.Len()));
    }
}

Change CURLOPT_ACCEPT_ENCODING to CURLOPT_ENCODING. The rest can stay the same, as the name is all that has really changed here. It’s functionally identical as far as I can tell. So it should look like this:

void CurlRequest::SetCompression(wxString Compression)
{
    if(curl)
    {
        curl_easy_setopt(curl, CURLOPT_ENCODING, (const char*)Compression.mb_str(wxConvUTF8));
        //curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)memcpy(new wxByte[Compression.Len()], Compression.mb_str(wxConvUTF8).data(), Compression.Len()));
    }
}
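If you have to patch more than one source tree, the same one-line change can also be done non-interactively with sed; note that this will also touch the commented-out line right below, which doesn’t matter functionally:

$ sed -i 's/CURLOPT_ACCEPT_ENCODING/CURLOPT_ENCODING/g' src/CurlRequest.cpp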

Save the file, go back to the main source tree and you can do:

  • $ ./configure && make
  • # make install

And done! Works like a charm:

HakuNeko fetching Haiyore! Nyaruko-san on CentOS 6.7 Linux

HakuNeko fetching Haiyore! Nyaruko-san on CentOS 6.7 Linux!

And now, for your convenience I fixed up the Makefile and rpm/SPECS/specfile.spec a little bit to build proper rpm packages as well. I can provide them for CentOS 6.x Linux in both 32-bit as well as 64-bit x86 flavors:

You need to unzip these first, because I was too lazy to allow the rpm file type in my blogging software.

The naked rpms have also been submitted to the HakuNeko developers as a comment to their [More Linux Packages] support ticket, which you’re supposed to use for that purpose, so you can get them from there as well. Not sure if the developers will add the files to the project’s official downloads.

This build of HakuNeko has been linked against the wxWidgets 2.8.12 GUI libraries, which come from the official CentOS 6.7 package repositories. So you’ll need to install wxGTK to be able to use the white kitty:

  • # yum install wxGTK

After that you can install the .rpm package of your choice. For a 64-bit system for instance, enter the folder where the hakuneko_1.3.12_el6_x86_64.rpm file is, run # yum localinstall ./hakuneko_1.3.12_el6_x86_64.rpm and confirm the installation.

Now it’s time to have fun using HakoNeko on your Enterprise Linux system! Totally what RedHat intended you to use it for! ;) :roll:

Jun 07 2015
 

So after the release of that crazy crowdfunded (and free of charge) movie [Kung Fury], there is also a game! Now that was fast. Made by the Swedish game developers of [Hello There], the game is basically a clone of [One Finger Death Punch], as many gamers have already pointed out. Not that anybody seems to mind that – me included. It’s a superficially very simple 2-button street fighting game, where one button means “punch/kick/whatever to the left” and the other “punch/kick/whatever to the right”. Don’t let the seeming simplicity fool you though. There is more skill involved than you might think…

So let’s have a look at the intro of the game, which strongly resembles an 80s arcade machine style:

Kung Fury: Street Rage; That's our Hero!

Kung Fury: Street Rage; That’s our Hero!

So with the use of some Direct3D 9.0c shaders, the game simulates the look of an old CRT monitor, just like the arcade machines of old had! At the press of a button or after waiting for a bit we’re greeted with this:

Kung Fury: Street Rage - Insert Coin

Insert Coin!

Another button press and we can hear our virtual player throwing a coin into the machine, which gives us three lives (after being hit three times, we’d go down for good). And then, whenever any enemy approaches us from either side, you just press left or right to punch, kick, shoot or electrify the guy. It’s ok, they’re all Nazis anyway. We do this with our pals Barbarianna, Triceracop and Hackerman standing around in the background – all three as seen in the movie of course, just like all the enemies we’re beating up:

Kung Fury: Street Rage; Beat 'em up!

There are splatter effects even!

That screenshot is from the very beginning of the game, where we can only see our lowest-end Nazi foes. There are some Swedish Aryans too, which can take two hits, then that clone chick with Kung Fury essence infused into her, which needs a more advanced left/right combo to put down, and more. Like the kicking machine and the mysterious Ninja, all as seen in the movie. As long as you don’t miss too much (you have limited range) or get hit, you’ll build up a score bonus too. Not sure if there are more enemies than that, I haven’t really gotten that far yet.

Actually, I accidentally reached a new High Score while taking those screenshots, leaving both chicks behind me, pretty neat:

Kung Fury: Street Rage; New High Score!

A new High Score!

Now Thor might still be doable, but Hackerman will be one tough nut to crack. I don’t think I’ll ever make it to the top though, the game is pretty damn hard. As it progresses, it starts speeding up more and more, and it’ll also throw more of the harder enemies at you, which will require quick reaction and sharp perception to get the combos right. “Just mash two buttons” may sound easy as said, but don’t underestimate it! Like with “One Finger Death Punch”, only the most skillful players will have a chance to reach the top!

When you’ve got enough, just press <Esc> (on the PC), and you’re asked whether you really want to quit. In an interesting way:

Kung Fury: Street Rage; Quitting the game.

Quitting/bluescreening the game.

If you confirm to quit the game, you’ll get another shader-based CRT effect thrown in:

Kung Fury: Street Rage; Quitting the game

It’s shutting down…

I haven’t really managed to play this for more than 5 minutes in a row, which sounds like very little, but this game is extremely fast-paced, so I can’t take much more in one go. ;) It’s quite a lot of fun though, and while not as sophisticated as “One Finger Death Punch”, it’s awesome in its own right, given the Kung Fury cheesiness, the CRT look and the chiptune-like soundtrack of the game.

The game is available in both paid and partially also free editions on several platforms now, and while I’ve read that the free versions do have ads, the paid ones definitely don’t, as I can vouch for on the PC platform at least. So here are the links:

  • [PC version] @ Steam for 1.99€ / $2.50. Supports >=Windows XP, >=MacOS X 10.6 and SteamOS plus regular Linux on x86_32/x86_64.
  • [PC+ARM version] @ Windows Store for $2.29. Supports Windows >=8.0 on x86_32/x86_64 and ARM architectures.
  • [Android version] @ Google Play for free or for 2.46€ with ads removed. Also available as a [separate APK file]. Supports Android >=2.0.1.
  • [iOS version] @ iTunes for free or for $1.99 with ads removed. Supports iOS >=6.0 on the iPhone 5/6, iPad and iPod Touch.

Now if you’ll excuse me, I’ll take another shot at number 3! :)

Sep 23 2014
 

[1] At work I usually have to burn a ton of heavily modified Knoppix CDs for our lectures every year or so. The Knoppix distribution itself is being built by me and a colleague to get a highly secure read-only, server-controlled environment for exams and lectures. Now, usually I’m burning on both a Windows box with Ahead [Nero], and on Linux with the KDE tool [K3B] (despite being a Gnome 2 user), both GUI tools. My Windows box had 2 burners, my Linux box one. To speed things up and increase disc quality at the same time the idea was to plug more burners into the machines and burn each individual disc slower, but parallelized.

I was shocked to learn that K3B can actually not burn to multiple burners at once! I thought I was just being blind, stumbling through the GUI like an idiot, but it’s actually really not there. Nero on the other hand managed to do this for what I believe is already the better part of a decade!

True disc burning stations are just too expensive, like 500€ for the smaller ones instead of the 80-120€ I had to spend on a bunch of drives, so what now? Was I building this for nothing?

Poor man’s disc station
 
Poor man’s disc station. Also a shitty photograph, my apologies for that, but I had no real camera available at work.

Well, where there is a shell, there’s a way, right? Being the lazy ass that I am, I was always reluctant to actually use the backend tools of K3B on the command line myself. CD/DVD burning was something I had just always done on a GUI. But now was the time to script that stuff myself, and for simplicity’s sake I just used the bash. In addition to the shell, the following core tools were used:

  • cut
  • grep
  • mount
  • sudo (For a dismount operation, might require editing /etc/sudoers)

Also, the following additional tools were used (most Linux distributions should have them, conservative RedHat derivatives like CentOS can get the stuff from [EPEL]):

  • [eject] (eject and retract drive trays)
  • [sdparm] (read SATA device information)
  • sha512sum (produce and compare high-quality checksums)
  • wodim (burn optical discs)

I know there are already scripts for this purpose, but I just wanted to do this myself. Might not be perfect, or even good, but here we go. The work(-in-progress) is divided into three scripts. The first one is just a helper script generating a set of checksum files from a master source (image file or disc) that you want to burn to multiple discs later on, I call it create-checksumfiles.sh. We need one file for each burner device node later, because sha512sum needs that to verify freshly burned discs, so that’s why this exists:

#!/bin/bash
 
wrongpath=1 # Path for the source/master image is set to invalid in the
            # beginning.
 
# Getting path to the master CD or image file from the user. This will be
# used to generate the checksum for later use by multiburn.sh
until [ $wrongpath -eq 0 ]
do
  echo -e "Please enter the file name of the master image or device"
  echo -e "(if it's a physical disc) to create our checksum. Please"
  echo -e 'provide a full path always!'
  echo -e "e.g.: /home/myuser/isos/master.iso"
  echo -e "or"
  echo -e "/dev/sr0\n"
  read -p "> " -e master
 
  if [ -b $master -o -f $master ] && [ -n "$master" ]; then
    wrongpath=0 # If device or file exists, all ok: Break this loop.
  else
    echo -e "\nI can find neither a file nor a device called $master.\n"
  fi
done
 
echo -e "\nComputing SHA512 checksum (may take a few minutes)...\n"
 
checksum=`sha512sum $master | cut -d' ' -f1` # Computing checksum.
 
# Getting device node name prefix of the users' CD/DVD burners from the
# user.
echo -e "Now please enter the device node prefix of your disc burners."
echo -e "e.g.: \"/dev/sr\" if you have burners called /dev/sr1, /dev/sr2,"
echo -e "etc."
read -p "> " -e devnode
 
# Getting number of burners in the system from the user.
echo -e "\nNow enter the total number of attached physical CD/DVD burners."
read -p "> " -e burners
 
((burners--)) # Decrementing by 1. E.g. 5 burners means 0..4, not 1..5!
 
echo -e "\nDone, creating the following files with the following contents"
echo -e "for later use by the multiburner for disc verification:"
 
# Creating the per-burner checksum files for later use by multiburn.sh.
for ((i=0;i<=$burners;i++))
do
  echo -e " * sum$i.txt: $checksum $devnode$i"
  echo "$checksum $devnode$i" > sum$i.txt
done
 
echo -e ""
As you can see it’s getting its information from the user interactively on the shell. It’s asking the user where the master medium to checksum is to be found, what the user’s burner/optical drive devices are called, and how many of them there are in the system. When done, it’ll generate a checksum file for each burner device, called e.g. sum0.txt, sum1.txt, … sum<n>.txt.
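Each of those files holds just a single line, namely the hash followed by the device that sha512sum -c is supposed to verify later on. With the hash shortened to a placeholder, sum0.txt would look something like this:

<sha512-hash-of-the-master> /dev/sr0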

Now, to burn and verify media in a parallel fashion, I’m using an old concept I have used before. There are two more scripts: one is the controller/launcher, which will then spawn an arbitrary number of instances of the second script, which I call a worker. First the controller script, here called multiburn.sh:

#!/bin/bash
 
if [ $# -eq 0 ]; then
  echo -e "\nPlease specify the number of rounds you want to use for burning."
  echo -e "Each round produces a set of CDs determined by the number of"
  echo -e "burners specified in $0."
  echo -e "\ne.g.: ./multiburn.sh 3\n"
  exit
fi
 
#@========================@
#| User-configurable part:|
#@========================@
 
# Path that the image resides in.
prefix="/home/knoppix/"
 
# Image to burn to discs.
image="knoppix-2014-09.iso"
 
# Number of rounds are specified via command line parameter.
copies=$1
 
# Number of available /dev/sr* devices to be used, starting
# with and including /dev/sr0 always.
burners=3
 
# Device node name used on your Linux system, like "/dev/sr" for burners
# called /dev/sr0, /dev/sr1, etc.
devnode="/dev/sr"
 
# Number of blocks per complete disc. You NEED to specify this properly!
# Failing to do so will break the script. You can read the block count
# from a burnt master disc by running e.g.
# `sdparm --command=capacity /dev/sr*` on it.
blocks=340000
 
# Burning speed in factors. For CDs, 1 = 150KiB/s, 48x = 7.2MiB/s, etc.
speed=32
 
#@===========================@
#|NON user-configurable part:|
#@===========================@
 
# Checking whether all required tools are present first:
# Checking for eject:
if [ ! `which eject 2>/dev/null` ]; then
  echo -e "\e[0;33meject not found. $0 cannot operate without eject, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sdparm:
if [ ! `which sdparm 2>/dev/null` ]; then
  echo -e "\e[0;33msdparm not found. $0 cannot operate without sdparm, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sha512sum:
if [ ! `which sha512sum 2>/dev/null` ]; then
  echo -e "\e[0;33msha512sum not found. $0 cannot operate without sha512sum, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sudo:
if [ ! `which sudo 2>/dev/null` ]; then
  echo -e "\e[0;33msudo not found. $0 cannot operate without sudo, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for wodim:
if [ ! `which wodim 2>/dev/null` ]; then
  echo -e "\e[0;33mwodim not found. $0 cannot operate without wodim, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m\n"
  exit
fi
 
((burners--)) # Reducing number of burners by one as we also have a burner "0".
 
# Initial burner ejection:
echo -e "\nEjecting trays of all burners...\n"
for ((g=0;g<=$burners;g++))
do
  eject $devnode$g &
done
wait
 
# Ask user for confirmation to start the burning session.
echo -e "Burner trays ejected, please insert the discs and"
echo -e "press any key to start.\n"
read -n1 -s # Wait for key press.
 
# Retract trays on first round. Waiting for disc will be done in
# the worker script afterwards.
for ((l=0;l<=$burners;l++))
do
  eject -t $devnode$l &
done
 
for ((i=1;i<=$copies;i++)) # Iterating through burning rounds.
do
  for ((h=0;h<=$burners;h++)) # Iterating through all burners per round.
  do
    echo -e "Burning to $devnode$h, round $i."
    # Burn image to burners in the background:
    ./burn-and-check-worker.sh $h $prefix$image $blocks $i $speed $devnode &
  done
  wait # Wait for background processes to terminate.
  ((j=$i+1));
  if [ $j -le $copies ]; then
    # Ask user for confirmation to start next round:
    echo -e "\nRemove discs and place new discs in the drives, then"
    echo -e "press a key for the next round #$j."
    read -n1 -s # Wait for key press.
    for ((k=0;k<=$burners;k++))
    do
      eject -t $devnode$k &
    done
    wait
  else
    # Ask user for confirmation to terminate script after last round.
    echo -e "\n$i rounds done, remove discs and press a key for termination."
    echo -e "Trays will close automatically."
    read -n1 -s # Wait for key press.
    for ((k=0;k<=$burners;k++))
    do
      eject -t $devnode$k & # Pull remaining empty trays back in.
    done
    wait
  fi
done

This one will take one parameter on the command line which will define the number of “rounds”. Since I have to burn a lot of identical discs this makes my life easier. If you have 5 burners, and you ask the script to go for 5 rounds, that would mean you get 5 × 5 = 25 discs, if all goes well. It also needs to know the size of the medium in blocks for a later phase. For now you have to specify that within the script. The documentation inside shows you how to get that number, basically by checking a physical master disc with sdparm --command=capacity.
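To give you an idea, reading that block count off a finished master disc boils down to something like this; the number after “blocks:” is what goes into the blocks variable (340000 in my example configuration):

# sdparm --command=capacity /dev/sr0 | grep blocks:
blocks: 340000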

Other things you need to specify are the path to the image, the image file’s name, the device node name prefix, and the burning speed in factor notation. Also, of course, the number of physical burners available in the system. When run, it’ll eject all trays, prompt the user to put in discs, and launch the burning & checksumming workers in parallel.

The controller script will wait for all background workers within a round to terminate, and only then prompt the user to remove and replace all discs with new blank media. If this is the last round already, it’ll prompt the user to remove the last media set, and will then retract all trays by itself at the press of any key. All tray ejection and retraction is done automatically: with all your drive trays still empty and closed, you launch the script, it ejects all drive trays for you, and retracts them after a keypress that signals the script all trays have been loaded by the user, and so on.

Let’s take a look at the worker script, which is actually doing the burning & verifying, I call this burn-and-check-worker.sh:

#!/bin/bash
 
burner=$1   # Burner number for this process.
image=$2    # Image file to burn.
blocks=$3   # Image size in blocks.
round=$4    # Current round (purely to show the info to the user).
speed=$5    # Burning speed.
devnode=$6  # Device node prefix (devnode+burner = burner device).
bwait=0     # Timeout variable for "blank media ready?" waiting loop.
mwait=0     # Timeout variable for automount waiting loop.
swait=0     # Timeout variable for "disc ready?" waiting loop.
m=0         # Boolean indicating automount failure.
 
echo -e "Now burning $image to $devnode$burner, round $round."
 
# The following code will check whether the drive has a blank medium
# loaded ready for writing. Otherwise, the burning might be started too
# early when using drives with slow disc access.
until [ "`sdparm --command=capacity $devnode$burner | grep blocks:\ 1`" ]
do
  ((bwait++))
  if [ $bwait -gt 30 ]; then # Abort if blank disc cannot be detected for 30 seconds.
    echo -e "\n\e[0;31mFAILURE, blank media did not become ready. Ejecting and aborting this thread..."
    echo -e "(Was trying to burn to $devnode$burner in round $round,"
    echo -e "failed to detect any blank medium in the drive.)\e[0m"
    eject $devnode$burner
    exit
  fi
  sleep 1 # Sleep 1 second before next check.
done
 
wodim -dao speed=$speed dev=$devnode$burner $image # Burning image.
 
# Notify user if burning failed.
if [[ $? != 0 ]]; then
  echo -e "\n\e[0;31mFAILURE while burning $image to $devnode$burner, burning process ran into trouble."
  echo -e "Ejecting and aborting this thread.\e[0m\n"
  eject $devnode$burner
  exit
fi
 
# The following code will eject and reload the disc to clear the device
# status and then wait for the drive to become ready and its disc to
# become readable (checking the discs block count as output by sdparm).
eject $devnode$burner && eject -t $devnode$burner
until [ "`sdparm --command=capacity $devnode$burner | grep $blocks`" = "blocks: $blocks" ]
do
  ((swait++))
  if [ $swait -gt 30 ]; then # Abort if disc cannot be redetected for 30 seconds.
    echo -e "\n\e[0;31mFAILURE, device failed to become ready. Aborting this thread..."
    echo -e "(Was trying to access $devnode$burner in round $round,"
    echo -e "failed to re-read medium for 30 seconds after retraction.)\e[0m\n."
    exit
  fi
  sleep 1 # Sleep 1 second before next check to avoid unnecessary load.
done
 
# The next part is only necessary if your system auto-mounts optical media.
# This is usually the case, but if your system doesn't do this, you need to
# comment the next block out. This will otherwise wait for the disc to
# become mounted. We need to dismount afterwards for proper checksumming.
until [ -n "`mount | grep $devnode$burner`" ]
do
  ((mwait++))
  if [ $mwait -gt 30 ]; then # Warn user that disc was not automounted.
    echo -e "\n\e[0;33mWARNING, disc did not automount as expected."
    echo -e "Attempting to carry on..."
    echo -e "(Was waiting for disc on $devnode$burner to automount in"
    echo -e "round $round for 30 seconds.)\e[0m\n."
    m=1
    break
  fi
  sleep 1 # Sleep 1 second before next check to avoid unnecessary load.
done
if [ ! $m = 1 ]; then # Only need to dismount if disc was automounted.
  sleep 1 # Give the mounter a bit of time to lose the "busy" state.
  sudo umount $devnode$burner # Dismount burner as root/superuser.
fi
 
# On to the checksumming.
echo -e "Now comparing checksums for $devnode$burner, round $round."
sha512sum -c sum$burner.txt # Comparing checksums.
if [[ $? != 0 ]]; then # If checksumming produced errors, notify user.
  echo -e "\n\e[0;31mFAILURE while burning $image to $devnode$burner, checksum mismatch.\e[0m\n"
fi
 
eject $devnode$burner # Ejecting disc after completion.

So as you can probably see, this is not very polished, the scripts aren’t using configuration files yet (that would be nice to have), and it’s still a bit chaotic when it comes to actual usability and smoothness. It does work quite well however, with the device/disc readiness checking as well as the anti-automount workaround having been the major challenges (now I know why K3B ejects the disc before starting its checksumming: it’s simply impossible to read from the disc after burning finishes without doing that).

When run, it looks like this (user names have been removed and paths altered for the screenshot):

Multiburner

“multiburn.sh” at work. I was lucky enough to hit a bad disc, so we can see the checksumming at work here. The disc actually became unreadable near its end. Verification is really important for reliable disc deployment.

When using a poor man’s disc burning station like this, I would actually recommend putting stickers on the trays like I did. That way, you’ll immediately know which disc to throw into the garbage bin.

This could still use a lot of polishing, and it’s quite sad that the “big” GUI tools can’t do parallel burning, but I think I can now make do. Oh, and I actually also tried Gnome’s “brasero” burning tool, and that one is far too minimalistic and can also not burn to multiple devices at the same time. There may be other GUI fatsos that can do it, but I didn’t want to try and get any of those installed on my older CentOS 6 Linux, so I just did it the UNIX way, even if not very elegantly. ;)

Maybe this can help someone out there, even though I think there might be better scripts than mine to get it done, but still. Otherwise, it’s just documentation for myself again. :)

Edit: Updated the scripts to implement a proper blank media detection to avoid burning starting prematurely in rare cases. In addition to that, I added some code to detect burning errors (where the burning process itself would fail) and notify the user about it. Also applied some cosmetic changes.

Edit 2: Added tool detection to multiburn.sh, and removed redundant color codes in warning & error messages in burn-and-check-worker.sh.

[1] Article logo based on the works of Lorian and Marcin Sochacki, “DVD.png” licensed under the CC BY-SA 3.0.

Apr 28 2014
 

Just recently a user from the Truecrypt forums reported a problem with Truecrypt on Linux that puzzled me at first. It took quite some digging to get to the bottom of it, which is why I am posting this here. See [this thread] (note: The original TrueCrypt project is dead by now, and its forums are offline) and my reply to it. Basically, it was all about not being able to mount more than 8 Truecrypt volumes at once.

So, Truecrypt is using the device mapper and loop device nodes to mount its encrypted volumes, after which the file systems inside said volumes are being mounted. That user called iapx86 in the Truecrypt forums did just that successfully, until he hit a limit with the 9th volume he was attempting to mount; he had quite a few containers lying around that he wanted mounted simultaneously.

At first I thought this was some weird Truecrypt bug. Then I searched for the culprit within the Linux device mapper, when it finally hit me. It had to be the loop device driver. Actually, Truecrypt even told me so already, see here:

TC loop error

Truecrypt reporting an error related to a loop device

After just a bit more research it became clear that the loop driver itself imposes this limit. So now, the first part is to find out whether you even have a loop module, like on ArchLinux for instance. Other distributions like RedHat Enterprise Linux and its descendants like CentOS, SuSE and so on have it compiled into the Linux kernel directly. First, see whether you have the module:

lsmod | grep loop

If your system is very modern, the loop.ko module might not have been loaded automatically yet. In that case please try the following as root or with sudo:

modprobe loop

In case this works (more below if it doesn’t), you may adjust the loop module’s options at runtime, one of which is the limit imposed on the number of loop device nodes there may be at once. In my example, we’ll raise this from below 10 to a nice value of 128; run this command as root or with sudo:

sysctl loop.max_loop=128

Now retry, and you’ll most likely find this working. To make the change permanent, you can either add loop.max_loop=128 as a kernel option to your boot loader’s kernel line, in the file /boot/grub/grub.conf for grub v1 for instance, or you can also edit /etc/sysctl.conf, adding a line saying loop.max_loop=128. In both cases the change should be applied at boot time.
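To double-check what limit the driver is currently running with, the module parameter should also be visible through sysfs, at least when loop is built as a module:

cat /sys/module/loop/parameters/max_loop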

If you cannot find any loop module: In case modprobe loop only tells you something like “FATAL: Module loop not found.”, the loop driver will likely have been compiled directly into your Linux kernel. As mentioned above, this is common for several Linux distributions. In that case, changing the driver’s options at runtime becomes impossible, and you have to take the kernel parameter road instead. Just add the following to the kernel line in your boot loader’s configuration file, and mind the missing loop. in front: this is because we’re now no longer dealing with a kernel module option, but with a kernel option:

max_loop=128

You can add this right after all other options in the kernel line, in case there are any. On a classic grub v1 setup on CentOS 6 Linux, this may look somewhat like this in /boot/grub/grub.conf just to give you an example:

title CentOS (2.6.32-431.11.2.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.11.2.el6.x86_64 ro root=/dev/sda1 LANG=en_US.UTF-8 crashkernel=128M rhgb quiet max_loop=128
        initrd /initramfs-2.6.32-431.11.2.el6.x86_64.img

You will need to scroll horizontally above to see all the options given to the kernel, max_loop=128 being the last one. After this, reboot and your machine will be able to use up to 128 loop devices. In case you need even more than that, you can also raise that limit even further, like to 256 or whatever you need. There may be some upper hard limit, but I haven’t experimented enough with this. People have reported at least 256 to work and I guess very few people would need even more on single-user workstations with Truecrypt installed.

In any case, the final result should look like this, here we can see 10 devices mounted, which is just the beginning of what is now possible when it comes to the amount of volumes you may use simultaneously:

TC without loop error

With the loop driver reconfigured, Truecrypt no longer complains about running out of loop devices

And that’s that!

Update: Ah, I totally forgot that iapx86 on the Truecrypt forums also wanted to have more slots available, because Truecrypt limits the slots for mounting to 64. Of course, if your loop driver can give you 128 or 256 or whatever devices, we wouldn’t wanna have Truecrypt impose yet another limit on us, right? This one is tricky however, as we will need to modify the Truecrypt source code and recompile it ourselves. The code we need to modify sits in Core/CoreBase.h. Look for this part:

virtual VolumeSlotNumber GetFirstFreeSlotNumber (VolumeSlotNumber startFrom = 0) const;
virtual VolumeSlotNumber GetFirstSlotNumber () const { return 1; }
virtual VolumeSlotNumber GetLastSlotNumber () const { return 64; }

What we need to alter is the constant returned by the C++ virtual function GetLastSlotNumber(). In my case I just tried 256, so I edited the code to look like this:

virtual VolumeSlotNumber GetFirstFreeSlotNumber (VolumeSlotNumber startFrom = 0) const;
virtual VolumeSlotNumber GetFirstSlotNumber () const { return 1; }
virtual VolumeSlotNumber GetLastSlotNumber () const { return 256; }

Now I recompiled Truecrypt from source with make. Keep in mind that you need to download the PKCS #11 headers first, right into the directory where the Makefile also sits:

wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11.h
wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11f.h
wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11t.h

Also, please pay attention to the dependencies listed in the Linux section of the Readme.txt that you get after unpacking the source code just to make sure you have everything Truecrypt needs to link against. When compiled and linked successfully by running make in the directory where the Makefile is, the resulting binary will be Main/truecrypt. You can just copy that over your existing binary which may for instance be called /usr/local/bin/truecrypt and you should get something like this:

More mounting slots for Truecrypt on Linux

More than 64 mounting slots for Truecrypt on Linux

Bingo! Looking good. Now I won’t say this is perfectly safe and that I’ve tested it inside and out, so do this strictly at your own risk! For me it seems to work fine. Your mileage may vary.

Update 2: Since this kinda fits in here, I decided to show you how I tested the operation of this. Creating many volumes to check whether the modifications worked required an automated process. After all, we’d need more than 64 volumes created and mounted! Since Truecrypt does not allow for automated volume creation without manual user interaction, I used the Tcl-based expect system to interact with the Truecrypt dialogs. At first I would feed Truecrypt a static string for its required entropy data of 320 characters. But thanks to the help of some capable and helpful coders on [StackOverflow], I can now present one of their solutions (I’m gonna pick the first one provided).

I split this up into 3 scripts, which is not overly elegant, but works for me. The first script, which I shall call tc-create-master.sh, will just run a loop. The user can specify the number of iterations in the script; that number represents the number of volumes to be created. In the loop, a Tcl expect script is called repeatedly to create the actual volumes. That expect script I called tc-create-worker.sh. Master first; as you can see I have started with 128 volumes, which is twice as many as the vanilla Truecrypt binary would allow you to use, and 16 times as many loop devices as the Linux kernel would usually let you create at the same time:

#!/bin/bash
# Define number of volumes to create here:
n=128

# Note: brace expansion like {1..$n} does not expand variables in bash,
# so seq is used to build the loop range instead:
for i in $(seq 1 $n)
do
  ./tc-create-worker.sh $i
done

And this is the worker written in Tcl/expect, enhanced thanks to StackOverflow users. This one is platform-independent too, unlike my crappy original, which you can also see at StackOverflow and in the Truecrypt forums:

#!/usr/bin/expect

# Generate a string of n random printable ASCII characters (codes 33..126):
proc random_str {n} {
  for {set i 1} {$i <= $n} {incr i} {
    append data [format %c [expr {33+int(94*rand())}]]
  }
  return $data
}

# The volume number is passed in as the first command line argument:
set num [lindex $argv 0]
set entropyfeed [random_str 320]
puts $entropyfeed
spawn truecrypt -t --create /home/user/vol$num.tc --encryption=AES --filesystem=none --hash=SHA-512 --password=test --size=33554432 --volume-type=normal
expect "\r\nEnter keyfile path" {
  send "\r"
}
expect "Please type at least 320 randomly chosen characters and then press Enter:" {
  send "$entropyfeed\r"
  exp_continue
}

This part first generates a random floating-point number between 0.0 and 1.0. It then multiplies that by 94, yielding something in the range 0..93.999…, adds 33, and truncates the fractional part by converting to an integer, resulting in random numbers in the range 33..126. Now how does this make any sense? We want to generate random printable characters to feed to Truecrypt as entropy data!? Well, in the 7-bit ASCII table, the range of printable characters starts at number 33 and ends at number 126. And that’s what format %c does with those numbers: it converts each one into the character at that position in the ASCII table. Cool, eh? I thought that was pretty cool.
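
If you want to convince yourself of that mapping without touching Tcl, here is a rough shell equivalent of the same idea, just as an illustration and not code taken from the scripts above:

# Pick a random integer in the range 33..126 and print the matching ASCII character:
code=$(( 33 + RANDOM % 94 ))
printf "\\$(printf '%03o' "$code")\n"
# For example, a value of 80 prints "P" and 126 prints "~".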

The rest is just waiting for Truecrypt’s dialogs and feeding the proper input to them: first just an <ENTER> keypress (\r), and second the entropy data plus another <ENTER>. I chose not to let Truecrypt format the containers with an actual file system to save some time. Now, mounting is considerably easier:

#!/bin/bash
# Define number of volumes to mount here:
n=128

# Again, use seq instead of brace expansion, which cannot handle variables:
for i in $(seq 1 $n)
do
  truecrypt -t -k "" --protect-hidden=no --password=test --slot=$i --filesystem=none vol$i.tc
done

Unfortunately, I lost more speed here than I had gained by not formatting the volumes, because I chose the expensive and strong SHA-512 hash algorithm for my containers; RIPEMD-160 would’ve been a lot faster. As it is, mounting took several minutes for 128 volumes, but oh well, still not too excessive. Unmounting all of them at once is a lot faster and can be achieved by running the simple command truecrypt -t -d. Since no screenshot that I can make can prove that I can truly mount 128 volumes at once (unless I forge one by concatenation), it’s better to let the command line show it:

[thrawn@linuxbox ~]$ echo && mount | grep truecrypt | wc -l

128

Nah, forget that, I’m gonna give you that screenshot anyway, stitched or not, it’s too hilarious to skip. Oh, and don’t mind the strange window decorations: that’s because I opened Truecrypt on Linux via remote SSH+X11 on a Windows XP x64 machine with a GDI-based UI:

Mounting a lot more volumes than the Linux Kernel and/or Truecrypt would usually let us!

Oh man… I guess the sky’s the limit. I just wonder why some of those limits (8 loop devices maximum in the Linux kernel, 64 mounting slots in Truecrypt) are actually in place by default…

Edit: Actually, the limit is now confirmed to be 256 volumes, due to a hard-coded limit in the loop.ko driver. Well, still better than the 8 we started with, eh?
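
If you want to double-check what your own kernel allows, the loop driver will tell you, assuming it is built as a module like on most distributions:

# Compiled-in parameter description:
modinfo loop | grep -i max_loop
# Currently active value on the running system:
cat /sys/module/loop/parameters/max_loop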

Sep 192013
 

For two or three years now I have been building a Knoppix-based Linux-on-CD distribution for teaching (exercises and exams) at our university. A colleague of mine has written the server/client tools for source code submission and evaluation, while I have adapted and hardened the system for our needs. That is based on tons of modifications on the CD and also on the server side, from where a lot of local behavior is controlled by server-side scripts which can dis-/enable the network or USB ports on clients, but which can also shut clients down or verify the media to effectively prevent CD forgery (it’s pretty cool to see the students try; so far they never came even remotely close to breaking it).

Now I am supposed to ensure that the system can use sound, as it seems some Java-based exercises may involve sound creation and playback in the future. To make sure the system can support newer sound chips (which it currently does not) and potentially also newer GPUs, I tried to hack a new Linux kernel into the system to replace the good old 2.6.32.6, which has the ALSA sound system built in, so I couldn’t update that part as a kernel module. Also, updating the whole 6.2.1 system to Knoppix 7 is just not an option, as it would be far too much work. So a kernel update it had to be.

And so the adventure began…

Continue reading »

Jul 182013
 

Since a colleague of mine [rooted] our Buffalo Terastation III NAS (TS-XLC62) at work a while back, we have changed the remote shell from Telnet to SSH and done a few other nice hacks. But there is one problem: the TS-XLC62 does not monitor the hard drives’ health via SMART, even though parts of the required smartmontools are installed on the tiny embedded Linux system. They’re just sitting there unused, just like the sshd before.

Today I’m going to show you how you can make this stuff work and how to enable SMART email notifications on this system, which has no standard Linux mail command, but a Buffalo-specific tmail command instead. We will enable the background smartd service, and configure it properly for this specific Terastation model. All of the steps shown here are done on a rooted TS-XLC62, so make sure you’re always root here:

The smartmontools on the box are actually almost complete. Only the drive database and init scripts are missing, and for some reason, running update-smart-drivedb on the system would fail. So we need to get the database from another Linux/UNIX or even Windows machine running smartmontools. Usually, on Linux you can find the file here: /usr/local/share/smartmontools/drivedb.h. Copy it onto the Terastation using scp from another *nix box: scp /usr/local/share/smartmontools/drivedb.h root@<terastation-host-or-ip>:/usr/local/share/smartmontools/. You can use [FileZilla] or [puTTY] to copy stuff over from a Windows machine instead.

Note that this only makes sense if you have smartmontools 5.40 or newer (smartctl -V tells you the version). Older releases cannot have their drive database updated separately, but they will most likely still work fine.

Now, log in to your Terastation using Telnet or SSH, and you can test whether it’s working by running a quick info check on one of the hard drives. We will need to specify the controller type as marvell, as the SATA controller of the Marvell Feroceon  MV78XX0 SoC in the box cannot be addressed by regular ATA/SCSI commands. Run:

smartctl -d marvell -i /dev/sda

In my case I get the following, as I have already replaced the first failing Seagate hard drive with an even crappier WD one (yeah, yeah, I know, but it was the only one available), which is also not yet known to the smartmontools database:

smartctl version 5.37 [arm-none-linux-gnueabi] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD20EARX-00PASB0
Serial Number:    WD-WCAZAL555899
Firmware Version: 51.0AB51
User Capacity:    2,000,398,934,016 bytes
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Thu Jul 18 09:54:53 2013 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

Now that that’s done, we should make sure that smartmontools’ daemon, smartd, will be running in the background doing regular checks on the drives. But since we want to configure email notifications for that, we need to make sure that smartd can send emails first. The Terastation has no mail command however, only Buffalo’s own tmail command, which is no valid drop-in replacement for mail, as its syntax is different.

So we need to write some glue code that will later be invoked by smartd. I call it mailto.sh, and I placed it in /usr/local/sbin/. It’s based on [this article], which gave me a non-working solution on my Terastation for several reasons, but that’s easily fixed up, so it shall look somewhat like this (you’ll need to fill in several of the variables with your own data, of course). Oh, and as always, don’t forget to chmod 550 it when it’s done:

#!/bin/bash
##############################################################
# Written as glue code, so that smartmontools/smartd can use #
# Buffalos own "tmail", as we don't have "mail" installed    #
# on the Terastation.                                        #
##############################################################

# User-specific declarations:

TMP_FILE=/tmp/Smartctl.error.txt
SMTP=<PUT SMTP HOST NAME HERE>
SMTP_PORT=25
FROM=<PUT "FROM" EMAIL ADDRESS HERE>
SUBJECT="SMART Error"
FROM_NAME=<PUT "FROM" NAME HERE, LIKE "TERASTATION MAILER">
ENCODING="UTF-8"
BYTE=8

# Code:

# Write email metadata to the temp file (smartd gives us this):
echo To: $SMARTD_ADDRESS > $TMP_FILE
echo Subject: "$SMARTD_SUBJECT" >> $TMP_FILE
echo >> $TMP_FILE
echo >> $TMP_FILE

# Save the email message (STDIN) to the temp file:
cat >> $TMP_FILE

# Append the output of smartctl -a to the message:
smartctl -a -d $SMARTD_DEVICETYPE $SMARTD_DEVICE >> $TMP_FILE

# Now email the message to the user using Buffalos mailer
# (subject and sender name are quoted, as they contain spaces):
tmail -s $SMTP -t $SMARTD_ADDRESS -f $FROM -sub "$SUBJECT" \
-h "$FROM_NAME" -c $ENCODING -b $BYTE -s_port $SMTP_PORT < $TMP_FILE

# Delete temporary file
rm -f $TMP_FILE

So this is our mailer script, wrapping the stuff coming from smartd’s mail invocation around Buffalo’s own tmail. Now how do we make smartd call this? Let’s edit /usr/local/etc/smartd.conf to make it happen; fill in your email address where it says <EMAIL>, just like you changed all the variables in mailto.sh before:

# Monitor all four harddrives in the Buffalo Terastation with self-tests running
# on Sunday 01:00AM for disk 1, 02:00AM for disk 2, 03:00AM for disk 3 and 04:00AM
# for disk 4:

/dev/sda -d marvell -a -s L/../../7/01 -m <EMAIL> -M exec /usr/local/sbin/mailto.sh
/dev/sdb -d marvell -a -s L/../../7/02 -m <EMAIL> -M exec /usr/local/sbin/mailto.sh
/dev/sdc -d marvell -a -s L/../../7/03 -m <EMAIL> -M exec /usr/local/sbin/mailto.sh
/dev/sdd -d marvell -a -s L/../../7/04 -m <EMAIL> -M exec /usr/local/sbin/mailto.sh

Now if you want to test the functionality of the mailer beforehand, you can use this instead:

/dev/sda -d marvell -a -s L/../../7/01 -m <EMAIL> -M exec /usr/local/sbin/mailto.sh -M test

To test it, just run smartd -d on the shell. This will give you debugging output including some warnings caused by a bit of unexpected output that tmail will pass to smartd. This is non-critical though, it should look similar to this:

smartd version 5.37 [arm-none-linux-gnueabi]
Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

Opened configuration file /usr/local/etc/smartd.conf
Configuration file /usr/local/etc/smartd.conf parsed.
Device: /dev/sda, opened
Device: /dev/sda, is SMART capable. Adding to "monitor" list.
Monitoring 1 ATA and 0 SCSI devices
Executing test of /usr/local/sbin/mailto.sh to <EMAIL> ...
Test of /usr/local/sbin/mailto.sh to <EMAIL> produced unexpected 
output (50 bytes) to STDOUT/STDERR: 
smtp_port 25
Get smtp portnum 25
pop3_port (null)

Test of /usr/local/sbin/mailto.sh to <EMAIL>: successful
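
If you would rather exercise the wrapper directly, without going through smartd at all, you can also set the environment variables the script reads by hand and pipe a test message into it. This is just a sketch, and the values are placeholders:

export SMARTD_ADDRESS="you@example.com"
export SMARTD_SUBJECT="SMART error (manual test)"
export SMARTD_DEVICE=/dev/sda
export SMARTD_DEVICETYPE=marvell
echo "This is only a test of mailto.sh" | /usr/local/sbin/mailto.sh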

Now you can kill smartd on a secondary shell by running the following command. We will be re-using this in an init script later too, as the Terastation’s init functions leave quite a lot to be desired, so I’ll go into the details a bit:

kill `ps | grep smartd | grep -v grep | cut -f1 -d"r"`

This command determines the process ID of smartd and feeds it to the kill command. The delimiter “r” is used for the cut command because whitespace won’t work in some cases: the leading character of the ps output can itself be a whitespace. So we cut at the first letter of the user running smartd instead, which has to be root.

To understand this better, just run ps | grep smartd | grep -v grep while smartd is running. If the PID is 5 digits long, the leading character will be a digit of the PID, but if it is only 4 digits long, the leading character is a whitespace instead, which would make cut -f1 -d " " report an empty string in our case, hence cut -f1 -d"r"… Very dirty, I know… Don’t care though. ;) You may remove the -M test directive from /usr/local/etc/smartd.conf now, if you’ve played around with that, so the SMART spam will stop. :roll:
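
By the way, if the BusyBox userland on your Terastation happens to include awk, you can sidestep the delimiter trick entirely, since awk splits fields on any run of whitespace. An alternative sketch, not what the init scripts below use:

# The [s]martd pattern keeps grep from matching its own process:
kill `ps | grep '[s]martd' | awk '{print $1}'`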

Finally, to make our monitoring run as a smooth auto-starting daemon in the background, we need to write ourselves that init script. The default smartmontools one won’t work out of the box, as a few functions like killproc or daemon are missing on the Terastation’s embedded Linux. Yeah, I was too lazy to port them over. So a few adaptations will make it happen in a simplified fashion. See this reduced and adapted init script called smartd sitting in /etc/init.d/:

#!/bin/sh
SMARTD_BIN=/usr/local/sbin/smartd

RETVAL=0
prog=smartd
pidfile=/var/lock/subsys/smartd
config=/usr/local/etc/smartd.conf

start()
{
        [ $UID -eq 0 ] || exit 4
        [ -x $SMARTD_BIN ] || exit 5
        [ -f $config ] || exit 6
        echo -n $"Starting $prog: "
        $SMARTD_BIN $smartd_opts
        RETVAL=$?
        echo
        [ $RETVAL = 0 ] && touch $pidfile
        return $RETVAL
}

stop()
{
        [ $UID -eq 0 ] || exit 4
        echo -n $"Shutting down $prog: "
        kill `ps | grep smartd | grep -v grep | cut -f1 -d"r"`
        RETVAL=$?
        echo
        rm -f $pidfile
        return $RETVAL
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  *)
        echo $"Usage: $0 {start|stop}"
        RETVAL=2
        [ "$1" = 'usage' ] && RETVAL=0

esac

exit $RETVAL

So yeah, instead of killproc we’re making do with kill, and most of the service functions have been removed, limiting the script to start and stop. Plus, this version will not check for multiple start invocations, so it’s possible to start multiple smartd daemons, and stop will only work for one running process at a time, so you’ll need to pay attention. Could be fixed easily, but I think it’s good enough that way. To make smartd start on boot, link it properly, somewhat like this; I guess S90 should be fine:

ln -s /etc/init.d/smartd /etc/rc.d/sysinit.d/S90smartd

Also, you can now start and stop smartd from the shell more conveniently, without having to run smartd in the foreground and kill it from a secondary shell because CTRL+C won’t kill it. You can just do these two things instead, like on any other SysVinit system, only with the limitations described above:

root@TS-XLC62:~# /etc/init.d/smartd stop
Shutting down smartd: Terminated
root@TS-XLC62:~# /etc/init.d/smartd start
Starting smartd: 
root@TS-XLC62:~#

Better, eh? Now, welcome your SMART-monitoring-enabled Buffalo Terastation, with email notifications being sent for any upcoming hard drive problems detected, courtesy of smartmontools! :cool:

Edit: And here is a slightly more sophisticated init script that will detect whether smartd is already running on start, so that multiple starts can no longer happen. It will also detect if smartd has been killed outside the scope of the init script (like when it crashed or something) by looking at the PID file:

#!/bin/sh
SMARTD_BIN=/usr/local/sbin/smartd
RETVAL=0
prog=smartd
pidfile=/var/lock/subsys/smartd
config=/usr/local/etc/smartd.conf

start()
{
  [ $UID -eq 0 ] || exit 4
  [ -x $SMARTD_BIN ] || exit 5
  [ -f $config ] || exit 6
  if [ -f $pidfile ]; then
    echo "PID file $pidfile found! Will not start,"
    echo "smartd probably already running!"
    PID=`ps | grep smartd | grep -v grep | grep -v "smartd start" | cut -f1 -d"r"`
    if [ ${#PID} -gt 0 ]; then
      echo "Trying to determine smartd PID: $PID"
    elif [ ${#PID} -eq 0 ]; then
      echo "No running smartd process found. You may want to"
      echo "delete $pidfile and then try again."
    fi
    exit 6
  elif [ ! -f $pidfile ]; then
    echo -n $"Starting $prog: "
    $SMARTD_BIN $smartd_opts
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch $pidfile
    return $RETVAL
  fi
}

stop()
{
  [ $UID -eq 0 ] || exit 4
  PID=`ps | grep smartd | grep -v grep | grep -v "smartd stop" | cut -f1 -d"r"`
  if [ ${#PID} -eq 0 ]; then
    echo "Error: No running smartd process detected!"
    echo "Cleaning up..."
    echo -n "Removing $pidfile if there is one... "
    rm -f $pidfile
    echo "Done."
    exit 6
  elif [ ${#PID} -gt 0 ]; then
    echo -n $"Shutting down $prog: "
    kill `ps | grep smartd | grep -v grep | grep -v "smartd stop" | cut -f1 -d"r"`
    RETVAL=$?
    echo
    rm -f $pidfile
    return $RETVAL
  fi
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
  status)
    ps | grep smartd | grep -v grep | grep -v status
    RETVAL=$?
    ;;
  *)
    echo $"Usage: $0 {start|stop|restart|status}"
    RETVAL=2
    [ "$1" = 'usage' ] && RETVAL=0
esac

exit $RETVAL
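
With this version in place you also get restart and a simple status check on top of start and stop:

/etc/init.d/smartd restart
/etc/init.d/smartd status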

Jun 132013
 

Sure, there are ways to compile the components of my x264 benchmark on [almost any platform]. But that way you never get the “reference” version of it, the one originally published for Microsoft Windows and the one really usable for direct comparisons. A while back I tried to run that Windows version on Linux using [Wine], but it wouldn’t work because it needs a shell. It never occurred to me that I could maybe just copy over a real cmd.exe from an actual Windows. A colleague looked it up in the Wine AppDB, and it seems cmd.exe only has [bronze support status] as of Wine version 1.3.35, suggesting some major problems with the shell.

Nevertheless, I just tried using my Wine 1.4.1 on CentOS 6.3 Linux, and it seems support has improved drastically. All cmd.exe shell builtins seem to work nicely. It was just a few tools that didn’t like Wine’s userspace Windows API, especially timethis.exe, which also had problems talking to ReactOS. I guess it wants something from the Windows NT kernel API that Wine cannot provide in its userspace reimplementation.

But: You can make cmd.exe just run one subcommand and then terminate using the following syntax:

cmd.exe /c <command to run including switches>

Just prepend the Unix time command plus the wine invocation and you’ll get a single Windows command (or batch script) run within cmd.exe on Wine, and get the runtime out of it at the end. Somewhat like this:

time wine cmd.exe /c <command to run including switches>
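
So a complete, timed run boils down to a single line like the following, where the batch file name is just a placeholder for whatever you actually want to run; the wall-clock figure you are after is the “real” value that time prints at the end:

time wine cmd.exe /c x264_bench.bat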

Easy enough, right? So does this work with the Win32 version of x264? Look for yourself:

So as you can see, it does work. It runs, it detects all instruction set extensions (SSE…) just as if it were 100% native, and as you can see from the htop and Linux system monitor screens, it utilizes all four CPU cores, or all eight threads / logical CPUs to be more precise. Right now this runs at around 3fps+ on a Core i7 950, so I assume it’s somewhat slower than on native Windows.

Actually, the benchmark publication itself currently uses several flags for marking results as “not reference / not comparable”. One is the flag for custom x264 versions / compilations, one is for virtualized systems, and one is for systems below minimum specifications. The Wine-on-Linux setup wouldn’t fit into any of those: it’s definitely not a custom version, and it’s running on a machine that satisfies my minimum system specs, leaving only the VM question up for debate. Wine is by definition a runtime environment, not an emulator, not a VM hypervisor or paravirtualizer. It just reimplements the Win32/64 API, mapping certain function calls to real Linux libraries or (where the user configures it as such) to real Microsoft or 3rd-party DLLs copied over. That’s not emulation. But it’s not quite the same as running on native Windows either.

I haven’t fully decided yet, but I think I will mark those results as “green” in the [results list], extending the meaning of that flag from virtual machines to virtual machines AND Wine, otherwise it doesn’t quite seem right.
