May 31, 2017

1.) What for?

Usually, porting my favorite manga ripper [HakuNeko] would involve slightly more exotic target platforms like [FreeBSD UNIX]. This changed with version 1.4.2 however, as this version – the most current at the time of writing – would no longer compile on Windows machines due to some issues with its build toolchain. And that’s the most common platform for the tool!

This is what the lead developer had to say about the issue while even suggesting the use of [FMD] instead of HakuNeko on Windows:

“The latest release does not compile under windows due to some header include contradictions of sockets […]”
    -in a [comment] to HakuNeko ticket #142 by [Ronny Wegener], HakuNeko project leader

Normally I wouldn’t mind that much and would just keep using 1.4.1 for now, but unfortunately that is not an option. Quite a few Manga websites have changed by now, breaking compatibility with the previous version. As this breaks most of HakuNeko’s functionality for some important sites, it became quite unusable on Windows, leaving Linux as the only officially supported platform.

As using virtual machines or remote VNC/X11 servers for HakuNeko proved to be too tedious for me, I thought I’d try to build this by myself. As the MSYS2/MinGW Windows toolchain seemed to be broken for 1.4.2, I tried – for the very first time – to cross-compile on Linux, choosing CentOS 7.3 x86_64 and MinGW32 4.9.3 for the task. This was quite the challenge, given that HakuNeko comes completely unprepared for cross-compiling.

2.) First, the files

It took me many hours spread over several days to get it done – 100% statically linked too – but it’s finished now! What I won’t provide is an installer (I don’t care about that), but here are my v1.4.2 builds for Windows:

As of today, those versions have been tested successfully on the following operating systems:

  • Windows XP Professional SP3 / POSReady2009
  • Windows XP Professional x64 Edition SP2 w. Server 2003 updates
  • Windows Server 2003 R2 x64 SP2
  • Windows Vista Enterprise x64 SP2
  • Windows 7 Professional x64 SP1
  • Windows 10 Professional x64 build #1607

Please be aware that I haven’t tested all of the functionality, just a few downloads that wouldn’t have worked with 1.4.1, a few more random downloads here and there, plus chapter-wise .cbz packaging. It’s looking pretty good, I think. Here are some sample screenshots as “proof” of it working (click to enlarge):


HakuNeko 1.4.2 downloading “Kobayashi-san Chi no Maid Dragon” on XP x64 (Note: I own that Manga in paper form)

3.) What has been done to make this work?

3a.) Initial work:

First of all, cross-compiling is a bottomless, hellish pit, a horrible place that you never want to enter unless a.) The build toolchain of the software you wanna compile is very well prepared for it or b.) you really have/want to get your hands on that build or c.) you hate yourself so much you have to hurt yourself or d.) you actually enjoy c.).

The reasons for choosing cross-compiling were that Ronny Wegener had said that the MSYS2/MinGW32 build would fail on Windows, and that it would require GCC version 5.3 to link with the bundled, pre-built static libraries (OpenSSL, cURL, wxWidgets).

So I thought it would be more likely to work if I were to run my own MinGW setup on Linux, not relying on the bundled stuff but linking against the libraries that come with MinGW32 4.9.3 on my platform of choice – CentOS 7.3 Linux.

One exception was the GUI library wxWidgets 3.0.2 that I had to cross-compile and statically link by myself as well, but luckily, that’s easy despite its size. wxWidgets is one piece of software that does come well-prepared for cross-compiling! In my case, that made it as simple as this (parallel compile with 6 CPUs):

$ ./configure --prefix=/usr/local/i686-w64-mingw32 --host=i686-w64-mingw32 --build=x86_64-linux \
 --enable-unicode --with-expat --with-regex --with-opengl --with-libpng --with-libjpeg --with-libtiff \
 --with-zlib --with-msw --enable-ole --enable-uxtheme --disable-shared
$ make -j6
# make install

3b.) HakuNeko build toolchain / Makefile modifications for cross-compiling:

HakuNeko is much harder, and I don’t even remember half of what I did, but most of it was manually editing the ./Makefile after $ ./configure --config-mingw32 had produced something rather broken.

Let’s get to it; file paths are relative to the source root. Edit the following parts of the ./Makefile (you’ll need to look for them in different places in the file). First, the PREFIX, which should be in the bottom half of the file:

PREFIX = /usr/local/i686-w64-mingw32/

CC and the CFLAGS:

CC = i686-w64-mingw32-g++
CFLAGS = -c -Wall -O2 -std=c++11 \
 -I/usr/local/i686-w64-mingw32/lib/wx/include/i686-w64-mingw32-msw-unicode-static-3.0 \
 -I/usr/local/i686-w64-mingw32/include/wx-3.0 -D_FILE_OFFSET_BITS=64 -D__WXMSW__ -mthreads \
 -DCURL_STATICLIB -I/usr/i686-w64-mingw32/sys-root/mingw/include

Add -DPORTABLE, if you want to build the portable version of HakuNeko.
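If you’d rather not edit the Makefile by hand, the define can also be appended with a one-liner. This is just a sketch (GNU sed assumed), demonstrated on a scratch file so nothing real gets clobbered:

```shell
# Scratch stand-in for the real ./Makefile; the actual CFLAGS line is much
# longer, but the leading "CFLAGS = -c" anchor is the same.
printf 'CFLAGS = -c -Wall -O2\n' > /tmp/Makefile.demo
sed -i 's/^CFLAGS = -c/CFLAGS = -c -DPORTABLE/' /tmp/Makefile.demo
grep '^CFLAGS' /tmp/Makefile.demo   # CFLAGS = -c -DPORTABLE -Wall -O2
```

Run the same sed against the real ./Makefile in the source root to get the portable build.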

Then, the Windows resource compiler, controlled by RC and RCFLAGS:

RC = /usr/bin/i686-w64-mingw32-windres
RCFLAGS = -J rc -O coff -F pe-i386 -I/usr/i686-w64-mingw32/sys-root/mingw/include

And finally, the static linking part, which is the hardest stuff to get done right, LD, LDFLAGS and LDLIBS:

LD = i686-w64-mingw32-g++
LDFLAGS = -s -static -static-libgcc -static-libstdc++ -mwindows -DCURL_STATICLIB
LDLIBS = -L/usr/local/i686-w64-mingw32/lib   -Wl,--subsystem,windows -mwindows \
 -lwx_mswu_xrc-3.0-i686-w64-mingw32 -lwx_mswu_webview-3.0-i686-w64-mingw32 \
 -lwx_mswu_qa-3.0-i686-w64-mingw32 -lwx_baseu_net-3.0-i686-w64-mingw32 \
 -lwx_mswu_html-3.0-i686-w64-mingw32 -lwx_mswu_adv-3.0-i686-w64-mingw32 \
 -lwx_mswu_core-3.0-i686-w64-mingw32 -lwx_baseu_xml-3.0-i686-w64-mingw32 \
 -lwx_baseu-3.0-i686-w64-mingw32 -L/usr/i686-w64-mingw32/sys-root/mingw/lib -lcurl -lidn -liconv \
 -lssh2 -lssl -lcrypto -lpng -ljpeg -ltiff -lexpat -lwxregexu-3.0-i686-w64-mingw32 -lz -lrpcrt4 \
 -lwldap32 -loleaut32 -lole32 -luuid -lws2_32 -lwinspool -lwinmm -lshell32 -lcomctl32 -lcomdlg32 \
 -ladvapi32 -lwsock32 -lgdi32

It took a while to find the libraries (and the right static library order!) necessary to satisfy all the dependencies properly.

If you need it, here is the modified Makefile I’ve used to cross-compile:

  • [HakuNeko Makefile] for cross-compiling HakuNeko 1.4.2 for Windows on CentOS 7.3 x86_64 Linux (needs statically linked & installed wxWidgets first).

3c.) Source code modifications:

However, something will still not be quite right, because some of the crypto libraries will provide the MD5 functions MD5_Init(), MD5_Update() as well as MD5_Final(), and those are already defined by HakuNeko itself. This will break the static linking, as redundant definitions won’t work. We’ll rely on the libraries (libcrypto, libssl), and comment the built-in stuff out in src/v7/v7.c:

void MD5_Init(MD5_CTX *c);
void MD5_Update(MD5_CTX *c, const unsigned char *data, size_t len);
void MD5_Final(unsigned char *md, MD5_CTX *c);

…which, commented out, becomes:

/* void MD5_Init(MD5_CTX *c);
 * void MD5_Update(MD5_CTX *c, const unsigned char *data, size_t len);
 * void MD5_Final(unsigned char *md, MD5_CTX *c);
 */

On top of that, the configure system may have generated src/main.cpp as well as src/main.h. Those are bogus files, turning the entire tool into nothing but one large “Hello World” program. Plus, that’s hard to debug, as the binary won’t even output “Hello World” on a Windows terminal when it’s built as a GUI tool. I only found the issue when testing it with Wine on Linux. ;)

Please delete src/main.cpp and src/main.h before continuing.
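In shell terms, from the source root that’s simply:

```shell
# Remove the bogus generated stubs; -f keeps this quiet if they're absent.
rm -f src/main.cpp src/main.h
```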

Now, if you’re really lucky, you should be able to run something like $ make -j6 and watch everything work out nicely. Or watch it crash and burn, which is the much, much more likely scenario, given that I’ve probably only covered half of what I did to the build tools here.

Well, in any case, no need to run $ make install of course, just grab the binary build/msw/bin/hakuneko.exe and copy it off to some Windows machine, then try to run it. If you’ve built the portable version, you may wish to rename the file to hakuneko-portable.exe, just like the official developers do.

4.) The future

Let’s just hope that the developers of HakuNeko can get this fixed for versions >=1.4.3, because I really, really don’t want to keep doing this. It’s extremely painful, as cross-compiling is exactly the kind of living hell I heard a lot of people saying it is! I think it’s a miracle I managed to compile and run it at all, and it was so frustrating and tedious for somebody like me (who isn’t a developer).

The statement that this took “hours / days” wasn’t an exaggeration. I think it was something like 10-12 man hours of pure frustration to get it done. I mean, it does feel pretty nice when it finally works, but I wouldn’t bet on myself being able to do this again for future versions…

So please, make it work on Windows again, if possible, and keep HakuNeko cross-platform! It’s still my favorite tool for the task! Thanks! :)

Mar 01, 2017

It’s rather rare for me to look for a replacement of some good Windows software for Linux/UNIX instead of the other way around, but the source code editor [Notepad++] is one example of such software. The program gedit on the Gnome 2 desktop environment of my old CentOS 6 enterprise Linux isn’t bad, but it isn’t exactly good either. The thing I was missing most was a search & replace engine capable of regular expressions.

Of course, vi can do it, but at times vi can be a bit hard to use, so I went looking for a Notepad++ replacement. What I found was [Notepadqq], which is basically a clone using the Qt5 UI. This editor is clearly made for more modern systems, but I still looked for a way to get it to compile and run on my CentOS 6.8 x86_64 Linux system. And I found one. Here are the most important prerequisites:

  • A new enough GCC (I used 6.2.0), because the v4.4.7 platform compiler won’t work with the modern C++ stuff
  • Qt5 libraries from the [EPEL] repository
  • git

First, you’ll want a new compiler. That part is surprisingly easy, but a bit time-consuming. Download a fresh GCC tarball from a server on the [mirrors list]; those are in the releases/ subdirectory, so a file like gcc-6.3.0.tar.bz2 (my version is still 6.2.0). It seems Notepadqq only needs some GCC 5, but since our platform compiler won’t cut it anyway, why not just use the latest?

Now, once more, this will take time, could well be hours, so you might wanna do the compilation step overnight. The last step needs root privileges:

$ tar -xjvf ./gcc-6.3.0.tar.bz2
$ cd ./gcc-6.3.0/
$ ./configure --program-suffix="-6.3.0"
$ make
# make install

And when you do this, please never forget to add a --program-suffix for the configuration step!  You might seriously fuck things up if you miss that! So double-check it!

When that’s finally done, let’s handle Qt5 next. I’ll be using a binary distribution to make things easy, and yeah, I didn’t just install the necessary packages, I got the whole Qt5 blob instead, too lazy to do the cherry picking. Oh, and if you don’t have it, add git as well:

# yum install qt5* git

I assume # yum install qt5-qtwebkit qt5-qtwebkit-devel qt5-qtsvg qt5-qtsvg-devel qt5-qttools qt5-qttools-devel should also be enough according to the requirements, but I didn’t try that. Now, enter a directory of your choice or one you generally use for source code and fetch the latest Notepadqq version (this will create a subfolder we’ll cd to):

$ git clone
$ cd ./notepadqq

After that, we need to make sure that we’ll be using the correct compiler and that we’re linking against the correct libraries that came with it. To do that, set the following environment variables, assuming you’re using bash as your shell (use lib/ instead of lib64/ folders if you’re on 32-bit x86):

$ export CC="gcc-6.3.0"
$ export CXX="g++-6.3.0"
$ export CPP="cpp-6.3.0"
$ export CFLAGS="-I/usr/local/include/ -L/usr/local/lib64/"
$ export CXXFLAGS="-I/usr/local/include/ -L/usr/local/lib64/"
$ export LDFLAGS="-L/usr/local/lib64/"

The C-related settings are probably not necessary as Qt5 stuff should be pure C++, but you’ll never know, so let’s play it safe.

With that we’re including and linking against the correct libraries and we’ll be using our modern compiler as well. Time to actually compile Notepadqq. To do that, we’ll still need to tell it where to find the Qt5 versions of the qmake and lrelease binaries, but luckily, we can solve that with some simple configuration options. So, let’s do this, the last step requires root privileges again, from within the notepadqq/ directory that git clone created for us:

$ ./configure --qmake /usr/bin/qmake-qt5 --lrelease /usr/bin/lrelease-qt5
$ make
# make install

Now, there are some weird linking issues that I never got fixed on CentOS (some developer please tell me how, I have the same crap when building x265!). Because of that we still can’t launch Notepadqq as-is; we need to give it an LD_LIBRARY_PATH so it finds the proper libraries at runtime. Let’s just create an executable launcher script in /usr/local/sbin/ for that. Edit it and enter the following code:

#!/bin/sh
LD_LIBRARY_PATH="/usr/local/lib64" /usr/local/bin/notepadqq "$@"
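Creating the script and marking it executable can be done in one go as root. Note that the post’s original filename for the launcher was lost, so notepadqq.sh below is just a placeholder; pick any name you like:

```shell
# Write the launcher into /usr/local/sbin/ (the script name is a placeholder)
# and make it executable for everyone; mkdir -p is a no-op if the dir exists.
mkdir -p /usr/local/sbin
cat > /usr/local/sbin/notepadqq.sh <<'EOF'
#!/bin/sh
LD_LIBRARY_PATH="/usr/local/lib64" /usr/local/bin/notepadqq "$@"
EOF
chmod 755 /usr/local/sbin/notepadqq.sh
```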

Use this as your launcher script for Notepadqq and you’re good to go with your Notepad++ replacement on good old CentOS 6.x:


Running the latest Notepadqq on CentOS 6 Linux with Qt5 version 5.6.1, state 2017-03-01

Now, let’s see whether it’s even that good actually… :roll:

Nov 19, 2016

Recently, after finding out that the old Intel GMA950 profits greatly from added memory bandwidth (see [here]), I wondered if the overclocking mechanism applied by the Windows tool [here] had leaked into the public after all this time. The developer of said tool refused to open-source the software even after it turned into abandonware – announced support for GMA X3100 and X4500 as well as MacOS X and Linux never came to be. He also never said how he managed to overclock the GMA950 in the first place.

Some hackers disassembled the code of the GMABooster however, and found out that all that’s needed is a simple PCI register modification that you could probably apply by yourself on Microsoft Windows by using H.Oda!’s [WPCREdit].

Tools for PCI register modification do exist on Linux and UNIX as well of course, so I wondered whether I could apply this knowledge on FreeBSD UNIX too. Of course, I’m a few years late to the party, because people already solved this back in 2011! But just in case the scripts and commands disappear from the web, I wanted this to be documented here as well. First, let’s see whether we even have a GMA950 (of course I do, but still). It should be PCI device 0:0:2:0; you can use FreeBSD’s own pciconf utility or the lspci command from Linux:

# lspci | grep "00:02.0"
00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
# pciconf -lv pci0:0:2:0
vgapci0@pci0:0:2:0:    class=0x030000 card=0x30aa103c chip=0x27a28086 rev=0x03 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller'
    class      = display
    subclass   = VGA

Ok, to alter the GMA950’s render clock speed (we are not going to touch its 2D “desktop” speed), we have to write certain values into some PCI registers of that chip at 0xF0 and 0xF1. There are three different values regulating clock speed. Since we’re going to use setpci, you’ll need to install the sysutils/pciutils package on your machine via # pkg install pciutils. I tried to do it with FreeBSD’s native pciconf tool, but all I managed was to crash the machine a lot! Couldn’t get it solved that way (just me being too stupid I guess), so we’ll rely on a Linux tool for this. Here is my version of the script, which I placed in /usr/local/sbin/ for global execution:

#!/bin/sh

case "$1" in
  200) clockStep=34 ;;
  250) clockStep=31 ;;
  400) clockStep=33 ;;
  *)
    echo "Wrong or no argument specified! You need to specify a GMA clock speed!" >&2
    echo "Usage: $0 [200|250|400]" >&2
    exit 1
  ;;
esac

setpci -s 02.0 F0.B=00,60
setpci -s 02.0 F0.B=$clockStep,05

echo "Clockspeed set to "$1"MHz"

Now you can run the script as root, passing the desired clock speed – 200, 250 or 400 – as its argument. Interestingly, FreeBSD’s i915_kms graphics driver seems to have set the 3D render clock speed of my GMA950 to 400MHz already, so there was nothing to be gained for me in terms of performance. I can still clock it down to conserve energy though. A quick performance comparison using a crappy custom-recorded ioquake3 demo shows the following results:

  • 200MHz: 30.6fps
  • 250MHz: 35.8fps
  • 400MHz: 42.6fps

Hardware was a Core 2 Duo T7600 and the GPU was making use of two DDR-II/667 4-4-4 memory modules in dual channel configuration. Resolution was 1400×1050 with quite a few changes in the Quake III configuration to achieve more performance, so your results won’t be comparable, even when running ioquake3 on identical hardware. I’d post my ~/.ioquake3/baseq3/q3config.cfg here, but in my stupidity I just managed to freaking wipe the file out. Now I have to redo all the tuning, pfh.

But in any case, this really works!

Unfortunately, it only applies to the GMA950. And I still wonder what it was that was so wrong with # pciconf -w -h pci0:0:2:0 0xF0 0060 && pciconf -w -h pci0:0:2:0 0xF0 3405 and the like. I tried a few combinations just in case my byte order was messed up or in case I really had to write single bytes instead of half-words, but either the change wouldn’t apply at all, or the machine would just lock up. Would be nice to do this with only BSD tools on actual FreeBSD UNIX, but I guess I’m just too stupid for pciconf

Aug 03, 2016

Yeah, I’m still using the good old xmms v1 on Linux and UNIX. Now, the boxes I used to play music on were all 32-bit x86 so far. Just some old hardware. Recently I tried to play some of my [AAC-LC] files on my 64-bit Linux workstation, and to my surprise, it failed to play the file, giving me the error Pulse coding not allowed in short blocks when started from the shell (just so I can read the error output).

I thought this had to have something to do with my file and not the player, so I tried .aac files with ADTS headers instead of audio packed into an mp4/m4a container. The result was the same. Also, the encoding (SBR/VBR or even HE-AAC, which sucks for music anyway) didn’t make a difference. Then I found [this post] on the Gentoo forums, showing that the problem was really the architecture. 32-bit builds wouldn’t fail, but the source code of the [libfaad2] decoding library of xmms’ mp4 plugin wasn’t ready for 64-bit.

xmms’ xmms_mp4Plugin-0.4 comes with a pretty old libfaad2, 2.0 or something. I will show you how to upgrade that to version 2.7, which is fixed for x86_64 (Readily fixed source code is also provided at the bottom). We’ll also fix up other parts of that plugin so it can compile using more modern versions of GCC and clang, and my test platform for this would be a rather conservative CentOS 6.8 Linux. First, get the source code:

I’m assuming you already have xmms installed, otherwise obtain version 1.2.11 from [here]!

Step 1, libfaad2:

Unpack both archives from above, then enter the mp4 plugin’s source directory. You’ll find a libfaad2/ directory in there. Delete or move it and all of its contents. From the faad2 source tree, copy the directory libfaad/ from there to libfaad2/ in the plugin’s directory, replacing the deleted one. Now that’s the source, but we also need the updated headers so that the xmms plugin can link against the newer libfaad2. To do that, copy the two files include/faad.h and include/neaacdec.h from the faad2 2.7 source tree to the directory include/ in the plugin source tree. Overwrite faad.h if prompted.
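As shell commands, the shuffling in this step looks roughly like the following. It’s a sketch run on scratch stand-in trees under /tmp so it’s runnable as-is; in reality you’d run the last three commands inside the directory holding your two unpacked source trees, whose names (xmms_mp4Plugin-0.4/, faad2-2.7/) are assumptions here and may differ for your archives:

```shell
# Build tiny stand-in trees so the sketch can run anywhere:
mkdir -p /tmp/faadswap/xmms_mp4Plugin-0.4/libfaad2 /tmp/faadswap/xmms_mp4Plugin-0.4/include
mkdir -p /tmp/faadswap/faad2-2.7/libfaad /tmp/faadswap/faad2-2.7/include
touch /tmp/faadswap/faad2-2.7/libfaad/common.c \
      /tmp/faadswap/faad2-2.7/include/faad.h \
      /tmp/faadswap/faad2-2.7/include/neaacdec.h
cd /tmp/faadswap

# The actual step: drop the bundled old decoder, swap in faad2 2.7's libfaad/
# under the directory name the plugin expects, then refresh the two headers.
rm -rf xmms_mp4Plugin-0.4/libfaad2
cp -r faad2-2.7/libfaad xmms_mp4Plugin-0.4/libfaad2
cp faad2-2.7/include/faad.h faad2-2.7/include/neaacdec.h xmms_mp4Plugin-0.4/include/
```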

Step 2, libmp4v2:

Also, the bundled libmp4v2 of the mp4 plugin is very old and broken on modern systems due to code issues. Let’s fix them, so back to the mp4 plugin source tree. We need to fix some invalid pure specifiers in the following files: libmp4v2/mp4property.h, libmp4v2/mp4property.cpp, libmp4v2/rtphint.h and libmp4v2/rtphint.cpp.

To do so, open them in a text editor and search and replace all occurrences of = NULL with = 0. This will fix all assignments and comparisons that would otherwise break. When using vi or vim, you can do :%s/=\sNULL/= 0/g for this.
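The same replacement can be done non-interactively with sed (GNU sed assumed). Demonstrated here on a scratch file; in the real tree you’d run the sed line over the four libmp4v2 files listed above:

```shell
# Stand-in line mimicking the pattern found in the old libmp4v2 sources:
printf 'MP4Property* m_pProperty = NULL;\n' > /tmp/mp4fix.demo
# Replace "= NULL" (any whitespace after the '=') with "= 0", in place:
sed -i 's/=[[:space:]]*NULL/= 0/g' /tmp/mp4fix.demo
cat /tmp/mp4fix.demo   # MP4Property* m_pProperty = 0;
```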

On top of that, we need to fix an invalid const char* to char* conversion in libmp4v2/rtphint.cpp as well. Open it and hop to line number 325, where you’ll see this:

char* pSlash = strchr(pRtpMap, '/');

Replace it with this:

const char* pSlash = strchr(pRtpMap, '/');

And we’re done!

Now, in the mp4 plugin’s source root directory, run $ ./bootstrap && ./configure. Hopefully, no errors will occur. If all is ok, run $ make. Given you have xmms 1.2.11 installed, it should compile and link fine now. Last step: # make install.

This has been tested with my current GCC 4.4.7 platform compiler as well as GCC 4.9.3 on CentOS 6.8 Linux. Also, it has been tested with clang 3.4.1 on FreeBSD 10.3 UNIX, also x86_64. Please note that FreeBSD 10.3 needs extensive modifications to its build tools as well, so I can’t provide fixed source for this. However, packages on FreeBSD have already been fixed in that regard, so you can just # pkg install xmms-faad2 and it’s done anyway. This is likely the case for several modern Linux distros as well.

Let’s play:

xmms playing AAC-LC, FreeBSD 10.3 UNIX on x86_64

Yeah, it works. In this case on FreeBSD.

And the shell would say:

2-MPEG-4 AAC Low Complexity profile
MP4 - 2 channels @ 44100 Hz

Perfect! And here is the [fixed source code] upgraded with libfaad2 2.7, so you don’t have to do anything by yourself other than $ ./bootstrap && ./configure && make and # make install! Ah, actually I’m not so sure whether xmms itself builds fine on CentOS 6.8 from source these days… Maybe I’ll check that out another day. ;)

Jan 27, 2016

Just yesterday I showed you how to modify and compile the [HakuNeko] Manga Ripper so it can work on FreeBSD 10 UNIX, [see here] for some background info. I also mentioned that I couldn’t get it to build on CentOS 6 Linux, something I chose to investigate today. After flying blind for a while trying to fix include paths and other things, I finally got to the core of the problem, which lies in HakuNeko’s src/CurlRequest.cpp, where the code calls CURLOPT_ACCEPT_ENCODING from cURL’s typecheck-gcc.h. This is critical stuff, as [cURL] is the software library needed for actually connecting to websites and downloading the image files of Manga/Comics.

It turns out that CURLOPT_ACCEPT_ENCODING wasn’t always called that. With cURL version 7.21.6, it was renamed to that from the former CURLOPT_ENCODING as you can read [here]. And well, CentOS 6 ships with cURL 7.19.7…

When running $ ./configure && make from the unpacked HakuNeko source tree without fixing anything, you’ll run into this problem:

g++ -c -Wall -O2 -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread -o obj/CurlRequest.cpp.o src/CurlRequest.cpp
src/CurlRequest.cpp: In member function ‘void CurlRequest::SetCompression(wxString)’:
src/CurlRequest.cpp:122: error: ‘CURLOPT_ACCEPT_ENCODING’ was not declared in this scope
make: *** [obj/CurlRequest.cpp.o] Error 1

So you’ll have to fix the call in src/CurlRequest.cpp! Look for this part:

void CurlRequest::SetCompression(wxString Compression)
{
    if(curl)
    {
        curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)Compression.mb_str(wxConvUTF8));
        //curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)memcpy(new wxByte[Compression.Len()], Compression.mb_str(wxConvUTF8).data(), Compression.Len()));
    }
}

Change CURLOPT_ACCEPT_ENCODING to CURLOPT_ENCODING. The rest can stay the same, as the name is all that has really changed here. It’s functionally identical as far as I can tell. So it should look like this:

void CurlRequest::SetCompression(wxString Compression)
{
    if(curl)
    {
        curl_easy_setopt(curl, CURLOPT_ENCODING, (const char*)Compression.mb_str(wxConvUTF8));
        //curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)memcpy(new wxByte[Compression.Len()], Compression.mb_str(wxConvUTF8).data(), Compression.Len()));
    }
}
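If you prefer doing the rename non-interactively, GNU sed’s 0,/regexp/ address form replaces only the first match, which leaves the commented-out second call untouched, matching the hand-edited result. A scratch-file sketch (enc is just a stand-in argument):

```shell
# Stand-in for the two relevant lines of src/CurlRequest.cpp:
printf '%s\n%s\n' \
  'curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, enc);' \
  '//curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, enc);' > /tmp/curlfix.demo
# Rename only the first occurrence; the empty s// pattern reuses the address regex.
sed -i '0,/CURLOPT_ACCEPT_ENCODING/s//CURLOPT_ENCODING/' /tmp/curlfix.demo
head -n1 /tmp/curlfix.demo   # curl_easy_setopt(curl, CURLOPT_ENCODING, enc);
```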

Save the file, go back to the main source tree and you can do:

  • $ ./configure && make
  • # make install

And done! Works like a charm:


HakuNeko fetching Haiyore! Nyaruko-san on CentOS 6.7 Linux!

And now, for your convenience I fixed up the Makefile and rpm/SPECS/specfile.spec a little bit to build proper rpm packages as well. I can provide them for CentOS 6.x Linux in both 32-bit as well as 64-bit x86 flavors:

You need to unzip these first, because I was too lazy to allow the rpm file type in my blogging software.

The naked rpms have also been submitted to the HakuNeko developers as a comment to their [More Linux Packages] support ticket which you’re supposed to use for that purpose, so you can get them from there as well. Not sure if the developers will add the files to the projects’ official downloads.

This build of HakuNeko has been linked against the wxWidgets 2.8.12 GUI libraries, which come from the official CentOS 6.7 package repositories. So you’ll need to install wxGTK to be able to use the white kitty:

  • # yum install wxGTK

After that you can install the .rpm package of your choice. For a 64-bit system for instance, enter the folder where the hakuneko_1.3.12_el6_x86_64.rpm file is, run # yum localinstall ./hakuneko_1.3.12_el6_x86_64.rpm and confirm the installation.

Now it’s time to have fun using HakuNeko on your Enterprise Linux system! Totally what RedHat intended you to use it for! ;) :roll:

Jan 26, 2016

Since I’ve started using FreeBSD as a Linux and Windows replacement, I’ve naturally always been looking at porting my “known good” software over to the UNIX OS, or at replacing it by something that gets the job done without getting on my nerves too much at the same time. For most parts other than TrueCrypt, that was quite achievable, even though I had to endure varying degrees of pain getting there. Now, my favorite Manga / Comic ripper on Windows, [HakuNeko] was the next piece of software on the list. It’s basically just a more advanced Manga website parser and downloader based on stuff like [cURL], [OpenSSL] or the [wxWidgets] GUI libraries.

I didn’t even know this until recently (shame on me for never looking closely), but HakuNeko is actually free software licensed under the MIT license. Unfortunately, the source code and build system are quite Linux- and Windows-centric, and there exist neither packages nor ports of it for FreeBSD UNIX. Actually, the code doesn’t even build on my CentOS 6.7 Linux right now (I have yet to figure out the exact problem), but I managed to fix it up so it can compile and work on FreeBSD! And here’s how, step by step:

1.) Prerequisites

Please note that from here on, terminal commands are shown in this form: $ command or # command. Commands starting with a $ are to be executed as a regular user, and those starting with # have to be executed as the superuser root.

Ok, this has been done on FreeBSD 10.2 x86_32 using HakuNeko 1.3.12, both are current at the time of writing. I guess it might work on older and future releases of FreeBSD with different releases of HakuNeko as well, but hey, who knows?! That having been said, you’ll need the following software on top of FreeBSD for the build system to work (I may have missed something here, if so, just install the missing stuff like shown below):

  • cURL
  • GNU sed
  • GNU find
  • bash
  • OpenSSL
  • wxWidgets 2.8.x

Stuff that’s not on your machine can be fetched and installed as root from the official package repository, like e.g.: # pkg install gsed findutils bash wx28-gtk2 wx28-gtk2-common wx28-gtk2-contrib wx28-gtk2-contrib-common

Of course you’ll need the HakuNeko source code as well. You can get it from official sources (see the link in first paragraph) or download it directly from here in the version that I’ve used successfully. If you take my version, you need 7zip for FreeBSD as well: # pkg install p7zip.

Unpack it:

  • $ 7z x hakuneko_1.3.12_src.7z (My version)
  • $ tar -xzvf hakuneko_1.3.12_src.tar.gz (Official version)

The contents of my archive are just vanilla as well however, so you’ll still need to do all the modifications by yourself.

2.) Replace the shebang lines in all scripts which require it

Enter the unpacked source directory of HakuNeko and open the following scripts in your favorite text editor, then replace the leading shebang lines #!/bin/bash with #!/usr/local/bin/bash:

  • ./configure
  • ./
  • ./
  • ./res/parsers/

It’s always the first line in each of those scripts, see for example:

#!/bin/bash

# import setings from config-default
. ./

# overwrite settings from config-default

CC="clang++"
LD="clang++"

This would have to turn into the following (I also fixed that comment typo while I was at it):

#!/usr/local/bin/bash

# import settings from config-default
. ./

# overwrite settings from config-default

CC="clang++"
LD="clang++"
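The shebang swap can also be scripted instead of done in an editor. A sketch with GNU sed (which is what the gsed package installed above provides on FreeBSD), shown on a scratch file:

```shell
# Scratch file standing in for one of the build scripts:
printf '#!/bin/bash\necho configure\n' > /tmp/shebang.demo
# The 1s address restricts the substitution to line 1, so only the shebang
# can ever match; '|' as delimiter avoids escaping all the slashes.
sed -i '1s|^#!/bin/bash$|#!/usr/local/bin/bash|' /tmp/shebang.demo
head -n1 /tmp/shebang.demo   # #!/usr/local/bin/bash
```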

3.) Replace all sed invocations with gsed invocations in all scripts which call sed

This is needed because FreeBSD’s sed and Linux’ GNU sed aren’t exactly compatible in how they’re called, different options and all.

In the text editor vi, the expression :%s/sed /gsed /g can do this globally over an entire file (mind the whitespaces, don’t omit them!). Or just use a convenient graphical text editor like gedit or leafpad for searching and replacing all occurrences. The following files need sed replaced with gsed:

  • ./configure
  • ./res/parsers/

4.) Replace all find invocations with gfind invocations in ./configure

Same situation as above with GNU find, like :%s/find /gfind /g or so, but only in one file:

  • ./configure

5.) Fix the make check

This is rather cosmetic in nature as $ ./configure won’t die if this test fails, but you may still wish to fix this. Just replace the string make --version with gmake --version (there is only one occurrence) in:

  • ./configure

6.) Fix the DIST variables’ content

I don’t think that this is really necessary either, but while we’re at it… Change the DIST=linux default to DIST=FreeBSD in:

  • ./configure

Again, only one occurrence.

7.) Run ./configure to create the Makefile

Enough with that, let’s run the first part of the build tools:

  • $ ./configure --config-clang

Notice the --config-clang option? We could use GCC as well, but since clang is FreeBSD’s new default platform compiler, you should stick with that whenever feasible. It works for HakuNeko, so we’re gonna use the default compiler, which means you don’t need to install the entire GCC for just this.

There will be error messages looking quite intimidating, like the basic linker test failing, but you can safely ignore those. It has something to do with different function name prefixes in FreeBSD’s libc (or whatever, I don’t really get it), but it doesn’t matter.

However, there is one detail that the script will get wrong, and that’s a part of our include path. So let’s handle that:

8.) Fix the includes in the CFLAGS in the Makefile

Find the line containing the string CFLAGS = -c -Wall -O2 -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread or similar in the newly created ./Makefile. After the option -O2 add the following: -I/usr/local/include. So it looks like this: CFLAGS = -c -Wall -O2 -I/usr/local/include -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread. That’s it for the Makefile.
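
If you'd rather not open the Makefile in an editor, the same splice can be done with one sed expression (gsed on FreeBSD). A sketch against a scratch Makefile with a shortened CFLAGS line:

```shell
# Scratch Makefile with a CFLAGS line like the one ./configure generates.
printf 'CFLAGS = -c -Wall -O2 -I/usr/include/wx-2.8\n' > Makefile.demo

# Splice -I/usr/local/include in right after -O2:
sed -i 's|-O2 |-O2 -I/usr/local/include |' Makefile.demo
cat Makefile.demo
# → CFLAGS = -c -Wall -O2 -I/usr/local/include -I/usr/include/wx-2.8
```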

9.) Fix the Linux-specific conditionals across the C++ source code

And now the real work starts, because we need to fix up portions of the C++ code itself as well. While the code would build and run fine on FreeBSD, those relevant parts are hidden behind some C++ preprocessor macros/conditionals looking for Linux instead. Thus, important parts of the code can’t even compile on FreeBSD, because the code only knows Linux and Windows. Fixing that isn’t extremely hard though, just a bit of copy, paste and/or modify. First of all, the following files need to be fixed:

  • ./src/MangaConnector.cpp
  • ./src/Logger.cpp
  • ./src/MangaDownloaderMain.cpp
  • ./src/MangaDownloaderConfiguration.cpp

Now, what you should look for are all conditional blocks which look like #ifdef __LINUX__. Each will end with an #endif line. Naturally, there are also #ifdef __WINDOWS__ blocks, but those don’t concern us, as we’re going to use the “Linux-specific” code, if you can call it that. Let me give you an example right out of MangaConnector.cpp, starting at line #20:

    #ifdef __LINUX__
    wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
    #endif

Now, given that the Linux code builds just fine on FreeBSD, the most elegant and easiest way is to just alter all those #ifdef conditionals to inclusive #if defined ORs, so that they trigger for both Linux and FreeBSD. If you do this, the block from above would need to change to this:

    #if defined __LINUX__ || __FreeBSD__
    wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
    #endif

Should you ever want to create different code paths for Linux and FreeBSD, you can also just duplicate it. That way you could later make changes for just Linux or just FreeBSD separately:

    #ifdef __LINUX__
    wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
    #endif
    #ifdef __FreeBSD__
    wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
    #endif

Whichever way you choose, you’ll need to find and update every single one of those conditional blocks. There are three in Logger.cpp, three in MangaConnector.cpp, two in MangaDownloaderConfiguration.cpp and again three in MangaDownloaderMain.cpp. Some are more than 10 lines long, so make sure to not make any mistakes if duplicating them.
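
If you prefer not to edit all eleven blocks by hand, the guard rewrite can be scripted too. A hedged sketch with GNU sed (gsed on FreeBSD) on a scratch file; the real targets are the four src/*.cpp files listed above. Note it spells the condition with defined on both sides, which is the more rigorous form of the same check:

```shell
# Scratch file holding one of the guards to be rewritten.
cat > demo.cpp <<'EOF'
#ifdef __LINUX__
wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
#endif
EOF

# Rewrite the guard in place (on FreeBSD: gsed -i, since BSD sed's -i differs):
sed -i 's/#ifdef __LINUX__/#if defined __LINUX__ || defined __FreeBSD__/' demo.cpp
head -n 1 demo.cpp
# → #if defined __LINUX__ || defined __FreeBSD__
```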

Note that you can maybe extend compatibility even further with additional directives like __OpenBSD__ or __NetBSD__ for other BSDs, or __unix__ for a wide range of UNIX systems like AIX or HP-UX. None of which I have tested, of course.

When all of that is done, it’s compile and install time:

10.) Compile and install

You can compile as a regular user, but the installation needs root by default. I’ll assume you’ll want to install HakuNeko system-wide, so, we’ll leave the installation target directories on their defaults below /usr/local/. While sitting in the unpacked source directory, run:

  • $ gmake
  • # gmake install

If nothing starts to crash and burn, this should compile and install the code. clang will show some warnings during compilation, but you can safely ignore them.

11.) Start up the white kitty

The installation procedure will also conveniently update your window manager, if you're using panels/menus. Here it's Xfce4:


HakuNeko (“White Cat”) is showing up as an “Internet” tool. Makes sense.

With the modifications done properly it should fire up just fine after initializing its Manga connectors:


HakuNeko with the awesomeness that is “Gakkou Gurashi!” being selected from the HTTP source [MangaReader].

Recently the developers have also added [DynastyScans] as a source, which provides access to some "rather juicy" Yuri Dōjinshi (self-published amateur and sometimes semi-professional works) based on well-known Manga/Anime, if you're into that. Yuri, that is ("girls love"). Mind you, not all, but a lot of the stuff on DynastyScans can be considered NSFW and likely 18+, just as a word of warning:


HakuNeko fetching a Yuru Yuri Dōjinshi called “Secret Flowers” from DynastyScans, bypassing their download limits by not fetching packaged ZIPs – it works perfectly!

Together with a good comic book reader that can read both plain JPEG-filled folders and stuff like packaged .cbz files, HakuNeko makes FreeBSD a capable comic book / Manga reading system. My personal choice for a reader to accompany HakuNeko would be [QComicBook], which you can easily get on FreeBSD. There are others you can fetch from the official package repository as well though.

Final result:


HakuNeko and QComicBook make a good team on FreeBSD UNIX – I like the reader even more than ComicRack on Windows.

And, at the very end, one more thing, even though you're likely aware of this already: just like Anime fansubs, fan-translated Manga or even Dōjinshi sit in a legal grey zone as long as the book in question hasn't been licensed in your country. It's being tolerated, but if a title does get licensed, ownership of a fan-translated version will likely become illegal, which means you should actually buy the stuff at that point in time.

Just wanted to have that said as well.

Should you have trouble building HakuNeko on FreeBSD 10 UNIX (maybe because I missed something), please let me know in the comments!

Sep 02 2015

Recently I have been doing a lot of audio/video transcoding, using tools like mkvextract, mkvmerge, x264, avconv and fghaacenc on the command line. Mostly this is to get rid of old files that are either very inefficiently encoded, like DivX3 or MPEG-2, or are just too large/high-bitrate for what the content's worth. Since I do have large batches of relatively similar files, I wrote myself a few nice loops that walk through an entire stack of videos and process all of it automatically. Because I want to see at first glance how far through a given video stack a script is, I wanted to output some colored notifications on the shell after each major processing step, telling me the current video number. That way, I can easily see how far the job has progressed. We're talking about processes that take multiple days here, so it makes sense.

Turns out that this was harder than expected. In the old DOS days, you would just load ANSI.SYS and get ANSI color codes to work with. The same codes still work in modern UNIX terminals, but not within the Windows cmd, for whatever reason. Now there is an interface for that kind of stuff on Windows, but seemingly no stock tools that use it; the echo command, for example, simply can't.
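
For comparison, this is all it takes on a UNIX terminal, where the same SGR escape sequences that ANSI.SYS once provided still work:

```shell
# ESC[33m selects a yellow foreground, ESC[0m resets the attributes.
# Practically any UNIX terminal emulator honors these bytes; a stock
# Windows cmd just prints them as garbage.
printf '\033[33m%s\033[0m\n' "I am yellow!"
```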

Now there is a project porting ANSI codes back to the cmd, called [ANSICON]. It's basically just a DLL hooked into the cmd. Turns out I can't use that, because I'm already using [clink] to make my cmd more UNIX-like and actually usable, which also is additional code hooked into the cmd, and the two conflict with each other. More information about that [here].

So how the hell can I do this then? Turns out there is a batch script that can do it, but it's a very outlandish hack that doesn't seem to play nice with my shells if I do @echo off instead of echo off. I guess it's clink breaking stuff here, but I'm not sure. Still, here is the code, as found by [Umlüx]:

    SETLOCAL EnableDelayedExpansion

    FOR /F "tokens=1,2 delims=#" %%A IN ('"PROMPT #$H#$E# & ECHO ON & FOR %%b IN (1) DO REM"') DO (
      SET "DEL=%%A"
    )

    ECHO say the name of the colors, dont read

    CALL :ColorText 0a "blue"
    CALL :ColorText 0C "green"
    CALL :ColorText 0b "red"
    ECHO.
    CALL :ColorText 19 "yellow"
    CALL :ColorText 2F "black"
    CALL :ColorText 4e "white"

    GOTO :eof

    :ColorText

    ECHO OFF

    <nul set /p ".=%DEL%" > "%~2"
    findstr /v /a:%1 /R "^$" "%~2" NUL
    DEL "%~2" > NUL 2>&1

    GOTO :eof

So since those two solutions didn’t work for me, what else could I try?

Then I had a crazy idea: since the interface is there, why not write a little C program (it actually ended up being C++ instead) that uses it? Since I was on Linux at the time, I wrote it there, attempting my first cross-compile for Windows using a pre-built [MinGW-w64] compiler, which luckily played along nicely on my CentOS 6.6 Linux box. You can get pre-built binaries for 32-bit Windows targets [here], and for 64-bit Windows targets [here]. Thing is, I knew zero C++, so it took some time and the result is quite crappy, but here's the code:

    #include <iostream>
    #include "windows.h"

    // Set NT commandline color:
    void SetColor(int value) {
        SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), value);
    }

    int main(int argc, char *argv[]) {
      SetColor(atoi(argv[2]));
      std::cout << "\r\n" << argv[1] << "\r\n";
      SetColor(7);
      return 0;
    }

Update 2016-06-22: Since I wanted to work with wide character strings in recent days (so, non-ANSI Unicode characters), my colorecho command failed, because it couldn’t handle them at all. There was no wide character support. I decided to update the program to also work with that using Windows-typical UCS-2le/UTF-16, here’s the code:

    #include <iostream>
    #include <io.h>
    #include <fcntl.h>
    #include "windows.h"

    // Set NT commandline color:
    void SetColor(int value) {
        SetConsoleTextAttribute(GetStdHandle(STD_OUTPUT_HANDLE), value);
    }

    // Unicode main
    int wmain(int argc, wchar_t *argv[]) {
      SetColor(_wtoi(argv[2]));
      _setmode(_fileno(stdout), _O_U16TEXT);      // Set stdout to UCS-2le/UTF-16
      std::wcout << "\r\n" << argv[1] << "\r\n";  // Write unicode
      SetColor(7);                                // Set color back to default
      return 0;
    }

I also updated the download section and the screenshots further below. End of update.

This builds nicely using MinGW on Linux for 64-bit Windows targets, like so: while sitting in the MinGW directory (I put my code there as echo.cpp as well, for simplicity's sake), run: ./bin/x86_64-w64-mingw32-g++ echo.cpp -o colorecho-x86_64.exe. If you want to target 32-bit Windows instead, you'd need to use the proper MinGW for that (it comes in a separate archive): ./bin/i686-w64-mingw32-g++ echo.cpp -o colorecho-i686.exe

When building like that, you'd also need to deploy libstdc++-6.dll and libgcc_s_seh-1.dll along with the EXE files though. The DLLs can be obtained from your local MinGW installation directory, subfolder ./x86_64-w64-mingw32/lib/ for the 64-bit or ./i686-w64-mingw32/lib/ for the 32-bit compiler. If you don't want to do that and would rather rely on Microsoft's own C++ redistributables, you can also compile it with Microsoft VisualStudio Express, using its pre-prepared command line. You can find that here, if you have Microsoft's VisualStudio installed – version 2010 in my case:


The command lines of Visual Studio (click to enlarge)

Here, the “Visual Studio Command Prompt” is for a 32-bit build target, and “Visual Studio x64 Win64 Command Prompt” is for building 64-bit command line programs. Choose the appropriate one, then change into a directory where you have echo.cpp and run the following command: cl /EHsc /W4 /nologo /Fecolorecho.exe echo.cpp, giving you the executable colorecho.exe.

Alternatively, you can just download my pre-compiled versions here:

Update 2016-06-22: And here are the Unicode versions:

End of update.

Note that this tool does exactly what I need it to do, but it'll likely not do exactly what you'd need it to do. Take the line breaks it adds before and after its output: that's actually a job for the shell's echo command, not for some command line tool. But I just don't care, so that's why it's basically a piece of crap for general use. The syntax is as follows, as shown for the 64-bit VC10 build:

colorecho-x86_64-vc10.exe "I am yellow!" 14

When run, it looks like this:


colorecho invoked on the Windows cmd

So the first parameter is the string to be echoed, the second one is the color number, which can affect both the foreground and the background. Here are the first 16 values, which set the foreground only:

  • 0: Black
  • 1: Dark blue
  • 2: Dark green
  • 3: Dark cyan
  • 4: Dark red
  • 5: Dark purple
  • 6: Dark yellow
  • 7: Light grey
  • 8: Dark grey
  • 9: Light blue
  • 10: Light green
  • 11: Light cyan
  • 12: Light red
  • 13: Light purple
  • 14: Light yellow
  • 15: White

If you go higher than that, you’ll also start changing the background colors and you’ll get different combinations of foreground and background colorization.

The background colors actually follow the same order, black, dark blue, dark green, etc. Numbers from 0..15 are on black background, numbers from 16..31 are on a dark blue background and so on. This makes up pretty much the same list:

  • 0..15: 16 foreground colors as listed above on the default black background, and
  • 16..31: on a dark blue background
  • 32..47: on a dark green background
  • 48..63: on a dark cyan background
  • 64..79: on a dark red background
  • 80..95: on a dark purple background
  • 96..111: on a dark yellow background
  • 112..127: on a light grey background
  • 128..143: on a dark grey background
  • 144..159: on a light blue background
  • 160..175: on a light green background
  • 176..191: on a light cyan background
  • 192..207: on a light red background
  • 208..223: on a light purple background
  • 224..239: on a light yellow background
  • 240..255: on a white background

Going over 255 will simply result in an overflow causing the system to start from the beginning, so 256 is equal to 0, 257 to 1, 260 to 4 and so on.
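
In other words, the attribute value is just background × 16 + foreground, taken modulo 256. A little sketch of that arithmetic (the helper name attr is made up here, it is not part of colorecho):

```shell
# attr FG BG → the color number colorecho expects (FG and BG each 0..15).
attr() { echo $(( ($2 * 16 + $1) % 256 )); }

attr 14 0   # light yellow on black → 14
attr 14 1   # light yellow on dark blue → 30
attr 4 15   # dark red on white → 244
```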

Update 2016-06-22: With the Unicode version, you can now even do stupid stuff like this:

colorecho unicode with DejaVu Sans Mono font on Windows cmd

It could be useful for some people? Maybe?

Note that this has been rendered using the DejaVu Sans Mono font, you can get it from [here]. Also, you need to tell the Windows cmd which fonts to allow (only monospaced TTFs work); you can read how to do that [here], it's a relatively simple registry hack. I have yet to find a working font that also renders CJK characters though; with the fonts I have available at the moment, Asian scripts won't work, you'll just get some boxes instead:

colorecho unicode CJK fail

Only characters which have proper glyphs for all desired codepoints in the font in use can be rendered, at least on Windows XP.

Note that this is not a limitation of colorecho, it does output the characters correctly, “ですね” in this case. It’s just that DejaVu Sans Mono has no font glyphs to actually draw them on screen. If you ever come across a neat cmd-terminal-compatible font that can render most symbols and CJK, please tell me in the comments, I’d be quite happy about that! End of update.

If somebody really wants to use this (which I doubt, but hey) but wishes it did things in a more sane way, just request it in the comments. Or, if you can, just take the code and change and recompile it by yourself. You can get Microsoft's VisualStudio Express – in the meantime renamed to [VisualStudio Community] – free of charge anyway, and MinGW is free software to begin with.

Nov 14 2014

As much as I am a WinAmp 2 lover on Microsoft Windows, I am an xmms lover on Linux and UNIX. And by that I mean xmms1, not 2. And not Audacious. Just good old xmms. Not only does it look like a WinAmp 2 clone, it even loads WinAmp 2/5 skins and supports a variety of rather obscure plugins, which I partially need and love (like for Commodore 64 SID tunes via libsidplay or libsidplay2 with awesome reSID engine support).

Recently I started playing around with a free laptop I got, which I wanted to be my operating systems test bed. Currently, I am evaluating – and probably keeping – FreeBSD 10 UNIX. Now it ain't perfect, but it works pretty well after a bit of work. Always using Linux or XP is just a bit boring by now. ;)

One of the problems I had was with xmms. While the package is there, its built-in tracker module file (mod, 669, it, s3m etc.) support is broken. Play one of those and its xmms-mikmod plugin will cause a segmentation fault immediately when talking to a newer version of libmikmod. Also, recompiling [multimedia/xmms] from the ports tree produced a binary with the same fault. Then I found this guy [Jakob Steltner] posting about the problem on the ArchLinux bugtracker, [see here].

Based on his work, I created a ports-tree-compatible patch, patch-drv_xmms.c; here is its source code:

    --- Input/mikmod/drv_xmms.c.orig        2003-05-19 23:22:06.000000000 +0200
    +++ Input/mikmod/drv_xmms.c     2012-11-16 18:52:41.264644767 +0100
    @@ -117,6 +117,10 @@
            return VC_Init();
     }

    +static void xmms_CommandLine(CHAR * commandLine)
    +{
    +}
    +
     MDRIVER drv_xmms =
     {
            NULL,
    @@ -126,7 +130,8 @@
     #if (LIBMIKMOD_VERSION > 0x030106)
             "xmms",
             NULL,
    -#endif
    +#endif
    +       xmms_CommandLine, // Was missing
            xmms_IsThere,
           VC_SampleLoad,
           VC_SampleUnload

So that means recompiling xmms from source by yourself. But fear not, it’s relatively straightforward on FreeBSD 10.

Assuming that you unpacked your ports tree as root by running portsnap fetch and portsnap extract without altering anything, you will need to put the file into /usr/ports/multimedia/xmms/files/ and then cd to /usr/ports/multimedia/xmms in a terminal. Now run make && make install as root. The patch will be applied automatically.

Now one can once again run xmms and module files just work as they’re supposed to:


xmms playing a MOD file on FreeBSD 10 via a patched xmms-mikmod (click to enlarge)

The sad thing is, soon xmms will no longer be with us, at least on FreeBSD. It’s now considered abandonware, as all development has ceased. The xmms port on FreeBSD doesn’t even have a maintainer anymore and it’s [scheduled for deletion] from the ports tree together with all its plugins from [ports/audio] and [ports/multimedia]. I just wish I could speak C to some degree and work on that myself. But well…

Seems my favorite audio player is finally dying, but as with all the other old software I like and consider superior to modern alternatives, I’m gonna keep it alive for as long as possible!

You can download Jakob's patch that I adapted for the FreeBSD ports tree right here:

Have fun! :)

Edit: xmms once again has a maintainer for its port on FreeBSD, so we can rejoice! I have [submitted the patch to FreeBSDs bugzilla] as suggested by [SirDice] on the [FreeBSD Forums], and Naddy, the port's maintainer, integrated the patch promptly! It's been committed since [revision 375356]! So now you don't need to patch and recompile it from source code by hand anymore – at least not on x86 machines. You can either build it from ports as-is or just pull a fully working binary version from the package repository on FreeBSD 10.x by running pkg install xmms.

I tested this on FreeBSD 10.0 and 10.1 and it works like a charm, so thanks fly out to Jakob and Naddy! Even though my involvement here was absolutely minimal, it feels awesome to do something good, and help improving a free software project, even if only marginally so. :)

Sep 23 2014

At work I usually have to burn a ton of heavily modified Knoppix CDs for our lectures every year or so. The Knoppix distribution itself is being built by me and a colleague to get a highly secure read-only, server-controlled environment for exams and lectures. Now, usually I'm burning on both a Windows box with Ahead [Nero], and on Linux with the KDE tool [K3B] (despite being a Gnome 2 user), both GUI tools. My Windows box had 2 burners, my Linux box one. To speed things up and increase disc quality at the same time, the idea was to plug more burners into the machines and burn each individual disc slower, but in parallel.

I was shocked to learn that K3B can actually not burn to multiple burners at once! I thought I was just being blind, stumbling through the GUI like an idiot, but it’s actually really not there. Nero on the other hand managed to do this for what I believe is already the better part of a decade!

True disc burning stations are just too expensive, like 500€ for the smaller ones instead of the 80-120€ I had to spend on a bunch of drives, so what now? Was I building this for nothing?


Poor man's disc station. Also a shitty photograph, my apologies for that, but I had no real camera available at work.

Well, where there is a shell, there's a way, right? Being the lazy ass that I am, I was always reluctant to actually use the backend tools of K3B on the command line myself. CD/DVD burning was something I had just always done with a GUI. But now was the time to script that stuff myself, and for simplicity's sake I just used bash. In addition to the shell, the following core tools were used:

  • cut
  • grep
  • mount
  • sudo (For a dismount operation, might require editing /etc/sudoers)

Also, the following additional tools were used (most Linux distributions should have them, conservative RedHat derivatives like CentOS can get the stuff from [EPEL]):

  • [eject] (eject and retract drive trays)
  • [sdparm] (read SATA device information)
  • sha512sum (produce and compare high-quality checksums)
  • wodim (burn optical discs)

I know there are already scripts for this purpose, but I just wanted to do this myself. Might not be perfect, or even good, but here we go. The work(-in-progress) is divided into three scripts. The first one is just a helper script generating a set of checksum files from a master source (image file or disc) that you want to burn to multiple discs later on. We need one file for each burner device node later, because sha512sum needs that to verify freshly burned discs, so that's why this exists:

    #!/bin/bash

    wrongpath=1 # Path for the source/master image is set to invalid in the
                # beginning.

    # Getting path to the master CD or image file from the user. This will be
    # used to generate the checksum for later use by
    until [ $wrongpath -eq 0 ]
    do
      echo -e "Please enter the file name of the master image or device"
      echo -e "(if it's a physical disc) to create our checksum. Please"
      echo -e 'provide a full path always!'
      echo -e "e.g.: /home/myuser/isos/master.iso"
      echo -e "or"
      echo -e "/dev/sr0\n"
      read -p "> " -e master

      if [ -b $master -o -f $master ] && [ -n "$master" ]; then
        wrongpath=0 # If device or file exists, all ok: Break this loop.
      else
        echo -e "\nI can find neither a file nor a device called $master.\n"
      fi
    done

    echo -e "\nComputing SHA512 checksum (may take a few minutes)...\n"

    checksum=`sha512sum $master | cut -d' ' -f1` # Computing checksum.

    # Getting device node name prefix of the users' CD/DVD burners from the
    # user.
    echo -e "Now please enter the device node prefix of your disc burners."
    echo -e "e.g.: \"/dev/sr\" if you have burners called /dev/sr1, /dev/sr2,"
    echo -e "etc."
    read -p "> " -e devnode

    # Getting number of burners in the system from the user.
    echo -e "\nNow enter the total number of attached physical CD/DVD burners."
    read -p "> " -e burners

    ((burners--)) # Decrementing by 1. E.g. 5 burners means 0..4, not 1..5!

    echo -e "\nDone, creating the following files with the following contents"
    echo -e "for later use by the multiburner for disc verification:"

    # Creating the per-burner checksum files for later use by
    for ((i=0;i<=$burners;i++))
    do
      echo -e " * sum$i.txt: $checksum $devnode$i"
      echo "$checksum $devnode$i" > sum$i.txt
    done

    echo -e ""

As you can see, it gets its information from the user interactively on the shell. It asks the user where the master medium to checksum is to be found, what the user's burner / optical drive devices are called, and how many of them there are in the system. When done, it'll generate a checksum file for each burner device, called e.g. sum0.txt, sum1.txt, … sum<n>.txt.
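
The checksum file format used here is the usual sha512sum one, so such a file can later be fed straight back into sha512sum -c for verification. A quick sketch, with a dummy image file standing in for the master source and for the burner device node:

```shell
# Build a sumN.txt the same way the helper script does, then verify it.
printf 'demo payload\n' > master.img
checksum=$(sha512sum master.img | cut -d' ' -f1)
echo "$checksum  master.img" > sum0.txt   # two spaces: sha512sum -c format
sha512sum -c sum0.txt                     # prints "master.img: OK"
```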

Now to burn and verify media in a parallel fashion, I'm using an old concept I have used before. There are two more scripts: one is the controller/launcher, which then spawns an arbitrary number of instances of the second script, which I call a worker. First, the controller script:

    #!/bin/bash

    if [ $# -eq 0 ]; then
      echo -e "\nPlease specify the number of rounds you want to use for burning."
      echo -e "Each round produces a set of CDs determined by the number of"
      echo -e "burners specified in $0."
      echo -e "\ne.g.: ./ 3\n"
      exit
    fi

    #@========================@
    #| User-configurable part:|
    #@========================@

    # Path that the image resides in.
    prefix="/home/knoppix/"

    # Image to burn to discs.
    image="knoppix-2014-09.iso"

    # Number of rounds are specified via command line parameter.
    copies=$1

    # Number of available /dev/sr* devices to be used, starting
    # with and including /dev/sr0 always.
    burners=3

    # Device node name used on your Linux system, like "/dev/sr" for burners
    # called /dev/sr0, /dev/sr1, etc.
    devnode="/dev/sr"

    # Number of blocks per complete disc. You NEED to specify this properly!
    # Failing to do so will break the script. You can read the block count
    # from a burnt master disc by running e.g.
    # `sdparm --command=capacity /dev/sr*` on it.
    blocks=340000

    # Burning speed in factors. For CDs, 1 = 150KiB/s, 48x = 7.2MiB/s, etc.
    speed=32

    #@===========================@
    #|NON user-configurable part:|
    #@===========================@

    # Checking whether all required tools are present first:
    # Checking for eject:
    if [ ! `which eject 2>/dev/null` ]; then
      echo -e "\e[0;33meject not found. $0 cannot operate without eject, you'll need to install"
      echo -e "the tool before $0 can work. Terminating...\e[0m"
      exit
    fi
    # Checking for sdparm:
    if [ ! `which sdparm 2>/dev/null` ]; then
      echo -e "\e[0;33msdparm not found. $0 cannot operate without sdparm, you'll need to install"
      echo -e "the tool before $0 can work. Terminating...\e[0m"
      exit
    fi
    # Checking for sha512sum:
    if [ ! `which sha512sum 2>/dev/null` ]; then
      echo -e "\e[0;33msha512sum not found. $0 cannot operate without sha512sum, you'll need to install"
      echo -e "the tool before $0 can work. Terminating...\e[0m"
      exit
    fi
    # Checking for sudo:
    if [ ! `which sudo 2>/dev/null` ]; then
      echo -e "\e[0;33msudo not found. $0 cannot operate without sudo, you'll need to install"
      echo -e "the tool before $0 can work. Terminating...\e[0m"
      exit
    fi
    # Checking for wodim:
    if [ ! `which wodim 2>/dev/null` ]; then
      echo -e "\e[0;33mwodim not found. $0 cannot operate without wodim, you'll need to install"
      echo -e "the tool before $0 can work. Terminating...\e[0m\n"
      exit
    fi

    ((burners--)) # Reducing number of burners by one as we also have a burner "0".

    # Initial burner ejection:
    echo -e "\nEjecting trays of all burners...\n"
    for ((g=0;g<=$burners;g++))
    do
      eject $devnode$g &
    done
    wait

    # Ask user for confirmation to start the burning session.
    echo -e "Burner trays ejected, please insert the discs and"
    echo -e "press any key to start.\n"
    read -n1 -s # Wait for key press.

    # Retract trays on first round. Waiting for disc will be done in
    # the worker script afterwards.
    for ((l=0;l<=$burners;l++))
    do
      eject -t $devnode$l &
    done

    for ((i=1;i<=$copies;i++)) # Iterating through burning rounds.
    do
      for ((h=0;h<=$burners;h++)) # Iterating through all burners per round.
      do
        echo -e "Burning to $devnode$h, round $i."
        # Burn image to burners in the background:
        ./ $h $prefix$image $blocks $i $speed $devnode &
      done
      wait # Wait for background processes to terminate.
      ((j=$i+1));
      if [ $j -le $copies ]; then
        # Ask user for confirmation to start next round:
        echo -e "\nRemove discs and place new discs in the drives, then"
        echo -e "press a key for the next round #$j."
        read -n1 -s # Wait for key press.
        for ((k=0;k<=$burners;k++))
        do
          eject -t $devnode$k &
        done
        wait
      else
        # Ask user for confirmation to terminate script after last round.
        echo -e "\n$i rounds done, remove discs and press a key for termination."
        echo -e "Trays will close automatically."
        read -n1 -s # Wait for key press.
        for ((k=0;k<=$burners;k++))
        do
          eject -t $devnode$k & # Pull remaining empty trays back in.
        done
        wait
      fi
    done

This one will take one parameter on the command line, defining the number of "rounds". Since I have to burn a lot of identical discs, this makes my life easier. If you have 5 burners and ask the script to go for 5 rounds, that means you get 5 × 5 = 25 discs, if all goes well. It also needs to know the size of the medium in blocks for a later phase. For now you have to specify that within the script. The documentation inside shows you how to get that number, basically by checking a physical master disc with sdparm --command=capacity.
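
Extracting the block count for the blocks= variable can be scripted too. A sketch that parses the assumed sdparm output format, shown here on a canned output string instead of a real /dev/sr* device:

```shell
# Assumed shape of `sdparm --command=capacity /dev/sr0` output:
out='blocks: 340000
block_length: 2048
capacity_mib: 664'

# Keep only the number behind "blocks:":
blocks=$(printf '%s\n' "$out" | grep '^blocks:' | cut -d' ' -f2)
echo "$blocks"   # → 340000
```

Anchoring the grep at "^blocks:" keeps the block_length line from matching as well.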

Other things you need to specify are the path to the image, the image file's name, the device node name prefix, and the burning speed in factor notation. Also, of course, the number of physical burners available in the system. When run, it'll eject all trays, prompt the user to put in discs, and launch the burning & checksumming workers in parallel.

The controller script will wait for all background workers within a round to terminate, and only then prompt the user to remove and replace all discs with new blank media. If this is the last round already, it'll prompt the user to remove the last media set, and will then retract all trays by itself at the press of any key. All tray ejection and retraction is done automatically: with all your drive trays still empty and closed, you launch the script, it ejects all drive trays for you, and retracts them after a keypress signaling the script that all trays have been loaded by the user, and so on.

Let’s take a look at the worker script, which does the actual burning & verifying:

#!/bin/bash

burner=$1   # Burner number for this process.
image=$2    # Image file to burn.
blocks=$3   # Image size in blocks.
round=$4    # Current round (purely to show the info to the user).
speed=$5    # Burning speed.
devnode=$6  # Device node prefix (devnode+burner = burner device).
bwait=0     # Timeout variable for "blank media ready?" waiting loop.
mwait=0     # Timeout variable for automount waiting loop.
swait=0     # Timeout variable for "disc ready?" waiting loop.
m=0         # Boolean indicating automount failure.

echo -e "Now burning $image to $devnode$burner, round $round."

# The following code will check whether the drive has a blank medium
# loaded ready for writing. Otherwise, the burning might be started too
# early when using drives with slow disc access.
until [ "`sdparm --command=capacity $devnode$burner | grep blocks:\ 1`" ]
do
  ((bwait++))
  if [ $bwait -gt 30 ]; then # Abort if blank disc cannot be detected for 30 seconds.
    echo -e "\n\e[0;31mFAILURE, blank media did not become ready. Ejecting and aborting this thread..."
    echo -e "(Was trying to burn to $devnode$burner in round $round,"
    echo -e "failed to detect any blank medium in the drive.)\e[0m"
    eject $devnode$burner
    exit
  fi
  sleep 1 # Sleep 1 second before next check.
done

wodim -dao speed=$speed dev=$devnode$burner $image # Burning image.

# Notify user if burning failed.
if [[ $? != 0 ]]; then
  echo -e "\n\e[0;31mFAILURE while burning $image to $devnode$burner, burning process ran into trouble."
  echo -e "Ejecting and aborting this thread.\e[0m\n"
  eject $devnode$burner
  exit
fi

# The following code will eject and reload the disc to clear the device
# status and then wait for the drive to become ready and its disc to
# become readable (checking the disc's block count as output by sdparm).
eject $devnode$burner && eject -t $devnode$burner
until [ "`sdparm --command=capacity $devnode$burner | grep $blocks`" = "blocks: $blocks" ]
do
  ((swait++))
  if [ $swait -gt 30 ]; then # Abort if disc cannot be redetected for 30 seconds.
    echo -e "\n\e[0;31mFAILURE, device failed to become ready. Aborting this thread..."
    echo -e "(Was trying to access $devnode$burner in round $round,"
    echo -e "failed to re-read medium for 30 seconds after retraction.)\e[0m\n"
    exit
  fi
  sleep 1 # Sleep 1 second before next check to avoid unnecessary load.
done

# The next part is only necessary if your system auto-mounts optical media.
# This is usually the case, but if your system doesn't do this, you need to
# comment the next block out. This will otherwise wait for the disc to
# become mounted. We need to dismount afterwards for proper checksumming.
until [ -n "`mount | grep $devnode$burner`" ]
do
  ((mwait++))
  if [ $mwait -gt 30 ]; then # Warn user that disc was not automounted.
    echo -e "\n\e[0;33mWARNING, disc did not automount as expected."
    echo -e "Attempting to carry on..."
    echo -e "(Was waiting for disc on $devnode$burner to automount in"
    echo -e "round $round for 30 seconds.)\e[0m\n"
    m=1
    break
  fi
  sleep 1 # Sleep 1 second before next check to avoid unnecessary load.
done
if [ ! $m = 1 ]; then # Only need to dismount if disc was automounted.
  sleep 1 # Give the mounter a bit of time to lose the "busy" state.
  sudo umount $devnode$burner # Dismount burner as root/superuser.
fi

# On to the checksumming.
echo -e "Now comparing checksums for $devnode$burner, round $round."
sha512sum -c sum$burner.txt # Comparing checksums.
if [[ $? != 0 ]]; then # If checksumming produced errors, notify user.
  echo -e "\n\e[0;31mFAILURE while burning $image to $devnode$burner, checksum mismatch.\e[0m\n"
fi

eject $devnode$burner # Ejecting disc after completion.

So as you can probably see, this is not very polished. The scripts aren’t using configuration files yet (that would be nice to have), and it’s still a bit chaotic when it comes to actual usability and smoothness. It does work quite well however, with the device/disc readiness checking as well as the anti-automount workaround having been the major challenges (now I know why K3B ejects the disc before starting its checksumming: it’s simply impossible to read from the disc right after burning finishes).

When run, it looks like this (user names have been removed and paths altered for the screenshot):


The script at work. I was lucky enough to hit a bad disc, so we can see the checksumming at work here: the disc actually became unreadable near its end. Verification is really important for reliable disc deployment.

When using a poor man’s disc burning station like this, I would actually recommend putting stickers on the trays like I did. That way, you’ll immediately know which disc to throw into the garbage bin.

This could still use a lot of polishing, and it’s quite sad that the “big” GUI tools can’t do parallel burning, but I think I can make do now. Oh, and I also tried GNOME’s “Brasero” burning tool, but that one is far too minimalistic and can’t burn to multiple devices at the same time either. There may be other GUI fatsos that can do it, but I didn’t want to try and get any of those installed on my older CentOS 6 Linux, so I just did it the UNIX way, even if not very elegantly. ;)

Maybe this can help someone out there, even though there are probably better scripts than mine for the job. Otherwise, it’s just documentation for myself again. :)

Edit: Updated the scripts to implement proper blank media detection, to avoid the burning starting prematurely in rare cases. In addition to that, I added some code to detect burning errors (where the burning process itself would fail) and notify the user about it. Also applied some cosmetic changes.

Edit 2: Added tool detection, and removed redundant color codes in warning & error messages.

[1] Article logo based on the works of Lorian and Marcin Sochacki, “DVD.png” licensed under the CC BY-SA 3.0.

Jul 11 2014

Attention please: This article contains some pretty negative statements about the software shipped with the TRENDnet TK-IP101 KVM-over-IP product. While I will not remove what I have written, I have to say that TRENDnet went to great lengths to support me in getting things done, including allowing me to decompile and re-release their Java software in a fixed form under the free GNU General Public License. Please [read this] to learn more. This is extremely nice, and so it shall be stated before you read anything bad about this product, so you can see things in perspective! And no, TRENDnet has not asked me to post this paragraph; those are my own words entirely.

I thought that being able to manage my server out-of-band would be a good idea. It does sound good, right? Being able to remotely control it even if the kernel has crashed, with remote access to everything down to the BIOS level. A job for a KVM-over-IP switch. So I got this slightly old [TK-IP101] from TRENDnet. Turns out that wasn’t the smartest move, and it’s actually a 400€ piece of hardware. The box itself seems pretty OK at first: it connects to your KVM switch fabric or a single server via PS/2, USB and VGA, and you can hook up a local PS/2 keyboard and mouse too. Offering what was supposed to be highly secure SSL PKI authentication via server + client certificates – so that only clients with the proper certificate may connect – plus a web interface, this sounded really good!



It all breaks down when it comes to the software though. First of all, the guide for certificate creation that is supposed to be on the CD that comes with the box is just not there. The XCA software TRENDnet suggests one should use was also missing. Not good. Luckily, that software is open source and can be downloaded from the [XCA SourceForge project]; it’s basically a graphical OpenSSL front end. You need to create a PEM-encoded root certificate, a PEM-encoded server certificate and a PKCS#12 client certificate, the latter signed by the root cert. So much for that. Oh, and I uploaded that TRENDnet XCA guide for you in case it’s missing on your CD too. It’s a bit different for the newer version of XCA: just keep in mind to create the keys beforehand and to use certificate requests instead of certificates, then sign the requests with the root certificate. With that information plus the guide, you should be able to manage certificate creation.
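If you’d rather skip the XCA GUI, the same three pieces – a PEM root CA, a PEM server certificate and a PKCS#12 client certificate signed by the root – can be produced with plain OpenSSL. A sketch only; all subject names and the export password are placeholders:

```shell
#!/bin/sh
# Self-signed root CA (PEM):
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=KVM-Root-CA" -keyout root.key -out root.pem

# Server key + certificate request, signed by the root CA (PEM):
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=kvm-server" -keyout server.key -out server.csr
openssl x509 -req -days 3650 -in server.csr \
  -CA root.pem -CAkey root.key -CAcreateserial -out server.pem

# Client key + certificate request, signed by the root CA,
# then bundled into a PKCS#12 container for the viewer:
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=kvm-client" -keyout client.key -out client.csr
openssl x509 -req -days 3650 -in client.csr \
  -CA root.pem -CAkey root.key -CAcreateserial -out client.pem
openssl pkcs12 -export -in client.pem -inkey client.key \
  -certfile root.pem -passout pass:changeit -out client.p12
```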

But it doesn’t end there. First I tried the Windows based viewer utility that comes with its own certificate import tool. Import works, but the tool will not do client+server authentication. What it WILL do before terminating itself is this:

TK-IP101 IPViewer.exe bug


I really tried to fix this. I even ran it on Linux with Wine just to do an strace on it, looking for failing open() calls. Nothing. So I thought… why not try the second option, the Java viewer that goes by the name of KViewer.jar? Usually I don’t install Java, but why not try it out with Oracle’s Java 1.7u60, eh? Well:

So yeah. What the hell happened there? It took me days to determine the exact cause, but I’ll cut to the chase: with Java 1.6u29, Oracle introduced multiple changes to the way SSL/TLS works, partly due to the disclosure of the BEAST vulnerability. When testing, I found that the software would work fine when run with JRE 1.6u27, but not with later versions. Since Java code is pretty easily decompiled (thanks fly out to Martin A. for pointing that out) and the viewer just came as a JAR file, I thought I’d embark on the adventure of decompiling Java code using the [Java Decompiler]:

Java Decompiler decompiling KViewer.jar's Classes


This results in surprisingly readable code. That is, if you’re into Java. Which I am not. But yeah. The Java Decompiler is pretty convenient as it allows you to decompile all classes within a JAR and to extract all other resources along with the generated *.java files. And those I imported into a Java development environment I knew, Eclipse Luna.

Eclipse Luna


Eclipse Luna (using JDK 7u60) immediately complained about 15 or 16 errors and about 60 warnings. Mostly these were missing primitive declarations and other smaller things that even I managed to fix; I even got rid of the warnings. But the SSL bug persisted in my Java 7 build just as it had before. See the following two SSL/handshake debug traces, one working fine on JRE 1.6u27, and one broken on JRE 1.7u60:

So first I got some ideas from stuff [posted here at Oracle], and added the following two system properties in varying combinations directly in the viewer’s main class:

public static void main(String[] paramArrayOfString)
{
  /* Added by the GAT from                                         */
  /* This enables insecure TLS renegotiation as per CVE-2009-3555  */
  /* in interoperable mode.                                        */
  java.lang.System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "false");
  java.lang.System.setProperty("sun.security.ssl.allowLegacyHelloMessages", "true");
  /* ------------------------------------------------------------- */
  KViewer localKViewer = new KViewer();
  localKViewer.mainArgs = paramArrayOfString;

This didn’t really do any good though, especially since “interoperable” mode should work anyway, being the default. But today I found [this information on an IBM site]!

It seems that Oracle fixed the BEAST vulnerability in Java 1.6u29, amongst other things. They did this by disallowing renegotiation for affected implementations of CBC (cipher-block chaining) ciphers. Now, this KVM switch can negotiate only a single cipher suite: SSL_RSA_WITH_3DES_EDE_CBC_SHA. See that “CBC” in there? Yeah, right. It got blocked, because the implementation in that aged KVM box is no longer considered safe. Since you can’t just switch to a stream-based RC4 cipher, Java has no other choice but to drop the connection! Unless… you do this:

public static void main(String[] paramArrayOfString)
{
  /* Added by the GAT from                                              */
  /* This disables CBC protection, thus re-opening the connections'     */
  /* BEAST vulnerability. No way around this due to a highly restricted */
  /* KLE ciphersuite. Without this fix, TLS connections with client     */
  /* certificates and PKI authentication will fail!                     */
  java.lang.System.setProperty("jsse.enableCBCProtection", "false");
  /* ------------------------------------------------------------------ */
  /* Added by the GAT from                                         */
  /* This enables insecure TLS renegotiation as per CVE-2009-3555  */
  /* in interoperable mode.                                        */
  java.lang.System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "false");
  java.lang.System.setProperty("sun.security.ssl.allowLegacyHelloMessages", "true");
  /* ------------------------------------------------------------- */
  KViewer localKViewer = new KViewer();
  localKViewer.mainArgs = paramArrayOfString;

Setting the jsse.enableCBCProtection property to false before the negotiation / handshake makes your code tolerate CBC ciphers vulnerable to BEAST attacks. Recompiling KViewer with all the code fixes, including this one, makes it work fine with two-way PKI authentication using a client certificate on both Java 1.7u60 and even Java 1.8u5. I have tested this using the 64-bit x86 VMs on CentOS 6.5 Linux as well as on Windows XP Professional x64 Edition and Windows 7 Professional SP1 x64.
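For completeness: Java system properties like these can also be passed on the command line via -D, so the same experiment could in principle be tried against the unmodified JAR without recompiling anything (untested with the stock KViewer.jar on my side, so treat this as an assumption):

```shell
# Set the JSSE switches from the shell instead of patching main():
java -Djsse.enableCBCProtection=false \
     -Dsun.security.ssl.allowUnsafeRenegotiation=false \
     -Dsun.security.ssl.allowLegacyHelloMessages=true \
     -jar KViewer.jar
```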

De-/Recompiled & "fixed" KViewer connecting to a machine much older even than its own crappy code


I fear I cannot give you the modified source code, as TRENDnet would probably hunt me down, but I’ll give you the compiled byte code at least, the JAR file, so you can use it yourself. If you wanna check out the code, you could just decompile it yourself, losing only my added comments: [KViewer.jar]. (ZIPped, fixed / modified to work on Java 1.7+)

Both the modified code and the byte code “binary” JAR have been returned to TRENDnet in the context of my open support ticket there. I hope they’ll welcome it with open arms instead of suing me for decompiling their Java viewer.

In reality, even this solution is nowhere near perfect. While it at least allows you to run modern Java runtime environments instead of highly insecure older ones, plus pretty secure PKI auth, it still doesn’t fix the man-in-the-middle issues at hand. TRENDnet should fix their KVM firmware, enable it to speak the TLSv1.2 protocol with AES-256 Galois/Counter Mode (GCM) ciphers, and fix the many, many problems in their viewer clients. The TK-IP101 being an end-of-life product means that this is likely never gonna happen though.

It does say a lot when the consumer has to hack up the software of a supposedly high-security 400€ piece of networking hardware by himself just to make it work properly.

I do still hope that TRENDnet will react positively to this, as they do not offer a modern replacement product to supersede the TK-IP101.