Jan 11 2018
AMI logo

1.) Introduction

This is something I’ve been wanting to do for years, but the recent outcry regarding the Meltdown and Spectre hardware security holes has reminded me of it. Since fixing at least parts of those exploits also requires CPU microcode support (from here on: “µcode”), and not all operating systems and mainboards/machines might get the updates, this might be useful for the current situation as well. It can also be helpful if you wish to patch Intel Xeon support into a slightly older platform where the manufacturer hasn’t done so already. However, this article does not apply to modern UEFI-based systems.

So, instead of getting the required CPU µcodes from a donor BIOS, why not obtain the latest version from Intel’s [µcode package for Linux]? Oh, and it’s not really Linux-specific anyway.

I will show you how to extract and prepare the Intel µcodes and how to patch them so you can embed them into pretty much any somewhat modern AMI BIOS. AMI BIOSes are pretty widespread, but tools similar to the AMI-specific ones shown here might of course also exist for other BIOS brands.

The steps of this guide will be shown for both Linux/UNIX as well as Microsoft Windows.

All the tools used in this post can be found at the end of the article. You can also get all the currently released µcodes in one package there, so you don’t need to download all the archives from Intel (not all archives contain all µcodes…).

2.) Fetch and extract/convert Intel’s µcode package

You can get the µcodes directly from Intel, again, here’s the [link]. To unpack the .tgz archive on MS Windows, you may want to use [7-zip] (It’s a 2-step process with 7-zip, from .tgz to .tar and from .tar to the final files). For older releases, all you’ll get by doing this is a microcode.dat file, sometimes the file name will also contain a date. That’s all the µcodes assembled in one file in a text format. This is useless for patching the data into BIOS images, so we’ll need to extract the individual CPU µcodes and convert them into the proper binary format.

Newer releases might contain pre-built binary versions as well, but we’ll just continue to work with microcode.dat.

2a.) On Linux or UNIX

We’ll use iucode_tool for that, as most Linux systems should have a package ready for that program. If you can’t get or compile iucode_tool at all, you can compile microdecode from source on Linux or UNIX instead, see 2b.) and 8.)! I have tested this on CentOS 6.9 Linux, and microdecode compiles just fine. But let’s continue with iucode_tool:

To extract the µcodes, switch to the directory containing microcode.dat (after unpacking the .tgz file with tar -xzf) and run the following commands:

$ mkdir ./µcodes/
$ iucode_tool -t d -L --write-named-to="./µcodes/" "microcode.dat"

With that, you’ll get a lot of *.fw binary files in ./µcodes/. Those are in the regular Intel µcode format and come in different sizes depending on the CPU. The file names look similar to e.g. this: s000206C2_m00000003_r00000013.fw. The first “s” part contains the CPUID, the second “m” part shows the platform ID, and the “r” part denotes the µcode revision, all of it in hexadecimal notation.

2b.) On Windows

On Windows, we’ll use the microdecode command line tool. It’s a bit easier to use than iucode_tool, as its only purpose really is µcode extraction and nothing else. Switch to the directory containing the unpacked microcode.dat, make sure microdecode.exe is in your search path (or in the same directory), and run the following command:

microdecode.exe microcode.dat

This will result in a lot of *.bin files that are identical to the .fw files the *nix tool extracts. The file names will be similar to this: cpu000206c2_plat00000003_ver00000013_date20100907.bin. The first “cpu” part contains the CPUID like above, the second “plat” part contains the platform ID, and the “ver” part shows the µcode revision. Additionally, we also get the release date in the final “date” part.

3.) Identifying your CPU

You’ll likely know your CPU model if you’re reading this. But what you’ll need to know beyond that is your exact CPUID, stepping and the revision of CPU µcode you’re currently running. You can look up some of the information in CPU-World’s [CPUID database], but ultimately, you’ll need to fetch it from your local machine anyway.

Important background information: The CPUID is a 32-bit value structured as follows:

CPUID structure

The binary structure of Intels’ 32-bit CPUID, 4 bits equal one hexadecimal character

The binary value is usually expressed as hexadecimal characters of 4 bits each, e.g. the binary CPUID 0000:0000:0000:0010:0000:0110:1100:0010bin would result in 000206C2hex.

3a.) On Linux/UNIX

On Linux, the required information can be grabbed from procfs. Like this, for example:

$ cat /proc/cpuinfo | grep -e "model" -e "stepping" -e "microcode" -e "cpu family" | sort | uniq

cpu family	: 6
microcode	: 15
model		: 44
model name	: Intel(R) Core(TM) i7 CPU       X 980  @ 3.33GHz
stepping	: 2

You have to be careful interpreting this information however, as it’s mostly decoded in decimal form. Modern kernels might show some of the data in hexadecimal, so be careful here!

But first, convert the decimals from /proc/cpuinfo to hexadecimal, where necessary, in this case we’ll do it for the µcode revision and the model number:

$ printf '%X\n' 15
F
$ printf '%X\n' 44
2C

So, the model number is 2Chex. Here, the right part, Chex, is the actual model number (CPUID bits 4..7) and 2hex is the extended model number (CPUID bits 16..19).

The µcode version here would be Fhex. Note that µcode versions can be wider than just 1 hexadecimal character as well. The stepping of 2 is below 10, so it reads the same in hexadecimal and needs no conversion.

To assemble our final string, which is 8 hexadecimal characters wide, we walk through the bit mask in the image shown above step by step! All “reserved” parts are just filled with zeroes:

  1. 0 (Bits 31..28, reserved)
  2. 0 (Bits 27..24, extended CPU family, zero for family 6)
  3. 0 (Bits 23..20, extended CPU family, zero for family 6)
  4. 2 (Bits 19..16, extended CPU model number)
  5. 0 (Bits 15..12, two reserved and two platform type bits, typically zero)
  6. 6 (Bits 11..8, CPU family code)
  7. C (Bits 7..4, CPU model number)
  8. 2 (Bits 3..0, CPU stepping ID)

As a result of this, our CPUID string looks as follows:

  • 000206C2hex, currently running µcode revision Fhex.

Please note that information down, it’s needed in the next step.
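If you’d rather let the shell do the bit-fiddling, here’s a minimal sketch (a little helper of my own, not part of any official tooling) that rebuilds the signature from /proc/cpuinfo. It assumes a CPU family below 15, so the extended family bits stay zero, which holds for all CPUs discussed here:

#!/bin/sh
# Rebuild the Intel CPUID signature from the decimal values in /proc/cpuinfo.
# Assumption: cpu family < 15, so the extended family field (bits 27..20) is zero.
family=$(grep -m1 "^cpu family" /proc/cpuinfo | awk '{print $NF}')
model=$(grep -m1 "^model[^ ]" /proc/cpuinfo | awk '{print $NF}')
stepping=$(grep -m1 "^stepping" /proc/cpuinfo | awk '{print $NF}')
printf '%08X\n' $(( (model / 16 << 16) | (family << 8) | (model % 16 << 4) | stepping ))

For the i7 980X example above, this prints 000206C2.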

Of course, if you’re running some other UNIX, you’d be needing different tools. On FreeBSD you may be able to work with the “cpuflags” tool, like this:

$ cpuflags x 2>&1 | grep "Origin"

For other UNICES, you’re on your own though.

3b.) On Windows

On Windows I’d recommend Ray Hinchcliffes’ [System Information Viewer], or “SIV” in short. It’s a pretty powerful freeware system information tool that will read out the stuff we need for the next steps (You can also use something like CPU-Z of course). See the following screenshot to learn what you need to look for:

System Information Viewer (SIV)

System Information Viewer (click to enlarge)

From left to right, we need the following data as pointed out with those arrows: CPU family 6hex, CPU model 2Chex (you can ignore the decimal value “44”), stepping 2hex and µcode revision 13hex. Again, 2Chex is a combination of model number Chex and extended model number 2hex!

Just like on Linux, the final identification string needs to be assembled first. Luckily, SIV already does the decimal -> hexadecimal conversion for us. To assemble the entire CPUID, put everything together like this: 

  1. 0 (Bits 31..28, reserved)
  2. 0 (Bits 27..24, extended CPU family, zero for family 6)
  3. 0 (Bits 23..20, extended CPU family, zero for family 6)
  4. 2 (Bits 19..16, extended CPU model number)
  5. 0 (Bits 15..12, two reserved and two platform type bits, usually zero)
  6. 6 (Bits 11..8, CPU family code)
  7. C (Bits 7..4, CPU model number)
  8. 2 (Bits 3..0, CPU stepping ID)

As a result of this, our CPUID string looks as follows:

  • 000206C2hex, currently running µcode revision 13hex.

The revision is the µcode version you’re currently running. Newer versions will just have higher numbers, so that’s how you can compare them with what you extracted from Intel’s package.

Note that information down, as you’ll need it in the next step.

4.) Locating the correct µcodes

4a.) On Linux/Unix

Let’s continue to assume your CPUID string is 000206C2hex. Switch to the directory where you extracted the Intel µcode package for Linux and look for the correct µcode binaries (in my case I have them grouped in folders by date as well):

$ find . -iname "*000206C2*"

Two versions are available: The newer 13hex and the older Fhex (in decimal that would amount to 19 vs. 15, higher is always newer).
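By the way, if you run iucode_tool on the affected machine itself, it should also be able to pick the matching µcode for you by scanning the running CPUs (at least reasonably recent versions can; check your man page for -S/--scan-system):

$ iucode_tool -t d -S -l microcode.dat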

4b.) On Windows

Like on Linux, let’s assume your CPUID string is 000206C2hex. Switch to the directory where your extracted Intel µcodes are, and look for the corresponding files:

DIR /B /S "*000206C2*"

There you go, the newer 13hex and the older Fhex µcodes.

5.) What’s wrong with AMI MMTool: A detailed explanation

To embed the µcodes into an actual AMI BIOS image, AMIs’ MMTool “CPU Patch” function is required. It’s a Windows program, but given its simple nature, it will run fine with Wine on Linux and FreeBSD UNIX as well:


But if you attempt to embed Intel’s original binaries, it’ll fail right away:

MMTool failing to import Intel µcodes directly

MMTool failing to import Intel µcodes directly

If you extract existing µcodes from a given BIOS image and compare them with the same µcode revision in Intels’ format, you’ll notice a size difference:

$ ls -al ./06C2-Rev13.bin 20100914/µcodes/s000206C2_m00000003_r00000013.fw
-rw-rw-r-- 1 thrawn users 8192 Jan 10 14:04 ./06C2-Rev13.bin
-rw-r--r-- 1 thrawn users 7168 Jan 10 14:26 20100914/µcodes/s000206C2_m00000003_r00000013.fw

Intel’s µcode file in this case is 7168 bytes or 7kiB in size, but the stuff we extracted from that AMI BIOS is 8192 bytes or 8kiB in size. So what’s the deal here? At first I had no idea at all and thought that they’d just be encoded differently in the AMI BIOS or something. But it turns out that’s not the case. If you open both files in a hex editor of your choice, you’ll notice something at the end of the file that came from the AMI BIOS:


The final 1kiB is just binary zeroes! So how do we fix that? In essence, the entire procedure is like this:

  1. Extract µcodes from the target AMI BIOS image using MMTool.
  2. Look at how large that file is (in bytes).
  3. Look at how large the desired Intel µcode file “for Linux” is for your exact CPU.
  4. Calculate the difference in bytes.
  5. Fill the end of the Intel file with as many binary zeroes as there are missing bytes.
  6. Save that file and try to load it into your BIOS image using MMTools’ “CPU Patch” function.

Alright, let’s do it!

6.) Patching those µcodes for AMI MMTool and patching the BIOS itself

First, extract some random µcode from your BIOS using AMI MMTool’s “CPU Patch” function. Look at the file and note down its size in bytes. Also, you may want to look at the extracted µcode to see whether it has a lot of zeroes at the end. In my case, as said, the µcodes in that AMI BIOS are 8192 bytes, or 8kiB large, while Intel’s µcode is 7kiB in size. 8192 – 7168 = 1024, so again, the difference is 1024 bytes or 1kiB.

Please triple-check this stuff, as a wrong padding will break the µcode! Of course MMTool should warn you if something’s wrong, but it doesn’t hurt to make sure it’s safe from the start.

6a.) On Linux/UNIX

Let’s make this easy, generate a 1kiB zero padding file for future use on similar µcode files:

$ dd bs=1 count=1024 if=/dev/zero of=./1kiB-zeropadding.bin
1024+0 records in
1024+0 records out
1024 bytes (1.0 kB) copied, 0.00383825 s, 267 kB/s

And now, append the zeroes to the end of the file, generating a new file in the process, so we don’t touch the originals:

$ cat ./20100914/µcodes/s000206C2_m00000003_r00000013.fw ./1kiB-zeropadding.bin > ./0206C2-rev13.bin
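Before loading it into MMTool, it can’t hurt to double-check that the padded file now matches the size of the µcode extracted from the BIOS image, 8192 bytes in this example (on the BSDs you’d use stat -f %z instead):

$ stat -c %s ./0206C2-rev13.bin
8192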

That file 0206C2-rev13.bin is now the final µcode file for patching into the target BIOS. As an example, I’ll patch those µcodes for Xeon X5600 processors into an ancient BIOS version 0402 of an ASUS P6T Deluxe mainboard:


Save that file, and you’re done modifying your BIOS with µcodes from Intels’ Linux package!

6b.) On Windows

On MS Windows there is no dd command by default, but we have another preinstalled utility for creating our 1kiB zero padding file: fsutil! Like this:

fsutil File CreateNew .\1kiB-zeropadding.bin 1024

We don’t have a cat command on Windows either, but we can use a binary copy for file concatenation, no need to install any additional tools. We’ll attach the zero padding to the end of the Intel µcode and write the result to a new file, so we’re not altering the originals:

COPY /B .\20100914\cpu000206c2_plat00000003_ver00000013_date20100907.bin + .\1kiB-zeropadding.bin .\0206C2-rev13.bin
        1 file(s) copied.

That file 0206C2-rev13.bin is now the final µcode file for patching into the target BIOS. As an example, I’ll patch those µcodes for Xeon X5600 processors into an ancient BIOS version 0402 of an ASUS P6T Deluxe mainboard:


Save that file, and you’re done modifying your BIOS with µcodes from Intels’ Linux package!

7.) Flashing the modified BIOS

This step depends on your system. You may be able to flash from within the BIOS itself (e.g. from floppy disk or USB pen drive), or from within MS Windows or by using some bootable medium.

In any case, your BIOS image is ready now, and can be flashed at your convenience. After the update, re-check your µcode revision as shown in 3.). Your CPU should now be running the updated µcode!

8.) Downloads

Some of the programs used in this article can be downloaded here:

  • [AMI MMTool v3.22]
  • [microdecode], compiled by myself, contains 32-bit and 64-bit versions of the tool as well as the source code and license file. Compiles on Linux/UNIX as well.
  • [Intel µcodes], everything from 2009-03-30 up to 2018-01-08 as available on the Intel web site at the moment.

9.) Hopefully you haven’t bricked your mainboard

…all of this without any kind of guarantees of course! :roll:

Mar 01 2017
Notepadqq @ CentOS 6 Linux logo

It’s rather rare for me to look for a replacement of some good Windows software for Linux/UNIX instead of the other way around, but the source code editor [Notepad++] is one example of such software. The program gedit on the Gnome 2 desktop environment of my old CentOS 6 enterprise Linux isn’t bad, but it isn’t exactly good either. The thing I was missing most was a search & replace engine capable of regular expressions.

Of course, vi can do it, but at times, vi can be a bit hard to use, so I kinda looked for a Notepad++ replacement. What I found was [Notepadqq], which is basically a clone using the Qt5 UI. However, this editor is clearly made for more modern systems, but I still looked for a way to get it to compile and run on my CentOS 6.8 x86_64 Linux system. And I found one. Here are the most important prerequisites:

  • A new enough GCC (I used 6.2.0), because the v4.4.7 platform compiler won’t work with the modern C++ stuff
  • Qt5 libraries from the [EPEL] repository
  • git

First, you’ll want a new compiler. That part is surprisingly easy, but a bit time-consuming. Download a fresh GCC tarball from a server on the [mirrors list], those are in the releases/ subdirectory, so a file like gcc-6.3.0.tar.bz2 (my version is still 6.2.0). It seems Notepadqq only needs some GCC 5, but since our platform compiler won’t cut it anyway, why not just use the latest?

Now, once more, this will take time, could well be hours, so you might wanna do the compilation step over night, the last step needs root privileges:

$ tar -xjvf ./gcc-6.3.0.tar.bz2
$ cd ./gcc-6.3.0/
$ ./configure --program-suffix="-6.3.0"
$ make
# make install
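One note from my side: should configure stop because GMP, MPFR or MPC are missing on your system, the GCC source tree ships a helper script that downloads them right into the tree so they’re built along with the compiler. Run it from the top-level source directory before ./configure:

$ cd ./gcc-6.3.0/
$ ./contrib/download_prerequisites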

And when you do this, please never forget to add a --program-suffix for the configuration step!  You might seriously fuck things up if you miss that! So double-check it!

When that’s finally done, let’s handle Qt5 next. I’ll be using a binary distribution to make things easy, and yeah, I didn’t just install the necessary packages, I got the whole Qt5 blob instead, too lazy to do the cherry picking. Oh, and if you don’t have it, add git as well:

# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
# yum install qt5* git

I assume # yum install qt5-qtwebkit qt5-qtwebkit-devel qt5-qtsvg qt5-qtsvg-devel qt5-qttools qt5-qttools-devel should also be enough according to the requirements, but I didn’t try that. Now, enter a free directory or one you generally use for source code and fetch the latest Notepadqq version (this will create a subfolder we’ll cd to):

$ git clone https://github.com/notepadqq/notepadqq.git
$ cd ./notepadqq

After that, we need to make sure that we’ll be using the correct compiler and that we’re linking against the correct libraries that came with it (like libstdc++.so.6.*). To do that, set the following environment variables, assuming you’re using the bash as your shell (use lib/ instead of lib64/ folders if you’re on 32-bit x86):

$ export CC="gcc-6.3.0"
$ export CXX="g++-6.3.0"
$ export CPP="cpp-6.3.0"
$ export CFLAGS="-I/usr/local/include/ -L/usr/local/lib64/"
$ export CXXFLAGS="-I/usr/local/include/ -L/usr/local/lib64/"
$ export LDFLAGS="-L/usr/local/lib64/"

The C-related settings are probably not necessary as Qt5 stuff should be pure C++, but you’ll never know, so let’s play it safe.

With that we’re including and linking against the correct libraries and we’ll be using our modern compiler as well. Time to actually compile Notepadqq. To do that, we’ll still need to tell it where to find the Qt5 versions of the qmake and lrelease binaries, but luckily, we can solve that with some simple configuration options. So, let’s do this, the last step requires root privileges again, from within the notepadqq/ directory that git clone created for us:

$ ./configure --qmake /usr/bin/qmake-qt5 --lrelease /usr/bin/lrelease-qt5
$ make
# make install

Now, there are some weird linking issues that I never got fixed on CentOS (some developer please tell me how, I have the same crap when building x265!). Because of that we still can’t launch Notepadqq as-is, we need to give it an LD_LIBRARY_PATH to find the proper libraries at runtime. Let’s just create an executable launcher script /usr/local/sbin/notepadqq.sh for that. Edit it and enter the following code:

#!/bin/sh
LD_LIBRARY_PATH="/usr/local/lib64" /usr/local/bin/notepadqq "$@"

Use this as your launcher script for Notepadqq and you’re good to go with your Notepad++ replacement on good old CentOS 6.x:

Running Notepadqq on CentOS 6 Linux

Running the latest Notepadqq on CentOS 6 Linux with Qt5 version 5.6.1, state 2017-03-01

Now, let’s see whether it’s even that good actually… :roll:

Nov 22 2016
FreeBSD IBM ServeRAID Manager logo

And yet another FreeBSD-related post: After [updating] the IBM ServeRAID manager on my old Windows 2000 server I wanted to run the management software on any possible client. Given it’s Java stuff, that shouldn’t be too hard, right? Turned out not to be too easy either. Just copying the .jar file over to Linux and UNIX and running it like $ java -jar RaidMan.jar wouldn’t do the trick. Got nothing but some exception I didn’t understand. I wanted to have it work on XP x64 (easy, just use the installer) and Linux (also easy) as well as FreeBSD. But there is no version for FreeBSD?!

The ServeRAID v9.30.21 manager only supports the following operating systems:

  • SCO OpenServer 5 & 6
  • SCO Unixware 7.1.3 & 7.1.4
  • Oracle Solaris 10
  • Novell NetWare 6.5
  • Linux (only certain older distributions)
  • Windows (2000 or newer)

I started by installing the Linux version on my CentOS 6.8 machine. It does come with some platform-specific libraries as well, but those are for running the actual RAID controller management agent for interfacing with the driver on the machine running the ServeRAID controller. But I only needed the user space client program, which is 100% Java stuff. All I needed was the proper invocation to run it! By studying IBMs RaidMan.sh, I came up with a very simple way of launching the manager on FreeBSD by using this script I called serveraid.sh (Java is required naturally):

#!/bin/sh

# ServeRAID Manager launcher script for FreeBSD UNIX
# written by GAT. http://www.xin.at/archives/3967
# Requirements: An X11 environment and java/openjdk8-jre

curDir="$(pwd)"
baseDir="$(dirname $0)/"

mkdir ~/.serveraid 2>/dev/null
cd ~/.serveraid/

java -Xms64m -Xmx128m -cp "$baseDir"RaidMan.jar com.ibm.sysmgt.raidmgr.mgtGUI.Launch \
-jar "$baseDir"RaidMan.jar $* < /dev/null >> RaidMan_StartUp.log 2>&1

mv ~/RaidAgnt.pps ~/RaidGUI.pps ~/.serveraid/
cd "$curDir"

Now with that you probably still can’t run everything locally (=in a FreeBSD machine with ServeRAID SCSI controller) because of the Linux libraries. I haven’t tried running those components on linuxulator, nor do I care for that. But what I can do is to launch the ServeRAID manager and connect to a remote agent running on Linux or Windows or whatever is supported.

Now since this server/client stuff probably isn’t secure at all (no SSL/TLS I think), I’m running this through an SSH tunnel. However, the Manager refuses to connect to a local port because “localhost” and “127.0.0.1” make it think you want to connect to an actual local RAID controller. It would refuse to add such a host, because an undeletable “local machine” is always already set up to begin with, and that one won’t work with an SSH tunnel as it’s probably not running over TCP/IP. This can be circumvented easily though!

Open /etc/hosts as root and enter an additional fantasy host name for 127.0.0.1 and ::1. I did it like that with “xin”:

::1			localhost localhost.my.domain xin
127.0.0.1		localhost localhost.my.domain xin

Now I had a new host “xin” that the ServeRAID manager wouldn’t complain about. Now set up the SSH tunnel to the target machine; I put that part into a script /usr/local/sbin/serveraidtunnel.sh. Here’s an example: 34571 is the ServeRAID agent’s default TCP listen port, and 192.168.0.10 stands in for the LAN IP of the remote machine hosting the ServeRAID array:

ssh -fN -p22 -L34571:192.168.0.10:34571 mysshuser@www.myserver.com

You’d also need to replace “mysshuser” with your user name on the remote machine, and “www.myserver.com” with the Internet host name of the server via which you can access the ServeRAID machine. Might be the same machine or a port forward to some box within the remote LAN.

Now you can open the ServeRAID manager and connect to the made-up host “xin” (or whichever name you chose), piping traffic to and from the ServeRAID manager through a strongly encrypted SSH tunnel:

IBM ServeRAID Manager on FreeBSD

It even detects the local systems’ operating system “FreeBSD” correctly!


IBM ServeRAID Manager on FreeBSD

Accessing a remote Windows 2000 server with a ServeRAID II controller through an SSH tunnel, coming from FreeBSD 11.0 UNIX

IBM should’ve just given people the RaidMan.jar file with a few launcher scripts to be able to run it on any operating system with a Java runtime environment, whether Windows, or some obscure UNIX flavor or something else entirely, just for the client side. Well, as it stands, it ain’t as straight-forward as it may be on Linux or Windows, but this FreeBSD solution should work similarly on other systems as well, like e.g. Apple MacOS X or HP-UX and others. I tested this with the Sun JRE 1.6.0_32, Oracle JRE 1.8.0_112 and OpenJDK 1.8.0_102 for now, and even though it was originally built for Java 1.4.2, it still works just fine.

Actually, it works even better than with the original JRE bundled with RaidMan.jar, at least on MS Windows (no more GUI glitches).

And for the easy way, here’s the [package]! Unpack it wherever you like, maybe in /usr/local/. On FreeBSD, you need [archivers/p7zip] to unpack it and a preferably modern Java version, like [java/openjdk8-jre], as well as X11 to run the GUI. For easy binary installation: # pkg install p7zip openjdk8-jre. To run the manager, you don’t need any root privileges, you can execute it as a normal user, maybe like this:

$ /usr/local/RaidMan/serveraid.sh

Please note that my script will create your ServeRAID configuration in ~/.serveraid/, so if you want to run it as a different user or on a different machine later on, you should recursively copy that directory to the new user/machine. That’ll retain the local client configuration.
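A quick sketch of what that could look like over SSH (user and host names are just placeholders of course):

$ scp -r ~/.serveraid/ someuser@otherhost:~/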

That should do it! :)

Nov 19 2016
FreeBSD GMABoost logo

Recently, after finding out that the old Intel GMA950 profits greatly from added memory bandwidth (see [here]), I wondered if the overclocking mechanism applied by the Windows tool [here] had leaked into the public after all this time. The developer of said tool refused to open source the software even after it turning into abandonware – announced support for GMA X3100 and X4500 as well as MacOS X and Linux never came to be. Also, he did not say how he managed to overclock the GMA950 in the first place.

Some hackers disassembled the code of the GMABooster however, and found out that all that’s needed is a simple PCI register modification that you could probably apply by yourself on Microsoft Windows by using H.Oda!s’ [WPCREdit].

Tools for PCI register modification do exist on Linux and UNIX as well of course, so I wondered whether I could apply this knowledge on FreeBSD UNIX too. Of course, I’m a few years late to the party, because people have already solved this back in 2011! But just in case the scripts and commands disappear from the web, I wanted this to be documented here as well. First, let’s see whether we even have a GMA950 (of course I do, but still). It should be PCI device 0:0:2:0, you can use FreeBSDs’ own pciconf utility or the lspci command from Linux:

# lspci | grep "00:02.0"
00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
# pciconf -lv pci0:0:2:0
vgapci0@pci0:0:2:0:    class=0x030000 card=0x30aa103c chip=0x27a28086 rev=0x03 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller'
    class      = display
    subclass   = VGA

Ok, to alter the GMA950’s render clock speed (we are not going to touch its 2D “desktop” speed), we have to write certain values into some PCI registers of that chip at 0xF0hex and 0xF1hex. There are three different values regulating clock speed. Since we’re going to use setpci, you’ll need to install the sysutils/pciutils package on your machine via # pkg install pciutils. I tried to do it with FreeBSD’s native pciconf tool, but all I managed was to crash the machine a lot! Couldn’t get it solved that way (just me being too stupid I guess), so we’ll rely on a Linux tool for this. Here is my version of the script, which I call gmaboost.sh. I placed that in /usr/local/sbin/ for global execution:

#!/bin/sh

case "$1" in
  200) clockStep=34 ;;
  250) clockStep=31 ;;
  400) clockStep=33 ;;
  *)
    echo "Wrong or no argument specified! You need to specify a GMA clock speed!" >&2
    echo "Usage: $0 [200|250|400]" >&2
    exit 1
  ;;
esac

setpci -s 02.0 F0.B=00,60
setpci -s 02.0 F0.B=$clockStep,05

echo "Clockspeed set to "$1"MHz"

Now you can do something like this: # gmaboost.sh 200 or # gmaboost.sh 400, etc. Interestingly, FreeBSDs’ i915_kms graphics driver seems to have set the 3D render clock speed of my GMA950 to 400MHz already, so there was nothing to be gained for me in terms of performance. I can still clock it down to conserve energy though. A quick performance comparison using a crappy custom-recorded ioquake3 demo shows the following results:

  • 200MHz: 30.6fps
  • 250MHz: 35.8fps
  • 400MHz: 42.6fps

Hardware was a Core 2 Duo T7600 and the GPU was making use of two DDR-II/667 4-4-4 memory modules in dual channel configuration. Resolution was 1400×1050 with quite a few changes in the Quake III configuration to achieve more performance, so your results won’t be comparable, even when running ioquake3 on identical hardware. I’d post my ~/.ioquake3/baseq3/q3config.cfg here, but in my stupidity I just managed to freaking wipe the file out. Now I have to redo all the tuning, pfh.

But in any case, this really works!
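If you want to double-check things on the register level, setpci can also read the two bytes back, so you can see what’s currently programmed at 0xF0hex and 0xF1hex before and after switching clocks:

# setpci -s 02.0 F0.B
# setpci -s 02.0 F1.B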

Unfortunately, it only applies to the GMA950. And I still wonder what it was that was so wrong with # pciconf -w -h pci0:0:2:0 0xF0 0060 && pciconf -w -h pci0:0:2:0 0xF0 3405 and the like. I tried a few combinations just in case my byte order was messed up or in case I really had to write single bytes instead of half-words, but either the change wouldn’t apply at all, or the machine would just lock up. Would be nice to do this with only BSD tools on actual FreeBSD UNIX, but I guess I’m just too stupid for pciconf

Nov 14 2016
HP/Compaq nx6310/nc6320 logo

A good while back, I got a free notebook from [The_Plague], an HP/Compaq nx6310[1][2] which he kinda pulled out of the trash at his company. It’s not exactly “Thinkpad T23” material, but it’s a pretty solid, well-built machine with a good keyboard. I’ve been using the thing as an operating system testbed for a while (Linux, ReactOS, Haiku OS, OpenBSD, Dragonfly BSD, and finally: FreeBSD UNIX). After settling for FreeBSD the machine clearly showed its limitations though, the most problematic being imposed by the very low-end i940GML chipset. That one has limited the machine to a single processor core and a 533MHz data rate FSB.

I did give the machine a Core Duo T2450, but switching dual core on in the BIOS results in a lockup at POST time. Also, the chipset cannot use dual-channel DDR-II and limits the user to 2GiB of memory, making the use of a 64-bit processor rather pointless. Which turned out to be bad, because some code doesn’t even provide full functionality for 32-bit anymore, like x265, which dropped deep color support on 32-bit architectures.

But now, The_Plague pulled another one out of the trash, it’s basically the exact same machine, but a higher-end model, the nc6320. This one has an i945GM chipset, which means dual core support, FSB667 and 4GiB dual-channel RAM capability! It came with a Core 2 Duo T5600 @ 1.83GHz with 2MiB L2 cache. I ordered the largest possible chip for this box from ebay Hong Kong, so now it has a Core 2 Duo T7600 @ 2.33GHz with 4MiB L2 cache. Also, 2×2=4GiB of DDR-II/667 CL4 are on their way already, together with a 12-cell secondary monster battery!

And of course, FreeBSD UNIX again, in its brand new version 11.0-RELEASE:

HP/Compaq nc6320 running FreeBSD 11.0 UNIX

HP/Compaq nc6320 running FreeBSD 11.0 UNIX (click to enlarge)

The CPU upgrade is actually even noticeable when browsing the web, lots of resource-hungry Javascript and CSS3, you know. Luckily, Chromium supports hardware acceleration on the Intel GMA950 GPU on FreeBSD, as the OS comes with a kernel modesetting compliant driver for almost all integrated Intel graphics chips. It’s too slow to do the rasterization stage on the GPU, but it still helps.

Once again, it shall serve mostly as a meeting and sysadmin machine, with a little bit of private-use-fun added on top. Let’s have a look at the software! Oh and by the way, I decided to make the screenshots 8-bit .png images, so some of them will look a bit bad. But still better+smaller than JPEG for that purpose:

Running screenfetch on the nc6320

Running screenfetch on the nc6320 (click to enlarge)

$ screenfetch is showing us some details about the machine, which also makes it clear that everything is “Tokisaki Kurumi”-themed. Since there’s a lot of red color on that girls’ garments it seems at least somewhat fitting for a FreeBSD machine.

Chromium with FVD Speed Dial

Chromium with FVD Speed Dial (click to enlarge)

I’m a [Vivaldi] fan personally, but that browser isn’t available on any BSD yet, so I installed a few extensions to make Chromium work somewhat like Vivaldi; The most important part being the static FVD speed dial you can see above. What you can’t see here are the other extensions that followed it: AdBlockPlus and Ghostery. I hear there are better/faster solutions than ABP for ad blocking these days however, so maybe I’ll revise that.

IBM Lotus Notes via wine 1.8

IBM Lotus Notes 6.5.1 via 32-bit wine 1.8.4 (click to enlarge)

Also, for work I would sometimes need IBM Lotus Notes, as it’s our Universities’ groupware solution (think of that what you will). While I couldn’t get the Linux version to run, our Domino servers still accept connections from older clients, so it’s Lotus Notes 6.5.1 running under a 32-bit [wine], which is a solution IBM officially recommended for running the software on Linux/UNIX a few years ago. And yeah, it still works. And if you have Windows software wine can’t cope with?

XP x64 via VirtualBox on FreeBSD

XP x64 via VirtualBox on FreeBSD (click to enlarge)

For anything that wine can’t handle, the VirtualBox port kicks in, as we can see here. Together with the CPUs VT-x extension and the guest tools, virtualizing Windows on FreeBSD UNIX works relatively well. Not all features are there (like USB passthrough), but it works ok for me. Will need a Windows 7 VM as well I think.

More stuff:

Communicating on FreeBSD

Communicating on FreeBSD (parts are censored, click to enlarge)

One important part is communication! Luckily, there is a version of licq in the ports tree now. It builds well together with its Qt4 UI, so no complaints there. Hexchat for IRC access is also available, but the tricky part was Skype; Not that I really need it, but I wanted to have the linuxulator up and running as well! For those of you who don’t know what the “linuxulator” is: It’s a series of kernel modules that extend FreeBSDs kernel with parts of the Linux kernel interface. On top of that, you can pull parts of Fedora 10 or CentOS 6.8 or some CentOS 7 Linux userspace components from the package servers. Together with the kernel modules those form a kind of runtime environment for executing Linux programs – Skype 4.3 in this case! So I have both wine and linuxulator ready for action, and with it access to ICQ, Jabber, MSN, IRC and Skype. Now, what about multimedia?

Multimedia on FreeBSD

smplayer and xmms on FreeBSD, unfortunately the 8-bit color is a bit too noticeable for this screenshot, my apologies (click to enlarge)

This is a part where the upgraded processor also helps. Here we can see (s)mplayer play the last episode of the Anime Hanayamata in taxing 2.5Mbit H.265/HEVC encoding, paired with AAC-LC audio. The Core 2 Duo T5600 had some issues with this, but the faster T7600 shows no problems. Additionally, xmms is playing a Commodore 64 SID tune using libsidplay2 and the reSID engine. xmms comes with a lot of funny plugins from the FreeBSD ports tree for Gameboy tunes or NES tunes, but the C64 one you need to compile for yourself. Not too hard though, you can fetch libsidplay2 and reSID from packages beforehand to make things easier! What else?


ioquake3, a cleaned up version of the Quake III Arena source code, here in its 64-bit FreeBSD build (click to enlarge)

A pretty fun part: Playing the native Quake3 port [ioquake3] in 64-bit, for whenever you just need to shoot something to blow off some steam. ;) I have to say, I had to tweak it quite a bit to run fluently on the WVA 1400×1050 display of this book given the weak GMA950 GPU, but it runs “rather ok” now. ioquake3 is also available for Windows, OSX and Linux by the way, including a more advanced OpenGL 2 renderer, which gives users access to some advanced graphical effects. And if I get bored by that…

HakuNeko Manga ripper and qComicbook

HakuNeko Manga ripper and qComicbook showing some sweet girls love! (click to enlarge)

Once again, fixing up HakuNekos’ build system and C++ code to work with FreeBSD properly took some time. Unfortunately there is no port for it yet (and I’m too stupid/lazy to create one), so you have to fix it by hand. Lots of replacing sed invocations with gsed, find with gfind etc. and the OS #ifdef parts, which need to be changed in several .cpp files, here’s an example from MangaConnector.cpp:

#ifdef __LINUX__
wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
#endif

Something like that needs to turn into this to compile on FreeBSD, otherwise you’ll end up with a HakuNeko that can’t do shit (it’ll still compile and run, but like I said, it’d be devoid of function):

#if defined __LINUX__ || __FreeBSD__
wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
#endif

This is true for the latest version 1.4.1 as well. I guess the modifications should also apply to other operating systems by adding things like __OpenBSD__ or similar.

Now all that’s left is to wait for that massive 12C battery, the RAM capacity+speed upgrade and some FreeBSD case sticker that I ordered from [unixstickers.com] (hint: That’s a referral URL, it’s supposed to give you some $5 coupon upon ordering, I hope it works). Upon my order, a small part was donated to the LLVM project – very fitting, given that I’ve used clang/llvm a lot to compile stuff on FreeBSD as of late. :)

FreeBSD case sticker (preview)

This is what it’s supposed to look like, and it’s going to replace the current Windows XP+Vista sticker

I hope it’ll look as good in real life! :) Ah, I think I’m gonna have a lot of fun with that old piece of junk. ;)

Ah, and thanks fly out to The_Plague, who saved this laptop from the trash bin and gave it to me for free! Prost!

Edit: And the memory is here, two G.Skill “performance” modules doing 4-4-4 latencies at 667MHz data rate, replacing a single Samsung module running 5-5-5. Now I was interested in how much going from single channel CL5 to dual channel CL4 would really affect performance. Let’s just say, it didn’t do too much for CPU processes. However, the effect on the integrated GMA950 GPU (using shared system memory!) was amazing. It seems the graphics chip was held back a lot by the memory interface! Let’s have a quick look at Quake III Arena performance using a quickly recorded demo just for this purpose (ioquake3 can’t play old Quake III Arena demos like the “001” demo):

  • ioquake3 1.36, single channel DDR-II/667 CL5: 30.6fps
  • ioquake3 1.36, dual channel DDR-II/667 CL4: 41.2fps

Roughly +35%!!

Tests were run three times, then three more times after a reboot. After that, an average was taken. For ioquake3 this wouldn’t even have been necessary though, as the results were extremely consistent. It’s amazing how much the added memory speed really affects the game engine! I rebooted and re-ran the tests several times because I couldn’t believe in that massive boost in performance, but it’s actually true and fully reproducible! This reminds me of how well modern AMD APU graphics chips scale with main memory speed and it explains why people were asking for quad-channel DDR4 on those Kaveri APU chips. Its built-in Radeons would’ve probably loved the added bandwidth!

I also kinda felt that browsing web sites got a lot more smooth using Chromium with most of its GPU acceleration turned on. So I tried the graphics-centric browser test [Motionmark] to put that to the test. Parts of the results were inconclusive, but let’s have a look first:

  • Motionmark 1.0 (medium screen profile), single channel DDR-II/667 CL5:
  • Overall result: 13.85 ±22.24%
  • Multiply: 119.26 ±2.95%
  • Canvas Arcs: 19.04 ±68.48%
  • Leaves: 3.00 ±133.33%
  • Paths: 85.30 ±6.57%
  • Canvas Lines: 1.00 ±0.00%
  • Focus: 1.76 ±5.22%
  • Images: 40.58 ±2.56%
  • Design: 18.89 ±8.00%
  • Suits: 24.00 ±37.50%
  • Motionmark 1.0 (medium screen profile), dual channel DDR-II/667 CL4:
  • Overall result: 22.47 ±15.93%
  • Multiply: 124.55 ±1.60%
  • Canvas Arcs: 26.00 ±138.46%
  • Leaves: 65.90 ±16.93%
  • Paths: 37.00 ±16.89%
  • Canvas Lines: 1.00 ±0.00%
  • Focus: 2.00 ±50.00%
  • Images: 41.58 ±3.59%
  • Design: 24.49 ±2.35%
  • Suits: 90.65 ±13.55%

Now first things first: This was just my first pick for any kind of graphics-heavy browser benchmark. I thought I needed something that would make the browser do a lot of stuff on the GPU, given that hardware acceleration was almost fully enabled on FreeBSD UNIX + Chromium + GMA950. However, after repeated runs it showed that the variance was just far too high on the following tests: Leaves, Paths, Suits. Those would also mess up the overall score. The ones that showed consistent performance were: Multiply, Canvas Arcs, Canvas Lines, Focus, Images, Design, so we should focus on those. Well, not all of those tests show promising results (Multiply, Canvas Lines), but some clearly do. It seems my feeling that parts of CSS3 etc. had gotten faster after the memory upgrade was spot-on!

Not bad, not bad at all! And tomorrow morning, the [x264 benchmark] will also have finished, showing how much a classic CPU-heavy task would profit from that upgrade (probably not much, but we’ll see tomorrow).

Edit 2: And here is the rest. Like I thought, the memory upgrade had only minimal impact on CPU performance:

  • x264 benchmark, single channel DDR-II/667 CL5: runtime 04:40:08.621
  • x264 benchmark, dual channel DDR-II/667 CL4: runtime 04:38:23.851

So yeah it’s faster. But only by a meager +0.62%. Completely negligible. But it’s still a good upgrade given the GPU performance boost and the fact that I can now use more memory for virtual machines. :)

Ah, and here’s the 12-cell ultra capacity battery, which gives me a total of 18 cells in conjunction with the 6-cell primary battery:

Nice hardware actually, you can check its charge (roughly) with a button and a 4-LED display, and it has its own charging plug. What surprised me most though was this:

$ hwstat | grep -i -e "serial number" -i -e battery
[ACPI Battery (sysctl)]
        Serial number:                  00411 2006/10/12
        Serial number:                  00001 2016/07/29

That probably explains how a still sealed battery could come with a ~25% pre-charge. Manufactured in July 2016, wow. And that for a notebook that’s 10 years old? Ok, it’s an aftermarket battery by [GRS], but that’s just damn fine still! With that I’ll surely have enough battery runtime to make it through longer meetings as well! :)

Edit 3: And today I used the notebook for a sysadmin task, helping our lead developer in debugging a weird problem in a Java-based student exam submission and evaluation system of ours at work. I suspected that the new CuPPIX (=KNOPPIX derivative) distribution I built for this was to blame, but it turned out to be a faulty Java library handling MySQL database access, hence crashing our server software under high parallel loads. In any case, I had the nc6320 with me during the entire morning up until 12:30 or so, walking away with a total charge of 49% left after the developer had fixed the problem. Not stellar given a total of 18 cells, but definitely good enough for me! :)

Edit 4: And my FreeBSD sticker from unixstickers is finally here! They even gave me a bunch of random free stickers to go with it! I gave those to some colleagues for their kids. ;) And here it is:

FreeBSD sticker from unixstickers.com

There was a Windows Vista/XP sticker before, now it shows some UNIX love! (click to enlarge)

The sticker shows some pretty good quality as well, nice stuff! :)

Sep 07 2016
TeamViewer on Linux logo

I’m not exactly a big fan of TeamViewer, since you’ll never know what’s going to happen with that traffic of yours, so I prefer VNC over SSH instead. A few weeks ago I got TeamViewer access to a remote workstation machine for the purpose of processing A/V files however. Basically, it was about video and audio transcoding on said machine.

Since the stream meta data (like the language of an audio stream) wasn’t always there, I wanted to check it by playing back the files remotely in foobar2000 or MPC-HC. TeamViewer does offer a feature to relay the audio from a remote machine to your local box, as long as the remote server has some kind of soundcard / sound chip installed. I was using TeamViewer 11 – the newest version at the time of writing – to connect from CentOS 6.8 Linux to a Windows 7 Professional machine. Playing back audio yielded nothing but silence though.

Now, TeamViewer is actually not native Linux software. Both its Linux and MacOS X versions come with a bundled Wine 1.6 distribution preconfigured to run the 32-bit TeamViewer Windows binary. It was thus logical to assume that the configuration of TeamViewers’ built-in Wine was broken. This may happen in cases where you upgrade TeamViewer from previous releases (which is what I had done, 7 -> 8 -> 9 -> 11).

There are a multitude of proposed solutions to fix this, and since none of them worked for me as-is, I’d like to add my own to the mix. The first useful hint came from [here]. You absolutely need a working system-wide Wine setup for this. I already had one that I needed for work anyway, namely Wine 1.8.6 from the [EPEL] repository, configured using [winetricks]. We’re going to take some files from that installation and essentially replace TeamViewers’ own Wine with the one distributed by EPEL.

So I had TeamViewer 11 installed in /opt/teamviewer/ and some important configuration files for it in ~/.local/share/teamviewer11/ and ~/.config/teamviewer/. First, we backup the wine files of TeamViewer and replace them with the platform ones (the paths may vary depending on your Linux distribution, but the file names should not):

# mv /opt/teamviewer/tv_bin/wine/bin/wine /opt/teamviewer/tv_bin/wine/bin/wine.BACKUP
# mv /opt/teamviewer/tv_bin/wine/bin/wineserver /opt/teamviewer/tv_bin/wine/bin/wineserver.BACKUP
# mv /opt/teamviewer/tv_bin/wine/bin/wine-preloader /opt/teamviewer/tv_bin/wine/bin/wine-preloader.BACKUP
# mv /opt/teamviewer/tv_bin/wine/lib/libwine.so.1.0 /opt/teamviewer/tv_bin/wine/lib/libwine.so.1.0.BACKUP
# mv /opt/teamviewer/tv_bin/wine/lib/wine/ /opt/teamviewer/tv_bin/wine/lib/wine.BACKUP/
# cp /usr/bin/wine /usr/bin/wineserver /usr/bin/wine-preloader /opt/teamviewer/tv_bin/wine/bin/
# cp /usr/lib/libwine.so.1.0 /opt/teamviewer/tv_bin/wine/lib/
# cp -r /usr/lib/wine/ /opt/teamviewer/tv_bin/wine/lib/

This will replace all the binaries and libraries, in my case shoving Wine 1.8.6 underneath TeamViewer. This isn’t all that’s needed however. We’ll also need the system registry hive of your working Wine installation (with sound). That should be stored in ~/.wine/system.reg! Let’s replace TeamViewers’ own hive with this one:

$ mv ~/.local/share/teamviewer11/system.reg ~/.local/share/teamviewer11/system.reg.BACKUP
$ cp ~/.wine/system.reg ~/.local/share/teamviewer11/

Ok, and the final part is adding the proper Linux audio backend to this Wine’s configuration. That part is stored in ~/.wine/user.reg. Replacing the whole file didn’t work for me though, as TeamViewer would crash upon launch, probably missing some keys from its own user.reg. So, let’s just edit its file instead: open ~/.local/share/teamviewer11/user.reg with your favorite text editor and add the following line in a proper location (it’s sorted alphabetically):

[Software\\Wine\\Drivers\\winepulse.drv] 1473239241

The corresponding file should be found within TeamViewers’ replaced Wine distribution now by the way, in my case it’s /opt/teamviewer/tv_bin/wine/lib/wine/fakedlls/winepulse.drv.

Now, run the TeamViewer profile updater (Some people say it’s required to make this work, it wasn’t for me, but it didn’t hurt either): $ /opt/teamviewer/tv_bin/TeamViewer --update-profile and then its’ Wine configuration: $ /opt/teamviewer/tv_bin/TeamViewer --winecfg. After that, you should be greeted with this:

TeamViewer 11 running its own winecfg

TeamViewer 11 running its own copy of winecfg.

Before the modifications, the configuration window would show “None” as the driver, without any way to change it. So no audio, whereas we have Pulseaudio now. Press “Test Sound” if you want to check whether it truly works. I haven’t tested the ALSA backend by the way. In my case, as soon as the registry was fixed, Wine just autoselected Pulseaudio, which is fine for me.

Now launch TeamViewer and check out the audio options in this submenu:

TeamViewer preferences

The TeamViewer 11 preferences can be found here.

It should look like this:

TeamViewers audio options

Make sure “Play computer sounds and music” is checked! (click to enlarge)

Now, after having connected and logged in, you may also wish to verify the conference audio settings in TeamViewers’ top menu:

TeamViewer 11 conference audio settings

TeamViewer 11 conference audio settings, make sure “Computer sound” is checked!

When you play a sound file on the remote computer, you should hear it on your local one as well. With that, I can finally test the audio files I’m supposed to use on that remote machine for their actual language (which is a rather important detail) where meta data isn’t available.

This seems to be a problem of TeamViewer’s installation / update procedure which hasn’t been addressed for several major releases now. I presume just removing all traces of TeamViewer and installing it from scratch might also do the trick, but I didn’t try it for myself.

Ah, and one more thing: If you can’t launch TeamViewer on CentOS 6.x because you’re getting the following error…

teamviewerd error

TeamViewer Daemon not running…

…forget about the solutions on the web on top of what this message is telling you. TeamViewer 11 uses a systemd-style script for launching its daemon on Linux now, and that won’t do on SysV init systems. Just become root and launch the crap manually: # /opt/teamviewer/tv_bin/teamviewerd &, then press <CTRL>+<d> and it works!

Let’s hope that daemon isn’t doing anything evil while running as root. :roll:

Aug 03 2016
xmms logo

Yeah, I’m still using the good old xmms v1 on Linux and UNIX. Now, the boxes I used to play music on were all 32-bit x86 so far. Just some old hardware. Recently I tried to play some of my [AAC-LC] files on my 64-bit Linux workstation, and to my surprise, it failed to play the file, giving me the error Pulse coding not allowed in short blocks when started from the shell (just so I can read the error output).

I thought this had to have something to do with my file and not the player, so I tried .aac files with ADTS headers instead of audio packed into an mp4/m4a container. The result was the same. Also, the encoding (SBR/VBR or even HE-AAC, which sucks for music anyway) didn’t make a difference. Then I found [this post] on the Gentoo forums, showing that the problem was really the architecture. 32-bit builds wouldn’t fail, but the source code of the [libfaad2] decoding library of xmms’ mp4 plugin wasn’t ready for 64-bit.

xmms’ xmms_mp4Plugin-0.4 comes with a pretty old libfaad2, 2.0 or something. I will show you how to upgrade that to version 2.7, which is fixed for x86_64 (Readily fixed source code is also provided at the bottom). We’ll also fix up other parts of that plugin so it can compile using more modern versions of GCC and clang, and my test platform for this would be a rather conservative CentOS 6.8 Linux. First, get the source code:

I’m assuming you already have xmms installed, otherwise obtain version 1.2.11 from [here]!

Step 1, libfaad2:

Unpack both archives from above, then enter the mp4 plugins’ source directory. You’ll find a libfaad2/ directory in there. Delete or move it and all of its contents. From the faad2 source tree, copy the directory libfaad/ from there to libfaad2/ in the plugins’ directory, replacing the deleted one. Now that’s the source, but we also need the updated headers so that the xmms plugin can link against the newer libfaad2. To do that, copy the two files include/faad.h and include/neaacdec.h from the faad2 2.7 source tree to the directory include/ in the plugin source tree. Overwrite faad.h if prompted.
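In shell terms, that whole file shuffling might look roughly like this. Just a sketch: I’m assuming the two archives were unpacked next to each other as xmms_mp4Plugin-0.4/ and faad2-2.7/, so adjust the directory names to whatever your archives actually produce:

$ cd ./xmms_mp4Plugin-0.4/
$ rm -rf ./libfaad2/
$ cp -r ../faad2-2.7/libfaad ./libfaad2
$ cp ../faad2-2.7/include/faad.h ../faad2-2.7/include/neaacdec.h ./include/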

Step 2, libmp4v2:

Also, the bundled libmp4v2 of the mp4 plugin is very old and broken on modern systems due to code issues. Let’s fix them, so back to the mp4 plugin source tree. We need to fix some invalid pure specifiers in the following files: libmp4v2/mp4property.h, libmp4v2/mp4property.cpp, libmp4v2/rtphint.h and libmp4v2/rtphint.cpp.

To do so, open them in a text editor and search and replace all occurrences of = NULL with = 0. This will fix all assignments and comparisons that would otherwise break. When using vi or vim, you can do :%s/=\sNULL/= 0/g for this.
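If you prefer doing that non-interactively, a GNU sed one-liner over the four files should achieve the same thing (a sketch, keep backups if you’re paranoid):

$ sed -i 's/=\s*NULL/= 0/g' libmp4v2/mp4property.h libmp4v2/mp4property.cpp \
  libmp4v2/rtphint.h libmp4v2/rtphint.cpp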

On top of that, we need to fix an invalid const char* to char* conversion in libmp4v2/rtphint.cpp as well. Open it, and hop to line number 325, you’ll see this:

char* pSlash = strchr(pRtpMap, '/');

Replace it with this:

const char* pSlash = strchr(pRtpMap, '/');

And we’re done!

Now, in the mp4 plugins’ source root directory, run $ ./bootstrap && ./configure. Hopefully, no errors will occur. If all is ok, run $ make. Given you have xmms 1.2.11 installed, it should compile and link fine now. Last step: # make install.

This has been tested with my current GCC 4.4.7 platform compiler as well as GCC 4.9.3 on CentOS 6.8 Linux. Also, it has been tested with clang 3.4.1 on FreeBSD 10.3 UNIX, also x86_64. Please note that FreeBSD 10.3 needs extensive modifications to its build tools as well, so I can’t provide fixed source for this. However, packages on FreeBSD have already been fixed in that regard, so you can just # pkg install xmms-faad2 and it’s done anyway. This is likely the case for several modern Linux distros as well.

Let’s play:

xmms playing AAC-LC Freebsd 10.3 UNIX on x86_64

Yeah, it works. In this case on FreeBSD.

And the shell would say:

2-MPEG-4 AAC Low Complexity profile
MP4 - 2 channels @ 44100 Hz

Perfect! And here is the [fixed source code] upgraded with libfaad2 2.7, so you don’t have to do anything by yourself other than $ ./bootstrap && ./configure && make and # make install! Ah, actually I’m not so sure whether xmms itself builds fine on CentOS 6.8 from source these days… Maybe I’ll check that out another day. ;)

Feb 20 2016

H.265/HEVC logo

Recently, after [successfully compiling] the next generation x265 H.265/HEVC video encoder on Windows, Linux and FreeBSD, I decided to ask for guidance when it comes to compressing Anime (live action will follow at a later time) in the Doom9 forums, [see here]. Thing is, I didn’t understand all of the knobs x265 has to offer, and some of the convenient presets of x264 didn’t exist here (like --tune film and --tune animation). So for a newbie it can be quite hard to make x265 perform well without sacrificing far too much CPU power, as x265 is significantly more taxing on the processor than x264.

Thanks to [Asmodian] and [MeteorRain]/[LittlePox] I got rid of x265s’ blurring issues, and I took their settings and turned them up to achieve more quality while staying within sane encoding times. My goal was to be able to encode 1080p ~24fps videos on an Intel Xeon X5690 hexcore @ 3.6GHz all-core boost clock at >=1fps for a target bitrate of 2.5Mbit.

In this post, I’d like to compare 7 scenes from the highly opulent Anime [The Garden of Words] (言の葉の庭) by [Makoto Shinkai] (新海 誠) at three different average bitrates, 1Mbit, 2.5Mbit (my current x264 default) and 5Mbit. The Blu-Ray source material is H.264/AVC at roughly 28Mbit on average. Also, both encoders are running in 10-bit color depth mode instead of the common 8-bit, meaning that the internal arithmetic precision is boosted from 8- to 16-bit integers as well. While somewhat “non-standard” for H.264/AVC, this is officially supported by H.265/HEVC for Blu-Ray 4K. The mode of operation is 2-pass to aim for comparable file sizes and bitrates. The encoding speed penalty for switching from x264 to x265 at the given settings is around a factor of 8. Somewhat.

The screenshots below are losslessly compressed 1920×1080 images. Since this is all about compression, I chose to serve the large versions of the images in WebP format to all browsers which support it (Opera 11+, Chromium-based Browsers like Chrome, Iron, Vivaldi, the Android Browser or Pale Moon as the only Gecko browser). This is done, because at maximum level, WebP does lossless compression much more efficiently, so the pictures are smaller. This helps, because my server only has 8Mbit/s upstream. If your browser doesn’t support WebP (like Firefox, IE, Edge, Safari), it’ll be fed lossless PNG instead. All of this happens automatically, you don’t need to do anything!

Now, let’s start with the specs.


Here are the source material encoding settings according to the video stream header:

cabac=1 / ref=4 / deblock=1:1:1 / analyse=0x3:0x133 / me=umh / subme=10 / psy=1 / psy_rd=0.40:0.00 /
mixed_ref=1 / me_range=24 / chroma_me=1 / trellis=2 / 8x8dct=1 / cqm=0 / deadzone=21,11 /
fast_pskip=1 / chroma_qp_offset=-2 / threads=12 / lookahead_threads=1 / sliced_threads=0 / slices=4 /
nr=0 / decimate=1 / interlaced=0 / bluray_compat=1 / constrained_intra=0 / bframes=3 / b_pyramid=1 /
b_adapt=2 / b_bias=0 / direct=3 / weightb=1 / open_gop=1 / weightp=1 / keyint=24 / keyint_min=1 /
scenecut=40 / intra_refresh=0 / rc_lookahead=24 / rc=2pass / mbtree=1 / bitrate=28229 /
ratetol=1.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / cplxblur=20.0 / qblur=0.5 /
vbv_maxrate=31600 / vbv_bufsize=30000 / nal_hrd=vbr / filler=0 / ip_ratio=1.40 / aq=1:0.60

x264 10-bit encoding settings (pass 1 & pass 2), 2.5Mbit example:

--fps 24000/1001 --preset veryslow --tune animation --open-gop --b-adapt 2 --b-pyramid normal -f -2:0
--bitrate 2500 --aq-mode 1 -p 1 --slow-firstpass --stats v.stats -t 2 --no-fast-pskip --cqm flat

--fps 24000/1001 --preset veryslow --tune animation --open-gop --b-adapt 2 --b-pyramid normal -f -2:0
--bitrate 2500 --aq-mode 1 -p 2 --stats v.stats -t 2 --no-fast-pskip --cqm flat --non-deterministic

x265 10-bit encoding settings (pass 1 & pass 2), 2.5Mbit example:

--y4m -D 10 --fps 24000/1001 -p veryslow --open-gop --bframes 16 --b-pyramid --bitrate 2500 --rect
--amp --aq-mode 3 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd 1.6 --psy-rdoq 5.0
--rdoq-level 1 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 --pass 1
--slow-firstpass --stats v.stats --sar 1 --range full

--y4m -D 10 --fps 24000/1001 -p veryslow --open-gop --bframes 16 --b-pyramid --bitrate 2500 --rect
--amp --aq-mode 3 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd 1.6 --psy-rdoq 5.0
--rdoq-level 1 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 --pass 2
--stats v.stats --sar 1 --range full

Since x265 can only read raw YUV and Y4M, the source video is fed to it via [libav’s] avconv tool, which pipes it into x265. The avconv command line for that looks as follows:

$ avconv -r 24000/1001 -i input.h264 -f yuv4mpegpipe -pix_fmt yuv420p -r 24000/1001 - 2>/dev/null

If you want to do something similar but you don’t like avconv, you can use [ffmpeg] as a replacement; the options are exactly the same. Note that you should always specify the correct frame rates (-r) for input and output, or the bitrate setting of the encoder will be applied incorrectly!
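
Putting the two together, the complete pass-1 pipeline would look roughly like the sketch below. The use of --input - for reading from stdin, discarding the pass-1 output to /dev/null and the input file name are my own assumptions here; the x265 settings are simply the ones listed above:

$ avconv -r 24000/1001 -i input.h264 -f yuv4mpegpipe -pix_fmt yuv420p -r 24000/1001 - 2>/dev/null | \
 x265 --y4m -D 10 --fps 24000/1001 -p veryslow --open-gop --bframes 16 --b-pyramid --bitrate 2500 --rect \
 --amp --aq-mode 3 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd 1.6 --psy-rdoq 5.0 \
 --rdoq-level 1 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 --pass 1 \
 --slow-firstpass --stats v.stats --sar 1 --range full --input - -o /dev/null

Pass 2 then uses the same pipe, just with --pass 2, without --slow-firstpass, and with a real output file behind -o, which you can afterwards mux into a container of your choice.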

x264 on the other hand was linked against libav directly, using its decoding capabilities without any workarounds.

Let’s compare:

“The Garden of Words” has a lot of rain. This is a central story element of the 46-minute movie, and it’s hard on any encoder, because a lot of stuff is moving on screen all the time. Let’s take a look at such a scene for our first side-by-side comparison. Each comparison is done in two rows: H.264/AVC first (including the source material), and below that H.265/HEVC, also including the source.

Let’s go:

Scene 1, H.264/AVC encoded by x264 0.148.x:

Scene 1, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

It has been said that x265 performs especially well at two things: very high resolutions (which we don’t have here) and low bitrates. And yep, it shows. When comparing the 1Mbit shots, it becomes clear pretty quickly that x265 manages to preserve more detail in the parts with lots of motion. x264 on the other hand starts to wash out the scene pretty severely, smearing out some raindrops, spray water and parts of the foliage. Also, it’s pretty bad around the outlines, but that’s true for both encoders. You can spot that easily with all the aliasing artifacts on the raindrops.

Moving up a notch, it becomes very hard to distinguish between the two. When zooming in you can still spot a few minor differences (note the kid’s umbrella, even if it’s not marked), but they’re quite negligible. Here I’m already starting to think x265 might not be worth it in all cases. There are still differences between the two 2.5Mbit shots and the original however, see the red areas of the umbrella and the most low-contrast, dark parts of the foliage.

At 5Mbit, I really can’t see any difference anymore. Maybe the colors are a little off or something, but when seen in motion, distinguishing between the two and the original becomes virtually impossible. Given that we just threw a really difficult scene at x264 and x265, this trend should continue throughout the whole test.

Now, even more rain:

Scene 2, H.264/AVC encoded by x264 0.148.x:

Scene 2, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

Now this is extreme at 1Mbit! Looking at H.264, the spray water on top of the cable can’t even be told apart from the cloud in the background anymore. Detail loss all over the scene is catastrophic in comparison to the original. Tons of raindrops are simply gone entirely, and the texture details on the tower and the angled brick wall of the house to the left? Almost completely washed out and smeared.

Now, let’s look at H.265 @ 1Mbit. The spray water is also pretty bad, but it’s amazing how much more detail was preserved overall. Sure, there are still parts of the raindrops missing, but it’s much, much closer to the original. We can now see details on the walls as well, even the steeply angled one on the left. The only serious issue is the red light bleeding at the tower. There is very little red in that spot in the original, so I’m not sure what happened there. x264 does this as well, but x265 is a bit worse.

At the next level, the differences are less pronounced again, but there is still a significant enough improvement when going from x264 to x265 at 2.5Mbit: The spray water on the cable becomes more well-defined, and more rain is being preserved. Also, the textures on the walls are a tiny little bit more detailed and crisp. Once again though, x265 is bleeding too much red light at the tower.

Since it’s still noticeably not quite on the level of the source, let’s look at 5Mbit briefly. x265 is able to preserve a tiny little bit more rain detail here, coming extremely close to the original. In motion, you can’t really see the difference however.

Now, let’s get steamy:

Scene 3, H.264/AVC encoded by x264 0.148.x:

Scene 3, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

1Mbit first again: Let me just say: It’s ugly. x264 pretty much messes up the steam coming from the iron. We get lots of block artifacts now. Some of the low-contrast patterns on the ironing board are smeared out to a pretty terrible degree. Also, the bokeh background partly shows block artifacts and banding. x265 produces quite a lot of banding here itself, but no blocks. Also, outlines and sharp contrasts are more well-defined, and the low-contrast part is done noticeably better.

At 2.5Mbit, the patterns repeat themselves. The steam is only slightly better with x265, outlines are slightly more well-defined, and the low-contrast patterns are slightly more visible. For some of the blurred parts, x265 seems to be a tiny little bit too prone to banding though; in a few spots, x264 might be just that 1% better. Overall however, x265 wins this, even if it’s just for the better outlines.

At 5Mbit, you really need to zoom in and analyze very small details, e.g. around the outer border of the steam. Yes, x265 does better again. But you’d not really be able to notice this when watching.

How about we go cry a little bit:

Scene 4, H.264/AVC encoded by x264 0.148.x:

Scene 4, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

Cutting onions would be a classic fun part in a slice-of-life Anime. Here, it’s just kitchen work. And quite a bad-looking one for H.264 at 1Mbit. The letters on the knife are partly lost completely, becoming unreadable. The onion parts that fly off are visibly worse than when encoded with x265 at the same bitrate. Also, x264 produced block artifacts in the blurred bokeh areas again, which simply aren’t there with x265.

On the next level, the two come much closer to each other. However, x265 simply does the outlines better. Fewer artifacts and sharper lines, just like with the writing on the knife’s blade as well. The issues with the bokeh are nearly gone. What’s left is a negligible amount of blocking for x264 and banding for x265. Not really noticeable however.

Finally, at 5Mbit, x265 shows those ever so slightly better-done outlines. But that’s about it, the rest looks nice for both, and pretty much identical to the source.

Now, please, dig in:

Scene 5, H.264/AVC encoded by x264 0.148.x:

Scene 5, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

Let’s keep this short: x264 does blocking, and bad transitions/outlines. x265 does it better. Plain and simple.

At 2.5Mbit, x265 nearly reaches quality transparency when compared to the original, something x264 falls short of, just a bit. While x265 does the outlines and the steam part quite like in the original frame, x264 rips the outlines apart a bit too much, and slight block artifacts can again be seen for the steam part.

At 5Mbit, x264 still shows some blocking artifacts in a part that most lossy image/video compression algorithms traditionally suck at: the reds. While not true for all human beings, most eyes perceive much finer gradients in the greens, then the blues, and do worst with the reds. Meaning, our eyes have an unequal sensitivity distribution when it comes to colors. So image and video codecs try to save bitrate in the reds first, because supposedly it’d be less noticeable. To me subjectively, x265 achieves transparency here, meaning it looks just like the original. x264 doesn’t quite manage. Close, but not entirely.


Scene 6, H.264/AVC encoded by x264 0.148.x:

Scene 6, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

This is a highly static scene with only a few moving parts, so there is some rain again, and some shadow cast by raindrops as well. Now, for the static parts, incremental B frames really work wonders here. Most detail is being preserved by both encoders. What’s supposed to be happening is happening: the encoders save bit rate where the human eye can’t easily tell, in the parts where stuff is moving around very quickly. That’s how we lose a lot of raindrop shadows and some drops as well. x264 seems to have trouble separating the scene into even smaller macroblocks though? Not sure if that’s the reason, but a lot of mesh detail for the basket on the balcony at the top right is lost – x265 does better there! This is maybe because x264 couldn’t distinguish the moving drops from the static background so well anymore?

At 2.5Mbit, the scenes become almost indistinguishable. The more static content we have, the easier it gets of course, so the transparency threshold becomes lower. And if you ask me, both of them reach perfect quality at 5Mbit.

Let’s throw another hard one at them for the last round:

Scene 7, H.264/AVC encoded by x264 0.148.x:

Scene 7, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

Enough rain already? Pffh! Here we have a lot of foliage and low contrast added to the mix. And it gets smeared a lot by x264, rain detail lost, fine details of the bushes turning into green mud, that’s how it goes. x265 also loses too much detail here (I mean, 1Mbit is really NOT much), but again, it fares quite a bit better.

At 2.5Mbit, both encoders do very well. Somehow, this scene doesn’t seem to penalize x264 that much at the medium level. You’d really need your magnifying glass to find the spots where x265 still does better, which surprises me a bit for this scene. And finally, at 5Mbit – if you ask me – visual transparency is reached for both x264 and x265.

Final thoughts:

Clearly it’s true what a lot of people have been saying. x265 rocks at low bitrates, if configured correctly. But that isn’t gonna give me perfect quality or anything. Yeah, it sucks less – much less – than x264 in that department, but at the higher 2.5Mbit, where both start looking quite decent and x265 has just a slight edge, it becomes hard to justify using it, simply because it’s that much slower at decent settings.

Also, you need to take device compatibility into account. Sure, a powerful PC can always play the stuff, no matter if it’s some UNIX, Linux, MacOS X or Windows. But how about video game consoles? Older TVs? That kind of thing. Most of those can only play H.264/AVC. Of course, if you’re only using your PC and you have a lot of time and electricity to burn, then hey – why not?

But I’ll have to think really long and really hard about whether I want to replace x264 with x265 at this given point in time. Overall, it might not be practical enough on my current hardware yet. Maybe I’d need an AVX/AVX2-capable processor, as x265 has tons of optimizations for those instruction set extensions. But I’m gonna stay on my Xeon X5690 for quite a while, so SSE 4.2 is the latest I have.

I’d say, if you can stand some quality degradation, then x265 might be the way to go, as it can give you much smaller file sizes at low bitrates.

If you’re aiming for high bitrates and quality, it might not be worth it right now, at least for 1080p. It’s been said the tables are turning once more when going up to 4K and UHD, but I haven’t tested that yet, as all my content – both Anime and live action movies – is “low resolution” *cough* 1080p or 720p.


Screenshots were taken using the latest stable mplayer 1.3.0 on Linux. Thank god they’re bundling it with ffmpeg now, making things much easier. This choice was made because mplayer can easily grab screenshots from specific spots in a video in an automated fashion. I used the framestep filter for this, like so:

$ mplayer ./TEST-HEVC-1mbit.mkv -nosound -vf framestep=24 \
 -vo png:z=9:outdir=./screenshots/1mbit/HEVC/:prefix=HEVC-1mbit-

This will decode every 24th frame of the video file TEST-HEVC-1mbit.mkv and grab it into a .png file with maximum lossless compression as supported by mplayer. The .png files will be prefixed with a user-defined string and numbered, like HEVC-1mbit-00000001.png, HEVC-1mbit-00000002.png and so on, until the end of the file is reached.

To encode the full size screenshots to WebP, the most recent [libwebp-0.5.0], or rather one of its companion tools – cwebp – was used as follows:

$ cwebp -z 9 -lossless -q 100 -noalpha -mt ./input.png -o ./output.webp
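
And if you want to convert a whole directory of screenshots in one go, a simple shell loop around cwebp will do. This is just a sketch assuming the directory layout from the mplayer example above; adjust the path to taste:

$ for f in ./screenshots/1mbit/HEVC/*.png; do cwebp -z 9 -lossless -q 100 -noalpha -mt "$f" -o "${f%.png}.webp"; done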

Now… somebody wanna grant me remote access to some quad socket Haswell-EX Xeon box for free?


Meh… :roll:

Jan 26 2016

HakuNeko logo

Since I’ve started using FreeBSD as a Linux and Windows replacement, I’ve naturally always been looking at porting my “known good” software over to the UNIX OS, or at replacing it by something that gets the job done without getting on my nerves too much at the same time. For most parts other than TrueCrypt, that was quite achievable, even though I had to endure varying degrees of pain getting there. Now, my favorite Manga / Comic ripper on Windows, [HakuNeko] was the next piece of software on the list. It’s basically just a more advanced Manga website parser and downloader based on stuff like [cURL], [OpenSSL] or the [wxWidgets] GUI libraries.

I didn’t even know this until recently (shame on me for never looking closely), but HakuNeko is actually free software licensed under the MIT license. Unfortunately, the source code and build system are quite Linux- and Windows-centric, and there exist neither packages nor ports of it for FreeBSD UNIX. Actually, the code doesn’t even build on my CentOS 6.7 Linux right now (I have yet to figure out the exact problem), but I managed to fix it up so it can compile and work on FreeBSD! And here’s how, step by step:

1.) Prerequisites

Please note that from here on, terminal commands are shown in this form: $ command or # command. Commands starting with a $ are to be executed as a regular user, and those starting with # have to be executed as the superuser root.

Ok, this has been done on FreeBSD 10.2 x86_32 using HakuNeko 1.3.12, both of which are current at the time of writing. I guess it might work on older and future releases of FreeBSD with different releases of HakuNeko as well, but hey, who knows?! That having been said, you’ll need the following software on top of FreeBSD for the build system to work (I may have missed something here; if so, just install the missing stuff as shown below):

  • cURL
  • GNU sed
  • GNU find
  • bash
  • OpenSSL
  • wxWidgets 2.8.x

Stuff that’s not on your machine can be fetched and installed as root from the official package repository, like e.g.: # pkg install gsed findutils bash wx28-gtk2 wx28-gtk2-common wx28-gtk2-contrib wx28-gtk2-contrib-common

Of course you’ll need the HakuNeko source code as well. You can get it from the official sources (see the link in the first paragraph) or download it directly from here in the version that I’ve used successfully. If you take my version, you’ll need 7-zip for FreeBSD as well: # pkg install p7zip.

Unpack it:

  • $ 7z x hakuneko_1.3.12_src.7z (My version)
  • $ tar -xzvf hakuneko_1.3.12_src.tar.gz (Official version)

The contents of my archive are just the vanilla sources as well however, so you’ll still need to do all the modifications by yourself.

2.) Replace the shebang lines in all scripts which require it

Enter the unpacked source directory of HakuNeko and open the following scripts in your favorite text editor, then replace the leading shebang lines #!/bin/bash with #!/usr/local/bin/bash:

  • ./configure
  • ./config_clang.sh
  • ./config_default.sh
  • ./res/parsers/kissanime.sh

It’s always the first line in each of those scripts, see config_clang.sh for example:

  1. #!/bin/bash
  3. # import setings from config-default
  4. . ./config_default.sh
  6. # overwrite settings from config-default
  8. CC="clang++"
  9. LD="clang++"

This would have to turn into the following (I also fixed that comment typo while I was at it):

  1. #!/usr/local/bin/bash
  3. # import settings from config-default
  4. . ./config_default.sh
  6. # overwrite settings from config-default
  8. CC="clang++"
  9. LD="clang++"
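
By the way, if you’d rather not open four files in an editor, a quick in-place edit from the shell works just as well. This is only a sketch, assuming GNU sed is already installed as gsed via pkg like shown above:

$ gsed -i '1s|^#!/bin/bash|#!/usr/local/bin/bash|' ./configure ./config_clang.sh ./config_default.sh ./res/parsers/kissanime.sh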

3.) Replace all sed invocations with gsed invocations in all scripts which call sed

This is needed because FreeBSD’s sed and Linux’ GNU sed aren’t exactly compatible in how they’re being called, different options and all.

In the text editor vi, the expression :%s/sed /gsed /g can do this globally over an entire file (mind the whitespaces, don’t omit them!). Or just use a convenient graphical text editor like gedit or leafpad to search and replace all occurrences. Alternatively, there is a shell one-liner sketched right after the file list below. The following files need sed replaced with gsed:

  • ./configure
  • ./res/parsers/kissanime.sh
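
A non-interactive way to do the same thing would be something like this (again only a sketch; it edits both files in place, so keep backups). The \b word boundary keeps it from touching words that merely contain “sed”, and the replacements from steps 4.) to 6.) below could be scripted in the same manner if you prefer:

$ gsed -i 's/\bsed /gsed /g' ./configure ./res/parsers/kissanime.sh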

4.) Replace all find invocations with gfind invocations in ./configure

Same situation as above with GNU find, like :%s/find /gfind /g or so, but only in one file:

  • ./configure

5.) Fix the make check

This is rather cosmetic in nature as $ ./configure won’t die if this test fails, but you may still wish to fix this. Just replace the string make --version with gmake --version (there is only one occurrence) in:

  • ./configure

6.) Fix the DIST variable’s content

I don’t think that this is really necessary either, but while we’re at it… Change the DIST=linux default to DIST=FreeBSD in:

  • ./configure

Again, only one occurrence.

7.) Run ./configure to create the Makefile

Enough with that, let’s run the first part of the build tools:

  • $ ./configure --config-clang

Notice the --config-clang option? We could use GCC as well, but since clang is FreeBSD’s new and default platform compiler, you should stick with that whenever feasible. It works for HakuNeko, so we’re gonna use the default compiler, which means you don’t need to install the entire GCC just for this.

There will be error messages looking quite intimidating, like the basic linker test failing, but you can safely ignore those. It has something to do with different function name prefixes in FreeBSD’s libc (or whatever, I don’t really get it), but it doesn’t matter.

However, there is one detail that the script will get wrong, and that’s a part of our include path. So let’s handle that:

8.) Fix the includes in the CFLAGS in the Makefile

Find the line containing the string CFLAGS = -c -Wall -O2 -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread or similar in the newly created ./Makefile. After the option -O2 add the following: -I/usr/local/include. So it looks like this: CFLAGS = -c -Wall -O2 -I/usr/local/include -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread. That’s it for the Makefile.
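
If you prefer to patch that from the shell instead of editing the Makefile manually, a one-liner like this should do the trick as well; a sketch that assumes -O2 occurs only in that CFLAGS line, so double-check the Makefile afterwards:

$ gsed -i 's|-O2 |-O2 -I/usr/local/include |' ./Makefile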

9.) Fix the Linux-specific conditionals across the C++ source code

And now the real work starts, because we need to fix up portions of the C++ code itself as well. While the code would build and run fine on FreeBSD, those relevant parts are hidden behind some C++ preprocessor macros/conditionals looking for Linux instead. Thus, important parts of the code can’t even compile on FreeBSD, because the code only knows Linux and Windows. Fixing that isn’t extremely hard though, just a bit of copy, paste and/or modify. First of all, the following files need to be fixed:

  • ./src/MangaConnector.cpp
  • ./src/Logger.cpp
  • ./src/MangaDownloaderMain.cpp
  • ./src/MangaDownloaderConfiguration.cpp

Now, what you should look for are all conditional blocks which look like #ifdef __LINUX__. Each will end with an #endif line. Naturally, there are also #ifdef __WINDOWS__ blocks, but those don’t concern us, as we’re going to use the “Linux-specific” code, if you can call it that. Let me give you an example right out of MangaConnector.cpp, starting at line #20:

  1. #ifdef __LINUX__
  2. wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
  3. #endif

Now, given that the Linux code builds just fine on FreeBSD, the most elegant and easiest way would be to just alter all those #ifdef conditionals into inclusive #if defined ORs, so that they trigger for both Linux and FreeBSD. If you do this, the block from above would need to change to this:

  1. #if defined __LINUX__ || __FreeBSD__
  2. wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
  3. #endif

Should you ever want to create different code paths for Linux and FreeBSD, you can also just duplicate it. That way you could later make changes for just Linux or just FreeBSD separately:

  1. #ifdef __LINUX__
  2. wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
  3. #endif
  4. #ifdef __FreeBSD__
  5. wxString MCEntry::invalidFileCharacters = wxT("/\r\n\t");
  6. #endif

Whichever way you choose, you’ll need to find and update every single one of those conditional blocks. There are three in Logger.cpp, three in MangaConnector.cpp, two in MangaDownloaderConfiguration.cpp and again three in MangaDownloaderMain.cpp. Some are more than 10 lines long, so make sure to not make any mistakes if duplicating them.

Note that you can maybe extend compatibility even further with additional directives like __OpenBSD__ or __NetBSD__ for additional BSDs or __unix__ for a wide range of UNIX systems like AIX or HP-UX. None of which has been tested by me of course.
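
If you went for the first variant (the inclusive #if defined one), those eleven conditional blocks can also be patched in one go from the shell. Again just a sketch, so back the files up first and review the result with grep afterwards:

$ gsed -i 's/#ifdef __LINUX__/#if defined __LINUX__ || __FreeBSD__/' ./src/MangaConnector.cpp ./src/Logger.cpp \
 ./src/MangaDownloaderMain.cpp ./src/MangaDownloaderConfiguration.cpp
$ grep -n '__FreeBSD__' ./src/*.cpp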

When all of that is done, it’s compile and install time:

10.) Compile and install

You can compile as a regular user, but the installation needs root by default. I’ll assume you’ll want to install HakuNeko system-wide, so we’ll leave the installation target directories at their defaults below /usr/local/. While sitting in the unpacked source directory, run:

  • $ gmake
  • # gmake install

If nothing starts to crash and burn, this should compile and install the code. clang will show some warnings during compilation, but you can safely ignore those.

11.) Start up the white kitty

The installation procedure will also conveniently add HakuNeko to your window manager’s menus, if you’re using panels/menus. Here it’s Xfce4:

HakuNeko (“White Cat”) is showing up as an “Internet” tool. Makes sense.

With the modifications done properly it should fire up just fine after initializing its Manga connectors:

HakuNeko with the awesomeness that is “Gakkou Gurashi!” being selected from the HTTP source [MangaReader].

Recently the developers have also added [DynastyScans] as a source, which provides access to partially “rather juicy” Yuri Dōjinshi (self-published amateur and sometimes semi-professional works) of well-known Manga/Anime, if you’re into that. Yuri, that is (“girls love”). Mind you, not all, but a lot of the stuff on DynastyScans can be considered NSFW and likely 18+, just as a word of warning:

HakuNeko fetching a Yuru Yuri Dōjinshi called “Secret Flowers” from DynastyScans, bypassing their download limits by not fetching packaged ZIPs – it works perfectly!

Together with a good comic book reader that can read both plain JPEG-filled folders and stuff like packaged .cbz files, HakuNeko makes FreeBSD a capable comic book / Manga reading system. My personal choice for a reader to accompany HakuNeko would be [QComicBook], which you can easily get on FreeBSD. There are others you can fetch from the official package repository as well though.

Final result:

HakuNeko and QComicBook make a good team on FreeBSD UNIX – I like the reader even more than ComicRack on Windows.

And, at the very end, one more thing, even though you’re likely going to be aware of this already: Just like Anime fansubs, fan-translated Manga or even Dōjinshi are sitting in a legal grey zone, as long as the book in question hasn’t been licensed in your country. It’s being tolerated, but if it does get licensed, ownership of a fan-translated version will likely become illegal, which means you should actually buy the stuff at that point in time.

Just wanted to have that said as well.

Should you have trouble building HakuNeko on FreeBSD 10 UNIX (maybe because I missed something), please let me know in the comments!