Nov 24, 2016
 

Broken Windows logo[1]

I know what I should do if a system service on Microsoft Windows starts crashing, of course: fix it! But sometimes you simply can’t, because the component causing the instability can’t be swapped out or updated. Now, Windows services do have a mechanism for monitoring and restarting a service upon failure, but it seems that only works if the system gets an actual error code back from the service upon termination. It doesn’t seem to work (at least for me) if the service just dies abnormally. Windows does recognize that the service has stopped somehow, but the restart procedure just doesn’t kick in.

So I thought I’d do it myself, programmatically. And it’s actually pretty easy. I solved this with VBScript, Windows Batch and Mark Russinovich’s pslist plus grep. So the prerequisites are:

  • Microsoft Windows (well, huh..)
  • MS Windows Script(ing) Host / VBScript; Windows has come with this preinstalled since Windows 2000.
  • [pslist]
  • [grep][src] (grep is optional, I used GNU grep 2.5.4 in this case, licensed under the [GPLv3+])

Make sure the PsTools and grep are within your %PATH%, so Windows can find those .exe files. If you don’t want to use grep, you can also use Microsoft’s own find command, if your version of Windows has it.

I divided this into two small scripts. Since the main part is Batch, running it at very short intervals to check the service’s status might be problematic, because you’d get a command window popping up on the desktop every time. Since most users wouldn’t want that, another script acts as a launcher, hiding the cmd.exe window so it runs fully in the background without disturbing any potential users or administrators. The launcher looks like this; in my case it’s meant to watch over an Apache web server:

Set WshShell = CreateObject("WScript.Shell")
WshShell.Run chr(34) & "C:\Server\Scripts\monitor-httpd.bat" & Chr(34), 0
Set WshShell = Nothing

And that script C:\Server\Scripts\monitor-httpd.bat we’re launching looks like this:

@ECHO OFF
FOR /F "tokens=* delims= usebackq" %%I IN (`pslist ^| grep httpd`) DO SET HTTPDSTATUS=%%I
IF NOT DEFINED HTTPDSTATUS (net start "Apache2.2") ELSE (SET HTTPDSTATUS=)

A version relying on Microsoft find instead of GNU grep could look like this:

@ECHO OFF
FOR /F "tokens=* delims= usebackq" %%I IN (`pslist ^| find /I "httpd"`) DO SET HTTPDSTATUS=%%I
IF NOT DEFINED HTTPDSTATUS (net start "Apache2.2") ELSE (SET HTTPDSTATUS=)

To get a service’s exact name, just launch services.msc from Start \ Run, or run the command net start in a cmd terminal.

As you can see, this greps “httpd” from the process list and pushes its output into %%I and finally into %HTTPDSTATUS%. We have to use a FOR /F for that, as Windows has no way of pushing command output from subshells into shell variables like UNIX has (e.g. var=`command` or var=$(command)). Then we check the status of that variable. If it’s not defined, then the process httpd.exe was nowhere to be found! In that case we restart the associated system service (needs proper permissions!). If the variable is defined, we do nothing but unset it, since we can assume the service is operating normally. Or at the very least it’s running. ;)
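
For reference, the generic Batch pattern that stands in for UNIX-style command substitution looks roughly like this (just a sketch; "somecommand" is a placeholder):

REM UNIX shell: var=$(somecommand)
REM Batch equivalent, more or less:
FOR /F "usebackq tokens=*" %%I IN (`somecommand`) DO SET var=%%I

Note that in a batch file the loop variable is written %%I, while on an interactive cmd prompt it would just be %I.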

You can automate that by using the Windows task scheduler:


Scheduling an Apache web server “watchdog” (German Windows)

Create a schedule to your liking and you’re done! If you can afford the affected service to be down for 5 minutes but no longer, just run it every 4 minutes or so.
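
If you prefer the command line over the GUI, the same schedule can be created with schtasks – task name, interval and the path to the .vbs launcher below are just examples, adapt them to your setup:

schtasks /Create /TN "Apache httpd watchdog" /TR "wscript.exe C:\Server\Scripts\monitor-httpd.vbs" /SC MINUTE /MO 4 /RU SYSTEM

Running the task as SYSTEM (or another sufficiently privileged account) matters, because net start needs the right to control the service.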

The solution shown above can easily be adapted to monitor and restart any Windows service you have, as long as the service isn’t fundamentally broken to the point where it wouldn’t even start up anymore. You can also do a lot more, like sending notification emails with a command line mailer like [blat] when crashes do occur. Of course, this is only useful for services that crash rarely. If it dies every few minutes, you should reaaally fix it instead of just pushing the restart button all the time… ;)
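
Just to sketch what such a notification could look like with blat – recipient, sender and SMTP server are placeholders, and the exact switches may differ between blat versions (check blat /?):

blat - -to admin@example.com -f watchdog@example.com -server smtp.example.com -subject "Apache2.2 restarted" -body "The watchdog script had to restart the Apache2.2 service."

Something like that could be placed right next to the net start call in the batch file.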

And that’s that!

[1] © Mar.0007. Original Version for desktopwallpapers4.me.

Nov 19, 2016
 

Recently, after finding out that the old Intel GMA950 profits greatly from added memory bandwidth (see [here]), I wondered if the overclocking mechanism applied by the Windows tool [here] had leaked into the public after all this time. The developer of said tool refused to open source the software even after it turned into abandonware – announced support for GMA X3100 and X4500 as well as MacOS X and Linux never came to be. Also, he never said how he managed to overclock the GMA950 in the first place.

Some hackers disassembled the code of the GMABooster however, and found out that all that’s needed is a simple PCI register modification, which you could probably apply yourself on Microsoft Windows by using H.Oda!’s [WPCREdit].

Tools for PCI register modification exist on Linux and UNIX as well of course, so I wondered whether I could apply this knowledge on FreeBSD UNIX too. Of course, I’m a few years late to the party, because people already solved this back in 2011! But just in case the scripts and commands disappear from the web, I wanted this to be documented here as well. First, let’s see whether we even have a GMA950 (of course I do, but still). It should be PCI device 0:0:2:0; you can use FreeBSD’s own pciconf utility or the lspci command from Linux:

# lspci | grep "00:02.0"
00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
 
# pciconf -lv pci0:0:2:0
vgapci0@pci0:0:2:0:    class=0x030000 card=0x30aa103c chip=0x27a28086 rev=0x03 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller'
    class      = display
    subclass   = VGA

Ok, to alter the GMA950’s render clock speed (we are not going to touch its 2D “desktop” speed), we have to write certain values into some PCI registers of that chip at 0xF0 and 0xF1. There are three different values regulating clock speed. Since we’re going to use setpci, you’ll need to install the sysutils/pciutils package on your machine via # pkg install pciutils. I tried to do it with FreeBSD’s native pciconf tool, but all I managed was to crash the machine a lot! Couldn’t get it solved that way (just me being too stupid I guess), so we’ll rely on a Linux tool for this. Here is my version of the script, which I call gmaboost.sh. I placed it in /usr/local/sbin/ for global execution:

#!/bin/sh
 
case "$1" in
  200) clockStep=34 ;;
  250) clockStep=31 ;;
  400) clockStep=33 ;;
  *)
    echo "Wrong or no argument specified! You need to specify a GMA clock speed!" >&2
    echo "Usage: $0 [200|250|400]" >&2
    exit 1
  ;;
esac
 
setpci -s 02.0 F0.B=00,60
setpci -s 02.0 F0.B=$clockStep,05
 
echo "Clockspeed set to "$1"MHz"
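
Don’t forget to make the script executable. If you want the setting applied at every boot, one simple option – assuming you’re fine with the legacy mechanism – is FreeBSD’s /etc/rc.local, which is still executed at the end of the boot process if it exists:

# chmod 755 /usr/local/sbin/gmaboost.sh
# echo '/usr/local/sbin/gmaboost.sh 400' >> /etc/rc.local

A proper rc.d script would be cleaner, but for a one-liner like this, rc.local does the job.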

Now you can do something like this: # gmaboost.sh 200 or # gmaboost.sh 400, etc. Interestingly, FreeBSD’s i915_kms graphics driver seems to have set the 3D render clock speed of my GMA950 to 400MHz already, so there was nothing to be gained for me in terms of performance. I can still clock it down to conserve energy though. A quick performance comparison using a crappy custom-recorded ioquake3 demo shows the following results:

  • 200MHz: 30.6fps
  • 250MHz: 35.8fps
  • 400MHz: 42.6fps

Hardware was a Core 2 Duo T7600 and the GPU was making use of two DDR-II/667 4-4-4 memory modules in dual channel configuration. Resolution was 1400×1050 with quite a few changes in the Quake III configuration to achieve more performance, so your results won’t be comparable, even when running ioquake3 on identical hardware. I’d post my ~/.ioquake3/baseq3/q3config.cfg here, but in my stupidity I just managed to freaking wipe the file out. Now I have to redo all the tuning, pfh.

But in any case, this really works!

Unfortunately, it only applies to the GMA950. And I still wonder what it was that was so wrong with # pciconf -w -h pci0:0:2:0 0xF0 0060 && pciconf -w -h pci0:0:2:0 0xF0 3405 and the like. I tried a few combinations just in case my byte order was messed up or in case I really had to write single bytes instead of half-words, but either the change wouldn’t apply at all, or the machine would just lock up. Would be nice to do this with only BSD tools on actual FreeBSD UNIX, but I guess I’m just too stupid for pciconf

Sep 07, 2016
 

I’m not exactly a big fan of TeamViewer, since you never know what’s going to happen with that traffic of yours, so I prefer VNC over SSH instead. A few weeks ago however, I got TeamViewer access to a remote workstation machine for the purpose of processing A/V files. Basically, it was about video and audio transcoding on said machine.

Since the stream meta data (like the language of an audio stream) wasn’t always there, I wanted to check it by playing back the files remotely in foobar2000 or MPC-HC. TeamViewer does offer a feature to relay the audio from a remote machine to your local box, as long as the remote server has some kind of soundcard / sound chip installed. I was using TeamViewer 11 – the newest version at the time of writing – to connect from CentOS 6.8 Linux to a Windows 7 Professional machine. Playing back audio yielded nothing but silence though.

Now, TeamViewer is actually not native Linux software. Both its Linux and MacOS X versions come with a bundled Wine 1.6 distribution preconfigured to run the 32-bit TeamViewer Windows binary. It was thus logical to assume that the configuration of TeamViewer’s built-in Wine was broken. This may happen in cases where you upgrade TeamViewer from previous releases (which is what I had done, 7 -> 8 -> 9 -> 11).

There are a multitude of proposed solutions to fix this, and since none of them worked for me as-is, I’d like to add my own to the mix. The first useful hint came from [here]. You absolutely need a working system-wide Wine setup for this. I already had one that I needed for work anyway, namely Wine 1.8.6 from the [EPEL] repository, configured using [winetricks]. We’re going to take some files from that installation and essentially replace TeamViewer’s own Wine with the one distributed by EPEL.

So I had TeamViewer 11 installed in /opt/teamviewer/ and some important configuration files for it in ~/.local/share/teamviewer11/ and ~/.config/teamviewer/. First, we back up the Wine files of TeamViewer and replace them with the platform ones (the paths may vary depending on your Linux distribution, but the file names should not):

# mv /opt/teamviewer/tv_bin/wine/bin/wine /opt/teamviewer/tv_bin/wine/bin/wine.BACKUP
# mv /opt/teamviewer/tv_bin/wine/bin/wineserver /opt/teamviewer/tv_bin/wine/bin/wineserver.BACKUP
# mv /opt/teamviewer/tv_bin/wine/bin/wine-preloader /opt/teamviewer/tv_bin/wine/bin/wine-preloader.BACKUP
# mv /opt/teamviewer/tv_bin/wine/lib/libwine.so.1.0 /opt/teamviewer/tv_bin/wine/lib/libwine.so.1.0.BACKUP
# mv /opt/teamviewer/tv_bin/wine/lib/wine/ /opt/teamviewer/tv_bin/wine/lib/wine.BACKUP/
# cp /usr/bin/wine /usr/bin/wineserver /usr/bin/wine-preloader /opt/teamviewer/tv_bin/wine/bin/
# cp /usr/lib/libwine.so.1.0 /opt/teamviewer/tv_bin/wine/lib/
# cp -r /usr/lib/wine/ /opt/teamviewer/tv_bin/wine/lib/

This will replace all the binaries and libraries, in my case shoving Wine 1.8.6 underneath TeamViewer. This isn’t all that’s needed however. We’ll also need the system registry hive of your working Wine installation (with sound). That should be stored in ~/.wine/system.reg! Let’s replace TeamViewer’s own hive with this one:

$ mv ~/.local/share/teamviewer11/system.reg ~/.local/share/teamviewer11/system.reg.BACKUP
$ cp ~/.wine/system.reg ~/.local/share/teamviewer11/

Ok, and the final part is adding the proper Linux audio backend to this Wine’s configuration. That part is stored in ~/.wine/user.reg. Replacing the whole file didn’t work for me though, as TeamViewer would crash upon launch, probably missing some keys from its own user.reg. So let’s just edit its file instead: open ~/.local/share/teamviewer11/system.reg with your favorite text editor and add the following line in a proper location (it’s sorted alphabetically):

[Software\\Wine\\Drivers\\winepulse.drv] 1473239241

The corresponding file should now be found within TeamViewer’s replaced Wine distribution by the way, in my case it’s /opt/teamviewer/tv_bin/wine/lib/wine/fakedlls/winepulse.drv.

Now, run the TeamViewer profile updater (some people say it’s required to make this work; it wasn’t for me, but it didn’t hurt either): $ /opt/teamviewer/tv_bin/TeamViewer --update-profile and then its Wine configuration: $ /opt/teamviewer/tv_bin/TeamViewer --winecfg. After that, you should be greeted with this:


TeamViewer 11 running its own copy of winecfg.

Before the modifications, the configuration window would show “None” as the driver, without any way to change it. So no audio, whereas we have Pulseaudio now. Press “Test Sound” if you want to check whether it truly works. I haven’t tested the ALSA backend by the way. In my case, as soon as the registry was fixed, Wine just autoselected Pulseaudio, which is fine for me.

Now launch TeamViewer and check out the audio options in this submenu:


The TeamViewer 11 preferences can be found here.

It should look like this:

TeamViewer’s audio options

Make sure “Play computer sounds and music” is checked! (click to enlarge)

Now, after having connected and logged in, you may also wish to verify the conference audio settings in TeamViewer’s top menu:


TeamViewer 11 conference audio settings, make sure “Computer sound” is checked!

When you play a sound file on the remote computer, you should hear it on your local one as well. With that, I can finally check the actual language (which is a rather important detail) of the audio files I’m supposed to use on that remote machine whenever no metadata is available.

This seems to be a problem with TeamViewer’s installation / update procedure which hasn’t been addressed for several major releases now. I presume just removing all traces of TeamViewer and installing it from scratch might also do the trick, but I didn’t try that myself.

Ah, and one more thing: If you can’t launch TeamViewer on CentOS 6.x because you’re getting the following error…

teamviewerd error

TeamViewer Daemon not running…

…forget about the solutions on the web on top of what this message is telling you. TeamViewer 11 uses a systemd-style script for launching its daemon on Linux now, and that won’t do on SysV init systems. Just become root and launch the crap manually: # /opt/teamviewer/tv_bin/teamviewerd &, then press <CTRL>+<d> and it works!

Let’s hope that daemon isn’t doing anything evil while running as root. :roll:

Aug 23, 2016
 

One of the services I’ve been running on xin.at for years now is the IRC server UnrealIRCd. It’s available for Linux, UNIX and also Windows, so it’s a pretty neat choice I think. A few days ago however, a user notified me that his client couldn’t connect using SSL/TLS encryption after an update of the software. I’m pretty sure this was due to the OpenSSL developers disabling the SSL v3 protocol by default. So his client only had TLS and my old UnrealIRCd 3.x only had SSL v3 => handshake failure:

error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure

So what now? Just shoving a newer SSL library under my IRC server wouldn’t work in a stable fashion. So far, the only software I have ever seen which can be “magically” upgraded to modern protocols and ciphers this way was the Gene6 FTP server. All the way from OpenSSL 0.9.6 to 1.0.2. No idea how they did it.

Two options: Have users recompile their libraries and clients to re-enable SSL v3 (yeah, as if…), or try and backport a current (=2016-07-28) UnrealIRCd 4 to my server – one that supports both modern TLS v1.2 with modern ciphers as well as good old SSL v3, so legacy clients may connect in an encrypted fashion as well.

Why backport? Because it’s freaking Windows 2000 (and no, newer versions do *not* work), and UnrealIRCd dropped support for that, so I absolutely needed to recompile the server and several libraries it depends on. Now that was one wild ride for a user like me, I’m telling you.

Ah yes, this isn’t exactly a good step-by-step guide or anything, so in case you just wanna grab the files, scroll all the way down! If you want to know a few of the details… I don’t even remember all the things I did, but let’s see…

Requirements:

Here’s what you need:

  1. The Microsoft [Visual C++ 2008 runtime SP1 redistributable package] (only on the system where the server is supposed to run, not on the build system)
  2. Microsoft VisualStudio 2008 (I guess 2010 also works, as long as you have the v90 toolset available)
  3. Perl. I used [Strawberry Perl 5.24].
  4. The latest UnrealIRCd [dev package]. It’s for UnrealIRCd v3.4, but that doesn’t matter.
  5. The UnrealIRCd [source code]. I used the current/bugfixed version 4.0.5 for this build.
  6. A precompiled version of pcre2 supporting Windows 2000, I only found one eligible one [here]. (I failed to recompile/relink pcre2 properly, even with the version from the dev package :( )
  7. The stock [tre 0.8.0 library] source code, because it supports VS2008. The version shipped with the dev package doesn’t.
  8. The latest [OpenSSL library] source code, it’ll serve as a replacement for the older one shipped with the dev package.

If you cannot obtain Visual Studio 2008 via any (legal!) means, that probably means you’re out of luck. Luckily, I got all versions from Microsoft’s MSDNAA / DreamSpark program, but if you’re stuck on something like VS2012, 2013 or 2015, I cannot help you. Maybe this can still work out, but you’ll still need the 2008 version to get the v90 toolset (I guess, not an expert here…).

Modifications:

There are quite a few, but here are the ones that I still remember:

1.) Additional headers are required to link some of the software; there are free ones available. You can grab them [here]. Put them into the VC\include\ subdirectory of your Visual Studio 2008 installation folder. On top of those two – inttypes.h and stdint.h – you’ll also need unistd.h, but that one’s easy: Just make a copy of io.h in that same folder and rename the copy to unistd.h and you’re done.

2.) First, cURL-SSL was built with the nmake options ENABLE_IPV6=no and ENABLE_IDN=no set. IPv6 support on Windows 2000 does exist by using an [experimental update], but its function calls are different from those in Microsoft’s final version, so it’s unusable by most software. Also, IDN support is only available [for Windows XP and later], so internationalized domain names using non-ASCII characters don’t work. UnrealIRCd is to be linked against this version.
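
For reference, a build invocation along these lines should do it when run from the winbuild\ directory of the curl source tree in a Visual Studio 2008 command prompt (just a sketch – the SSL-related variables in particular depend on how and where you built OpenSSL):

nmake /f Makefile.vc mode=dll VC=9 WITH_SSL=dll ENABLE_IPV6=no ENABLE_IDN=no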

3.) The bundled tre was replaced with the latest stock tre 0.8.0 and recompiled; UnrealIRCd is to be linked against this build.

4.) Before building OpenSSL, you may need to modify its makefile ms\ntdll.mak – which is generated by the ms\do_nasm step described in OpenSSL’s INSTALL.W32 – depending on your requirements. This is where you can enable older, weaker ciphers and the older SSL v3/v2 protocols. Enable these deprecated versions only if you absolutely need them!

Look for line 21 (note that the ^ line breaks aren’t in the file originally, it’s all one line; I just added them here for readability):

CFLAG= /MD /Ox /O2 /Ob2 -DOPENSSL_THREADS  -DDSO_WIN32 -W3 -Gs0 -GF -Gy -nologo ^
 -DOPENSSL_SYSNAME_WIN32 -DWIN32_LEAN_AND_MEAN -DL_ENDIAN -D_CRT_SECURE_NO_DEPRECATE ^
 -DOPENSSL_BN_ASM_PART_WORDS -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT ^
 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DRMD160_ASM ^
 -DAES_ASM -DVPAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DOPENSSL_USE_APPLINK -I. ^
 -DOPENSSL_NO_RC5 -DOPENSSL_NO_MD2 -DOPENSSL_NO_SSL2 -DOPENSSL_NO_KRB5 -DOPENSSL_NO_JPAKE ^
 -DOPENSSL_NO_WEAK_SSL_CIPHERS -DOPENSSL_NO_STATIC_ENGINE

You could replace this with the following, allowing weak ciphers and SSL v3, but not SSL v2 for example:

#CFLAG= /MD /Ox /O2 /Ob2 -DOPENSSL_THREADS  -DDSO_WIN32 -W3 -Gs0 -GF -Gy -nologo ^
# -DOPENSSL_SYSNAME_WIN32 -DWIN32_LEAN_AND_MEAN -DL_ENDIAN -D_CRT_SECURE_NO_DEPRECATE ^
# -DOPENSSL_BN_ASM_PART_WORDS -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT ^
# -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DRMD160_ASM ^
# -DAES_ASM -DVPAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DOPENSSL_USE_APPLINK -I. ^
# -DOPENSSL_NO_RC5 -DOPENSSL_NO_MD2 -DOPENSSL_NO_SSL2 -DOPENSSL_NO_KRB5 -DOPENSSL_NO_JPAKE ^
# -DOPENSSL_NO_WEAK_SSL_CIPHERS -DOPENSSL_NO_STATIC_ENGINE
CFLAG= /MD /Ox /O2 /Ob2 -DOPENSSL_THREADS  -DDSO_WIN32 -W3 -Gs0 -GF -Gy -nologo ^
 -DOPENSSL_SYSNAME_WIN32 -DWIN32_LEAN_AND_MEAN -DL_ENDIAN -D_CRT_SECURE_NO_DEPRECATE ^
 -DOPENSSL_BN_ASM_PART_WORDS -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT ^
 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DRMD160_ASM ^
 -DAES_ASM -DVPAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DOPENSSL_USE_APPLINK -I. ^
 -DOPENSSL_NO_RC5 -DOPENSSL_NO_MD2 -DOPENSSL_NO_SSL2 -DOPENSSL_NO_KRB5 -DOPENSSL_NO_JPAKE ^
 -DOPENSSL_NO_STATIC_ENGINE

Compile as shown in the documentation, and install somewhere.
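
For reference, the overall build sequence from INSTALL.W32 boils down to something like this, run from a Visual Studio 2008 command prompt with Perl and NASM in the PATH (the install prefix is just an example, and the CFLAG edit described above happens after the ms\do_nasm step and before nmake, since do_nasm regenerates ms\ntdll.mak):

perl Configure VC-WIN32 --prefix=C:\openssl-win32
ms\do_nasm
nmake -f ms\ntdll.mak
nmake -f ms\ntdll.mak install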

5.) Before UnrealIRCd can use the new version of OpenSSL, it may need modifications matching the ones patched into the OpenSSL makefile. By default, it will also block stuff like SSL v3. Enter its source tree and open ssl\ssl.c, then locate lines 245 and 321, which will look like this:

SSL_CTX_set_options(ctx_server, SSL_OP_NO_SSLv3);

Just comment that out:

/** SSL_CTX_set_options(ctx_server, SSL_OP_NO_SSLv3); **/

If you enabled SSLv2 as well and want the IRC server to be able to use it, do the same for lines 244 and 320, look for this…

SSL_CTX_set_options(ctx_client, SSL_OP_NO_SSLv2);

…and comment it out again:

/** SSL_CTX_set_options(ctx_client, SSL_OP_NO_SSLv2); **/

Now compile and link as shown in the UnrealIRCd documentation. Like the developers, I’d recommend assembling a proper command line for this, as editing the makefile all the time can be cumbersome, especially if you’re running into trouble along the way.

What else?

Some of the VS project files may be preconfigured for platform toolsets you don’t have (like v100, v110, etc.) or may be set to produce a Debug build by default. Make sure you’re using only the v90 toolset and produce only Release builds. To learn how, check out the Visual Studio documentation online. It’s not that hard for the stuff you need to build with the GUI.

And here is the file:

Note that I may have done something horribly wrong along the way with this, because it really works only on Windows 2000. This is not how it should be. But launching it on a newer operating system yields something like this:

UnrealIRCd runtime error on anything greater than or equal to Windows XP

Yeah… umm… riiight…

And after pressing OK, this:

UnrealIRCd runtime error on anything greater than or equal to Windows XP #2

Whatever…

I searched for those errors on the web for a little, but couldn’t find anything that would’ve told me why it breaks like this on “modern” operating systems, yet still works on Windows 2000. Oh, the build system was XP x64 by the way. Well, it doesn’t really matter, the standard build of the developers works on XP+ anyway, and this works only on Windows 2000. Mission accomplished in any case.

In this incarnation, the server can support SSL v3 as well as TLS v1.2 protocols and supports the following ciphers:

ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA38
4:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:SRP-DSS-
AES-256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:DH-DSS-AES256-GCM-SH
A384:DHE-DSS-AES256-GCM-SHA384:DH-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA38
4:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA256:DH-RSA-AES256-SHA256:DH-DSS-AES256
-SHA256:DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:DH-RSA-AES256-SHA:DH-DSS-AES256-SH
A:DHE-RSA-CAMELLIA256-SHA:DHE-DSS-CAMELLIA256-SHA:DH-RSA-CAMELLIA256-SHA:DH-DSS-
CAMELLIA256-SHA:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-RSA
-AES256-SHA384:ECDH-ECDSA-AES256-SHA384:ECDH-RSA-AES256-SHA:ECDH-ECDSA-AES256-SH
A:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:CAMELLIA256-SHA:PSK-AES256-CBC-SHA:
ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA25
6:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:SRP-DSS-
AES-128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:DH-DSS-AES128-GCM-SH
A256:DHE-DSS-AES128-GCM-SHA256:DH-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA25
6:DHE-RSA-AES128-SHA256:DHE-DSS-AES128-SHA256:DH-RSA-AES128-SHA256:DH-DSS-AES128
-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:DH-RSA-AES128-SHA:DH-DSS-AES128-SH
A:DHE-RSA-SEED-SHA:DHE-DSS-SEED-SHA:DH-RSA-SEED-SHA:DH-DSS-SEED-SHA:DHE-RSA-CAME
LLIA128-SHA:DHE-DSS-CAMELLIA128-SHA:DH-RSA-CAMELLIA128-SHA:DH-DSS-CAMELLIA128-SH
A:ECDH-RSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-RSA-AES128-SHA256
:ECDH-ECDSA-AES128-SHA256:ECDH-RSA-AES128-SHA:ECDH-ECDSA-AES128-SHA:AES128-GCM-S
HA256:AES128-SHA256:AES128-SHA:SEED-SHA:CAMELLIA128-SHA:IDEA-CBC-SHA:PSK-AES128-
CBC-SHA:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDH-ECDSA-RC4-SH
A:RC4-SHA:RC4-MD5:PSK-RC4-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:SR
P-DSS-3DES-EDE-CBC-SHA:SRP-RSA-3DES-EDE-CBC-SHA:SRP-3DES-EDE-CBC-SHA:EDH-RSA-DES
-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:DH-RSA-DES-CBC3-SHA:DH-DSS-DES-CBC3-SHA:ECDH-RSA-
DES-CBC3-SHA:ECDH-ECDSA-DES-CBC3-SHA:DES-CBC3-SHA:PSK-3DES-EDE-CBC-SHA

The necessary tools for creating an SSL/TLS certificate and for installing a Windows service for the server are also included (openssl.exe, unrealsvc.exe).
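
If you need a quick self-signed certificate for the server, something along these lines should do with the bundled openssl.exe – the file names are just examples, and UnrealIRCd’s configuration has to point at whatever names you choose:

openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -keyout server.key.pem -out server.cert.pem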

Licensing:

UnrealIRCd and the software it was linked against in this case is released under the following licenses:

Any modifications to any of the software packages above as posted on this page are hereby licensed under the same license as the original software before modifications were applied. When downloading any unmodified source code, you’ll have to patch it yourself before building for a Windows 2000 platform target.

And what now?

Well, I guess my server supports IRC+TLS for all modern clients now, so yay! ;) URLs are the same as before: [irc+ssl://www.xin.at:6697] with SSL v3/TLS v1.2 or [irc://www.xin.at:6666] if you wish to connect without any encryption enabled, all plain text.

Aug 05, 2016
 

Short story: It’s [VirtualDimension].

Long story… It’s most definitely not what Microsoft added in Windows 10. Besides it being limited to Windows 10, it just sucks for a multitude of reasons. And there I was, having hopes for it as well… If you’ve ever used multiple desktops on a graphical, X11-based Linux or UNIX window manager / desktop environment, you’d know what I’m talking about. Usually, what you’d get on those systems, whether KDE, Gnome, Xfce4, LXDE, or whatever is just one small, configurable panel which allows you to control multiple virtual desktops. On my Gnome 2 on CentOS 6.8 Linux, it looks like this (others are very similar):


Virtual desktops on Gnome 2

The leftmost desktop is my usual “Internet” environment; here I chat, read emails, browse the web for anything work-related and so on. The second one is a Linux distribution development desktop. Here I’m building a derivative of Klaus Knopper’s [Knoppix] distro. Then comes the testing environment for said distribution on desktop #3 – usually some shells and one VMware Player instance. Next to that are two more VM desktops for software testing and for writing user guides for software installation on different operating systems. At the moment that’s MacOS X and a Windows XP x64 software build VM; usually there’s also a Windows 7 one. One is empty (for arbitrary stuff), then comes the server administration desktop with 9 open shells, one for each server. And the last one is my private desktop with yet another web browser, and some shells for spawning screen sessions for running software compilations, encoding runs and the like.

Now, I have a 30″ screen both at home and at work, resolution is 2560×1600. But it’s just never enough screen real estate. So I wanted well-integrated virtual desktops for Windows as well, but last time I tried out some software, I couldn’t find a good one. Recently, I tried again for some reason, like “let’s give this one last shot”. And I tried a lot of programs!

Among the software tested were [Dexpot], [Finestra], [VirtuaWin], [WindowsPager], Xilisoft [Multiple Desktops] and the Windows PowerToy predecessor of [Desktops 2.0] written by Windows Hacker Mark Russinovich and Bryce Cogswell. And finally, [VirtualDimension]. Some of those are free and open source software, others are not.

One of my primary requirements was compatibility to Windows XP x64. Of course it’d be nice if it worked on Windows 10 as well. But most of the above had important features missing or severely misbehaving on XP. Some were just very, very sluggish when switching desktops. Others had missing features to begin with, like previews on the desktop tiles. A blank desktop tile doesn’t help at all, as I need to see roughly what’s running where at a glance.

I’m not gonna make this a lengthy top list or anything, I’m just gonna show you what the software of my choice – VirtualDimension – could do for me. Let’s look at the tiles first:


VirtualDimension on XP x64

We’ll start with my good old XP x64 first. Here you can see my system tray, and Miranda being open. VirtualDimension cannot be embedded into the taskbar properly (damn), but it has an “always on top” feature. Since the contact list in my docked and always-launched Miranda doesn’t go all the way down, there is free and unused space there. Perfect for VirtualDimension! And since it’s always on top, it doesn’t disappear when clicking on Miranda for chatting.

Given that the source code clearly comes from a UNIX or Linux user (he built it with GCC/MinGW), some features immediately ring a bell. Like “mouse warp”, where you switch desktops by moving your mouse to the border of the screen. I disabled that, don’t like it. But yeah, it’s there.

Important: While it doesn’t give you live window geometry previews, it does give you iconized previews, so you can always identify any desktop quickly by seeing what’s running there. The desktops can also be named, and there is an OSD that you can have pop up on you when switching, like so:


OSD showing right after a desktop switch

In this case I had just switched to desktop #2, which is for A/V processing exclusively. This is just the top left part of the screen, where one of my eight transcoding shells was running an x265 benchmark prototype test. Color, display duration, transparency to mouse clicks on the OSD part, font and size are all configurable.

You can also freely define keyboard shortcuts for switching desktops. I chose CTRL+Shift+Right and CTRL+Shift+Left for switching desktops and Alt+Right / Alt+Left for pushing a window to the next/previous desktop, as those don’t conflict with other shortcuts I’m using.

What else can it do? Let’s right click on one of the icons in the preview tiles:


Clicking on a program icon in VirtualDimensions’ preview tiles gives you this menu

The first five options from the top are global ones. The ones below however are specific to the icon you right-clicked. With “Activate”, you’d switch to the target desktop and put focus on that program’s window. The others are pretty self-explanatory as well I guess. We also get a graceful “Close” option, and a brutal “Kill” option that’s equivalent to murdering the process in task manager. Maybe useful since it’s faster that way.

And if we click on the free area?


Right-clicking on the free area of a preview tile gives you a list of all programs on that desktop.

Ok, not sure how useful that is, but at least it may help with identifying the windows on a desktop in more detail, as you get the window titles here. For my encoding shells I can get a very quick glance at the progress, but not exactly in great detail. So the helpfulness of this is limited.

What else?

Well, it’s extremely fast! That’s one major plus for VirtualDimension, as several of the other solutions (open source ones as well) were abysmally slow, at least on XP x64. But damn, VirtualDimension just flies! And its memory footprint is minimal – I saw less than 12MiB of consumption here. Even if you add a truckload of desktops (there seems to be no upper limit), it just won’t slow down unless you spawn like 50 CPU-intensive processes all over the place, killing your CPU or maxing out your RAM. But that wouldn’t have been VirtualDimension’s fault then. Its memory footprint grows linearly as you spawn more desktops, so with eight you may see around 20MiB. Still neat.

And what’s bad about it?

Well, sometimes, if you have a lot of windows on one desktop, the icons aren’t cut off in the right spot at the bottom of the preview tile, so they overflow just a little bit. Just a cosmetic issue. Also, you should maybe deactivate the shell integration. With this, VirtualDimension hooks itself into all windows (such a DLL hook means entering another process’s memory area). With that, you can get its functions via right-clicking on a window’s title bar, like on UNIX.

Nice, but dangerous! This can trigger anti-cheat systems in online games, because they really don’t like you stepping into their processes’ memory areas! That’s what cheating tools do to modify a game’s parameters on the fly as well. You don’t wanna be banned because of such a thing!

In my case, I managed to lock myself out of Mechwarrior Online because of this. I wasn’t banned, but the login process wouldn’t even let me launch its window. Disabling the feature, launching MWO, then re-enabling it and trying to log in caused a pretty abnormal process termination:


Mechwarrior Online really doesn’t like VirtualDimensions’ shell integration feature! And no, there was no “update available”… (click to enlarge)

There is an exception list for this feature, and I added all of MWO’s .exe files to it, to no avail. Better to stay away from this one.

Now, well, this otherwise beautiful piece of software was dropped by its developer around 2005. About the time my XP x64 came out. Latest alpha build is from some time in 2006. So this is ancient! It even supports Windows 98 and NT 4.0, I mean… So, how about Windows 10 then? I mean, Windows 10 doesn’t even have a GDI UI anymore, this is like one completely different world. Since I do have a Windows 10 machine (yeah, ew), let’s check it out:


VirtualDimension on Windows 10 – hey, it really works!?

Miranda seemingly can’t dock properly on Windows 10 anymore. It kinda… floats near the desktop border when docked. It’s strange and it wastes space, but well, I don’t know how to fix that yet. But anyway, I embedded VirtualDimension into Miranda (by just moving the window there, removing its title bar and resizing it properly again). And guess what?

It just works™!

I launched some Metro / Modern UI apps in a window as well, and while those aren’t shown in the preview tiles, they can be controlled with keyboard shortcuts, just like regular windows. Also, it’s just as blazing fast as it is on XP. Ah, and… yeah, it actually does work on all 64-bit x86 Windows versions it seems! It’s amazing: an ancient piece of 32-bit software that alters a Windows system’s usage pattern quite fundamentally still works fine on Windows 10 64-bit. I gotta say, I’m pretty relieved, because Windows 10’s own solution just sucks – where is my live preview? – and I don’t want to change my usage paradigm too much when switching operating systems (even from Linux/UNIX to Windows and back).

Some of the other solutions like Dexpot or Finestra may be faster on Windows 10 than they are on a just half-supported XP x64, but nah, don’t need them.

VirtualDimension is as perfect as it gets, despite its age! Or maybe because of it?

Still, anyone interested in picking up that [VirtualDimension project on SourceForge] and in continuing its development? ;) I guess I can’t touch that code, would probably just mess everything up. Ah, it’s C++ by the way…

A few things could use some fixing, like the icon overflow issue, Modern UI window detection, certain rare windows being sticky on all desktops even if nobody told them to be (Miranda, X-Chat DCC windows), and the exclusion list for the shell integration, which doesn’t hook into all windows properly when active anyway.

Would be so nice if somebody could continue working this! :)

Until that happens (I know, it never will), I’ll just continue using v0.95 alpha. ;)

Jun 03, 2016
 

And here’s another x265 build for Windows XP and Windows XP x64, following [1.9+141]. As usual, these work on modern versions of Windows just as well. Again, built with Microsoft Visual Studio 2010 SP1 and tested for correct encodes at 8-bit, 10-bit and 12-bit color depth. The 8-bit test has been done using the x86_32 version, the 10- and 12-bit tests have been done with the x86_64 version. I’m not running complicated test suites on this, just a simple encode with manual output checking.

Here is the software for 32-bit and 64-bit systems:

As usual, the builds depend on the Microsoft Visual C++ 2010 runtime which you can download from Microsoft [for 32-bit systems] and [for 64-bit systems] if you don’t have it already.

This time around, it’s a pure binary release, giving you the x265.exe and libx265.dll. I think I’m gonna keep it that way. It’s meant for users, not developers anyway.

I’m thinking I might create a project page for this, so that all releases get consolidated in a single spot; that’d probably be better than creating a new post for each and every build I’m pushing out. If I do that, links to it will be added to each post about how to build x265 for WinXP+, and also to all binary release posts.

Such a page could also give you an avconv release on the spot, so you can work with all kinds of video input to your liking, given that x265 can only accept raw YUV video by itself. Just need to build a 32-bit version of libav as well then.
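
Just to sketch what that would involve – this assumes a MinGW-w64 cross toolchain targeting 32-bit Windows, which is not necessarily how I’ll end up doing it – a 32-bit libav build could be configured roughly like so:

./configure --target-os=mingw32 --arch=x86 --cross-prefix=i686-w64-mingw32- --prefix=/usr/local/avconv-win32
make && make install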

Oh well, have fun! :)

Update: All x265 releases have now been consolidated on [this page]! All future XP- and XP x64-compatible releases of x265 plus a relatively recent version of avconv to act as a decoder for feeding x265 with any kind of input video streams will be posted there as well.

Jun 02, 2016
 

Recently, this server (just to remind you: an ancient quad Pentium Pro machine with SCSI storage and FPM DRAM) experienced a 1½ hour downtime due to a KERNEL_STACK_INPAGE_ERROR bluescreen, stop code 0x00000077. Yeah yeah, I’m dreaming about running OpenBSD on XIN.at, but it’s still the same old Windows server system. Bites, but it’s hard to give up and/or migrate certain pieces of software. In any case, what exactly does that mean? In essence, it means that the operating system’s paged pool memory got corrupted. So, heh?

More precisely, it’s either a DRAM error or a disk error, as not the entire paged pool actually needs to be paged out – the paged pool is swappable to disk, but not necessarily swapped to disk. So we need to dig a bit deeper. Since this server has 2-error correction and 3-error reporting capability for its memory – IBM combined the parity FPM-DRAM with additional ECC chips – we can look for ECC/parity error reports in the server’s system log. Also, disk errors should be pretty apparent in the log. And look what we’ve got here (the actual error messages are German even though the log is being displayed on a remote, English system – well, the server itself is running a German OS):

Actually, when grouping the error log by disk events, I get this:


54 disk errors in total – 8 of which were medium errors – dead sectors

8 unrecoverable dead sectors and 46 controller errors, starting from March 2015 and nothing before that date. Now the actual meaning of a “controller error” isn’t quite clear. In the case of SCSI hardware like here, it could be many things, ranging from firmware issues over cabling problems all the way to wrong SCSI bus termination. Judging from the sporadic nature and the limited time window of the errors, I guess it’s really failing electronics in the drive however. The problems started roughly 10 years after that drive was manufactured, and it’s a 68-pin 10,000rpm Seagate Cheetah drive with 36GB capacity by the way.

So yeah, March 2015. Now you’re gonna say “you fuck, you saw it coming a long time ago!!”, and yeah, what can I say, it’s true. I did. But you know, looking away while whistling some happy tune is just too damn easy sometimes. :roll:

So, what happened exactly? There are no system memory errors at all, and the last error reported before the BSOD was a disk event ID 11, controller error. Whether there was another URE (unrecoverable read error / dead sector) as well, I can’t say. But this happened right before the machine went down, so I guess it’s pretty clear: The NT kernel tried to read swapped-out kernel paged pool memory back from disk, and when the disk error corrupted that critical read operation (whether controller error or URE), the kernel-space memory got corrupted in the process, in which case any kernel has to halt the operating system, as safe operation can no longer be guaranteed.

So, in the next few weeks, I will have to shut the machine down again to replace the drive and restore from a system image to a known good disk. In the meantime I’ll get some properly tested drives and I’m also gonna test the few drives I have in stock myself to find a proper replacement in due time.

Thank god I have that remote KVM and power cycle capabilities, so that even a non-ACPI compliant machine like the XIN.at server can recover from severe errors like this one, no matter where in the world I am. :) Good thing I spent some cash on an expensive UPS unit with management capabilities and that KVM box…

Mar 15, 2016
 

Just recently, I tested the computational cost of decoding 10-bit H.265/HEVC on older PCs as well as Android devices – with some external help. See [here]. The result was that a reasonable Core 2 Quad can do 1080p @ 23.976fps @ 3MBit/s in software without issues, while a Core 2 Duo at 1.6GHz will fail. Also, it has been shown that Android devices – even those using seriously fast quad- and octa-core CPUs – can’t do it fluently without a hardware decoder capable of accelerating 10-bit H.265. To my knowledge there is a hack for Tegra K1- and X1-based devices used by MX Player, utilizing the CUDA cores to do the decoding, but all others are being left behind for at least a few more months until the Snapdragon 820 comes out.

Today, I’m going to show the results of my tests on Intel Skylake hardware to see whether Intel’s claims are true, for Intel has said that some of their most modern integrated GPUs can indeed accelerate 10-bit video, at least when it comes to the expensive H.265/HEVC. They didn’t claim this for all of their hardware however, so I’d like to look at some lower-end integrated GPUs today, the Intel HD Graphics 520 and the Intel HD Graphics 515. Here are the test systems, both running the latest Windows 10 Pro x64:

  • HP Elitebook 820 G3
      • CPU: Intel [Core i5-6200U]
      • GPU: Intel HD Graphics 520
      • RAM: 8GB DDR4/2133 9-9-9-28-1T
      • Cooling: Active
  • HP Elite X2 1012 G1 Convertible
      • CPU: Intel [Core m5-6Y54]
      • GPU: Intel HD Graphics 515
      • RAM: 8GB LPDDR3/1866 14-17-17-40-1T
      • Cooling: Passive

Let’s look at the more powerful machine first, which would clearly be the actively cooled Elitebook 820 G3. First, let’s inspect the basic H.265/HEVC capabilities of the GPU with [DXVAChecker]:

DXVAChecker on an Intel HD Graphics 520

DXVAChecker looks good with the latest Intel drivers provided by HP (version 4331): 10-bit H.265/HEVC is being supported all the way up to 8K!

And this is the ultra-low-voltage CPU housing the graphics core:


Intel Core i5-6200U

So let’s launch the Windows media player of my choice, [MPC-HC], and look at the video decoder options we have:

In any case, both HEVC and UHD decoding have to be enabled manually. On top of that, it seems that either Intel’s proprietary QuickSync can’t handle H.265/HEVC yet, or MPC-HC simply can’t make use of it. The standard Microsoft DXVA2 API however supports it just fine.

Once again, I’m testing with the Anime “Garden of Words” in 1920×1080 at ~23.976fps, but this time with a smaller slice at a higher bitrate of 5Mbit. The encoding options were as follows for pass 1 and pass 2:

--y4m -D 10 --fps 24000/1001 -p veryslow --open-gop --bframes 16 --b-pyramid --bitrate 5000 --rect
--amp --aq-mode 3 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd 1.6 --psy-rdoq 5.0
--rdoq-level 1 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 --pass 1
--slow-firstpass --stats v.stats --sar 1 --range full

--y4m -D 10 --fps 24000/1001 -p veryslow --open-gop --bframes 16 --b-pyramid --bitrate 5000 --rect
--amp --aq-mode 3 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd 1.6 --psy-rdoq 5.0
--rdoq-level 1 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 --pass 2
--stats v.stats --sar 1 --range full

Let’s look at the performance during some intense scenes with lots of rain at the beginning and some less taxing indoor scenes later:

There is clearly some difference, but it doesn’t appear to be overly dramatic. Let’s do a combined graph, putting the CPU loads for GPU-assisted decoding over the regular one as an overlay:


Blue = software decoding, magenta (cause I messed up with the red color) = GPU-assisted hardware decoding

Well, using DXVA2 does improve the situation here, even if it’s not by too much. It’s just that I would’ve expected a bit more here, but I guess that we’d still need to rely on proprietary APIs like nVidia CUVID or Intel QuickSync to get some really drastic results.

Let’s take a look at the Elite X2 1012 G1 convertible/tablet with its slightly lower CPU and GPU clock rates next:

Its processor:


Core m5-6Y54

And this is, what DXVAChecker has to say about its integrated GPU:

DXVAChecker on an Intel HD Graphics 515

Whoops… Something important seems to be missing here…

Now what do we have here?! Both HD Graphics 520 and 515 should be [architecturally identical]. Both are GT2 cores with 192 shader cores distributed over 24 clusters, 24 texture mapping units as well as 3 rasterizers. Both support the same QuickSync generation. The only marginal difference seems to be the maximum boost clock of 1.05GHz vs. 1GHz, and yet HD Graphics 515 shows no sign of supporting the Main10 profile for H.265/HEVC (“HEVC_VLD_Main10”), so no GPU-assisted 10-bit decoding! Why? Who knows. At the very least they could just scratch 8K support, and implement it for SD, HD, FHD and UHD 4K resolutions. But nope… Only 8-bit is supported here.

I even tried the latest beta driver, version 4380, to see whether anything has changed in the meantime, but no; it behaves the same way.

Let’s look at what that means for CPU load on the slower platform:

CPU load with software decoding

The small Core m5-6Y54 has to do all the work!

We can see that we get close to hitting the ceiling, with the CPU’s boost clock going up all the way. This is problematic for thermally constrained systems like this one. During a >4 hour [x264 benchmark run], the Elite X2 1012 G1 has shown that its 4.5W CPU can’t hold boost clocks this high for a long time, given the passive cooling solution. Instead, it sat somewhere in between 1.7-2.0GHz, mostly in the 1.8-1.9GHz area. This might still be enough with bigger decoding buffers, but DXVA2 would help a bit here in making this slightly less taxing on the CPU, especially considering higher bitrates or even 4K content. Also, when upping the ambient temperature, the runtime could be pushed back by almost an hour, pushing the CPU clock rate further down by 100-200MHz. So it might just not play that movie on the beach in summer at 36°C. ;)

So, what can we learn from that? If you’re going for an Intel/PC-based tablet, convertible or Ultrabook, you need to pick your Intel CPU+graphics solution wisely, and ideally not without testing it for yourself first! Who knows what other GPUs might be missing certain GPU video decoding features like HD Graphics 515 does. Given that there is no actual compatibility matrix for this as of yet (I have asked Intel to publish one, but they said they can’t promise anything), you need to be extra careful!

For stuff like my 10-bit H.265/HEVC videos at reasonably “low” bitrates, it’s likely ok even with the smallest Core m3-6Y30 + HD Graphics 515 that you can find in devices like Microsoft’s own Surface Pro 4. But considering modern tablets’ WiDi (Wireless Display) tech with UHD/4K resolutions, you might want to be careful when choosing that Windows (or Linux) playback device for your big screens!

Feb 25, 2016
 

It’s over – after 13 years of being almost constantly under pressure from US-based companies, SlySoft finally had to close its doors. Most notably known for software such as CloneCD or AnyDVD, the Antigua-based company has provided people all over the world with ways to quickly and easily circumvent disc-based copy protection mechanisms such as Sony ARccOS, CSS, AACS or BD+ and many others for years.

The company’s founder, a certain Mr. Giancarlo Bettini, had already been sued – and successfully so – before an Antiguan court. While it was strictly up to Antiguan authorities to actually sue SlySoft (because the AACS-LA could not do so themselves due to some legal constraints), this did finally happen in 2012, fining Mr. Bettini a sum of USD $30,000. That didn’t result in SlySoft closing down however.

What exactly happened a few days ago is unclear, as SlySoft seems to be under NDA or maybe legal pressure not to release any statement regarding the reasons for the shutdown, quote: “We were not allowed to respond to any request nor to post any statement”. The only thing we have besides a forum thread with next to zero information is the statement on the official website, which is rather concise as well:

“Due to recent regulatory requirements we have had to cease all activities relating to SlySoft Inc.
We wish to thank our loyal customers/clients for their patronage over the years.”

It should be relatively clear however that this has to have something to do with the AACS-LA and several movie studios as well as software and hardware companies “reminding” the United States government of SlySoft’s illicit activities just recently. This would’ve resulted in Antigua being put onto the US priority [watchlist] of countries violating US/international copyright laws. Ultimately, being put onto that list can result in trade barriers being put up within a short time, hurting a country’s economy, thus escalating the whole SlySoft thing into an international incident. More information [here].

AnyDVD HD

This little program and its little brothers made it all the way to the top and became an international incident! Quite the career…

It seems – and here is where my pure speculation starts – that some kind of agreement was found between SlySoft’s founder and the AACS-LA and/or the Antiguan and US governments, resulting in the immediate shutdown of SlySoft without further consequences for either its founder or other members of the company. If true, then SlySoft will surely also have to break their promise of releasing a “final” version of AnyDVD HD including all the decryption keys from the online database in case they have to close their doors forever. This is what “[…] we have had to cease all activities relating to SlySoft Inc. […]” means after all.

So what are the consequences, technically?

I can only speak for AnyDVD HD, as per the forums over at SlySoft, but the latest version 7.6.9.0 supposedly includes some 130,000 AACS keys and should still be able to decrypt a lot of Blu-Rays, even if not all of them.

In the end however, the situation can only deteriorate as time passes and new versions of AACS keys and BD+ certificates are being released, even if you bypass the removed DNS A-Records of key.slysoft.com and access one of the key servers by resolving the IP address locally (via your hosts file). Thing is, nobody can tell when SlySoft will be forced to implement more effective methods of making their services inaccessible, like by just switching off the machines themselves.
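
For reference, such a hosts file entry would look roughly like this – the IP below is just a placeholder from a documentation range, you’d have to find a currently working key server address yourself:

203.0.113.10    key.slysoft.com

On Windows the file is %SystemRoot%\System32\drivers\etc\hosts, on Linux/UNIX it’s /etc/hosts.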

But even if they stay online for years to come, no new keys or certificates are going to be added, so it’s probably safe to say that the red fox is truly dead.

Addendum: Just to be clear for those of you who are scared of even accessing any SlySoft machines with their real IPs any longer: According to a SlySoft employee (you can read it in their forums), all of the servers are still 100% under SlySoft’s physical control, and their storage backends are encrypted. They were not raided or anything. So it seems you do not have to fear “somebody else listening” on SlySoft’s key servers.

PS.: A sad day if you ask me, a victorious one if you ask the movie industry. Maybe somebody should just walk over and tell them that cheaper, DRM-free media actually work a lot better on the market when compared to jailing users into some “trusted” (by them) black boxes with forced software updates and closed software. Yeah, I actually want to play my BD movies on the PC (legally!!), and on systems based on free software like Linux and BSD UNIX as well, not on some blackboxed HW player, so go suck it down, Hollywood. I mean, I’m even BUYING your shit, for Christ’s sake…

Oh, by the way, China is actually sitting on that copyright watchlist (I mean, obviously), and they gave us DVDFab. Also, there are MakeMKV and [others as well]. We’ll see whether the AACS-LA can hunt them all down… And even if they can… Will it really make them more money? Debatable at best…

Update: Those guys work fast! While SlySoft is gone, several of the developers have grabbed the software and moved the servers to Belize, the discussion forums have already been migrated, and a new version 7.6.9.1 of AnyDVD HD has been released, including new keys and reconfigured to access the new key servers as well. The company is now called “Red Fox” and the forums can be accessed via [forum.redfox.bz].

By now, AnyDVD HD respects the old licenses as well, and this will stay this way for the transition period. Ultimately however, according to posts on the forums, people will have to buy new licenses, even if they had a lifetime license before. They also said they’ll cook up “something nice” for people who bought licenses just recently. Probably some kind of discount I presume.

Still, if I may quote one of the developers: “SlySoft is dead, long live RedFox!”

Feb 20, 2016
 

Recently, after [successfully compiling] the next generation x265 H.265/HEVC video encoder on Windows, Linux and FreeBSD, I decided to ask for guidance when it comes to compressing Anime (live action will follow at a later time) in the Doom9 forums, [see here]. Thing is, I didn’t understand all of the knobs x265 has to offer, and some of the convenient presets of x264 didn’t exist here (like --tune film and --tune animation). So for a newbie it can be quite hard to make x265 perform well without sacrificing far too much CPU power, as x265 is significantly more taxing on the processor than x264.

Thanks to [Asmodian] and [MeteorRain]/[LittlePox] I got rid of x265’s blurring issues, and I took their settings and turned them up to achieve more quality while staying within sane encoding times. My goal was to be able to encode 1080p ~24fps videos on an Intel Xeon X5690 hexcore @ 3.6GHz all-core boost clock at >=1fps for a target bitrate of 2.5Mbit.

In this post, I’d like to compare 7 scenes from the highly opulent Anime [The Garden of Words] (言の葉の庭) by [Makoto Shinkai] (新海 誠) at three different average bitrates: 1Mbit, 2.5Mbit (my current x264 default) and 5Mbit. The Blu-Ray source material is H.264/AVC at roughly 28Mbit on average. Also, both encoders are running in 10-bit color depth mode instead of the common 8-bit, meaning that the internal arithmetic precision is boosted from 8- to 16-bit integers as well. While somewhat “non-standard” for H.264/AVC, this is officially supported by H.265/HEVC for Blu-Ray 4K. The mode of operation is 2-pass, to aim for comparable file sizes and bitrates. The encoding speed penalty for switching from x264 to x265 at the given settings is around a factor of 8. Somewhat.

The screenshots below are losslessly compressed 1920×1080 images. Since this is all about compression, I chose to serve the large versions of the images in WebP format to all browsers which support it (Opera 11+, Chromium-based browsers like Chrome, Iron, Vivaldi, the Android browser, or Pale Moon as the only Gecko browser). This is done because, at its maximum compression level, WebP handles lossless compression much more efficiently than PNG, so the pictures are smaller. This helps, because my server only has 8Mbit/s upstream. If your browser doesn’t support WebP (like Firefox, IE, Edge, Safari), it’ll be fed lossless PNG instead. All of this happens automatically; you don’t need to do anything!

Now, let’s start with the specs.

Specifications:

Here are the source material encoding settings according to the video stream header:

cabac=1 / ref=4 / deblock=1:1:1 / analyse=0x3:0x133 / me=umh / subme=10 / psy=1 / psy_rd=0.40:0.00 /
mixed_ref=1 / me_range=24 / chroma_me=1 / trellis=2 / 8x8dct=1 / cqm=0 / deadzone=21,11 /
fast_pskip=1 / chroma_qp_offset=-2 / threads=12 / lookahead_threads=1 / sliced_threads=0 / slices=4 /
nr=0 / decimate=1 / interlaced=0 / bluray_compat=1 / constrained_intra=0 / bframes=3 / b_pyramid=1 /
b_adapt=2 / b_bias=0 / direct=3 / weightb=1 / open_gop=1 / weightp=1 / keyint=24 / keyint_min=1 /
scenecut=40 / intra_refresh=0 / rc_lookahead=24 / rc=2pass / mbtree=1 / bitrate=28229 /
ratetol=1.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / cplxblur=20.0 / qblur=0.5 /
vbv_maxrate=31600 / vbv_bufsize=30000 / nal_hrd=vbr / filler=0 / ip_ratio=1.40 / aq=1:0.60

x264 10-bit encoding settings (pass 1 & pass 2), 2.5Mbit example:

--fps 24000/1001 --preset veryslow --tune animation --open-gop --b-adapt 2 --b-pyramid normal -f -2:0
--bitrate 2500 --aq-mode 1 -p 1 --slow-firstpass --stats v.stats -t 2 --no-fast-pskip --cqm flat
--non-deterministic

--fps 24000/1001 --preset veryslow --tune animation --open-gop --b-adapt 2 --b-pyramid normal -f -2:0
--bitrate 2500 --aq-mode 1 -p 2 --stats v.stats -t 2 --no-fast-pskip --cqm flat --non-deterministic

x265 10-bit encoding settings (pass 1 & pass 2), 2.5Mbit example:

--y4m -D 10 --fps 24000/1001 -p veryslow --open-gop --bframes 16 --b-pyramid --bitrate 2500 --rect
--amp --aq-mode 3 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd 1.6 --psy-rdoq 5.0
--rdoq-level 1 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 --pass 1
--slow-firstpass --stats v.stats --sar 1 --range full

--y4m -D 10 --fps 24000/1001 -p veryslow --open-gop --bframes 16 --b-pyramid --bitrate 2500 --rect
--amp --aq-mode 3 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd 1.6 --psy-rdoq 5.0
--rdoq-level 1 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 --pass 2
--stats v.stats --sar 1 --range full

Since x265 can only read raw YUV and Y4M, the source video is fed to it via [libav's] avconv tool, which pipes it into x265. The avconv command line for that looks as follows:

$ avconv -r 24000/1001 -i input.h264 -f yuv4mpegpipe -pix_fmt yuv420p -r 24000/1001 - 2>/dev/null

If you want to do something similar, but you don’t like avconv, you can use [ffmpeg] as a replacement; the options are exactly the same. Note that you should always specify the correct frame rate (-r) for both input and output, or the encoder’s bitrate setting will be applied incorrectly!
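Putting the two halves together, the complete pass-1 pipeline then looks roughly like this. This is just a sketch reusing the exact x265 options listed above; input.h264 is a placeholder file name, and the pass-1 video output is simply discarded, since only the stats file matters at that stage:

$ avconv -r 24000/1001 -i input.h264 -f yuv4mpegpipe -pix_fmt yuv420p -r 24000/1001 - 2>/dev/null | \
 x265 --y4m -D 10 --fps 24000/1001 -p veryslow --open-gop --bframes 16 --b-pyramid --bitrate 2500 \
 --rect --amp --aq-mode 3 --no-sao --qcomp 0.75 --no-strong-intra-smoothing --psy-rd 1.6 --psy-rdoq 5.0 \
 --rdoq-level 1 --tu-inter-depth 4 --tu-intra-depth 4 --ctu 32 --max-tu-size 16 --pass 1 \
 --slow-firstpass --stats v.stats --sar 1 --range full --output /dev/null -

Pass 2 is run the same way, just with --pass 2, without --slow-firstpass, and with a real output file (e.g. --output v-2500.h265).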

x264, on the other hand, was linked against libav directly, so it uses libav’s decoding capabilities without any workarounds.
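For completeness, the corresponding full x264 command lines would look something like the following sketch. Again, input.h264 and the output names are placeholders; pass 1 writes its video output to /dev/null and only keeps the stats file:

$ x264 --fps 24000/1001 --preset veryslow --tune animation --open-gop --b-adapt 2 --b-pyramid normal \
 -f -2:0 --bitrate 2500 --aq-mode 1 -p 1 --slow-firstpass --stats v.stats -t 2 --no-fast-pskip \
 --cqm flat --non-deterministic -o /dev/null input.h264
$ x264 --fps 24000/1001 --preset veryslow --tune animation --open-gop --b-adapt 2 --b-pyramid normal \
 -f -2:0 --bitrate 2500 --aq-mode 1 -p 2 --stats v.stats -t 2 --no-fast-pskip --cqm flat \
 --non-deterministic -o v-2500.264 input.h264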

Let’s compare:

“The Garden of Words” has a lot of rain. This is a central story element of the 46-minute movie, and it’s hard on any encoder, because a lot of stuff is moving on screen all the time. Let’s take a look at such a scene for our first side-by-side comparison. Each comparison is done in two rows: H.264/AVC first (including the source material), and below that H.265/HEVC, also including the source.

Let’s go:

Scene 1, H.264/AVC encoded by x264 0.148.x:

Scene 1, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

It has been said that x265 performs especially well at two things: very high resolutions (which we don’t have here) and low bitrates. And yep, it shows. When comparing the 1Mbit shots, it becomes clear pretty quickly that x265 manages to preserve more detail in the parts with lots of motion. x264 on the other hand starts to wash out the scene pretty severely, smearing out some raindrops, spray water and parts of the foliage. It’s also pretty bad around the outlines, but that’s true for both encoders; you can spot that easily with all the aliasing artifacts on the raindrops.

Moving up a notch, it becomes very hard to distinguish between the two. When zooming in you can still spot a few minor differences (note the kid’s umbrella, even if it’s not marked), but it’s quite negligible. Here I’m already starting to think x265 might not be worth it in all cases. There are still differences between the two 2.5Mbit shots and the original, however; see the red areas of the umbrella and the most low-contrast, dark parts of the foliage.

At 5Mbit, I really can’t see any difference anymore. Maybe the colors are a little off or something, but when seen in motion, distinguishing between the two and the original becomes virtually impossible. Given that we just threw a really difficult scene at x264 and x265, this should be a trend that continues throughout the whole test.

Now, even more rain:

Scene 2, H.264/AVC encoded by x264 0.148.x:

Scene 2, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

Now this is extreme at 1Mbit! Looking at H.264, the spray water on top of the cable can’t even be told apart from the cloud in the background anymore. Detail loss all over the scene is catastrophic in comparison to the original. Tons of raindrops are simply gone entirely, and the texture details on the tower and the angled brick wall of the house to the left? Almost completely washed out and smeared.

Now, let’s look at H.265 @ 1Mbit. The spray water is also pretty bad, but it’s amazing how much more detail was preserved overall. Sure, there are still parts of the raindrops missing, but it’s much, much closer to the original. We can now see details on the walls as well, even on the steeply angled one to the left. The only serious issue is the red light bleeding at the tower. There is very little red there in the original, so I’m not sure what happened there. x264 does this as well, but x265 is a bit worse.

At the next level, the differences are less pronounced again, but there is still a significant enough improvement when going from x264 to x265 at 2.5Mbit: the spray water on the cable becomes better defined, and more rain is preserved. Also, the textures on the walls are a tiny little bit more detailed and crisp. Once again though, x265 is bleeding too much red light at the tower.

Since 2.5Mbit is still noticeably short of the source, let’s look at 5Mbit briefly. x265 is able to preserve a tiny little bit more rain detail here, coming extremely close to the original. In motion, you can’t really see the difference however.

Now, let’s get steamy:

Scene 3, H.264/AVC encoded by x264 0.148.x:

Scene 3, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

1Mbit first again. Let me just say: it’s ugly. x264 pretty much messes up the steam coming from the iron; we get lots of block artifacts now. Some of the low-contrast patterns on the ironing board are smeared out pretty terribly. Also, the bokeh background partly shows block artifacts and banding. x265 produces quite a lot of banding here itself, but no blocks. Also, outlines and sharp contrasts are better defined, and the low-contrast part is done noticeably better.

At 2.5Mbit, the patterns repeat themselves. The steam is only slightly better with x265, outlines are slightly better defined, and the low-contrast patterns are slightly more visible. For some of the blurred parts, x265 seems to be a tiny little bit too prone to banding though; in a very few spots, x264 might be just that 1% better. Overall however, x265 wins this, even if it’s just for the better outlines.

At 5Mbit, you really need to zoom in and analyze very small details, e.g. around the outer border of the steam. Yes, x265 does better again. But you’d not really be able to notice this when watching.

How about we go cry a little bit:

Scene 4, H.264/AVC encoded by x264 0.148.x:

Scene 4, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

Cutting onions would be a classic fun part in a slice-of-life anime. Here, it’s just kitchen work. And quite a bad-looking one for H.264 at 1Mbit. The letters on the knife are partly lost, becoming unreadable. The onion parts that fly off are visibly worse than when encoded with x265 at the same bitrate. Also, x264 produced block artifacts in the blurred bokeh areas again that simply aren’t there with x265.

On the next level, the two come much closer to each other. However, x265 simply does the outlines better: fewer artifacts and sharper edges, just like with the writing on the knife’s blade. The issues with the bokeh are nearly gone. What’s left is a negligible amount of blocking for x264 and banding for x265. Not really noticeable however.

Finally, at 5Mbit, x265 shows those ever so slightly better-done outlines. But that’s about it; the rest looks nice for both, and pretty much identical to the source.

Now, please, dig in:

Scene 5, H.264/AVC encoded by x264 0.148.x:

Scene 5, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

Let’s keep this short: at 1Mbit, x264 does blocking and bad transitions/outlines. x265 does it better. Plain and simple.

At 2.5Mbit, x265 nearly reaches quality transparency when compared to the original, something x264 falls just a bit short of. While x265 does the outlines and the steam part quite like in the original frame, x264 rips the outlines apart a bit too much, and slight block artifacts can again be seen in the steam.

At 5Mbit, x264 still shows some blocking artifacts in a part that most lossy image/video compression algorithms traditionally suck at: the reds. While not true for all human beings, most eyes perceive much finer gradients in greens, then blues, and do worst with reds. In other words, our eyes’ sensitivity is distributed unequally across colors. So image and video codecs try to save bitrate in the reds first, because it’s supposedly less noticeable. To me subjectively, x265 achieves transparency here, meaning it looks just like the original. x264 doesn’t quite manage that. Close, but not entirely.

Next:

Scene 6, H.264/AVC encoded by x264 0.148.x:

Scene 6, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

This is a highly static scene with only a few moving parts, so there is some rain again, and some shadows cast by raindrops as well. For the static parts, incremental B frames really work wonders. Most detail is preserved by both encoders. What’s supposed to happen is happening: the encoders save bitrate where the human eye can’t easily tell, in the parts where stuff is moving around very quickly. That’s how we lose a lot of raindrop shadows and some drops as well. x264 seems to have trouble separating the scene into even smaller macroblocks though? Not sure if that’s the reason, but a lot of mesh detail for the basket on the balcony at the top right is lost – x265 does better there! Maybe this is because x264 couldn’t distinguish the moving drops from the static background so well anymore?

At 2.5Mbit, the scenes become almost indistinguishable. The more static content we have, the easier it gets of course, so the transparency threshold becomes lower. And if you ask me, both of them reach perfect quality at 5Mbit.

Let’s throw another hard one at them for the last round:

Scene 7, H.264/AVC encoded by x264 0.148.x:

Scene 7, H.265/HEVC encoded by x265 1.9+15-425b583f25db:

Enough rain already? Pffh! Here we have a lot of foliage and low contrast added to the mix. And it gets smeared a lot by x264: rain detail is lost and the fine details of the bushes turn into green mud, that’s how it goes. x265 also loses too much detail here (I mean, 1Mbit is really NOT much), but again, it fares quite a bit better.

At 2.5Mbit, both encoders do very well. Somehow, this scene doesn’t seem to penalize x264 that much at the medium level. You’d really need your magnifying glass to find the spots where x265 still does better, which surprises me a bit for this scene. And finally, at 5Mbit – if you ask me – visual transparency is reached for both x264 and x265.

Final thoughts:

Clearly, it’s true what a lot of people have been saying: x265 rocks at low bitrates, if configured correctly. But that still isn’t going to give me perfect quality or anything. Yeah, it sucks less – much less – than x264 in that department, but at the higher 2.5Mbit, where both start looking quite decent and x265 has just a slight edge, it becomes hard to justify using it, simply because it’s that much slower at decent settings.

Also, you need to take device compatibility into account. Sure, a powerful PC can always play the stuff, no matter if it’s running some UNIX, Linux, MacOS X or Windows. But how about video game consoles? Older TVs? That kind of thing. Most of those can only play H.264/AVC. Of course, if you’re only using your PC and you have a lot of time and electricity to burn, then hey – why not?

But I’ll have to think really long and really hard about whether I want to replace x264 with x265 at this given point in time. Overall, it might not be practical enough on my current hardware yet. Maybe I’d need an AVX/AVX2-capable processor, as x265 has tons of optimizations for those instruction set extensions. But I’m gonna stay on my Xeon X5690 for quite a while, so SSE 4.2 is the latest I have.

I’d say, if you can stand some quality degradation, then x265 might be the way to go, as it can give you much smaller file sizes at low bitrates while only degrading the picture slightly.

If you’re aiming for high bitrates and quality, it might not be worth it right now, at least for 1080p. It’s been said the tables turn once more when going up to 4K and UHD, but I haven’t tested that yet, as all my content – both Anime and live action movies – is “low resolution” *cough* 1080p or 720p.

Addendum:

Screenshots were taken using the latest stable mplayer 1.3.0 on Linux. Thank god they’re bundling it with ffmpeg now, which makes things much easier. This choice was made because mplayer can easily grab screenshots from specific spots in a video in an automated fashion. I used the framestep filter for this, like so:

$ mplayer ./TEST-HEVC-1mbit.mkv -nosound -vf framestep=24 \
 -vo png:z=9:outdir=./screenshots/1mbit/HEVC/:prefix=HEVC-1mbit-

This will decode the video file TEST-HEVC-1mbit.mkv and grab every 24th frame into a .png file with the maximum lossless compression supported by mplayer. The .png files are prefixed with a user-defined string and numbered, like HEVC-1mbit-00000001.png, HEVC-1mbit-00000002.png and so on, until the end of the file is reached.

To encode the full size screenshots to WebP, the most recent [libwebp-0.5.0], or rather one of its companion tools – cwebp – was used as follows:

$ cwebp -z 9 -lossless -q 100 -noalpha -mt ./input.png -o ./output.webp
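Since there are quite a few screenshots per bitrate, a small shell loop can take care of the rest. This is just a sketch, assuming the output directory from the mplayer example above; adjust the path to your own layout:

$ for f in ./screenshots/1mbit/HEVC/*.png; do \
 cwebp -z 9 -lossless -q 100 -noalpha -mt "$f" -o "${f%.png}.webp"; \
 done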

Now… somebody wanna grant me remote access to some quad socket Haswell-EX Xeon box for free?

No?

Meh… :roll: