Nov 14, 2014
 

MOD Music on FreeBSD 10 with xmms

As much as I am a WinAmp 2 lover on Microsoft Windows, I am an xmms lover on Linux and UNIX. And by that I mean xmms1, not 2. And not Audacious. Just good old xmms. Not only does it look like a WinAmp 2 clone, it even loads WinAmp 2/5 skins and supports a variety of rather obscure plugins, some of which I need and love (like for Commodore 64 SID tunes via libsidplay or libsidplay2 with awesome reSID engine support).

Recently I started playing around with a free laptop I got, which I wanted to use as my operating systems test bed. Currently I am evaluating – and probably keeping – FreeBSD 10 UNIX. Now it ain’t perfect, but it works pretty well after a bit of work. Always using Linux or XP is just a bit boring by now. ;)

One of the problems I had was with xmms. While the package is there, its built-in tracker module file (mod, 669, it, s3m etc.) support is broken. Play one of those and its xmms-mikmod plugin will cause a segmentation fault immediately when talking to a newer version of libmikmod. Also, recompiling multimedia/xmms from the ports tree produced a binary with the same fault. Then I found this guy [Jakob Steltner] posting about the problem on the ArchLinux bugtracker, [see here].

Based on his work, I created a ports-tree-compatible patch, patch-drv_xmms.c. Here is the source code:

--- Input/mikmod/drv_xmms.c.orig        2003-05-19 23:22:06.000000000 +0200
+++ Input/mikmod/drv_xmms.c     2012-11-16 18:52:41.264644767 +0100
@@ -117,6 +117,10 @@
        return VC_Init();
 }
 
+static void xmms_CommandLine(CHAR * commandLine)
+{
+}
+
 MDRIVER drv_xmms =
 {
        NULL,
@@ -126,7 +130,8 @@
 #if (LIBMIKMOD_VERSION > 0x030106)
         "xmms",
         NULL,
-#endif
+#endif
+       xmms_CommandLine, // Was missing
         xmms_IsThere,
        VC_SampleLoad,
        VC_SampleUnload

So that means recompiling xmms from source by yourself. But fear not, it’s relatively straightforward on FreeBSD 10.

Assuming that you unpacked your ports tree as root by running portsnap fetch and portsnap extract without altering anything, you will need to put the file in /usr/ports/multimedia/xmms/files/ and then cd to that directory in a terminal. Now run make && make install as root. The patch will be applied automatically; the whole procedure is sketched below.
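Condensed into commands, it looks roughly like this (a sketch assuming a stock ports tree and that the patch file sits in your current directory; run everything as root):

# Fetch and extract the ports tree, if you haven't already:
portsnap fetch extract
# Drop the patch where the ports framework picks it up automatically:
cp patch-drv_xmms.c /usr/ports/multimedia/xmms/files/
# Build and install the patched xmms:
cd /usr/ports/multimedia/xmms
make && make install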

Now one can once again run xmms and module files just work as they’re supposed to:

[Screenshot: xmms playing a MOD file on FreeBSD 10 via a patched xmms-mikmod (click to enlarge)]

The sad thing is, soon xmms will no longer be with us, at least on FreeBSD. It’s now considered abandonware, as all development has ceased. The xmms port on FreeBSD doesn’t even have a maintainer anymore and it’s [scheduled for deletion] from the ports tree together with all its plugins from ports/audio and ports/multimedia. I just wish I could speak C to some degree and work on that myself. But well…

Seems my favorite audio player is finally dying, but as with all the other old software I like and consider superior to modern alternatives, I’m gonna keep it alive for as long as possible!

You can download Jakob’s patch, which I adapted for the FreeBSD ports tree, right here:

Have fun! :)

Oct 22, 2014
 

XIN.at has been running an IRC chat server for some time now, but the problem always lies with people needing some client software to use it, like X-Chat or Nettalk or whatever.

People usually just don’t want to install yet another chat client software, no matter how old and well-established IRC itself may be. Alternatively, they can use some other untrusted web interface to connect to either the plain text [irc://www.xin.at:6666] or the encrypted [irc+ssl://www.xin.at:6697] server via a browser, but this isn’t optimal either. Since JavaScript cannot open TCP sockets on its own, and hence cannot connect to an IRC server directly, there are only two kinds of solutions:

  • Purely client-based, as a Java applet or Adobe Flash applet, neither of which are very good options.
  • JavaScript client + server backend for handling the actual communication with the IRC server.
    • Server backends exist in JavaScript/Node.js, Perl, Python, PHP etc.

Since I cannot run [Node.js] and [cgi:irc] is unportable due to its reliance on UNIX sockets, only Python and PHP remained. As PHP was easier for me, I tried the old [WebChat2] software developed by Chris Chabot for this. To achieve connection-oriented encryption security, I wrapped SSL/TLS around the otherwise unencrypted PHP socket server of WebChat2. You can achieve this with cross-platform software like [stunnel], which can essentially wrap SSL around almost every server’s connection (minus the complex FTP protocol maybe); a minimal configuration sketch follows below. While WebChat2’s back end is based on PHP, the front end uses JavaScript/Comet. This is what it looks like:
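For reference, a minimal stunnel setup for this kind of thing might look like the following sketch. The port numbers, paths and certificate file are assumptions for illustration, not the actual XIN.at configuration: stunnel listens for TLS connections and forwards the decrypted traffic to the plain-text PHP socket server running locally.

# Write a minimal stunnel configuration (paths/ports are illustrative):
cat > /etc/stunnel/webchat.conf <<'EOF'
; server certificate + private key in PEM format
cert = /etc/stunnel/webchat.pem

[webchat]
; TLS port exposed to clients
accept = 7778
; plain-text WebChat2 socket server on localhost
connect = 127.0.0.1:7777
EOF
# Start stunnel with that configuration:
stunnel /etc/stunnel/webchat.conf

The PHP back end itself stays untouched this way; clients only ever talk to the encrypted port.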

So that should do away with the “I don’t wanna install some chat client software” problem, especially when considering that most people these days don’t even know what Internet Relay Chat is anymore. ;) It also allows anonymous visitors on this web log to contact me directly, while allowing for a more tap-proof conversation when compared with what typical commercial solutions would give you (think WhatsApp, Skype and the likes). Well, it’s actually not more tap-proof considering the server operator can still read all communication at will, but I would like to believe that I am a more trustworthy server operator than certain big corporations. ;)

Oh, and if you finally do find it in yourself to use some good client software, check out [XChat] on Linux/UNIX and its fork [HexChat] on Windows, or [LimeChat] on MacOS X. There are mobile clients too, like for Android ([AndroIRC], [AndChat]), iOS ([SIRCL], [TurboIRC]), Windows Phone 8 ([IRC Free], [IRC Chat]), Symbian 9.x S60 ([mIRGGI]) and others.

So, all made easy now, whether client software or just web browser! Ah and before I forget it, here’s the link of course:

Edit: Currently, only the following browsers are known to work with the chat (older versions may sometimes work, but are untested):

  • Mozilla FireFox 31+
  • Chromium (incl. Chrome/SRWare Iron) 30+
  • Opera 25+
  • Apple Safari 5.1.7+
  • KDE Konqueror 4.3.4+

The following browsers are known to either completely break or to make the interface practically unusable:

  • Internet Explorer <=11
  • Opera <=12.17
Sep 23, 2014
 

At work I usually have to burn a ton of heavily modified Knoppix CDs for our lectures every year or so. The Knoppix distribution itself is built by me and a colleague to get a highly secure, read-only, server-controlled environment for exams and lectures. Now, usually I burn on both a Windows box with Ahead [Nero] and on Linux with the KDE tool [K3B] (despite being a Gnome 2 user), both GUI tools. My Windows box had 2 burners, my Linux box one. To speed things up and increase disc quality at the same time, the idea was to plug more burners into the machines and burn each individual disc slower, but in parallel.

I was shocked to learn that K3B can actually not burn to multiple burners at once! I thought I was just being blind, stumbling through the GUI like an idiot, but it’s actually really not there. Nero on the other hand managed to do this for what I believe is already the better part of a decade!

True disc burning stations are just too expensive, like 500€ for the smaller ones instead of the 80-120€ I had to spend on a bunch of drives, so what now? Was I building this for nothing?

[Photo: Poor man’s disc station. Also a shitty photograph, my apologies for that, but I had no real camera available at work.]

Well, where there is a shell, there’s a way, right? Being the lazy ass that I am, I was always reluctant to actually use the backend tools of K3B on the command line myself. CD/DVD burning was something I had just always done with a GUI. But now was the time to script that stuff myself, and for simplicity’s sake I just used bash. In addition to the shell, the following core tools were used:

  • cut
  • grep
  • mount
  • sudo (For a dismount operation, might require editing /etc/sudoers)

Also, the following additional tools were used (most Linux distributions should have them, conservative RedHat derivatives like CentOS can get the stuff from [EPEL]):

  • [eject] (eject and retract drive trays)
  • [sdparm] (read SATA device information)
  • sha512sum (produce and compare high-quality checksums)
  • wodim (burn optical discs)

I know there are already scripts for this purpose, but I just wanted to do this myself. Might not be perfect, or even good, but here we go. The work(-in-progress) is divided into three scripts. The first one is just a helper script generating a set of checksum files from a master source (image file or disc) that you want to burn to multiple discs later on, I call it create-checksumfiles.sh. We need one file for each burner device node later, because sha512sum needs that to verify freshly burned discs, so that’s why this exists:

#!/bin/bash
 
wrongpath=1 # Path for the source/master image is set to invalid in the
            # beginning.
 
# Getting path to the master CD or image file from the user. This will be
# used to generate the checksum for later use by multiburn.sh
until [ $wrongpath -eq 0 ]
do
  echo -e "Please enter the file name of the master image or device"
  echo -e "(if it's a physical disc) to create our checksum. Please"
  echo -e 'provide a full path always!'
  echo -e "e.g.: /home/myuser/isos/master.iso"
  echo -e "or"
  echo -e "/dev/sr0\n"
  read -p "> " -e master
 
  if [ -b $master -o -f $master ] && [ -n "$master" ]; then
    wrongpath=0 # If device or file exists, all ok: Break this loop.
  else
    echo -e "\nI can find neither a file nor a device called $master.\n"
  fi
done
 
echo -e "\nComputing SHA512 checksum (may take a few minutes)...\n"
 
checksum=`sha512sum $master | cut -d' ' -f1` # Computing checksum.
 
# Getting device node name prefix of the users' CD/DVD burners from the
# user.
echo -e "Now please enter the device node prefix of your disc burners."
echo -e "e.g.: \"/dev/sr\" if you have burners called /dev/sr1, /dev/sr2,"
echo -e "etc."
read -p "> " -e devnode
 
# Getting number of burners in the system from the user.
echo -e "\nNow enter the total number of attached physical CD/DVD burners."
read -p "> " -e burners
 
((burners--)) # Decrementing by 1. E.g. 5 burners means 0..4, not 1..5!
 
echo -e "\nDone, creating the following files with the following contents"
echo -e "for later use by the multiburner for disc verification:"
 
# Creating the per-burner checksum files for later use by multiburn.sh.
for ((i=0;i<=$burners;i++))
do
  echo -e " * sum$i.txt: $checksum $devnode$i"
  echo "$checksum $devnode$i" > sum$i.txt
done
 
echo -e ""

As you can see, it gets its information from the user interactively on the shell. It asks the user where the master medium to checksum is to be found, what the user’s burner / optical drive devices are called, and how many of them there are in the system. When done, it’ll generate a checksum file for each burner device, called e.g. sum0.txt, sum1.txt, … sum<n>.txt; an example of what such a file ends up containing is shown below.
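To illustrate the file format: with three burners (/dev/sr0 through /dev/sr2) and a master image whose SHA-512 hash I’ll abbreviate as <sha512-hash-of-master> here, each generated file contains a single line pairing that hash with one burner device. This is exactly what sha512sum -c later reads to verify the disc sitting in that particular drive:

# sum0.txt
<sha512-hash-of-master> /dev/sr0
# sum1.txt
<sha512-hash-of-master> /dev/sr1
# sum2.txt
<sha512-hash-of-master> /dev/sr2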

Now, to burn and verify media in a parallel fashion, I’m using an old concept I have used before. There are two more scripts: one is the controller/launcher, which will spawn an arbitrary number of instances of the second script, which I call a worker. First the controller script, here called multiburn.sh:

#!/bin/bash
 
if [ $# -eq 0 ]; then
  echo -e "\nPlease specify the number of rounds you want to use for burning."
  echo -e "Each round produces a set of CDs determined by the number of"
  echo -e "burners specified in $0."
  echo -e "\ne.g.: ./multiburn.sh 3\n"
  exit
fi
 
#@========================@
#| User-configurable part:|
#@========================@
 
# Path that the image resides in.
prefix="/home/knoppix/"
 
# Image to burn to discs.
image="knoppix-2014-09.iso"
 
# Number of rounds are specified via command line parameter.
copies=$1
 
# Number of available /dev/sr* devices to be used, starting
# with and including /dev/sr0 always.
burners=3
 
# Device node name used on your Linux system, like "/dev/sr" for burners
# called /dev/sr0, /dev/sr1, etc.
devnode="/dev/sr"
 
# Number of blocks per complete disc. You NEED to specify this properly!
# Failing to do so will break the script. You can read the block count 
# from a burnt master disc by running e.g. 
# ´sdparm --command=capacity /dev/sr*´ on it.
blocks=340000
 
# Burning speed in factors. For CDs, 1 = 150KiB/s, 48x = 7.2MiB/s, etc.
speed=32
 
#@===========================@
#|NON user-configurable part:|
#@===========================@
 
# Checking whether all required tools are present first:
# Checking for eject:
if [ ! `which eject 2>/dev/null` ]; then
  echo -e "\e[0;33msdparm not found. $0 cannot operate without sdparm, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sdparm:
if [ ! `which sdparm 2>/dev/null` ]; then
  echo -e "\e[0;33msdparm not found. $0 cannot operate without sdparm, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sha512sum:
if [ ! `which sha512sum 2>/dev/null` ]; then
  echo -e "\e[0;33msha512sum not found. $0 cannot operate without sha512sum, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sudo:
if [ ! `which sudo 2>/dev/null` ]; then
  echo -e "\e[0;33msudo not found. $0 cannot operate without sudo, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for wodim:
if [ ! `which wodim 2>/dev/null` ]; then
  echo -e "\e[0;33mwodim not found. $0 cannot operate without wodim, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m\n"
  exit
fi
 
((burners--)) # Reducing number of burners by one as we also have a burner "0".
 
# Initial burner ejection:
echo -e "\nEjecting trays of all burners...\n"
for ((g=0;g<=$burners;g++))
do
  eject $devnode$g &
done
wait
 
# Ask user for confirmation to start the burning session.
echo -e "Burner trays ejected, please insert the discs and"
echo -e "press any key to start.\n"
read -n1 -s # Wait for key press.
 
# Retract trays on first round. Waiting for disc will be done in
# the worker script afterwards.
for ((l=0;l<=$burners;l++))
do
  eject -t $devnode$l &
done
 
for ((i=1;i<=$copies;i++)) # Iterating through burning rounds.
do
  for ((h=0;h<=$burners;h++)) # Iterating through all burners per round.
  do
    echo -e "Burning to $devnode$h, round $i."
    # Burn image to burners in the background:
    ./burn-and-check-worker.sh $h $prefix$image $blocks $i $speed $devnode &
  done
  wait # Wait for background processes to terminate.
  ((j=$i+1));
  if [ $j -le $copies ]; then
    # Ask user for confirmation to start next round:
    echo -e "\nRemove discs and place new discs in the drives, then"
    echo -e "press a key for the next round #$j."
    read -n1 -s # Wait for key press.
    for ((k=0;k<=$burners;k++))
    do
      eject -t $devnode$k &
    done
    wait
  else
    # Ask user for confirmation to terminate script after last round.
    echo -e "\n$i rounds done, remove discs and press a key for termination."
    echo -e "Trays will close automatically."
    read -n1 -s # Wait for key press.
    for ((k=0;k<=$burners;k++))
    do
      eject -t $devnode$k & # Pull remaining empty trays back in.
    done
    wait
  fi
done

This one takes one parameter on the command line, which defines the number of “rounds”. Since I have to burn a lot of identical discs this makes my life easier. If you have 5 burners and you ask the script to go for 5 rounds, that would mean you get 5 × 5 = 25 discs, if all goes well. It also needs to know the size of the medium in blocks for a later phase. For now you have to specify that within the script. The documentation inside shows you how to get that number, basically by checking a physical master disc with sdparm --command=capacity, as sketched below.
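For example (device name and block count are just placeholders here), reading the block count off an already burnt master disc could look like this; the number after “blocks:” is what goes into the blocks variable of multiburn.sh:

# Read the block count of a burnt master disc in the first drive:
sdparm --command=capacity /dev/sr0 | grep blocks:
blocks: 340000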

Other things you need to specify are the path to the image, the image file’s name, the device node name prefix, and the burning speed in factor notation. Also, of course, the number of physical burners available in the system. When run, it’ll eject all trays, prompt the user to put in discs, and launch the burning & checksumming workers in parallel.

The controller script will wait for all background workers within a round to terminate, and only then prompt the user to remove and replace all discs with new blank media. If this is the last round already, it’ll prompt the user to remove the last media set and will then retract all trays by itself at the press of any key. All tray ejection and retraction is done automatically: with all your drive trays still empty and closed, you launch the script, it ejects all drive trays for you, and retracts them again after a keypress signals the script that all trays have been loaded, and so on.

Let’s take a look at the worker script, which actually does the burning & verifying. I call this one burn-and-check-worker.sh:

#!/bin/bash
 
burner=$1   # Burner number for this process.
image=$2    # Image file to burn.
blocks=$3   # Image size in blocks.
round=$4    # Current round (purely to show the info to the user).
speed=$5    # Burning speed.
devnode=$6  # Device node prefix (devnode+burner = burner device).
bwait=0     # Timeout variable for "blank media ready?" waiting loop.
mwait=0     # Timeout variable for automount waiting loop.
swait=0     # Timeout variable for "disc ready?" waiting loop.
m=0         # Boolean indicating automount failure.
 
echo -e "Now burning $image to $devnode$burner, round $round."
 
# The following code will check whether the drive has a blank medium
# loaded ready for writing. Otherwise, the burning might be started too
# early when using drives with slow disc access.
until [ "`sdparm --command=capacity $devnode$burner | grep blocks:\ 1`" ]
do
  ((bwait++))
  if [ $bwait -gt 30 ]; then # Abort if blank disc cannot be detected for 30 seconds.
    echo -e "\n\e[0;31mFAILURE, blank media did not become ready. Ejecting and aborting this thread..."
    echo -e "(Was trying to burn to $devnode$burner in round $round,"
    echo -e "failed to detect any blank medium in the drive.)\e[0m"
    eject $devnode$burner
    exit
  fi
  sleep 1 # Sleep 1 second before next check.
done
 
wodim -dao speed=$speed dev=$devnode$burner $image # Burning image.
 
# Notify user if burning failed.
if [[ $? != 0 ]]; then
  echo -e "\n\e[0;31mFAILURE while burning $image to $devnode$burner, burning process ran into trouble."
  echo -e "Ejecting and aborting this thread.\e[0m\n"
  eject $devnode$burner
  exit
fi
 
# The following code will eject and reload the disc to clear the device
# status and then wait for the drive to become ready and its disc to
# become readable (checking the discs block count as output by sdparm).
eject $devnode$burner && eject -t $devnode$burner
until [ "`sdparm --command=capacity $devnode$burner | grep $blocks`" = "blocks: $blocks" ]
do
  ((swait++))
  if [ $swait -gt 30 ]; then # Abort if disc cannot be redetected for 30 seconds.
    echo -e "\n\e[0;31mFAILURE, device failed to become ready. Aborting this thread..."
    echo -e "(Was trying to access $devnode$burner in round $round,"
    echo -e "failed to re-read medium for 30 seconds after retraction.)\e[0m\n."
    exit
  fi
  sleep 1 # Sleep 1 second before next check to avoid unnecessary load.
done
 
# The next part is only necessary if your system auto-mounts optical media.
# This is usually the case, but if your system doesn't do this, you need to
# comment the next block out. This will otherwise wait for the disc to
# become mounted. We need to dismount afterwards for proper checksumming.
until [ -n "`mount | grep $devnode$burner`" ]
do
  ((mwait++))
  if [ $mwait -gt 30 ]; then # Warn user that disc was not automounted.
    echo -e "\n\e[0;33mWARNING, disc did not automount as expected."
    echo -e "Attempting to carry on..."
    echo -e "(Was waiting for disc on $devnode$burner to automount in"
    echo -e "round $round for 30 seconds.)\e[0m\n."
    m=1
    break
  fi
  sleep 1 # Sleep 1 second before next check to avoid unnecessary load.
done
if [ ! $m = 1 ]; then # Only need to dismount if disc was automounted.
  sleep 1 # Give the mounter a bit of time to lose the "busy" state.
  sudo umount $devnode$burner # Dismount burner as root/superuser.
fi
 
# On to the checksumming.
echo -e "Now comparing checksums for $devnode$burner, round $round."
sha512sum -c sum$burner.txt # Comparing checksums.
if [[ $? != 0 ]]; then # If checksumming produced errors, notify user. 
  echo -e "\n\e[0;31mFAILURE while burning $image to $devnode$burner, checksum mismatch.\e[0m\n"
fi
 
eject $devnode$burner # Ejecting disc after completion.

So as you can probably see, this is not very polished, scripts aren’t using configuration files yet (would be a nice to have), and it’s still a bit chaotic when it comes to actual usability and smoothness. It does work quite well however, with the device/disc readiness checking as well as the anti-automount workaround having been the major challenges (now I know why K3B ejects the disc before starting its checksumming, it’s simply impossible to read from the disc after burning finishes).

When run, it looks like this (user names have been removed and paths altered for the screenshot):

[Screenshot: multiburn.sh at work. I was lucky enough to hit a bad disc, so we can see the checksumming at work here. The disc actually became unreadable near its end. Verification is really important for reliable disc deployment.]

When using a poor man’s disc burning station like this, I would actually recommend putting stickers on the trays like I did. That way, when a verification fails, you’ll immediately know which disc to throw into the garbage bin.

This could still use a lot of polishing, and it’s quite sad that the “big” GUI tools can’t do parallel burning, but I think I can now make do. Oh, and I actually also tried Gnome’s “brasero” burning tool, and that one is far too minimalistic and also cannot burn to multiple devices at the same time. There may be other GUI fatsos that can do it, but I didn’t want to try and get any of those installed on my older CentOS 6 Linux, so I just did it the UNIX way, even if not very elegantly. ;)

Maybe this can help someone out there, even though I think there might be better scripts than mine to get it done, but still. Otherwise, it’s just documentation for myself again. :)

Edit: Updated the scripts to implement a proper blank media detection to avoid burning starting prematurely in rare cases. In addition to that, I added some code to detect burning errors (where the burning process itself would fail) and notify the user about it. Also applied some cosmetic changes.

Edit 2: Added tool detection to multiburn.sh, and removed redundant color codes in warning & error messages in burn-and-check-worker.sh.

Article logo based on the works of Lorian and Marcin Sochacki, “DVD.png” licensed under the CC BY-SA 3.0.

Sep 6, 2014
 

Game compatibility is generally becoming a major issue on Windows XP and XP x64, and I’m not even talking about Direct3D 10/11 here. Microsoft’s own software development kits and development environments (Visual Studio) come preconfigured in a pretty “anti-XP” way these days, even if you intend to just build Direct3D 9 or OpenGL 4 applications with them.

There are examples where even Indie developers building Direct3D 9.0c games refuse to deal with the situation in any way other than “Please go install a new OS”, Planetary Annihilation being the prime example. Not so for Grim Dawn though, a project by a former Titan Quest developer which I [helped fund] on Kickstarter a while back. In their recent build numbered B20, an issue arose that seemed to be linked to XP exclusively. See this:

[Screenshot: Grim Dawn B20 if_indextoname() bug, in this case on a German 32-bit Windows XP. Looks similar on XP x64 though. © by [Hevel].]

More information can be seen in the corresponding Grim Dawn [forum thread], where I and others reported the issue, determining in the process that it occurred on XP only. That thread actually covers two issues; just focus on the if_indextoname() one. The function is also documented in [Microsoft’s MSDN library].

The function seems to be related to DNS name resolution and is a part of the Windows IP Helper API on Windows Vista and newer. if_indextoname() does, however, not exist on any NT 5.x system, which means Windows 2000, XP and Server 2003 (and thus also XP x64), and there is no fallback DLL. My assumption is that this happened because of the newly added multiplayer netcode in the game.

Now the interesting part: After me and a few other XP users reported the issue starting on the 30th of August, it took the developers only 3 days to roll out a hotfix via Steam, and all was good again! I believe nowadays you can judge developers by how well they support niche systems, and in this case support was stellar. It may also have something to do with the Grim Dawn developers actively participating in their forums of course. That’s also great, as you can interact with them directly! No in-between publisher customer support center crap, but actually people who know their stuff, ’cause they’re the ones building it!

So I’d like to say a big “Thank you, and keep up the good work!” here!

Jul 29, 2014
 

I thought it impossible, but it is indeed happening. As reported [here] (in German), the Austrian [Supreme Court of Justice] (not the same thing as the constitutional court) ruled that in case of massive copyright infringements, enforcement of a nation-wide ban of certain servers is justifiable. If such a ban is decided upon, every Internet provider receives a written court order and of course has to obey such an order immediately and put in place effective mechanisms to block access to a certain host – nation-wide! As far as I know that currently means an IP ban, not a host name ban.

The first round of active bans starts on the 1st of August 2014, when [The Piratebay], [Kinox.to] and [Movie4k.to] will effectively have to be banned in Austria. This resembles Internet bans as seen behind “the great firewall” of China or, in more extreme cases, in North Korea. Naturally, this is not good, as it represents a growing encroachment on our Internet freedoms.

I remember when I was in China, I kept an SSH2 server and an HTTP proxy open at home, so I could tunnel home through my SSH2 connection and then use the local HTTP proxy via that encrypted connection to access all of the Internet, because I knew access would be free and untampered with here. Now it seems people are getting ready to do the same thing and use encrypted virtual private networks (VPNs) or the Tor network to reach such sites using foreign exit nodes. (A sketch of such an SSH tunnel is shown below.)
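For the record, that kind of tunnel is nothing exotic; a minimal sketch (host name, user and port numbers are made up for illustration) looks like this:

# Forward local port 8080 through an encrypted SSH connection to an
# HTTP proxy listening on port 3128 on the machine at home:
ssh -N -L 8080:localhost:3128 user@home.example.org
# Then point the browser's HTTP proxy setting at localhost:8080.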

This is madness!

Austria was supposed to be a (relatively) free country where Internet services cannot be banned simply because of what they could potentially be used for.

Naturally, Internet Service Providers – even the largest ones – have protested sharply against this and even sued to have this insanity thwarted, but unfortunately they lost their case. It seems that despite several victories achieved on the front lines of a free Internet, we’re heading into stormy waters now. ISPs also said that they’d be wrongfully pushed into the role of judges, because it would be up to them to decide whether a web site’s principal purpose is copyright infringement, thus justifying the ban (that’s the weird part).

Naturally, driving users towards using anonymization networks and VPNs only serves to further criminalize the use of those services too, giving them a bad name and making the problem worse.

Soon, we will be witnessing the first stones being laid to form the foundation of a Great Firewall of Austria. The 1st of August will not be a good day, not at all.

Update: It seems that the matter is being discussed and renegotiated at the moment. As a result, the requested bans have been pushed back to an undefined point in time. So for now, all three sites I mentioned remain reachable within Austria. I will keep you updated as soon as any news about this surface.

Update 2, 2014-10-08: And here we go, the VAP (“Verein für Anti-Piraterie”, or anti-piracy association) did it. There are now DNS blockades in place for the domains kino.to and also movie4k.to. Querying any DNS server of the providers A1, Drei, Tele2 or UPC for those domain names will result in the IP address “0.0.0.0”, thus rendering the web sites inaccessible for any “normal” user (a quick way to check this is sketched below). The VAP has now even been asking for an IP ban, which could easily affect multi-service machines – think email servers using the same IP – and also virtual hosts, where multiple websites/domains are hosted on a single machine, or any high-availability cluster of machines that does not use DNS-level clustering (well, you wouldn’t be able to resolve the DNS name anymore anyway).
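Such a DNS-level blockade is easy to verify from an affected connection; a rough sketch (the second resolver address is just an example, Google’s public one):

# Ask the provider's default resolver, then an unfiltered third-party one:
dig +short movie4k.to               # uses the provider's resolver
dig +short movie4k.to @8.8.8.8      # bypasses the provider's DNS blockade

If the first query returns 0.0.0.0 while the second returns a real address, you are looking at exactly the kind of blockade described above.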

Users have reacted by simply using other, free DNS servers on the web, and the site operators have reacted by using alternate top-level domains, like movie4k.tv for instance. It seems the war is on. Providers like UPC have stated in public interviews that this process is ethically questionable, as people in power may soon learn what kind of tool such censorship could be for them – potentially eliminating criticism or any publication that they’d rather see gone.

I would like to add – once again – that I find it highly disturbing that a supposedly free country like Austria would implement measures reminiscent of things that happen in Turkey, China or North Korea…

Oh and The Pirate Bay is next it seems.

Jul 17, 2014
 

The first release of XViewer is now available, providing TK-IP101 users with a way to still manage their installations using modern Java versions and operating systems without any blocker bugs and crashes. I have created a static page about it [here], including downloads and the statements required by TRENDnet. You can also see it on the top right of this weblog. This is the first fruit of TRENDnet allowing me to release my modified version of their original KViewer under the GPLv3 license.

As requested, all traces of TRENDnet and their TK-IP101 box have been removed from the code (not that there were many anyway, as the code was reverse-engineered from the byte code) on top of the rename to XViewer. In time, I will also provide my own documentation for the tool.

Since I am no Java developer, you shouldn’t expect any miracles though. Also, if anyone would be willing to fork it into yet another, even better version of the program, you’re of course welcome to do so!

Happy remote monitoring & managing to you all! :)

Edit: Proper documentation for SSL certificate creation using a modern version of [XCA] (The X certificate and key management tool) and about setting up and using XViewer & XImpcert has now also been made [available]!

Jul 16, 2014
 

In my [last post] I talked about the older TRENDnet TK-IP101 KVM-over-IP box I got to manage my server over the network, even in conditions where the server itself is no longer reachable (kernel crash, BIOS, etc.).

I also stated that the client software to access the box is in a rather desolate state, which led me to the extreme step of decompiling the Java-based Viewer developed by TRENDnet called KViewer.jar and its companion tool for SSL certificate imports, Impcert.jar.

Usually, software decompilation is a rather shady business, but I did this as a TRENDnet support representative could not help me out any further. After reverse-engineering the software, making it compatible with modern Java Runtime environments and fixing a blocker bug in the crypto code, I sent my code and the binary back to TRENDnet for evaluation, asking them to publish the fixed versions. They refused, stating that the product was end-of-life.

In a second attempt, I asked the guy for permission to release my version of KViewer including the source code and also asked which license I could use (GPL? BSD? MIT?). To my enormous surprise, the support representative conferred with the persons in charge, and told me that it had been decided to grant me permission to release KViewer under the GNU General Public License (GPL), as long as all mention of TRENDnet and related products are removed from the source code and program.

To further distinguish the new program from the original, I renamed it to “XViewer”, and its companion tool to “XImpcert”, as an homage to my server, XIN.at.

[Screenshot: The former KViewer by TRENDnet, which works up to Java 1.6u27]

[Screenshot: XViewer, usable on JRE 1.7 and 1.8]

Now, I am no Java developer, I don’t know ANYthing about Java, but what I did manage to do is fix all errors and warnings currently reported for the source code by the Eclipse Luna development environment and the Java Development Kit 1.7u60. While my version no longer supports Java 1.6, it does run fine on Java 1.7u60 and 1.8u5, tested on Windows XP Professional x64 Edition and CentOS 6.5 Linux x86_64. A window-closing bug has been fixed by my friend Cosmonate, and I myself got rid of a few more. In addition to that, new buttons have been added for an embedded “About” window and an embedded GPLv3 license, as suggested by TRENDnet.

On top of that, I hereby state that I am not affiliated with TRENDnet and that TRENDnet of course cannot be held liable for any damage or any problems resulting from the use of the modified Java viewer now known as XViewer or its companion tool XImpcert. That shall be said even before the release, as suggested to TRENDnet by myself and subsequently confirmed to be a statement required by the company.

In the very near future, I will create a dedicated site about XViewer on this weblog, maybe tomorrow or the day after tomorrow.

Oh and of course: Thanks fly out to Albert from TRENDnet and the people there who decided to grant me permission to re-release their viewer under the GPL! This is not something that we can usually take for granted, so kudos to TRENDnet for that one!

Jul 11, 2014
 

Attention please: This article contains some pretty negative connotations about the software shipped with the TRENDnet TK-IP101 KVM-over-IP product. While I will not remove what I have written, I have to say that TRENDnet went to great lengths to support me in getting things done, including allowing me to decompile and re-release their Java software in a fixed form under the free GNU General Public License. Please [read this] to learn more. This is extremely nice, and so it shall be stated before you read anything bad about this product, so you can see things in perspective! And no, TRENDnet has not asked me to post this paragraph, those are my own words entirely.

I thought that being able to manage my server out-of-band would be a good idea. It does sound good, right? Being able to remotely control it even if the kernel has crashed, and being able to remotely access everything down to the BIOS level. A job for a KVM-over-IP switch. So I got this slightly old [TK-IP101] from TRENDnet. Turns out that wasn’t the smartest move, and it’s actually a 400€ piece of hardware. The box itself seems pretty ok at first, connecting to your KVM switch fabric or a single server via PS/2, USB and VGA. Plus, you can hook up a local PS/2 keyboard and mouse too. Offering what was supposed to be highly secure SSL PKI authentication via server+client certificates, so that only clients with the proper certificate may connect, plus a web interface, this sounded really good!

[Photo: TRENDnet TK-IP101]

It all breaks down when it comes to the software though. First of all, the guide for certificate creation that is supposed to be found on the CD that comes with the box is just not there. Also, the XCA software TRENDnet suggests one should use was missing as well. Not good. Luckily, the software is open source and can be downloaded from the [XCA SourceForge project]. It’s basically a graphical OpenSSL front end. Create a PEM-encoded root certificate, a PEM-encoded server certificate and a PKCS#12 client certificate, the latter signed by the root cert. So much for that. Oh, and I uploaded that TRENDnet XCA guide for you in case it’s missing on your CD too. It’s a bit different for the newer version of XCA, but just keep in mind to create keys beforehand and to use certificate requests instead of certificates. You then need to sign the requests with the root certificate. With that information plus the guide (and, if you prefer the command line, the rough OpenSSL sketch below) you should be able to manage certificate creation:
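For those who would rather skip the XCA GUI, the same kind of certificate set can also be produced with plain OpenSSL. This is only an illustrative sketch with made-up file names and subject fields; I haven’t verified that the TK-IP101 accepts certificates created exactly this way, so treat the XCA guide as the authoritative route:

# 1. Root CA: private key plus self-signed PEM certificate.
openssl genrsa -out root.key 2048
openssl req -new -x509 -key root.key -out root.pem -days 3650 -subj "/CN=KVM-Root-CA"

# 2. Server: key and certificate request, signed by the root CA (PEM).
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=kvm-server"
openssl x509 -req -in server.csr -CA root.pem -CAkey root.key -CAcreateserial -out server.pem -days 3650

# 3. Client: key and request, signed by the root CA, then bundled as PKCS#12.
openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.csr -subj "/CN=kvm-client"
openssl x509 -req -in client.csr -CA root.pem -CAkey root.key -CAcreateserial -out client.pem -days 3650
openssl pkcs12 -export -in client.pem -inkey client.key -certfile root.pem -out client.p12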

But it doesn’t end there. First I tried the Windows based viewer utility that comes with its own certificate import tool. Import works, but the tool will not do client+server authentication. What it WILL do before terminating itself is this:

[Screenshot: TK-IP101 IPViewer.exe bug]

I really tried to fix this. I even ran it on Linux with Wine just to do an strace on it, looking for failing open() calls. Nothing. So I thought… why not try the second option, the Java viewer that goes by the name of KViewer.jar? Usually I don’t install Java, but why not try it out with Oracle’s Java 1.7u60, eh? Well:

So yeah. What the hell happened there? It took me days to determine the exact cause, but I’ll cut to the chase: With Java 1.6u29, Oracle introduced multiple changes in the way SSL/TLS worked, also due to the disclosure of the BEAST vulnerability. When testing, I found that the software would work fine when run with JRE 1.6u27, but not with later versions. Since Java code is pretty easily decompiled (thanks fly out to Martin A. for pointing that out) and the Viewer just came as a JAR file, I thought I’d embark on the adventure of decompiling Java code using the [Java Decompiler]:

[Screenshot: Java Decompiler decompiling KViewer.jar’s classes]

This results in surprisingly readable code. That is, if you’re into Java. Which I am not. But yeah. The Java Decompiler is pretty convenient as it allows you to decompile all classes within a JAR and to extract all other resources along with the generated *.java files. And those I imported into a Java development environment I knew, Eclipse Luna.

[Screenshot: Eclipse Luna]

Eclipse Luna (using a JDK 7u60) immediately complained about 15 or 16 errors and about 60 warnings. Mostly that was missing primitive declarations and other smaller things that even I managed to fix; I even got rid of the warnings. But the SSL bug persisted in my Java 7 build just as it did before. See the following two traces of the SSL handshake, one working ok on JRE 1.6u27, and one broken on JRE 1.7u60:

So first I got some ideas from stuff [posted here at Oracle], and added the following two system properties in varying combinations directly in the Main class of KViewer.java:

public static void main(String[] paramArrayOfString)
{
  /* Added by the GAT from http://wp.xin.at                        */
  /* This enables insecure TLS renegotiation as per CVE-2009-3555  */
  /* in interoperable mode.                                        */
  java.lang.System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "false");
  java.lang.System.setProperty("sun.security.ssl.allowLegacyHelloMessages", "true");
  /* ------------------------------------------------------------- */
  KViewer localKViewer = new KViewer();
  localKViewer.mainArgs = paramArrayOfString;
  localKViewer.init();
}

This didn’t really do any good though, especially since “interoperable” mode should work anyway and is being set as the default. But today I found [this information on an IBM site]!

It seems that Oracle fixed the BEAST vulnerability in Java 1.6u29 amongst other things. They seem to have done this by disallowing renegotiations for affected implementations of CBCs (Cipher-Block Chaining). Now, this KVM switch can negotiate only a single cipher: SSL_RSA_WITH_3DES_EDE_CBC_SHA. See that “CBC” in there? Yeah, right. And it got blocked, because the implementation in that aged KVM box is no longer considered safe. Since you can’t just switch to a stream-based RC4 cipher, Java has no other choice but to drop the connection! Unless…  you do this:

public static void main(String[] paramArrayOfString)
{
  /* Added by the GAT from http://wp.xin.at                             */
  /* This disables CBC protection, thus re-opening the connections'     */
  /* BEAST vulnerability. No way around this due to a highly restricted */
  /* KLE ciphersuite. Without this fix, TLS connections with client     */
  /* certificates and PKI authentication will fail!                     */
  java.lang.System.setProperty("jsse.enableCBCProtection", "false");
  /* ------------------------------------------------------------------ */
  /* Added by the GAT from http://wp.xin.at                        */
  /* This enables insecure TLS renegotiation as per CVE-2009-3555  */
  /* in interoperable mode.                                        */
  java.lang.System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "false");
  java.lang.System.setProperty("sun.security.ssl.allowLegacyHelloMessages", "true");
  /* ------------------------------------------------------------- */
  KViewer localKViewer = new KViewer();
  localKViewer.mainArgs = paramArrayOfString;
  localKViewer.init();
}

Setting the jsse.enableCBCProtection property to false before the negotiation / handshake will make your code tolerate CBC ciphers vulnerable to BEAST attacks. Recompiling KViewer with all the code fixes including this one makes it work fine with 2-way PKI authentication using a client certificate on both Java 1.7u60 and even Java 1.8u5. I have tested this using the 64-bit x86 VMs on CentOS 6.5 Linux as well as on Windows XP Professional x64 Edition and Windows 7 Professional SP1 x64.
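As a side note, JSSE system properties like this one can in principle also be set from the outside, without recompiling anything, by passing them to the JVM on the command line. Whether that alone would have rescued the original, unmodified KViewer.jar (which had other problems on newer runtimes as well) is something I haven’t tested, so consider this just an illustration of the mechanism:

java -Djsse.enableCBCProtection=false -jar KViewer.jar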

[Screenshot: De-/recompiled & “fixed” KViewer.jar connecting to a machine much older even than its own crappy code]

I fear I cannot give you the modified source code, as TRENDnet would probably hunt me down, but I’ll give you the compiled byte code at least, the JAR file, so you can use it yourself. If you wanna check out the code, you could just decompile it yourself, losing only my added comments: [KViewer.jar]. (ZIPped, fixed / modified to work on Java 1.7+)

Both the modified code and the byte code “binary” JAR have been returned to TRENDnet in the context of my open support ticket there. I hope they’ll welcome it with open arms instead of suing me for decompiling their Java viewer.

In reality, even this solution is nowhere near perfect. While it does at least allow you to run modern Java runtime environments instead of highly insecure older ones plus using pretty secure PKI auth, it still doesn’t fix the man-in-the-middle-attack issues at hand. TRENDnet should fix their KVM firmware, enable it to run the TLSv1.2 protocol with AES256 Galois-Counter-Mode ciphers (GCM) and fix the many, many problems in their viewer clients. The TK-IP101 being an end-of-life product means that this is likely never gonna happen though.

It does say a lot when the consumer has to hack up the software of a supposedly high-security 400€ networking hardware piece by himself just to make it work properly.

I do still hope that TRENDnet will react positively to this, as they do not offer a modern replacement product to supersede the TK-IP101.

Jul 3, 2014
 

I had been thinking about getting some spare parts for my server for quite some time, and now I found a guy in the US who had not just some processor boards compatible with the black 1MB Pentium PRO CPUs, but also a complete system board, all in completely new and unused condition. This is pretty rare for parts from 1995 and 1997, so I negotiated a good price with the man, and the boards have already arrived. Of course I had the honors of paying for customs again (20% VAT plus a 10€ administrative fee), but oh well. Can’t do anything about that.

Let’s take a look at the system board first, this thing is pretty huge, featuring two north bridges with two separate PCI busses, and also two SCSI controllers onboard:

All new and shiny! And here are the processor riser boards, they hold two processors each for a total of four processors. Note that one of the boards has red capacitors, something I have not seen so far, but they seem to be pretty identical otherwise. The capacitors are 105°C parts:

Now I have one processor, two proper CPU riser boards supporting my 1MB CPUs (the first generation of CPU boards didn’t) and a whole system board as spare parts. Not that I’d really need them, so far there have been no hardware failures other than dead hard drives in my server. But you can never know, and getting parts in such good condition might get harder and harder over time. Now I’m only looking for one last thing, a hot-swappable power supply, IBM FRU #12J3342. While I do have three installed, another spare in case of failure surely wouldn’t hurt:

So that’s the last spare part missing. You can get them in the US, but I’ll keep looking until I can find one for a half-decent price. Shipping is expensive for these parts, because the power supplies are very heavy, and then there is customs too (they include the shipping cost in their fee calculation). But one day, I’ll surely get a fourth power supply too. Then XIN.at will be protected against pretty much all possible hardware failure scenarios. :)

Jun 27, 2014
 

This just in: A very important battle has been won! The [Austrian Constitutional Court] has decided just this morning that the data retention laws are unconstitutional and illegal, following a similar decision made by the [European Court of Justice]. The Austrian government is thus required to repair the laws to re-establish an environment that respects personal privacy and data security. For those of you who can read German, here is the corresponding [news report]!

The Constitutional Court – the highest and most significant in the country – has decided that the laws we currently have are disproportionate when it comes to what we have to sacrifice for gaining what seems to be very little. They stated that the possibility of linking metadata together (to create profiles of persons and networks of persons) is especially problematic. Several paragraphs of the laws have been declared outright unconstitutional!

On top of that it was said that baseless surveillance is dangerous and the risk of abuse is high due to many people having access to the data collected (think: Internet Service Providers). Plus, the laws have never actually been used for their prime purpose, fighting terrorism. Not even once. They have been used in cases of theft and stalking though, which does not justify such a deep cut into the privacy of every single citizen of Austria.

The current Austrian government (including the minister of justice!) had still defended data retention and declared that it wanted to keep it as-is. However, now they have to back off from that stance, no matter what.

A very important battle has been won, while the war rages on. So, while this is one of the most significant victories in terms of freedom in a long time, never forget:

“The price of freedom is eternal vigilance”.

Now that we are sensitized for the matter, and mechanisms to defend against further attacks are firmly in place, we are in a good position to defend against the next attempt to undercut our society. Just need to stay vigilant!

One can only hope that this will serve as a guiding light for countries that still have data retention in place… This really needs to go for good, not just in Austria!