Jan 26 2015
 

FreeBSD battery logo[1] While I’m still using the free HP/Compaq nx6310 notebook a friend gave me, I ran into a little problem during actual productive use with FreeBSD 10.1 UNIX. As I learned only later, the problem also affects more modern HP notebooks like the EliteBook or ProBook series, as well as certain MacBooks. The issue lies within the battery status reporting: while the very first state transition is recognized (say you disconnect the AC cable, or the charge drops by 1%), all subsequent changes are completely ignored.

An example: You want to join a meeting with 100% charge. You disconnect the AC cable, an event the system still recognizes, and the charge now reads 99%. It will however not update any further, but just stay there until either a reboot or until the lights go dark. Not good if you intend to actually use the system on the go, where being aware of the remaining battery level is essential!
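For reference, FreeBSD exposes the battery state through ACPI, so it can be polled by hand. This is roughly how I’d check it (a quick sketch; these are the very values that stopped updating on the affected machines):

# acpiconf -i 0
# sysctl hw.acpi.battery.life hw.acpi.battery.state

acpiconf -i 0 prints a detailed report for the first battery, while the two sysctls show the remaining charge in percent and the current state flags.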

1.) The right direction, but still a fruitless attempt

So I read that HP has had quite their share of trouble with Windows-specific and also buggy ACPI BIOS code; thus, this has been a pain for some time, not just on FreeBSD, but also on Linux and in some cases even on Windows.

At first I [reported the issue in the FreeBSD forums], after which I was asked to report the problem on the [freebsd-acpi@freebsd.org] mailing list. The only problem was that one can’t post there anonymously, and the registration never worked for me. So I kept digging in the dirt, and my first attempt was of course to fix the ACPI code, or DSDT as it’s called – the Differentiated System Description Table, written in ASL – the ACPI Source Language. Basically a pretty creepy byte code language providing a bidirectional interface between the operating system and the system’s BIOS/UEFI, which in turn controls certain aspects of the hardware. Think display brightness adjustments, WiFi on/off switches, audio volume control via keyboard, sleep buttons etc. – all of that is ACPI. You can find the specs [here] if you really want to take a look.

HP compiled the code using [Microsoft’s ASL compiler], which is known to be rather sloppy and to ignore tons of bugs in the code when compiling. So my first step was to dump the active DSDT from the system’s memory using acpidump, telling the tool to disassemble it with Intel’s ASL compiler iasl in the same step:

# acpidump -d > ~/nx6310-stock.asl

Then, I attempted to recompile using the Intel ASL compiler again:

# iasl ~/nx6310-stock.asl

And then came the errors and warnings, plus some nasty OS-specific code. Since an operating system can and will identify itself to the BIOS via ACPI, one can unfortunately filter by operating system at the BIOS level. Very bad stuff. Like this, taken directly from the ASL code as written by HP:

Name (C015, Package (0x08)
{
     "Microsoft Windows",
     "Microsoft WindowsME: Millennium Edition",
     "Microsoft Windows NT"
})

…and:

If (LOr (LEqual (C014, 0x00), LEqual (C014, 0x03)))
{
     If (CondRefOf (\_OSI, Local0))
     {
          If (\_OSI ("Windows 2001"))
          {
               Store (0x04, C014)
          }
 
          If (\_OSI ("Windows 2001 SP1"))
          {
               Store (0x04, C014)
          }
 
          If (\_OSI ("Windows 2001 SP2"))
          {
               Store (0x05, C014)
          }
 
          If (\_OSI ("Windows 2006"))
          {
               Store (0x06, C014)
          }
     }
}

The next logical step was to fake the OS identification string FreeBSD reports to the ACPI subsystem. You can do that either by changing and embedding the ASL (non-MS OS strings are for instance “FreeBSD” or “Linux”), or luckily also by adding something like the following line to /boot/loader.conf, after which you need to reboot:

hw.acpi.osname="Windows 2006"

That didn’t do the trick for me however, so I tried to run my modified ACPI code after all. To do that, you need to recompile the ASL and then have the loader pick it up in the early boot sequence of the kernel. First, I compiled the fixed code:

# iasl -tc ~/nx6310.asl

The result will be /tmp/acpidump.aml (for whatever reason). I placed it in /boot/ and added the following to /boot/loader.conf:

acpi_dsdt_load="YES"
acpi_dsdt_name="/boot/nx6310.aml"

Now another reboot. To verify that the AML gets loaded, boot the kernel in verbose mode (you can choose that option in the boot loader). /var/log/messages will then contain something like Preloaded acpi_dsdt "/boot/nx6310.aml" at 0xc18dcc34.
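If you don’t want to scroll through the whole log, a simple grep should do (assuming the default syslog configuration):

# grep acpi_dsdt /var/log/messages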

In my case however, while the AML was indeed being loaded, the BIOS’ tables were strangely not overwritten. I never found out why. Dumping them again would just give me the bugged code by HP. So that seemed to be a dead end.

2.) The solution

Then I discovered that this might actually be a problem with the FreeBSD kernel itself when I stumbled over [problem report 162859] in FreeBSD’s bugzilla. It seems there was a problematic commit to the kernel’s ACPI subsystem written by Jung-uk Kim, initially for FreeBSD 9.0 in 2011: [r216942]. By now, he had provided two patches for it, of which I tried the [newer one] without success; it actually made the problem even worse. You can see my replies under my real name “Michael Lackner” in that PR.

Luckily, it was seemingly my complaint on top of the others that made Jung-uk Kim revert the [original patch], all back to how it was in FreeBSD 8.0, which some people reported to work just fine. So I got the now reverted/fixed code from [r277579] – a file named acpi_ec.c – and replaced /usr/src/sys/dev/acpica/acpi_ec.c with it. Then I recompiled the FreeBSD kernel [as described in the FreeBSD handbook] once more (fear not, it’s actually very easy).
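In condensed form, and assuming the stock GENERIC kernel configuration, the rebuild boils down to something like this (a sketch; the fetched acpi_ec.c is assumed to sit in your home directory):

# cp ~/acpi_ec.c /usr/src/sys/dev/acpica/acpi_ec.c
# cd /usr/src
# make buildkernel KERNCONF=GENERIC
# make installkernel KERNCONF=GENERIC
# shutdown -r now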

Reboot, and everything just worked! No need for a DSDT replacement or even operating system faking. I tested charge, discharge, AC cable dis-/reconnect and full charge states. All ok now! Finally, that book is 100% ready for production use! :)

Make no mistake here, the ACPI code from HP is still buggy. But like the Linux kernel, the FreeBSD kernel knows how to sail around the most severe bugs, including Microsoft-specific code in certain cases, at least now that this issue is fixed.

Thanks fly out to Jung-uk Kim for reverting the problematic patch! It should be merged from -CURRENT any day now. If you want to get it done faster than that, just fetch acpi_ec.c, place it in the right folder and recompile the kernel and with it all of its modules, just like I did.

[1] Original battery icon used for the logo is © Freepik from Flaticon and is licensed under CC BY 3.0

Jan 15 2015
 

4Kn logo While I’ve been planning to build myself a new RAID-6 array for some time (more space, more speed), I got interested in the latest and greatest of hard drive innovations, which is the 4Kn Advanced Format. You may know classic hard drives with 512 byte sectors and the regular Advanced Format, also known as 512e, which uses 4kiB physical sectors but emulates 512 byte sectors for compatibility reasons. Interestingly, [Microsoft themselves state] that “real” 4Kn hard drives, which expose their sector size to the operating system with no glue layers in between, are only supported in Windows 8 and above. So even Windows 7 has no official support.

On top of that, Intel [has stated] that their SATA controller drivers do not support 4Kn, so hooking such a drive up to your Intel chipset’s I/O controller hub (ICH) or platform controller hub (PCH) will not work. Quote:

“Intel® Rapid Storage Technology (Intel® RST) version 9.6 and newer supports 4k sector disks if the device supports 512 byte emulation (512e). Intel® RST does not support 4k native sector size devices.”

For clarity, to make 4Kn work in a clean fashion, it must be supported on three levels, from lowest to highest:

  1. The firmware: For mainboards, this means your system BIOS/UEFI. For dedicated storage controllers, the controller BIOS itself.
  2. The kernel driver of the storage controller, so that’s your SATA AHCI/RAID drivers or SAS drivers.
  3. Any applications above it performing raw disk access, whether kernel or user space. File system drivers, disk cloning software, low level benchmarks, etc.

Granted, 4Kn drives are extremely new and still very rare. There are basically only the 6TB Seagate enterprise drives ([see here]) and some Toshiba drives, also enterprise class. But, to protect my future investment in that RAID-6, I got myself a [Toshiba MG04ACA300A] 3TB drive, which was the only barely affordable 4Kn disk I could get, and basically the only one available right now besides the super expensive 6TB Seagates. That way I can check for 4Kn compatibility relatively cheaply (click to enlarge images):

If you look closely, you can spot the nice 4Kn logo right there. In case you ask yourselves “Why 4Kn?”, well, mostly cost and efficiency. 4kiB sectors are eight times as large as classic 512 byte ones. Thus, for the same data payload you need only an eighth of the sector gaps, synchronization markers and address markers. Also, a stronger checksum can be used for data integrity. See this picture from [Wikipedia]:

Sectors

Sector size comparison (Image is © Dougolsen under the CC-BY 3.0 unported license)

Now this efficiency is already there with 512e drives. 512e Advanced Format was supposedly invented because more than half the programs working with raw disks out there can’t handle variable sector sizes and are hardcoded for 512n. That also includes system firmwares, so your mainboard’s BIOS/UEFI. To solve those issues, the drives use 4kiB sectors internally, then let a fast ARM processor translate them into 512 byte sectors on the fly to give legacy software something it can understand.

4Kn on the other hand is the purist, “right” approach. No more emulation, no more cheating. No more 1GHz ARM dual core processor in your hard drive just to be able to serve data fast enough.

Now we already know that Intel controllers won’t work. For fun, I hooked the drive up to my ASUS P6T Deluxe’s secondary SATA controller though, a Marvell 88SE6120. Plus, I gave the controller the latest possible driver, the quite hard-to-get version 1.2.0.8400. You can download that [here] for x86 and x64. To spoil the result right away: It doesn’t work. At all. This is what the system’s log has to say about it (click to enlarge):

So that’s a complete failure right there. Even after the “plugged out” message, the timeouts would still continue to happen roughly every 30 seconds, accompanied by the whole operating system freezing for 1-2 seconds every time. I cannot say for any other controllers like the Marvell 9128 or Silicon Image chips and others, but I do get the feeling that none of them will be able to handle 4Kn.

Luckily, I do already have the controller for my future RAID-6 right here, an Areca ARC-1883ix-12, the latest and greatest tech with PCIe x8 3.0 and SAS/12Gbit ports with SATA/6Gbit encapsulation. Its firmware and driver support 4Kn fully, as you can see in Areca’s [specifications]. The controller features an out-of-band management system via its own ethernet port and integrated web server for browser-based administration, even when the system doesn’t have any OS booted up. All that needs to be installed on the OS is a very tiny driver (click to enlarge):

Plus, Areca gives us one small driver for many Windows operating systems. Only the Windows XP 32-Bit NT 5.1 kernel gets a SCSI Miniport driver exclusively, while all newer systems (WinXP x64, Windows Vista, 7, 8) get a more efficient StorPort driver. So, plugged the controller in, installed the driver, hooked up the disk, and it seems we’re good to go:

The 4Kn drive is being recognized (click to enlarge)

Now, any legacy master boot record (MBR) partition table has a 32-bit address field. That means it can address 2³² elements. With each element being 512 bytes large, you reach 2TiB, and that’s where the 2TiB limit comes from. With 4Kn however, the smallest addressable atom is now eight times as large: 4096 bytes! So we should be able to reach 16TiB thanks to the larger sector size. Supposedly, some USB hard drive manufacturers have used this trick (by emulating 4Kn) to make their larger drives work easily on Windows XP. When trying to partition the Toshiba drive however, I hit a wall, as it seems Windows disk management is about as stupid as the FAT32 formatter on Windows 98 was:

MBR initialization failed (click to enlarge)
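By the way, the address math from above can be double-checked with plain shell arithmetic:

$ echo $(( 2**32 * 512 / 2**40 ))TiB    # classic 512n/512e MBR limit
2TiB
$ echo $(( 2**32 * 4096 / 2**40 ))TiB   # 4Kn MBR limit
16TiB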

That gets me thinking. On XP x64, I can still just switch from MBR to the GPT partitioning scheme to be able to partition huge block devices. But what about Windows XP 32-bit? I don’t know how the USB drive manufacturers do it, so I can only presume they ship the drives pre-partitioned, if it’s one of those that don’t come with a special mapping tool for XP. In my case, I just switch to GPT and carry on (click to enlarge):

Now I guess I am the first person in the world to be able to look at this, and potentially the last too:

fsutil.exe showing a native SATA 4Kn drive on XP x64, encapsulated in SAS. Windows 7 would show the physical and logical sector size separately due to its official 512e support. Windows XP always reports the logical sector size (click to enlarge)
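For the record, the sector size shown in that screenshot comes from a query along these lines (the drive letter is of course whatever your 4Kn volume got assigned; E: is just an example):

fsutil fsinfo ntfsinfo E:

Among other things, it prints the “Bytes Per Sector” value, which reads 4096 here.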

So far so good. The very first and most simple test? Just copy a file onto the newly formatted file system. I picked the 4k (no pun intended) version of the movie “Big Buck Bunny”:

Copying a first file onto the 4Kn disk’s NTFS file system

Hidden files and folders are shown here, but Windows doesn’t seem to want to create a System Volume Information folder for whatever reason. Other than that it’s very fast and seems to work just nicely. Since the speed is affected by the RAID controller’s write-back cache, I thought I’d try HD Tune 2.55 for a quick sequential benchmark. Or in other words: “Let’s hit our second legacy software wall” (click to enlarge):

Yeah, so… HD Tune never detects anything above 2TiB, but this? At first glance, 375GB might sound quite strange for a 3TB drive. But consider this: 375 × 8 = 3000. What happened here is that HD Tune got the correct sector count of the drive, but misinterpreted each sector’s size as 512 bytes. Thus, it reports the device’s size as an eighth of what it really is. Reportedly, this is also exactly how Intel’s RST drivers fail when trying to address a 4Kn drive. HD Tune 2.55 is thus clearly hardcoded for 512n. There is no way to make this work. Let’s try the paid version of the tool, which is usually quite a bit ahead of its free, legacy counterpart (click to enlarge):

Indeed, HD Tune Pro 5.00 works just as it should when accessing the raw drive. Users who don’t want to pay are left dead in the water here. Next, I tried HD Tach, another older tool. HD Tach however reads from a formatted file system instead of from a raw block device. The file system abstracts the device to a higher level, so HD Tach doesn’t know, and doesn’t need to know, anything about sectors. As a result, it also just works:

HD Tach benchmarking NTFS on a 4Kn drive (click to enlarge)

Next, let’s try an ancient benchmark that again accesses drives at the sector level: the ATTO disk benchmark. It is here where we learn that 4Kn, or variable sector sizes in general, aren’t space magic. This tool was written well before the times of 512e or 4Kn, and look at that (click to enlarge):

Now what does that tell me? It tells me that hardware developers feared the chaotic landscape of tools and software that access disks at low levels. Some might be cleanly programmed, while most may not. That doesn’t just include operating systems’ built-in toolsets, but also 3rd party software, independent of the operating system itself. Maybe it also affects disk cloning software like Acronis’? Volume shadow copies? Bitlocker? Who knows. Thing is, to be sure, you need to test that stuff. And I presume that to go as far as hard drive manufacturers did with 512e, they likely found one abhorrent hell of crappy software during their tests. Nothing else would justify ARM processors at high clock rates on hard drives just to translate sector sizes, plus all the massive work that went into defining the 512e Advanced Format standard before 4Kn Advanced Format.

Windows 8 might now fully support 4Kn, but that doesn’t say anything about the 3rd party software you’re going to run on that OS either. So we still live in a Windows world where a lot of fail is waiting for us. Naturally, Linux and certain UNIX systems have adapted much earlier or have never even made the mistake of hardcoding sector sizes into their kernels and tools.

But now, to the final piece of my preliminary tests: Truecrypt, a disk encryption software I still trust despite the project having been shut down. Still being audited without any terrible security holes discovered so far, it’s my main choice for cross-platform disk encryption, working cleanly on at least Windows, MacOS X and Linux.

Now, 4Kn is disabled for MacOS X in Truecrypt’s source code, but seemingly this [can be fixed]. I also discovered that TC will refuse to use anything other than 512n on Linux if Linux kernel crypto is unavailable or disabled by the user in TC; see this part of Truecrypt’s CoreUnix.cpp:

#if defined (TC_LINUX)
if (volume->GetSectorSize() != TC_SECTOR_SIZE_LEGACY)
{
  if (options.Protection == VolumeProtection::HiddenVolumeReadOnly)
    throw UnsupportedSectorSizeHiddenVolumeProtection();
 
  if (options.NoKernelCrypto)
    throw UnsupportedSectorSizeNoKernelCrypto();
}
#endif

Given that TC_SECTOR_SIZE_LEGACY equals 512, it becomes clear that with 4Kn on Linux, hidden volumes are unavailable as a whole, and encryption is unavailable altogether if kernel crypto isn’t there. So I checked out the Windows-specific parts of the code, but couldn’t find anything suspicious in the source for data volume encryption. It seems 4Kn is not allowed for bootable system volumes (lots of “512’s” there), but for data volumes TC seems fully capable of working with variable sector sizes.

Now this code has probably never been run on an actual SATA 4Kn drive before, so let’s just give it a shot (click to enlarge):

Amazingly, Truecrypt, another piece of software written (and even abandoned) by its original developers before the advent of 4Kn, works just fine. This time, Windows does create the System Volume Information folder on the file system within the Truecrypt container, and fsutil.exe once again reports a sector size of 4096 bytes. This shows clearly that TC understands 4Kn and passes the sector size on flawlessly to any layers above itself in the kernel I/O stack (the layer beneath it should be either the NT I/O scheduler or maybe the storage controller driver directly, and the layer above it the NTFS file system driver, if my assumptions are correct).

Two final tests for data integrity’s sake:

Both a binary diff and SHA512 checksums prove that the data copied from a 512n medium to the 4Kn one is still intact

So, my final conclusion? Anything that needs to work with a raw block device on a sector-by-sector level needs to be checked out before investing serious money in such hard drives and storage arrays. It might be cleanly programmed, with some foresight. It also might not.

Anything that sits above the file system layer though (anything that reads and writes folders and files instead of raw sectors) will always work nicely, as such software does not need to know anything about sectors.

Given the possibly enormous amount of software with hardcoded routines for 512 byte sectors, my assumption would be that the migration to 4Kn will be quite a sluggish one. We can see that the enterprise sector is adapting first, clearly because Linux and UNIX systems adapt much faster. The consumer market however might not see 4Kn drives anytime soon, given that 512 byte sectors have been around for about 60 years (!) now.

Update 2015-01-16 (Linux): I just couldn’t let it go, so I took the Toshiba 4Kn drive to work with me and hot-plugged it into an Intel ICH10R. So that’s the same chipset as the one I ran the Windows tests on, an Intel X58. The only difference is that now we’re on CentOS 6.6 Linux running the 2.6.32-504.1.3.el6.x86_64 kernel. This is what dmesg had to say about my hotplugging:

ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata3.00: ATA-8: TOSHIBA MG04ACA300A, FP2A, max UDMA/100
ata3.00: 732566646 sectors, multi 2: LBA48 NCQ (depth 31/32), AA
ata3.00: configured for UDMA/100
ata3: EH complete
scsi 2:0:0:0: Direct-Access     ATA      TOSHIBA MG04ACA3 FP2A PQ: 0 ANSI: 5
sd 2:0:0:0: Attached scsi generic sg7 type 0
sd 2:0:0:0: [sdf] 732566646 4096-byte logical blocks: (3.00 TB/2.72 TiB)
sd 2:0:0:0: [sdf] Write Protect is off
sd 2:0:0:0: [sdf] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:0:0: [sdf] 732566646 4096-byte logical blocks: (3.00 TB/2.72 TiB)
 sdf:
sd 2:0:0:0: [sdf] 732566646 4096-byte logical blocks: (3.00 TB/2.72 TiB)
sd 2:0:0:0: [sdf] Attached SCSI disk
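Besides dmesg, the kernel’s view of the disk can also be read back from sysfs (a quick check; device name as above, and on kernels recent enough to expose these attributes, both values should read 4096 for a true 4Kn drive):

$ cat /sys/block/sdf/queue/logical_block_size
$ cat /sys/block/sdf/queue/physical_block_size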

Looking good so far. Also, the Linux kernel typically cares rather little about the system’s BIOS, bypassing whatever crap it’s trying to tell the kernel. Which is usually a good thing. Let’s verify with fdisk:

Note: sector size is 4096 (not 512)

WARNING: The size of this disk is 3.0 TB (3000592982016 bytes).
DOS partition table format can not be used on drives for volumes
larger than (17592186040320 bytes) for 4096-byte sectors. Use parted(1) and GUID 
partition table format (GPT).

Now that’s more like it! fdisk is warning me that it will be limited to addressing 16TiB on this disk. A regular 512n or 512e drive would be limited to 2TiB, as we know. Awesome. So, I created a classic MBR style partition on it, formatted it using the EXT4 file system, and mounted it. And what we get is this:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sdf1             2.7T   73M  2.6T   1% /mnt/sdf1
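For anyone wanting to reproduce this, the steps in between were nothing special, roughly like the following (a sketch; device and mount point as shown above):

# fdisk /dev/sdf                 # create one primary MBR partition spanning the disk
# mkfs.ext4 /dev/sdf1            # format it with EXT4
# mkdir -p /mnt/sdf1 && mount /dev/sdf1 /mnt/sdf1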

And Intel is telling us that they don’t manage to give us any Windows drivers that can do 4Kn? Marvell doesn’t even comment on their inabilities? Well, suck this: Linux’ free driver for an Intel ICH10R south bridge (or any other south bridge that has a driver in the Linux kernel, for that matter) seems to have no issues with 4Kn whatsoever. I bet it’s the same with BSD. Just weak, Intel. And Marvell. And all you guys who had so much time to prepare and yet did nothing!

Update 2015-01-20 (Windows XP 32-Bit): So what about regular 32-bit Windows XP? There are stories going around that some USB drives with 3-4TB capacity use a 4Kn emulation (or real 4Kn, bypassing the 512e layer by telling the drive firmware to do so?), specifically to enable XP compatibility without having to resort to special mapping tools.

Today, I had the time to install XP SP3 on a spare AMD machine (FX9590, 990FX), which is pretty fast thanks to a small, unused testing SSD I still had lying around. Before that, I wiped all GPT partition tables from the 4Kn drive using dd, both the one at the start as well as the backup copy at the end of the drive. Again, the Areca ARC-1883ix-12 was used for this test, now with its SCSI Miniport driver, since XP 32-Bit does not support StorPort.

Please note that this is a German installation of Windows XP SP3. I hope the screenshots are still understandable enough for English speakers.

Recognition and MBR initialization seems to work just fine this time, unlike on XP x64:

The 4Kn Toshiba as detected by Windows XP Pro 32-Bit SP3, again on an Areca ARC-1883ix-12 (click to enlarge)

Let’s try to partition it:

Partitioning the drive once more, MBR style

Sure looks good! And then, we get this:

A Master Boot Record, Windows XP and 4Kn: It does work after all (click to enlarge)

So why does XP x64 not allow for initialization and partitioning of a 4Kn drive using MBR? Maybe because it’s got GPT for that? In any case, the drive is usable on both systems, the older NT 5.1 (XP 32-Bit) as well as the newer NT 5.2 (XP x64, Server 2003). Again, fsutil confirms proper recognition of our 4Kn drive:

fsutil.exe reporting a 4kiB sector size, just like on XP x64

So all you need – just like on XP x64 – is a proper controller with proper firmware and drivers!

There is one hard limit here though that XP 32-Bit users absolutely need to keep in mind: Huge RAID volumes using LUN carving/splitting and software JBOD/disk spanning using Microsoft’s Dynamic Volumes are no longer possible when using 4Kn drives. Previously, you could tell certain RAID controllers to just serve huge arrays to the OS in 2TiB LUN slices (e.g. best practice for 3ware controllers on XP 32-Bit). Then, in Windows, you’d just make those slices Dynamic Volumes and span a single NTFS file system over all of them, thus pseudo-breaking the 2TiB barrier.

This can no longer be done, as Dynamic Volumes seemingly do not work with 4Kn drives on Microsoft operating systems before Windows 8, at least not on XP 32-Bit. The option for converting the volume is simply greyed out in Windows disk management.

That means that the absolute maximum volume size using 4Kn disks on 32-Bit Windows XP is 16TiB! On XP x64 – thanks to GPT – it’s just a few blocks short of 256TiB, a limit imposed on us by the NTFS file system’s 32-bit cluster address field and 64kiB clusters, as 2³² × 64kiB = 256TiB.

And that concludes my tests, unless I have time and an actual machine to try FreeBSD or OpenBSD UNIX. Or maybe Windows 7. The likelihood for that is not too high at the moment though.

Jan 05 2015
 

DirectX logo[1] Quite some time ago – I was probably playing around with some DirectShow audio and video codec packs on my Windows system – I hit a wall in my favorite media player, which is [Mediaplayer Classic Home Cinema], in its 64-Bit version to be precise. I love the player because it has its own built-in splitters and decoders, it’s small and lightweight, and it can actually use DXVA1 video acceleration on Windows XP / XP x64 with AMD/ATi and nVidia graphics cards. So yeah, Blu-ray playback with very little CPU load is possible, as long as you deal with the AACS encryption layer properly. Or decode files from your hard disk instead. But I’m wandering from the subject here.

One day, I launched my 64-Bit MPC-HC, tried to decode a new Blu-Ray movie I got, and all of a sudden, this:

MPC-HC "Failed to query the needed interfaces for playback"

MPC-HC failing to render any file – whether video or audio, showing only a cryptic error message

I tried to get to the bottom of this for weeks. Months later I tried again, but I just couldn’t solve it. A lack of usable debug modes and log files didn’t help either. Also, I failed to properly understand the error message “Failed to query the needed interfaces for playback”. The main reason for my failure was that I thought MPC-HC had it all built in – container splitters, A/V decoders, etc. But still, the 64-Bit version failed. Interestingly, the 32-Bit version still worked fine, on XP x64 in this specific case. Today, while trying to help another guy on the web who had issues with his A/V decoding using the K-Lite codec pack, I launched Microsoft’s excellent [GraphEdit] tool to build a filter graph, to show him how to debug codec problems with Microsoft’s DirectShow system. You can download the tool easily [here]. It can visualize the entire stack of system-wide DirectShow splitters and decoders on Windows, and can thus help you understand how this shit really works. And debug it.

Naturally, I launched the 32-Bit version, as I’ve been working with 32-Bit A/V tools exclusively since that little incident above – minus the x264 encoder maybe, which has its own 64-Bit libav built in. Out of curiosity, I started the 64-Bit version of GraphEdit, and was greeted with this:

GraphEdit x64 failure due to broken DirectShow core

“DirectShow core components failed to initialize.”, eh? Now this piqued my interest. Immediately, the MPC-HC problem from a few years ago came to my mind, as I am still using the very same system today. So I had an additional piece of information now, which I used to search the web for solutions. Interestingly, I found that this is linked to the entire DirectShow subsystem being de-registered and thus disabled on the system. Most of the people I found had this problem with the 32-Bit DirectShow core on 64-Bit Windows 7. Also, I learned the name of the DirectShow core library.

quartz.dll.

On any 64-Bit system, the 32-Bit version of this library sits in %WINDIR%\SysWOW64\ with its 64-Bit sibling residing in %WINDIR%\system32\. I thought: What if I just tried to register the core and see what happens? So, with my 64-Bit DirectShow core broken, I just opened a shell with an administrative account, went to %WINDIR%\system32\ and ran regsvr32.exe quartz.dll. And indeed, the library wasn’t registered/loaded. See here:

Re-registering the 64-Bit version of quartz.dll (click to enlarge)

Fascinating, I thought. Now I don’t know what kind of shit software would disable my entire 64-Bit DirectShow subsystem. Maybe one of those smart little codec packs that usually bring more problems than solutions with them? Maybe it was something else I did to my system? I wouldn’t know what, but it’s not like I can remember everything I ever did to my system’s DLLs. Now, let’s try to launch the 64-Bit version of GraphEdit again, with a 64-Bit version of [LAVfilters] installed. That’s basically [libav] on Windows with a DirectShow layer wrapped around it and a nice installer shipped with it. YEAH, it’s a codec pack alright. But in my opinion, libav and ffmpeg are the ones to be trusted, just like on Linux and UNIX too. And Android. And iOS. And OSX. Blah. Here we go:

64-Bit GraphEdit being fed the Open Source movie “Big Buck Bunny” (click to enlarge)

And all of a sudden, GraphEdit launches just fine and presents us a properly working filter graph after having loaded the movie [Big Buck Bunny] in its 4k/UHD version. We can see the container being split, video and audio streams being picked up by the pins of the respective libav decoders, which feed the streams to a video renderer and a DirectSound output device – which happens to be an X-Fi card in this case. All pins are connected just fine, so this looks good. Now, what does it look like in MPC-HC? Like this (warning – large image, 5MB+, this will take time to load from my server!):

64-Bit MPC-HC playing the 4k/UHD version of “Big Buck Bunny” (click to enlarge)

So there we go. It seems MPC-HC does rely on DirectShow after all, at least in that it tries to initialize the subsystem. After all, it can also use external filters (in other words, system-wide DirectShow codecs) where its internal codec suite wouldn’t suffice, if that’s ever the case. So MPC-HC seems to want to talk to DirectShow at play time in any case, even if you haven’t allowed it to use any external filters/codecs. Maybe its internal codec pack is even DirectShow-based too? And if DirectShow is simply not there, it won’t play anything. At all. I failed to solve this for years, and then today I launch one program in a different bitness than usual, and 5 minutes later everything works again. All it takes is something like this:

regsvr32.exe %WINDIR%\system32\quartz.dll
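And should the 32-Bit DirectShow core ever break the same way on a 64-Bit Windows, its sibling can presumably be re-registered accordingly, using the 32-Bit regsvr32.exe:

%WINDIR%\SysWOW64\regsvr32.exe %WINDIR%\SysWOW64\quartz.dll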

Things can be so easy sometimes, if only you know what the fuck really happened…

The importance and significance of error handling and reporting to the user can never be overstated!

But hey! Where there is a shell, there is a way, right? Even on Windows. ;)

[1] © Siegel, K. “DirectX 9 logo design“. Kamal Siegel’s Artwork.

Dec 11 2014
 

FreeBSD + Wine Since I’ve abandoned OpenBSD for FreeBSD in my recent attempts to actually use a ‘real’ UNIX system, and all that just because of Wine so I can use some small Windows tools I need (yeah…), I was a bit fed up with Wine’s font anti-aliasing not working. I remember having a similar problem on CentOS Linux a while back, but I can’t remember how I solved it on that OS. Thing is, I just want to document my solution here briefly, so I don’t forget it if I have to re-do this any time soon. The problem seems to originate from the X11 render extension that is being used for compositing on X11 without any additional compositing engine like Compiz. Wine’s usage of the extension is actually controlled by its system registry, and the key that seems to need editing is HKEY_CURRENT_USER\Software\Wine\X11 Driver\ClientSideWithRender.

Interestingly, people suggested switching the use of the X render extension off instead of on, but when inspecting my setup by running wine regedit, I found I didn’t even have the key, and its default is off! So what saved me was:

[HKEY_CURRENT_USER\Software\Wine\X11 Driver]
"ClientSideWithRender"="Y"

Setting this back to “N” is basically the same thing as just deleting the key altogether, at least with the current default configuration of my Wine version on FreeBSD, which is wine-1.6.2. See what it looks like with no anti-aliasing (click to enlarge):

Wine running its regedit on FreeBSD 10 with no font AA

And here with proper AA, again, please click to enlarge so you can see the effect applied:

Wine with proper font anti-aliasing

To demonstrate the effect more prominently, let’s look at some zoomed in versions, click to enlarge again for all the glory or so:

No anti-aliased fonts here, all jagged

And:

Wine font anti-aliasing up close, nice and smooth

Now I heard that some people actually prefer the clearer look of jagged fonts, or they may prefer pure gray smoothing, but I actually like the look of this smoothing a lot more. This is with medium font hinting turned on in Xfce4, and it appears very smooth and nice to my eyes.

If you want to try this in case you’re offended by jagged fonts when using Wine, just take the following:

[HKEY_CURRENT_USER\Software\Wine\X11 Driver]
"ClientSideWithRender"="Y"

Save it into a file like wine-Xrender-fontAA.reg, and then import it into your Wine registry database by opening a terminal, going into the directory where your .reg file sits, and running wine regedit ./wine-Xrender-fontAA.reg. Then restart your Wine application and it should work for 99% of fonts. I’ve encountered one font in Bruce Schneier’s [PasswordSafe] that wouldn’t anti-alias, ever. But it’s the same on Windows, so I’m guessing that’s ok.
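One detail worth mentioning: regedit usually expects a format header as the very first line of a .reg file, so the complete wine-Xrender-fontAA.reg should look like this:

REGEDIT4

[HKEY_CURRENT_USER\Software\Wine\X11 Driver]
"ClientSideWithRender"="Y"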

Switching it off is just as easy: just edit the file, change “Y” to “N” and re-run the wine regedit command. But as I said, I’ll keep it; no more eye cancer when starting Win32 applications on FreeBSD. :)

Dec 03 2014
 

HP/Compaq nx6310 logo Recently, a friend of mine gave me a free notebook for operating system testing. Well, actually I traded in two DDR-II sticks for it, but that’s still almost free. It’s an older HP/Compaq nx6310 in its low-end configuration with an i940GML chipset, a Pentium-M based Celeron M430, a crappy hard drive, 2GB RAM (already upgraded from 512MB) and a DVD±RW burner. The worst part of that laptop was probably the terrible XGA (1024×768) TN+Film LCD panel. The thing came without WiFi too; the antennas were there with properly sealed plugs, but the WiFi module itself was missing from its PCIe mini slot. So I flashed in a nice [hacked Mod BIOS] from [MyDigitalLife], based on the latest Core 2 capable BIOS version F.0E, to remove the WiFi card whitelist from the SLIC 2.1 enabled BIOS, and plugged in a full-height 4965AGN card from Intel! Then I gave it a Crucial m500 SSD, which works miracles despite the slow-bandwidth SATA/1.5Gbps interface, and an Intel Core Duo T2450 processor for its Socket M, which is the fastest possible option for this book.

The CPU will only work with a single core on the i940GML chipset/cheapset, and only FSB533 chips will do, as I couldn’t find a way to program the clock generator for the front side bus. Still, given the clock speed of 2GHz and the 2MB L2 cache this is still the best option here, unless you need 64-Bit x86, for which a Core 2 Duo T5300 with 1.73GHz would be available.

If you know which modified BIOS to flash, the WiFi upgrade is very easy, and the CPU upgrade can also be done very quickly and conveniently, which is really nice. But then comes the LCD panel: the ugliest of the ugly in the already pretty ugly field of TN+Film panels, with a very coarse-grained look and limited space on screen due to the XGA 1024×768 resolution at 15″. An SXGA+ (1400×1050) WVA panel upgrade option exists though, identified by HP part number 413679-001, which can replace the lower spec one going by the number 413677-001. Just make sure you also get the SXGA+ LVDS cable, as the default LVDS display cable will only work with the XGA screen! Plugging the XGA cable into the SXGA+ panel will yield nothing but an erratic white screen with horizontal lines. I learned that the hard way… Here are the cables:

nx6310 LVDS display cables, left: SXGA+, right: XGA

So I got the panel and an extra SXGA+ cable from eBay UK, and started with the disassembly, which can be tricky the first time, as some parts like the display bezel are prone to breaking. Also, besides a Phillips screwdriver, you will need a size T10 Torx screwdriver for this. I would suggest consulting the nx6310’s [HP service manual] for display assembly removal. It doesn’t cover everything (only the replacement of an entire display assembly, not the panel in it specifically), but it’s still very helpful. Just don’t disconnect the WiFi antenna cables as they’re asking you to, and mind that the switch cover removal sequence is wrong; there is no “LED board cable” to remove on this machine, so you can skip that step. So let’s assume you followed that manual for the removal of the keyboard and switch cover, and let’s continue from there:

HP nx6310 with keyboard and switch cover removed

The copper plate near the middle, just left of the internal memory slot, covers the chipset; to the lower left we can see the CPU hotplate, from which a heatpipe transfers heat to the cooler/radiator on the top left. All of that is easy to remove and reassemble if you want to replace the processor too. On the display bezel you can already see some screws. All the screws facing you on the display frame are covered by rubber seals; remove them all. There are also four thin film seals on the sides of the display assembly, two on each side. Get rid of them too.

Now remove all screws on the display assembly and tilt the whole display as far back as possible. Try to stick a flat screwdriver between the two plastic halves (front/back) at the bottom and push the front bezel off the back. Be careful, as the front bezel is especially prone to breaking. The first attempt should be made near the hinges, as it’s easiest and safest there. Also, it might be better not to stick the screwdriver in at a 90° angle; try a steeper angle, between 20-40°, and push it in a few millimeters deep! Try to lift the bezel up not from the side, but from beneath the bezel frame, using leverage. You’ll push against the metal frame of the LCD panel below. There are no electronics there you could damage, so this should be safe. Just work slowly and carefully.

Also, double-check whether you removed all the side screws, or you may break some plastic holders of the front bezel by trying to lift it off with the screws still in place! When you’re at the top side, slide the locking mechanism while pushing the bezel off, or it’ll block you.

Now, with the bezel gone, you’ll see a small white tube on the lower side of the panel. That’s the high-voltage AC inverter board that powers the CCFL lamp for lighting up the display panel. Disconnect both cables attached to it, and just put it away somewhere for now:

The inverter board disconnected

Also, remove the display cable itself. You can just pull it out; it won’t be too hard, so you can’t really damage the cable. It’s beneath the now-removed switch cover, in the area where you’d find the power button:

nx6310 with the old XGA cable still attached

After bezel removal, there will be four additional screws that hold the panel frame and the rear cover together, one in each corner of the display assembly, facing you directly. Remove them, and let the rear cover slide away carefully. Don’t use force here, or you may damage the WiFi antennas:

The panel and the rear cover separating

The two cables taped to the rear cover are the WiFi antennas; just leave them alone. Pull the display cable plug out of the LVDS socket carefully, remove the tapes, then gently pull the cable out. You may want to save the tapes for later if your SXGA+ panel doesn’t come with its own. Now, there are two frames holding the panel. Tilt the panel so that it stands 90° upright, to prevent it from just falling out during the next step. The two frames are screwed to the panel with four small screws on each side. Remove all the screws, then pull out the panel.

Put the new SXGA+ panel in, screw it to the frames, plug in and attach the new LVDS cable and it should look a bit like this:

New panel and cable installed

To make the rear assembly feel stronger to the touch and make the cover less flexible, you might also put some filler material between rear cover and panel, as the new one might be a bit thinner. I used some light foam for that, but you can also just skip that part. Just screw everything back together in reverse order, reconnect the inverter board and push it into its seat in the reattached rear cover below the panel in the middle and don’t forget to attach the new LVDS cable to the system board too:

The new SXGA+ capable LVDS cable attached

Now, finally: A side-by-side comparison:

Note that the colors look strangely whitish on the SXGA+ when compared to the XGA. In real life, the white-shift is actually worse on the XGA; no idea why it looks like that on the photograph. Something with the polarization filters on the panels maybe?! Well, frankly, it’s still rather bad with the SXGA+, but not as bad as before. Also, the color spaces of those two photographs don’t match (again…) because I’m lazy and can’t handle my camera. Oh well. But the new panel is brighter now, which is a nice bonus, and as you can see, there is more space available on the desktop. Since the pixel density is now higher, it also looks much more crisp and sharp, and it’s more matte than before, so reflections are reduced and you can work better in bright conditions. Overall, it’s a sound improvement!

On those pictures you can see FreeBSD 10 UNIX running a terminal emulator window and a Windows XP Pro VirtualBox machine on both panels. All windows were kept at the exact same dimensions, so you can assess the resolution difference properly.

I would also like to add that before learning about the cables, I went on a crazy journey to flash the EDID EEPROM of the panel in an attempt to get it to work, which finally went ok using a hex editor and some [hacked script] for [edid-rw] on Debian 7 Linux, because those panels don’t speak the DDC protocol required by other tools for Windows and DOS. In the end, I managed to write a new vendor string and some other strings to the EDID EEPROM firmware of the panel, which now shows it being manufactured by QDS (QUANTA computer) instead of APP (Apple). Well, just forget about all that. Certain Lenovo Thinkpad upgrade paths might require you to flash the displays’ EDID EEPROM, but this one here doesn’t! It’s all just about the cable.

Now I gotta say, the “free notebook” ain’t so free anymore when it comes to money invested (CPU, WiFi, and LVDS cable were below or around 10€ each, but the SSD was 70€ and the panel 55€ or so), but I’m still quite pleased with my work enhancing the “crappy” book. I’ll also keep FreeBSD for now; I strangely kinda like it after natively testing a few others, like Haiku OS or OpenBSD UNIX 5.6, just for fun. But yeah, with 1400×1050 and an SSD, this thing feels pretty nice!

I hope this may help somebody other than me, because the EDID flashing suggested by some people leads nowhere for these HP series, and many sellers on eBay just sell you the panel for already SXGA+-equipped books without any mention of the cable, let alone an included cable! So if you can’t get your upgrade to work, the cable is probably why!

PS.: OpenBSD 5.6, you’re damn cool, especially with OpenJDK/Java now being back again! But sadly, I still need wine, so I’ll have to turn the other way again. :(

Nov 14 2014
 

MOD Music on FreeBSD 10 with xmms As much as I am a WinAmp 2 lover on Microsoft Windows, I am an xmms lover on Linux and UNIX. And by that I mean xmms1, not 2. And not Audacious. Just good old xmms. Not only does it look like a WinAmp 2 clone, it even loads WinAmp 2/5 skins and supports a variety of rather obscure plugins, which I partially need and love (like for Commodore 64 SID tunes via libsidplay, or libsidplay2 with its awesome reSID engine support).

Recently I started playing around with a free laptop I got, which I wanted to be my operating system test bed. Currently, I am evaluating – and probably keeping – FreeBSD 10 UNIX. Now it ain’t perfect, but it works pretty well after a bit of work. Always using Linux or XP is just a bit boring by now. ;)

One of the problems I had was with xmms. While the package is there, its built-in tracker module file (mod, 669, it, s3m etc.) support is broken: play one of those and its xmms-mikmod plugin will immediately cause a segmentation fault when talking to a newer version of libmikmod. Recompiling multimedia/xmms from the ports tree also produced a binary with the same fault. Then I found this guy [Jakob Steltner] posting about the problem on the ArchLinux bugtracker, [see here].

Based on his work, I created a ports tree compatible patch, patch-drv_xmms.c. Here is the source code:

--- Input/mikmod/drv_xmms.c.orig        2003-05-19 23:22:06.000000000 +0200
+++ Input/mikmod/drv_xmms.c     2012-11-16 18:52:41.264644767 +0100
@@ -117,6 +117,10 @@
        return VC_Init();
 }
 
+static void xmms_CommandLine(CHAR * commandLine)
+{
+}
+
 MDRIVER drv_xmms =
 {
        NULL,
@@ -126,7 +130,8 @@
 #if (LIBMIKMOD_VERSION > 0x030106)
         "xmms",
         NULL,
-#endif
+#endif
+       xmms_CommandLine, // Was missing
         xmms_IsThere,
        VC_SampleLoad,
        VC_SampleUnload

So that means recompiling xmms from source by yourself. But fear not, it’s relatively straightforward on FreeBSD 10.

Assuming that you unpacked your ports tree as root by running portsnap fetch and portsnap extract without altering anything, you will need to put the file in /usr/ports/multimedia/xmms/files/ and then cd to that directory in a terminal. Now run make && make install as root. The patch will be applied automatically.
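Condensed into commands, the whole procedure looks like this (run as root; the first line is only needed if you don’t have a ports tree yet):

# portsnap fetch extract
# cp patch-drv_xmms.c /usr/ports/multimedia/xmms/files/
# cd /usr/ports/multimedia/xmms
# make && make install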

Now one can once again run xmms and module files just work as they’re supposed to:

xmms playing a MOD file on FreeBSD 10 via a patched xmms-mikmod (click to enlarge)

The sad thing is, soon xmms will no longer be with us, at least on FreeBSD. It’s now considered abandonware, as all development has ceased. The xmms port on FreeBSD doesn’t even have a maintainer anymore and it’s [scheduled for deletion] from the ports tree together with all its plugins from ports/audio and ports/multimedia. I just wish I could speak C to some degree and work on that myself. But well…

Seems my favorite audio player is finally dying, but as with all the other old software I like and consider superior to modern alternatives, I’m gonna keep it alive for as long as possible!

You can download Jakob’s patch, which I adapted for the FreeBSD ports tree, right here:

Have fun! :)

Edit: xmms once again has a maintainer for its port on FreeBSD, so we can rejoice! I have [submitted the patch to FreeBSD’s bugzilla] as suggested by [SirDice] on the [FreeBSD Forums], and Naddy, the port’s maintainer, integrated the patch promptly! It’s been committed since [revision 375356]! So now you don’t need to patch it by hand and recompile it from source code anymore – at least not on x86 machines. You can either build it from ports as-is or just pull a fully working binary version from the packages repository on FreeBSD 10.x by running:

pkg install xmms

I tested this on FreeBSD 10.0 and 10.1 and it works like a charm, so thanks fly out to Jakob and Naddy! Even though my involvement here was absolutely minimal, it feels awesome to do something good, and help improving a free software project, even if only marginally so. :)

Oct 22 2014
 

Webchat logo XIN.at has been running an IRC chat server for some time now, but the problem always lies with people needing some client software to use it, like X-Chat or Nettalk or whatever.

People usually just don’t want to install yet another chat client software, no matter how old and well-established IRC itself may be. Alternatively, they can use some other untrusted web interface to connect to either the plain text [irc://www.xin.at:6666] or the encrypted [irc+ssl://www.xin.at:6697] server via a browser, but this isn’t optimal either. Since JavaScript cannot open TCP sockets on its own, and hence cannot connect to an IRC server directly, there are only two kinds of solutions:

  • Purely client-based, as a Java applet or Adobe Flash applet, neither of which are very good options.
  • JavaScript client + server backend for handling the actual communication with the IRC server.
    • Server backends exist in JavaScript/Node.js, Perl, Python, PHP etc.

Since I cannot run [Node.js], and [cgi:irc] is unportable due to its reliance on UNIX sockets, only Python and PHP remained. Since PHP was easier for me, I tried the old [WebChat2] software developed by Chris Chabot. To achieve connection-oriented encryption security, I wrapped SSL/TLS around WebChat2’s otherwise unencrypted PHP socket server. You can achieve this with cross-platform software like [stunnel], which can essentially wrap SSL around almost any server’s connection (minus the complex FTP protocol maybe). While WebChat2’s back end is based on PHP, the front end uses JavaScript/Comet. This is what it looks like:

So that should do away with the “I don’t wanna install some chat client software” problem, especially when considering that most people these days don’t even know what Internet Relay Chat is anymore. ;) It also allows anonymous visitors on this web log to contact me directly, while allowing for a more tap-proof conversation when compared with what typical commercial solutions would give you (think WhatsApp, Skype and the likes). Well, it’s actually not more tap-proof considering the server operator can still read all communication at will, but I would like to believe that I am a more trustworthy server operator than certain big corporations. ;)
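As for the stunnel wrapping mentioned above, the configuration really is minimal. A sketch of the idea (certificate paths and port numbers are made-up examples, not what XIN.at actually uses):

; stunnel.conf
cert = /path/to/server-cert.pem
key  = /path/to/server-key.pem

[webchat]
accept  = 7778
connect = 127.0.0.1:7777

stunnel then listens on the accept port, handles all the SSL/TLS handshaking, and forwards the decrypted traffic to the plain text port of the PHP socket server.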

Oh, and if you finally do find it in yourself to use some good client software, check out [XChat] on Linux/UNIX and its fork [HexChat] on Windows, or [LimeChat] on MacOS X. There are mobile clients too, like for Android ([AndroIRC], [AndChat]), iOS ([SIRCL], [TurboIRC]), Windows Phone 8 ([IRC Free], [IRC Chat]), Symbian 9.x S60 ([mIRGGI]) and others.

So, all made easy now, whether client software or just web browser! Ah and before I forget it, here’s the link of course:

Edit: Currently, only the following browsers are known to work with the chat (older versions may sometimes work, but are untested):

  • Mozilla FireFox 31+
  • Chromium (incl. Chrome/SRWare Iron) 30+
  • Opera 25+
  • Apple Safari 5.1.7+
  • KDE Konqueror 4.3.4+

The following browsers are known to either completely break or to make the interface practically unusable:

  • Internet Explorer <=11
  • Opera <=12.17
Sep 23 2014
 

CD burning logo At work I usually have to burn a ton of heavily modified Knoppix CDs for our lectures every year or so. The Knoppix distribution itself is built by me and a colleague to get a highly secure, read-only, server-controlled environment for exams and lectures. Now, usually I burn on both a Windows box with Ahead [Nero] and on Linux with the KDE tool [K3B] (despite being a Gnome 2 user), both GUI tools. My Windows box had two burners, my Linux box one. To speed things up and increase disc quality at the same time, the idea was to plug more burners into the machines and burn each individual disc slower, but parallelized.

I was shocked to learn that K3B can actually not burn to multiple burners at once! I thought I was just being blind, stumbling through the GUI like an idiot, but the feature is really not there. Nero on the other hand has managed to do this for what I believe is already the better part of a decade!

True disc burning stations are just too expensive, like 500€ for the smaller ones instead of the 80-120€ I had to spend on a bunch of drives, so what now? Was I building this for nothing?

Poor man’s disc station. Also a shitty photograph, my apologies for that, but I had no real camera available at work.

Well, where there is a shell, there’s a way, right? Being the lazy ass that I am, I was always reluctant to actually use the backend tools of K3B on the command line myself. CD/DVD burning was something I had just always done with a GUI. But now was the time to script that stuff myself, and for simplicity’s sake I just used the bash. In addition to the shell, the following core tools were used:

  • cut
  • grep
  • mount
  • sudo (For a dismount operation, might require editing /etc/sudoers)

Also, the following additional tools were used (most Linux distributions should have them, conservative RedHat derivatives like CentOS can get the stuff from [EPEL]):

  • [eject] (eject and retract drive trays)
  • [sdparm] (read SATA device information)
  • sha512sum (produce and compare high-quality checksums)
  • wodim (burn optical discs)

I know there are already scripts for this purpose, but I just wanted to do this myself. It might not be perfect, or even good, but here we go. The work(-in-progress) is divided into three scripts. The first one is just a helper script that generates a set of checksum files from a master source (image file or disc) that you want to burn to multiple discs later on; I call it create-checksumfiles.sh. We need one file per burner device node, because sha512sum needs that to verify freshly burned discs later, so that’s why this exists:

#!/bin/bash
 
wrongpath=1 # Path for the source/master image is set to invalid in the
            # beginning.
 
# Getting path to the master CD or image file from the user. This will be
# used to generate the checksum for later use by multiburn.sh
until [ $wrongpath -eq 0 ]
do
  echo -e "Please enter the file name of the master image or device"
  echo -e "(if it's a physical disc) to create our checksum. Please"
  echo -e 'provide a full path always!'
  echo -e "e.g.: /home/myuser/isos/master.iso"
  echo -e "or"
  echo -e "/dev/sr0\n"
  read -p "> " -e master
 
  if [ -n "$master" ] && [ -b "$master" -o -f "$master" ]; then
    wrongpath=0 # If device or file exists, all ok: Break this loop.
  else
    echo -e "\nI can find neither a file nor a device called $master.\n"
  fi
done
 
echo -e "\nComputing SHA512 checksum (may take a few minutes)...\n"
 
checksum=`sha512sum "$master" | cut -d' ' -f1` # Computing checksum.
 
# Getting device node name prefix of the users' CD/DVD burners from the
# user.
echo -e "Now please enter the device node prefix of your disc burners."
echo -e "e.g.: \"/dev/sr\" if you have burners called /dev/sr1, /dev/sr2,"
echo -e "etc."
read -p "> " -e devnode
 
# Getting number of burners in the system from the user.
echo -e "\nNow enter the total number of attached physical CD/DVD burners."
read -p "> " -e burners
 
((burners--)) # Decrementing by 1. E.g. 5 burners means 0..4, not 1..5!
 
echo -e "\nDone, creating the following files with the following contents"
echo -e "for later use by the multiburner for disc verification:"
 
# Creating the per-burner checksum files for later use by multiburn.sh.
for ((i=0;i<=$burners;i++))
do
  echo -e " * sum$i.txt: $checksum $devnode$i"
  echo "$checksum $devnode$i" > sum$i.txt
done
 
echo -e ""

As you can see, it gets its information from the user interactively on the shell. It asks the user where the master medium to checksum is to be found, what the user's burner / optical drive devices are called, and how many of them there are in the system. When done, it'll generate a checksum file for each burner device, called e.g. sum0.txt, sum1.txt, … sum<n>.txt.
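
To illustrate (hypothetical device names, hash shortened), such a file and the verification it enables later would look like this:

$ cat sum0.txt
3b64a9f0…<rest of the 128 hex digits>… /dev/sr0
$ sha512sum -c sum0.txt
/dev/sr0: OK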

Now to burn and verify media in a parallel fashion, I’m using an old concept I have used before. There are two more scripts, one is the controller/launcher, which will then spawn an arbitrary amount of the second script, that I call a worker. First the controller script, here called multiburn.sh:

#!/bin/bash
 
if [ $# -eq 0 ]; then
  echo -e "\nPlease specify the number of rounds you want to use for burning."
  echo -e "Each round produces a set of CDs determined by the number of"
  echo -e "burners specified in $0."
  echo -e "\ne.g.: ./multiburn.sh 3\n"
  exit
fi
 
#@========================@
#| User-configurable part:|
#@========================@
 
# Path that the image resides in.
prefix="/home/knoppix/"
 
# Image to burn to discs.
image="knoppix-2014-09.iso"
 
# Number of rounds are specified via command line parameter.
copies=$1
 
# Number of available /dev/sr* devices to be used, starting
# with and including /dev/sr0 always.
burners=3
 
# Device node name used on your Linux system, like "/dev/sr" for burners
# called /dev/sr0, /dev/sr1, etc.
devnode="/dev/sr"
 
# Number of blocks per complete disc. You NEED to specify this properly!
# Failing to do so will break the script. You can read the block count 
# from a burnt master disc by running e.g. 
# 'sdparm --command=capacity /dev/sr*' on it.
blocks=340000
 
# Burning speed in factors. For CDs, 1x = 150KiB/s, 48x = 7200KiB/s, etc.
speed=32
 
#@===========================@
#|NON user-configurable part:|
#@===========================@
 
# Checking whether all required tools are present first:
# Checking for eject:
if [ ! `which eject 2>/dev/null` ]; then
  echo -e "\e[0;33msdparm not found. $0 cannot operate without sdparm, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sdparm:
if [ ! `which sdparm 2>/dev/null` ]; then
  echo -e "\e[0;33msdparm not found. $0 cannot operate without sdparm, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sha512sum:
if [ ! `which sha512sum 2>/dev/null` ]; then
  echo -e "\e[0;33msha512sum not found. $0 cannot operate without sha512sum, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for sudo:
if [ ! `which sudo 2>/dev/null` ]; then
  echo -e "\e[0;33msudo not found. $0 cannot operate without sudo, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m"
  exit
fi
# Checking for wodim:
if [ ! `which wodim 2>/dev/null` ]; then
  echo -e "\e[0;33mwodim not found. $0 cannot operate without wodim, you'll need to install"
  echo -e "the tool before $0 can work. Terminating...\e[0m\n"
  exit
fi
 
((burners--)) # Reducing number of burners by one as we also have a burner "0".
 
# Initial burner ejection:
echo -e "\nEjecting trays of all burners...\n"
for ((g=0;g<=$burners;g++))
do
  eject $devnode$g &
done
wait
 
# Ask user for confirmation to start the burning session.
echo -e "Burner trays ejected, please insert the discs and"
echo -e "press any key to start.\n"
read -n1 -s # Wait for key press.
 
# Retract trays on first round. Waiting for disc will be done in
# the worker script afterwards.
for ((l=0;l<=$burners;l++))
do
  eject -t $devnode$l &
done
 
for ((i=1;i<=$copies;i++)) # Iterating through burning rounds.
do
  for ((h=0;h<=$burners;h++)) # Iterating through all burners per round.
  do
    echo -e "Burning to $devnode$h, round $i."
    # Burn image to burners in the background:
    ./burn-and-check-worker.sh $h $prefix$image $blocks $i $speed $devnode &
  done
  wait # Wait for background processes to terminate.
  ((j=$i+1));
  if [ $j -le $copies ]; then
    # Ask user for confirmation to start next round:
    echo -e "\nRemove discs and place new discs in the drives, then"
    echo -e "press a key for the next round #$j."
    read -n1 -s # Wait for key press.
    for ((k=0;k<=$burners;k++))
    do
      eject -t $devnode$k &
    done
    wait
  else
    # Ask user for confirmation to terminate script after last round.
    echo -e "\n$i rounds done, remove discs and press a key for termination."
    echo -e "Trays will close automatically."
    read -n1 -s # Wait for key press.
    for ((k=0;k<=$burners;k++))
    do
      eject -t $devnode$k & # Pull remaining empty trays back in.
    done
    wait
  fi
done

This one takes one parameter on the command line defining the number of “rounds”. Since I have to burn a lot of identical discs, this makes my life easier. If you have 5 burners and ask the script to go for 5 rounds, that means you get 5 × 5 = 25 discs, if all goes well. It also needs to know the size of the medium in blocks for a later phase; for now you have to specify that within the script. The documentation inside shows you how to get that number, basically by checking a physical master disc with sdparm --command=capacity.
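
For reference, the line the scripts look for in the sdparm output reads like this on a finished disc (the block count here is just an example, remaining output omitted):

$ sdparm --command=capacity /dev/sr0
blocks: 340000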

Other things you need to specify are the path to the image, the image file's name, the device node name prefix, and the burning speed in factor notation. Also, of course, the number of physical burners available in the system. When run, it'll eject all trays, prompt the user to put in discs, and launch the burning & checksumming workers in parallel.

The controller script will wait for all background workers within a round to terminate, and only then prompt the user to replace all discs with new blank media. If this is the last round already, it'll prompt the user to remove the last media set and will then retract all trays by itself at the press of any key. All tray ejection and retraction is done automatically: with all drive trays still empty and closed, you launch the script, it ejects all trays for you, and it retracts them again after a keypress that signals the script that all trays have been loaded, and so on.
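
A full session with the defaults above (3 burners; the numbers are hypothetical) then boils down to:

$ ./create-checksumfiles.sh    # Run once to generate sum0.txt, sum1.txt, sum2.txt.
$ ./multiburn.sh 5             # 5 rounds × 3 burners = 15 discs, if all goes well.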

Let’s take a look at the worker script, which is actually doing the burning & verifying, I call this burn-and-check-worker.sh:

#!/bin/bash
 
burner=$1   # Burner number for this process.
image=$2    # Image file to burn.
blocks=$3   # Image size in blocks.
round=$4    # Current round (purely to show the info to the user).
speed=$5    # Burning speed.
devnode=$6  # Device node prefix (devnode+burner = burner device).
bwait=0     # Timeout variable for "blank media ready?" waiting loop.
mwait=0     # Timeout variable for automount waiting loop.
swait=0     # Timeout variable for "disc ready?" waiting loop.
m=0         # Boolean indicating automount failure.
 
echo -e "Now burning $image to $devnode$burner, round $round."
 
# The following code will check whether the drive has a blank medium
# loaded ready for writing. Otherwise, the burning might be started too
# early when using drives with slow disc access.
until [ "`sdparm --command=capacity $devnode$burner | grep blocks:\ 1`" ]
do
  ((bwait++))
  if [ $bwait -gt 30 ]; then # Abort if blank disc cannot be detected for 30 seconds.
    echo -e "\n\e[0;31mFAILURE, blank media did not become ready. Ejecting and aborting this thread..."
    echo -e "(Was trying to burn to $devnode$burner in round $round,"
    echo -e "failed to detect any blank medium in the drive.)\e[0m"
    eject $devnode$burner
    exit
  fi
  sleep 1 # Sleep 1 second before next check.
done
 
wodim -dao speed=$speed dev=$devnode$burner "$image" # Burning image.
 
# Notify user if burning failed.
if [[ $? != 0 ]]; then
  echo -e "\n\e[0;31mFAILURE while burning $image to $devnode$burner, burning process ran into trouble."
  echo -e "Ejecting and aborting this thread.\e[0m\n"
  eject $devnode$burner
  exit
fi
 
# The following code will eject and reload the disc to clear the device
# status and then wait for the drive to become ready and its disc to
# become readable (checking the discs block count as output by sdparm).
eject $devnode$burner && eject -t $devnode$burner
until [ "`sdparm --command=capacity $devnode$burner | grep $blocks`" = "blocks: $blocks" ]
do
  ((swait++))
  if [ $swait -gt 30 ]; then # Abort if disc cannot be redetected for 30 seconds.
    echo -e "\n\e[0;31mFAILURE, device failed to become ready. Aborting this thread..."
    echo -e "(Was trying to access $devnode$burner in round $round,"
    echo -e "failed to re-read medium for 30 seconds after retraction.)\e[0m\n."
    exit
  fi
  sleep 1 # Sleep 1 second before next check to avoid unnecessary load.
done
 
# The next part is only necessary if your system auto-mounts optical media.
# This is usually the case, but if your system doesn't do this, you need to
# comment the next block out. This will otherwise wait for the disc to
# become mounted. We need to dismount afterwards for proper checksumming.
until [ -n "`mount | grep $devnode$burner`" ]
do
  ((mwait++))
  if [ $mwait -gt 30 ]; then # Warn user that disc was not automounted.
    echo -e "\n\e[0;33mWARNING, disc did not automount as expected."
    echo -e "Attempting to carry on..."
    echo -e "(Was waiting for disc on $devnode$burner to automount in"
    echo -e "round $round for 30 seconds.)\e[0m\n."
    m=1
    break
  fi
  sleep 1 # Sleep 1 second before next check to avoid unnecessary load.
done
if [ ! $m = 1 ]; then # Only need to dismount if disc was automounted.
  sleep 1 # Give the mounter a bit of time to lose the "busy" state.
  sudo umount $devnode$burner # Dismount burner as root/superuser.
fi
 
# On to the checksumming.
echo -e "Now comparing checksums for $devnode$burner, round $round."
sha512sum -c sum$burner.txt # Comparing checksums.
if [[ $? != 0 ]]; then # If checksumming produced errors, notify user. 
  echo -e "\n\e[0;31mFAILURE while burning $image to $devnode$burner, checksum mismatch.\e[0m\n"
fi
 
eject $devnode$burner # Ejecting disc after completion.

So as you can probably see, this is not very polished, the scripts aren't using configuration files yet (would be nice to have), and it's still a bit chaotic when it comes to actual usability and smoothness. It does work quite well however, with the device/disc readiness checking as well as the anti-automount workaround having been the major challenges (now I know why K3B ejects the disc before starting its checksumming: it's simply impossible to read from the disc right after burning finishes without reloading it).
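
Just to sketch the configuration file idea: the user-configurable part of multiburn.sh could be moved into a small file that the script sources, say a hypothetical multiburn.conf:

# multiburn.conf - shared settings, values as in multiburn.sh above:
prefix="/home/knoppix/"
image="knoppix-2014-09.iso"
burners=3
devnode="/dev/sr"
blocks=340000
speed=32

The whole user-configurable block would then shrink to a single line:

. ./multiburn.conf # Source the shared configuration.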

When run, it looks like this (user names have been removed and paths altered for the screenshot):

Multiburner

“multiburner.sh” at work. I was lucky enough to hit a bad disc, so we can see the checksumming at work here. The disc actually became unreadable near its end. Verification is really important for reliable disc deployment.

When using a poor man's disc burning station like this, I would actually recommend putting stickers on the trays like I did. That way, you'll immediately know which disc to throw into the garbage bin.

This could still use a lot of polishing, and it's quite sad that the “big” GUI tools can't do parallel burning, but I think I can make do now. Oh, and I actually also tried Gnome's “brasero” burning tool, and that one is far too minimalistic and can also not burn to multiple devices at the same time. There may be other GUI fatsos that can do it, but I didn't want to try and get any of those installed on my older CentOS 6 Linux, so I just did it the UNIX way, even if not very elegantly. ;)

Maybe this can help someone out there, even though I think there might be better scripts than mine to get it done, but still. Otherwise, it’s just documentation for myself again. :)

Edit: Updated the scripts to implement a proper blank media detection to avoid burning starting prematurely in rare cases. In addition to that, I added some code to detect burning errors (where the burning process itself would fail) and notify the user about it. Also applied some cosmetic changes.

Edit 2: Added tool detection to multiburn.sh, and removed redundant color codes in warning & error messages in burn-and-check-worker.sh.

Article logo based on the works of Lorian and Marcin Sochacki, “DVD.png” licensed under the CC BY-SA 3.0.

Sep 06 2014
 

Game compatibility is generally becoming a major issue on Windows XP and XP x64, and I’m not even talking about Direct3D 10/11 here. Microsoft’s own software development kits and development environments (Visual Studio) come preconfigured in a pretty “anti-XP” way these days, even if you intend to just build Direct3D 9 or OpenGL 4 applications with them.

There are examples where even indie developers building Direct3D 9.0c games refuse to deal with the situation in any way other than “please go install a new OS”, Planetary Annihilation being the prime example. Not so for Grim Dawn though, a project by a former Titan Quest developer, which I [helped fund] on Kickstarter a while back. In their recent build B20, an issue arose that seemed to be linked to XP exclusively. See this:

Grim Dawn B20 if_indextoname() Bug

Grim Dawn B20 if_indextoname() bug, in this case on a German 32-Bit Windows XP. Looks similar on XP x64 though. © by [Hevel].

More information can be seen in the corresponding Grim Dawn [forum thread], where others and I reported the issue, determining in the process that it occurs on XP only. That thread actually covers two issues; just focus on the if_indextoname() one. The function is also documented in [Microsoft’s MSDN library].

The function seems to be related to DNS name resolution and is part of the Windows IP Helper API on Windows Vista and newer. if_indextoname() does however not exist on any NT 5.x system (meaning Windows 2000, XP and Server 2003, which includes XP x64), and there is no fallback DLL. My assumption is that this happened because of the game’s newly added multiplayer netcode.

Now the interesting part: after a few other XP users and I reported the issue starting on the 30th of August, it took the developers only 3 days to roll out a hotfix via Steam, and all was good again! I believe nowadays you can judge developers by how well they support niche systems, and in this case the support was stellar. It may also have something to do with the Grim Dawn developers actively participating in their forums, of course. That’s also great, as you can interact with them directly! No in-between publisher customer support center crap, but actual people who know their stuff, ’cause they’re the ones building it!

So I’d like to say a big “Thank you, and keep up the good work!” here!

Jul 29 2014
 

I thought it impossible, but it is indeed happening. As reported [here] (article in German), the Austrian [Supreme Court of Justice] (not the same thing as the constitutional court) ruled that in cases of massive copyright infringement, enforcement of a nation-wide ban of certain servers is justifiable. If such a ban is decided upon, every Internet provider receives a written court order, which it of course has to obey immediately by putting effective mechanisms in place to block access to a certain host – nation-wide! As far as I know that currently means an IP ban, not a host name ban.

The first round of active bans starts on the 1st of August 2014, when [The Piratebay], [Kinox.to] and [Movie4k.to] will effectively have to be blocked in Austria. This resembles Internet bans as seen behind the “Great Firewall” of China or, in more extreme cases, in North Korea. Naturally, this is not good, as it represents a growing encroachment on our Internet freedoms.

I remember that when I was in China, I kept an SSH2 server and an HTTP proxy running at home, so I could tunnel home through the encrypted SSH2 connection and then use the local HTTP proxy over it to access all of the Internet, because I knew access back home would be free and untampered with. Now it seems people here are getting ready to do the same thing and use encrypted virtual private networks (VPNs) or the Tor network to reach such sites via foreign exit nodes.
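
The tunnel part of that is really just a one-liner, sketched here with placeholder names (assuming e.g. a Squid HTTP proxy listening on port 3128 on the box at home):

# Forward local port 3128 to the HTTP proxy at home through encrypted SSH:
ssh -N -L 3128:localhost:3128 user@home.example.org
# Then point the browser's HTTP proxy setting at localhost:3128.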

This is madness!

Austria was supposed to be a (relatively) free country, where Internet services cannot be banned simply because of certain potentials they may offer.

Naturally, Internet Service Providers – even the largest ones – have protested sharply against this and even sued to have this insanity thwarted, but unfortunately they lost their case. It seems that despite several victories achieved on the front lines of a free Internet, we’re heading into stormy waters now. ISPs also said that they’d be wrongfully pushed into the role of judges, because it would be up to them to decide whether a web site’s principal purpose is copyright infringement, thus justifying the ban (that’s the weird part).

Naturally, driving users towards anonymization networks and VPNs only serves to further criminalize the use of those services too, giving them a bad name and making the problem worse.

Soon we will be witnessing the first stones being laid for the foundation of a Great Firewall of Austria. The 1st of August will not be a good day, not at all.

Update: It seems that the matter is being discussed and renegotiated at the moment. As a result, the requested bans have been pushed back to an undefined point in time. So for now, all three sites I mentioned remain reachable within Austria. I will keep you updated as soon as any news about this surfaces.

Update 2, 2014-10-08: And here we go, the VAP (“Verein für Anti-Piraterie”, or anti-piracy association) did it. There are now DNS blockades in place for the domains kinox.to and movie4k.to. Querying any DNS server of the providers A1, Drei, Tele2 or UPC for those domain names will result in the IP address “0.0.0.0”, thus rendering the web sites inaccessible for any “normal” user. The VAP has now even been asking for an IP ban, which could easily affect multi-service machines (think email servers using the same IP) and also virtual hosts, where multiple websites/domains are hosted on a single machine, or any high-availability cluster of machines that does not use DNS-level clustering (well, you wouldn’t be able to resolve the DNS name anymore anyway).
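
Such a DNS blackhole is easy to see for yourself (the provider resolver address below is a placeholder; 8.8.8.8 is Google's public resolver):

$ dig +short movie4k.to @192.0.2.53   # Placeholder for an ISP resolver: blackholed.
0.0.0.0
$ dig +short movie4k.to @8.8.8.8      # Public resolver: returns the real address.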

Users have reacted by simply using other, free DNS servers on the web, and the site operators have reacted by using alternate top-level domains, like movie4k.tv for instance. It seems the war is on. Providers like UPC have stated in public interviews that this process is ethically questionable, as people in power may soon learn what kind of tool such censorship could be for them – potentially eliminating criticism or any publication that they’d rather see gone.

I would like to add – once again – that I find it highly disturbing that a supposedly free country like Austria would implement measures reminiscent of things that happen in Turkey, China or North Korea…

Oh, and The Pirate Bay is next, it seems.