May 29, 2014
 

Just recently, I was happily hacking away at the Truecrypt 7.1a source code to enhance its abilities under Linux, and everybody was eagerly awaiting the next version of the open source disk encryption software since the developers had told me they were working on “UEFI+GPT booting”. And now: BOOM. Truecrypt website gone, forum gone, all former versions’ downloads gone. Replaced by a redirection to Truecrypt’s SourceForge site, showing a very primitive page telling users to migrate to Bitlocker on Windows and Filevault on Mac OS X. Linux users are just told to “install some crypto stuff on Linux and follow the documentation”.

Seriously, what the fuck?

Just look at this shit (a snippet from the OSX part):

The Truecrypt website now

Further up they say the same thing, warning the user that Truecrypt is not secure, with the following addition: “as it may contain unfixed security issues”.

There is also a new Truecrypt version 7.2, stripped of most of its functionality. It can only decrypt and mount volumes anymore, so this is their “migration version”. Funny thing is, the GPG signatures and keys seem to check out. It truly was the Truecrypt developers’ keys that were used for signing the binaries.
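If you want to check that yourself: verification with GnuPG is a one-liner, assuming you have the TrueCrypt public key imported and the detached signature downloaded next to the file (file names here are just examples):

# Verify the detached signature against the downloaded binary:
gpg --verify truecrypt-7.2-setup.sig truecrypt-7.2-setup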

If you try to get a screenshot of the old web site for comparison from the WayBackMachine, you get this:

Can’t fetch http://www.truecrypt.org from the WayBackMachine. Access denied.

Now, before I give you the related links, let me sum up the current theories as to what might have occurred here:

  • http://www.truecrypt.org has been attacked and compromised, along with the SourceForge account (denied by SourceForge administrators at the moment) and the signing keys.
  • A 3-letter agency has put pressure on the Truecrypt foundation, forcing them to implement a back door. The devs burn the project instead.
  • The Truecrypt developers had enough of the pretty lacking donation support from the community and just let it die.
  • The crowdfunded Truecrypt Audit project found something very nasty (seems not to be the case according to auditors).
  • Truecrypt was an NSA project all along, and maintenance has become tedious. So they tell people to migrate to NSA-compromised solutions that are less work, as they don’t have to write the code themselves (Bitlocker, Filevault). Or, maybe an unannounced NSA backdoor was discovered after all. Of course, any compromise of commercial products stands unproven.

Here are some links from around the world, including statements by cryptographers who are members of the Truecrypt audit project:

If this is legit, it’s really, really, really bad. One of the worst things that could’ve happened. Ever. I pray that this is just a hack/deface and nothing more, but it sure as hell ain’t looking good!

There is no real cross-platform alternative, Bitlocker is not available to all Windows users, and we may be left with nothing but a big question mark over our heads. I hope that more official statements will come, but given the clandestine nature of the TC developers, this might never happen…

Update: This is starting to look more and more legit. So if this is truly the end, I will dearly miss the Truecrypt forum. Such a great community with good, capable people. I learned a lot there. So Dan, nkro, xtxfw, catBot/booBot, BeardedBlunder and all you many others whose nicks my failing brain cannot remember: I will likely never find you guys again on the web, but thanks for all your contributions!

Update 2: Recently, a man named Steve Barnhart, who had been in contact with Truecrypt auditor Matthew Green, said that a Truecrypt developer named “David” had told him via email that whichever developers were still left had lost interest in the project. The conversation can be read [here]!

I once got a reply from a Truecrypt developer in early 2013 myself, when asking about the state of the UEFI+GPT bootloader code…

I just dug up that email from my archive, and the address contained the full name of the sender. And yes, it was a “David”. This could very well be the nail in the coffin. Sounds as if it was truly not the NSA this time around.

Apr 28, 2014
 

Just recently, a user from the Truecrypt forums reported a problem with Truecrypt on Linux that puzzled me at first. It took quite some digging to get to the bottom of it, which is why I am posting this here. See [this thread] (note: the original TrueCrypt project is dead by now, and its forums are offline) and my reply to it. Basically, it was all about not being able to mount more than 8 Truecrypt volumes at once.

So, Truecrypt uses the device mapper and loop device nodes to mount its encrypted volumes, after which the file systems inside those volumes are mounted. The user, called iapx86 in the Truecrypt forums, had quite a few containers lying around that he wanted mounted simultaneously, and did just that successfully until he hit a limit with the 9th volume he attempted to mount.

At first I thought this was some weird Truecrypt bug. Then I searched for the culprit within the Linux device mapper, until it finally hit me: it had to be the loop device driver. Actually, Truecrypt had even told me so already, see here:

Truecrypt reporting an error related to a loop device

After just a bit more research it became clear that the loop driver is what sets this limit. So the first step is to find out whether your system uses a loop module at all, as ArchLinux does for instance. Other distributions like RedHat Enterprise Linux and its descendants such as CentOS, or SuSE and so on, have it compiled directly into the Linux kernel. First, see whether you have the module:

lsmod | grep loop

If your system is very modern, the loop.ko module might not have been loaded automatically. In that case, please try the following as root or with sudo:

modprobe loop

In case this works (more below if it doesn’t), you may adjust the loop driver’s options at runtime, one of which is the limit on the number of loop device nodes that may exist at once. In this example, we’ll raise it from below 10 to a nice value of 128. Run this command as root or with sudo:

sysctl loop.max_loop=128

Now retry, and you’ll most likely find this working. To make the change permanent, you can either add loop.max_loop=128 as a kernel option to your boot loader’s kernel line (in the file /boot/grub/grub.conf for grub v1, for instance), or you can edit /etc/sysctl.conf, adding a line saying loop.max_loop=128. In both cases the change should be applied at boot time.
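By the way, you can check the limit currently in effect before and after the change. This is a quick check that assumes the loop driver was built as a module, so that its parameters show up in sysfs:

# Print the active max_loop value:
cat /sys/module/loop/parameters/max_loop
# Count the loop device nodes that currently exist:
ls /dev/loop* | wc -l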

If you cannot find any loop module: in case modprobe loop only tells you something like “FATAL: Module loop not found.”, the loop driver has likely been compiled directly into your Linux kernel. As mentioned above, this is common for several Linux distributions. In that case, changing the loop driver’s options at runtime becomes impossible, and you have to take the kernel parameter road. Just add the following to the kernel line in your boot loader’s configuration file, and mind the missing loop. in front: we’re no longer dealing with a kernel module option here, but with a kernel option:

max_loop=128

You can add this right after all other options in the kernel line, in case there are any. On a classic grub v1 setup on CentOS 6 Linux, this may look somewhat like this in /boot/grub/grub.conf just to give you an example:

title CentOS (2.6.32-431.11.2.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.11.2.el6.x86_64 ro root=/dev/sda1 LANG=en_US.UTF-8 crashkernel=128M rhgb quiet max_loop=128
        initrd /initramfs-2.6.32-431.11.2.el6.x86_64.img

You will need to scroll horizontally above to see all the options given to the kernel, max_loop=128 being the last one. After this, reboot, and your machine will be able to use up to 128 loop devices. In case you need even more than that, you can raise the limit even further, to 256 or whatever you need. There may be some upper hard limit, but I haven’t experimented enough with this. People have reported at least 256 to work, and I guess very few people would need even more on single-user workstations with Truecrypt installed.

In any case, the final result should look like this. Here we can see 10 devices mounted, which is just the beginning of what is now possible when it comes to the number of volumes you may use simultaneously:

With the loop driver reconfigured, Truecrypt no longer complains about running out of loop devices

And that’s that!

Update: Ah, I totally forgot that iapx86 on the Truecrypt forums also wanted to have more slots available, because Truecrypt limits the mounting slots to 64. Of course, if your loop driver can give you 128 or 256 or however many devices, we wouldn’t want Truecrypt to impose yet another limit on us, right? This one is tricky however, as we need to modify the Truecrypt source code and recompile it ourselves. The code we need to modify sits in Core/CoreBase.h. Look for this part:

virtual VolumeSlotNumber GetFirstFreeSlotNumber (VolumeSlotNumber startFrom = 0) const;
virtual VolumeSlotNumber GetFirstSlotNumber () const { return 1; }
virtual VolumeSlotNumber GetLastSlotNumber () const { return 64; }

What we need to alter is the constant returned by the C++ virtual function GetLastSlotNumber(). In my case I just tried 256, so I edited the code to look like this:

virtual VolumeSlotNumber GetFirstFreeSlotNumber (VolumeSlotNumber startFrom = 0) const;
virtual VolumeSlotNumber GetFirstSlotNumber () const { return 1; }
virtual VolumeSlotNumber GetLastSlotNumber () const { return 256; }

Then I recompiled Truecrypt from source with make. Keep in mind that you need to download the PKCS #11 headers first, right into the directory where the Makefile sits:

wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11.h
wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11f.h
wget ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11t.h
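With the headers in place, the actual build and install boil down to just a few commands (a minimal sketch; the target path is only an example, as mentioned below):

make
su
cp Main/truecrypt /usr/local/bin/truecrypt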

Also, please pay attention to the dependencies listed in the Linux section of the Readme.txt that you get after unpacking the source code, just to make sure you have everything Truecrypt needs to link against. When compiled and linked successfully by running make in the directory where the Makefile is, the resulting binary will be Main/truecrypt. Copy that over your existing binary as shown above, which may for instance be called /usr/local/bin/truecrypt, and you should get something like this:

More than 64 mounting slots for Truecrypt on Linux

Bingo! Looking good. Now I won’t say this is perfectly safe and that I’ve tested it inside and out, so do this strictly at your own risk! For me it seems to work fine. Your mileage may vary.

Update 2: Since this kinda fits in here, I decided to show you how I tested this. Creating many volumes to check whether the modifications worked required an automated process; after all, we need more than 64 volumes created and mounted! Since Truecrypt does not allow for automated volume creation without manual user interaction, I used the Tcl-based expect system to interact with the Truecrypt dialogs. At first I would feed Truecrypt a static string for the 320 characters of entropy data it requires. But thanks to the help of some capable and helpful coders on [StackOverflow], I can now present one of their solutions (I’m gonna pick the first one provided).

I split this up into 3 scripts, which is not overly elegant, but works for me. The first script, which I shall call tc-create-master.sh, just runs a loop. The user can specify the number of iterations in the script; that number represents the number of volumes to be created. In the loop, a Tcl expect script is called many times to create the actual volumes. That expect script I called tc-create-worker.sh. Master first. As you can see, I started with 128 volumes, which is twice as many as the vanilla Truecrypt binary would allow you to use, and 16 times as many loop devices as the Linux kernel would usually let you create at the same time:

#!/bin/bash
# Define number of volumes to create here:
n=128
 
# Note: {1..$n} doesn't expand in bash (brace expansion happens
# before variable expansion), so we use seq instead:
for i in $(seq 1 $n)
do 
  ./tc-create-worker.sh $i 
done

And this is the worker written in Tcl/expect, enhanced thanks to StackOverflow users. This one is platform-independent too, unlike my crappy original that you can still see on StackOverflow and in the Truecrypt forums:

#!/usr/bin/expect
 
proc random_str {n} {
  for {set i 1} {$i <= $n} {incr i} {
    append data [format %c [expr {33+int(94*rand())}]]
  }
  return $data
}
 
# Volume number passed in by the master script:
set num [lindex $argv 0]
set entropyfeed [random_str 320]
puts $entropyfeed
spawn truecrypt -t --create /home/user/vol$num.tc --encryption=AES --filesystem=none --hash=SHA-512 --password=test --size=33554432 --volume-type=normal
expect "\r\nEnter keyfile path" {
  send "\r"
}
expect "Please type at least 320 randomly chosen characters and then press Enter:" {
  send "$entropyfeed\r"
  exp_continue
}

This part first generates a random floating-point number between 0.0 and 1.0. Then it multiplies that by 94, yielding something in the range 0..93.999…. Then it adds 33 and truncates the fractional part, resulting in random integers in the range 33..126. Now how does this make any sense? We want to generate random printable characters to feed to Truecrypt as entropy data!? Well, in the 7-bit ASCII table, the range of printable characters starts at number 33 and ends at number 126. For example, rand() = 0.5 gives 33 + int(47.0) = 80, which is the character “P”. And that’s what format %c does with those numbers, converting them according to their position in the ASCII table. Cool, eh? I thought that was pretty cool.

The rest is just waiting for Truecrypt’s dialogs and feeding the proper input to them: first just an <ENTER> keypress (\r), and second the entropy data plus another <ENTER>. I chose not to let Truecrypt format the containers with an actual file system, to save some time and make the process faster. Now, mounting is considerably easier:

#!/bin/bash
# Define number of volumes to mount here:
n=128
 
# Same trick as above, since {1..$n} doesn't expand with a variable:
for i in $(seq 1 $n)
do
  truecrypt -t -k "" --protect-hidden=no --password=test --slot=$i --filesystem=none vol$i.tc
done

Unfortunately, I lost more speed here than I had gained by not formatting the volumes, because I chose the expensive and strong SHA-512 hash algorithm for my containers. RIPEMD-160 would’ve been a lot faster. As it is, mounting took several minutes for 128 volumes, but oh well, still not too excessive. Unmounting all of them at once is a lot faster and can be achieved by running the simple command truecrypt -t -d. Since no screenshot I can make could prove that I can truly mount 128 volumes at once (unless I forge one by concatenation), it’s better to let the command line show it:

[thrawn@linuxbox ~]$ echo && mount | grep truecrypt | wc -l
 
128

Nah, forget that, I’m gonna give you that screenshot anyway. Stitched or not, it’s too hilarious to skip. Oh, and don’t mind the strange window decorations; that’s because I opened Truecrypt on Linux via remote SSH+X11 on a Windows XP x64 machine with a GDI UI:

Mounting a lot more volumes than the Linux Kernel and/or Truecrypt would usually let us!

Oh man… I guess the sky’s the limit. I just wonder why some of those limits (8 loop devices maximum in the Linux kernel, 64 mounting slots in Truecrypt) are actually in place by default…

Edit: Actually, the limit is now confirmed to be 256 volumes, due to a hard-coded limit in the loop.ko driver. Well, still better than the 8 we started with, eh?

Jan 13, 2014
 

[Red Hat Enterprise Linux], being one of the most commercially successful Linux systems, naturally attracts a lot of attention and forks. There is [Scientific Linux] on one side, catering to the needs of the academic community, then the quickly developing [Fedora Linux], which is where Red Hat basically gets upcoming software tested, and finally [CentOS], the “Community Enterprise Operating System”, which is basically just the desktop+server version of Red Hat Enterprise Linux (RHEL), copied with all Red Hat logos etc. removed and replaced. Fedora is already under the wings of Red Hat, and now the CentOS project is too. According to [this announcement] from the 7th of January, CentOS is joining Red Hat as its free desktop & server Linux sibling.

The CentOS project itself is advertising this major change using the slogan “New Look – New CentOS”, which is also reflected in their radically reworked web site design:

The new look of the CentOS web site

Let me quote their own announcement on what changes we can expect now.

What won’t change:

  • The CentOS Linux platform isn’t changing. The process and methods built up around the platform however are going to become more open, more inclusive and transparent.
  • The sponsor driven content network that has been central to the success of the CentOS efforts over the years stays intact.
  • The bugs, issues, and incident handling process stays as it has been with more opportunities for community members to get involved at various stages of the process.
  • The Red Hat Enterprise Linux to CentOS firewall will also remain. Members and contributors to the CentOS efforts are still isolated from the RHEL Groups inside Red Hat, with the only interface being srpm / source path tracking, no sooner than is considered released. In summary: we retain an upstream.

What will change:

  • Some of us now work for Red Hat, but not RHEL. This should not have any impact to our ability to do what we have done in the past, it should facilitate a more rapid pace of development and evolution for our work on the community platform.
  • Red Hat is offering to sponsor some of the buildsystem and initial content delivery resources – how we are able to consume these and when we are able to make use of this is to be decided.
  • Sources that we consume, in the platform, in the addons, or the parallel stacks such as Xen4CentOS will become easier to consume with a git.centos.org being setup, with the scripts and rpm metadata needed to create binaries being published there. The Board also aims to put together a plan to allow groups to come together within the CentOS ecosystem as a Special Interest Group (SIG) and build CentOS Variants on our resources, as officially endorsed. You can read about the proposal at [http://www.centos.org/variants/].
  • Because we are now able to work with the Red Hat legal teams, some of the constraints that resulted in efforts like CentOS-QA being behind closed doors now go away, and we hope to have the entire build, test, and delivery chain open to anyone who wishes to come and join the effort.
  • The changes we make are going to be community inclusive, and promoted, proposed, formalised, and actioned in an open community centric manner on the centos-devel mailing list. And I highly encourage everyone to come along and participate.

So there you have it. Of course there will be many finer details about all this, but if you want to learn about those, I would like to point you to the CentOS community instead:

Jan 11, 2014
 

Yep, it’s SteamOS time again. Last time I tried to cover its basic operation, running Steam itself and a few independently developed Steam games. But what I felt was still lacking was support for your average Linux application. Don’t get me wrong, SteamOS isn’t meant as a general purpose desktop operating system, I know that, but since the game selection is still somewhat limited, you might want to look into other things you can do with this OS. Initially, the system is configured to fetch binary packages only from Valve’s own server. Let’s have a look at /etc/apt/sources.list, which defines the package sources for Debian’s apt package manager:

## internal SteamOS repo
deb http://repo.steampowered.com/steamos alchemist main contrib non-free
deb-src http://repo.steampowered.com/steamos alchemist main contrib non-free

So as you can see, everything you install using apt will come from the Steam server itself. And let me just say that ain’t very much. Last time I mentioned that SteamOS is a modified Debian 7.1, so now we’re going to try and see how compatible the userlands of the two operating systems really are. As user root, let’s add the following package sources for Debian stable, generated by [this web tool], to /etc/apt/sources.list:

## Debian repos:
deb http://ftp.at.debian.org/debian stable main contrib non-free
deb-src http://ftp.at.debian.org/debian stable main contrib non-free

deb http://ftp.debian.org/debian/ wheezy-updates main contrib non-free
deb-src http://ftp.debian.org/debian/ wheezy-updates main contrib non-free

deb http://security.debian.org/ wheezy/updates main contrib non-free
deb-src http://security.debian.org/ wheezy/updates main contrib non-free

(This will become invalid as soon as Jessie becomes the next stable Debian release, replacing Wheezy. If you want to stay on Wheezy with SteamOS, change the “stable” strings to “wheezy”)

After that, also as root, run the command aptitude update to make apt aware of the newly added package sources. Then I tried to install a few typical applications, starting with a seriously fat one, LibreOffice, just like that: aptitude install libreoffice. This one was tricky, as it somehow depends on the proprietary AMD/ATi video card Xorg driver fglrx. Luckily, aptitude can suggest alternative routes to solve dependencies, and one is to just replace the driver with the one from Debian. This might be suboptimal, but it works.

On top of that – and much more flawlessly so – I installed the Filezilla FTP/SFTP client, The Gimp for graphical content creation, and even Wine, in the same simple way. Also, I did something really wrong, vile and perverted with Wine: installing the Apple Safari browser in its MS Windows version on SteamOS Linux. ;)

See the following screenshots:

So, I assume the answer is: yes, you can turn SteamOS into a more versatile platform rather easily. There may be caveats, and you may actually wreck your system doing stuff like that, though. So whenever you’re installing new software using apt-get or aptitude, you should pay very close attention to what the package manager is telling you. Otherwise you may accidentally remove or break important packages on your system, the worst case being an unbootable machine.
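One way to stay on the safe side is to do a dry run first. aptitude’s -s switch only simulates the operation and shows what would be installed or removed, without touching anything:

# Simulate only; nothing is actually installed or removed:
aptitude -s install libreoffice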

Real Steam Machines will of course ship with a recovery medium to restore the OS to factory condition, but I feel the risks should be mentioned at least. Oh, and running Apple Safari on Wine 1.4.1 really sucks by the way, it crashes all the time. ;) Too bad Debian doesn’t seem to have Wine 1.7. But smaller and older Windows tools should work rather fine.

So yeah, this opens up SteamOS to a wider range of possibilities. In my opinion, Valve should include the Debian package repositories to begin with, or rather mirror them to their own servers and protect their own modifications using repository priorities, so users can’t mess things up too easily.
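Just to illustrate what I mean by repository priorities: apt can pin one repository above another via /etc/apt/preferences. This is only a hypothetical sketch of what Valve could ship, assuming their existing repository host as the pin origin:

## /etc/apt/preferences: always prefer Valve's own packages
Package: *
Pin: origin "repo.steampowered.com"
Pin-Priority: 1001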

On top of that – if I were a project leader there – I’d make my team develop something like the graphical apt frontend “synaptic” in a simpler fashion, maybe like PC-BSD’s “AppCafe”, so that people can install software in a super easy way.

I hope Valve is aware of its OS needing some more software variety for the targeted user audience. Most of the people buying Steam Machines are probably not experienced Linux users, so an easier way would be a good idea…

Jan 09, 2014
 

I’m usually willing to try and play around with a new operating system, and I’ve explored many Linux distributions, BSD UNIX distributions, OpenSolaris, Haiku OS, Windows etc. by now. Now I wanted to give Valve’s SteamOS a chance to show me what it can do. The [Steam] platform is still the leading platform for digital game distribution, besides services like Electronic Arts’ [Origin] or [GOG]. Unlike EA, who focus on Windows exclusively, Valve Software dared to move its client application onto Mac OS X and Linux as well. While rather limited to Ubuntu 12.04 LTS and higher, it was still an important move for gaming on Linux.

Sure, far fewer games are available on Linux than on Windows, but some AAA titles like X3, Serious Sam 3: BFE or Metro: Last Light are actually available amongst the many indie games. Some use a packaged Wine distribution, some are native ELF binaries.

Now Valve wants to let OEMs like Alienware, Falcon Northwest etc. build dedicated [Steam Machines] (PDF) that will ship with SteamOS (and Steam, naturally) preinstalled. Since SteamOS is not based on Ubuntu Linux, but rather on Debian 7.1 Wheezy, a Linux compatibility layer – likely just a set of libraries – has been included to ensure that Linux-capable Steam games targeted at the Ubuntu platforms can still work out of the box.

The original “Steam Piston” prototype machine

But the funny thing: users may just modify SteamOS to their liking, as it will be distributed freely by Valve. Well, it’s a Debian fork after all! So you can already get the official [beta version], or some modified ones that loosen restrictions like the UEFI+GPT booting requirement a little, and build your own Steam Machine if you so desire. Now I had this running on VirtualBox with 3D passthrough via OpenGL enabled. It sucks, but I can at least show you a few indie OpenGL games running, plus my notorious x264 benchmark:

The only game severely misbehaving was “Hotline Miami”, as it would make the whole display manager constantly switch resolutions, rendering the system almost unusable. Other games had problems running in fullscreen, all probably courtesy of the VirtualBox Xorg driver. I suppose that with a native machine and a “real” graphics card driver like nvidia or radeon/fglrx, these problems would disappear.

As for other software, the Valve apt repository does give you a few things, most importantly stuff like their modified Linux kernel source code, but if you’re looking for a wide range of applications, like maybe Mozilla Thunderbird or LibreOffice, all that stuff is simply not there yet. You do get a compiler plus toolchain, but what about easy binary package installation? It may be possible to just add the Debian 7 repositories and install stuff from there, but I haven’t attempted that yet. I’m not sure whether that would be safe, as Valve backported a newer kernel and, more importantly, a newer libc to SteamOS. But I’m gonna try soon and let you know how it worked out.

Maybe I’ll also come up with an installation guide for the x264 benchmark on SteamOS, although it’s naturally quite similar to any other Linux system, given that an almost complete build toolchain is provided by Valve.

Now, given the still meager availability of top-tier games on Linux, the BIG question is: Can Valve succeed with their “Steam Machines” and really make game developers finally go for Linux, or will they fail with the big commercial game studios? Time will tell, but for the first time there is a really big player going for it, so we can have some hope at least, even though some people including myself have reservations about the trustworthiness of Steam. But if this works out, others will surely follow, and we might see bigger Linux game releases even outside of Steam once again!

Oct 10, 2013
 

Recently, a friend from a forum asked about what he would need to be cautious about when installing Windows 98 on a flash medium, like an SSD or, in his case, an IDE CompactFlash card. While this is the extreme case, people tend to ask such questions for Windows XP and XP x64 or other older systems too. So I thought I’d show you, using this Win98 + CF card case as an example. Most of it applies similarly to more “modern” systems like Windows 2000, XP, Linux 2.4 etc. anyway.

For this, we are going to use the bootable CD version of [gparted]. In case you do not know gparted yet, you can think of it as an open source PartitionMagic that you can also boot from CD or USB. If you know other graphical partition managers, you will most likely feel right at home with gparted. Since this article is partially meant as a guide for my friend, parts of the screenshots will show German text, but it shouldn’t be too much of a problem. First, let’s boot this thing up from CD (or USB key if you have no optical drive). See the following series of screenshots, it’s pretty self-explanatory:

Now that we’re up and running, you will be presented with a list of available drives, accessible at “GParted / Devices”. In our case we don’t need to pick one, as we only have a single device, a 16GB CompactFlash card, as you will see below. Since this is a factory-new device, we shall assume that it is unformatted and unpartitioned. If it IS pre-formatted, you would need to check the given alignment first, see below to understand what I mean. But you can also just delete whatever partitions are on there, if you don’t need any data from the device, and proceed with this guide afterwards.
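If you want to check the alignment of an existing partition from a running Linux, by the way, here is a quick sketch; sdX and sdX1 are placeholders for your actual device and partition:

# Start sector of the partition; for 1MiB alignment it must be
# divisible by 2048 (2048 sectors x 512 bytes = 1MiB):
start=$(cat /sys/block/sdX/sdX1/start)
echo $(( start % 2048 ))   # prints 0 if the partition is aligned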

What we will do is create a properly aligned partition, so we won’t create unnecessary wear on our NAND flash and won’t sacrifice performance. This is especially critical for slower flash media, because if you install an operating system onto one, you’d really feel the pain with random I/O on a misaligned partition (seek performance drops, far too large reads/writes occur etc.). Just follow these steps to partition the device and pre-format it with FAT32. If you’re on Windows 2000 or newer, you may want to choose NTFS instead; for Linux, choose whatever bootable file system your target distribution understands, like maybe JFS or something. If your flash medium supports TRIM (newer SATA SSDs do), choose EXT4, XFS or BTRFS instead; for BSD UNIX pick UFS or ZFS, if you are ok with those:

This is it: our partition starts at sector number 2048 (1MiB, or 1MB as I tend to say), which will work for all known flash media. If you expect your medium to have larger blocks, you could set the start to sector 20480 instead, which would mean 10MB. I don’t think there are media out there with blocks THAT large, but by doing this you can be 100% certain that it’ll be aligned. Time to shut gparted down:

After that, you may still re-format the partition from DOS or legacy Windows, but make sure not to re-partition the disk from within a legacy operating system, so no fdisk and no disk management partitioning stuff in Win2000/XP!

On a side note, FAT32 file systems as shown here can’t be [TRIM]’ed on Windows, so when the medium accumulates enough write cycles, writes will permanently slow down, with garbage collection maybe helping a little bit. You could hook that FAT32-formatted disk up to a modern Linux and TRIM it there though, if it’s a new enough SATA drive. In my friend’s case that’s not an issue anyway, as he is using IDE/PATA flash that doesn’t support the TRIM command to begin with. But if you do have a modern enough SATA system and a TRIM-capable SSD, you might want to go with NTFS for Windows, EXT4/BTRFS/XFS for Linux, or UFS/ZFS for BSD UNIX if you can, as those file systems are supported by current TRIM implementations. Or yeah, FAT32 for data exchange between operating systems. Keep in mind though: to TRIM FAT32 SATA SSDs, Linux is required at the time of this writing.
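On such a modern Linux, the manual TRIM itself is a one-liner using fstrim from util-linux, assuming the whole chain (drive, controller, kernel) supports the discard command; the mount point is just an example:

# Trim all unused blocks on the mounted file system, verbosely:
fstrim -v /mnt/flashdisk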

And no, no IDE/PATA media can say “TRIM” as far as I know.

Also: you can re-align existing partitions with gparted in a somewhat similar fashion as shown above, by just editing them. This may be useful if you messed up with DOS or WinXP and have already installed your system on a misaligned partition. Gparted will then have to re-align the entire file system on that partition though, and that may take hours; e.g. for my Intel 320 600GB SSD (about 400GB full) it took almost 3 hours. To see which file systems gparted supports for re-alignment, [look here] (it’s mostly “shrink” that is required)!

Sep 19, 2013
 

For 2 or 3 years now, I have been building a Knoppix-based Linux-on-CD distribution for teaching (exercises and exams) at our university. A colleague of mine has written the proper server/client tools for source code submission and evaluation, while I have adapted and hardened the system for our needs, based on tons of modifications on the CD and also on the server side, from where a lot of local behavior is controlled by server-side scripts. These can dis-/enable the network or USB ports on clients, but can also shut clients down or verify the media to effectively prevent CD forgery (it’s pretty cool to see the students try; so far they have never come even remotely close to breaking it).

Now I am supposed to ensure that the system can use sound, as it seems some Java-based exercises may contain sound creation and playback in the future. To make sure the system can support new sound chips (which it currently does not) and potentially also new GPUs, I tried to hack a new Linux kernel into the system to replace the good old 2.6.32.6, which had the ALSA sound system built in, so I couldn’t update it as a kernel module. Also, updating the whole 6.2.1 thing to Knoppix 7 is just not an option, as it would be far too much work. The kernel update it had to be.

And so the adventure began…


Sep 10, 2013
 

With all that talk about the [National Security Agency] stealing our stuff (especially our most basic freedoms), it was time to look at a few things that Mr. Snowden and others before him have found out about how the NSA actually attempts to break certain encryption ciphers present in OpenSSL’s and GnuTLS’s cipher suites. Now that it has been clearly determined that an NSA listening post has been established in Vienna, Austria (protestors are on the scene), it seems a good time to look over a few details here, especially now that the vulnerabilities are widely known and potentially exploitable by other perpetrators.

I am no cryptologist, so I won’t try to convince you that I understand this stuff. But from what I do understand, there is a side-channel attack vulnerability in certain CBC-based cipher suites, like for instance AES256-CBC-SHA or RSA-DES-CBC-SHA. I don’t know exactly what is vulnerable, but whoever listens closely on one of the endpoints (client or server) of such a connection may determine crucial information by looking at the connection’s timing information, which is the side channel. Plus, there is another vulnerability concerning the Deflate protocol compression in TLS, which you shouldn’t confuse with stuff like mod_deflate in Apache, as this “Deflate” exists within the TLS protocol itself.

As most client systems – especially mobile operating systems like Android, iOS or Blackberry OS – are compromised and backdoored, it is quite possible that somebody is listening. I’m not saying “likely”, but possible. By hardening the server, the possibility of negotiating a vulnerable encrypted connection becomes zero – hopefully at least. :roll:

Ok, I’m not going to say “this is going to protect you from the NSA completely”, as nobody can truly know what they’re capable of. But it will make you more secure, as some vulnerable connections will no longer be allowed, and compromised/vulnerable clients are safe as long as they connect to a properly configured server. Of course you may also lock down the client by updating your browser for instance, as Firefox and Chrome have been known to be affected. But for now, the server side.

I am going to discuss this for the Apache web server specifically, but it’s equally valid for other servers, as long as they’re appropriately configurable. First, make sure your Apache supports the SSL/TLS compression directive SSLCompression [on|off]. Apache web servers starting from 2.2.24 or 2.4.3 should have this directive. Also, you should use [OpenSSL >=1.0] (link goes to the Win32 version, for *nix check your distribution’s package sources) to be able to use SSLCompression and also the more modern TLSv1.1 and TLSv1.2 versions. If your server is new enough and properly SSL-enabled, please check your SSL configuration, either in httpd.conf or in a separate ssl.conf included from httpd.conf, which is what some installers use as a default. You will need to change the SSLCipherSuite directive to disallow the vulnerable block ciphers, disable SSL/TLS protocol compression, and a few more things. Also make sure NOT to load mod_deflate, as this opens up similar loopholes as the defaults of the SSL/TLS protocols themselves do!

Edit: Please note that mixing Win32 versions of OpenSSL >=1.0 with the standard Apache version from www.apache.org will cause trouble, so a drop-in replacement is not recommended for several reasons, two being that that Apache version is linked against OpenSSL 0.9.8* (breaking TLS v1.1/1.2) and also built with a VC6 compiler, whereas OpenSSL >=1.0 is built with at least a VC9 compiler. Running all-VC9 binaries (Apache+PHP+SSL) only works on NT 5.1+ (Windows XP/2003 or newer), so if you’re on Win2000 you’ll be stuck with older binaries, or you’ll need to accept stability and performance issues.

Edit 2: I have now found out that the latest version of OpenSSL 0.9.8, namely 0.9.8y, also supports switching off SSL/TLS deflate compression. That means you can somewhat safely use 0.9.8y, which is bundled with the latest Apache 2.2 release too. It won’t give you TLS v1.1/1.2, but it leaves you with a few safe ciphers at least!

See here:

SSLEngine On
SSLCertificateFile <path to your certificate>
SSLCertificateKeyFile <path to your private key>
ServerName <your server name:ssl port>
SSLCompression off
SSLHonorCipherOrder on
SSLProtocol All -SSLv2
SSLCipherSuite !aNULL:!eNULL:!EXPORT:!DSS:!DES:!DHE-RSA-AES256-SHA:!AES256-SHA:!DHE-RSA-AES128-SHA:!EDH-RSA-DES-CBC3-SHA:!DES-CBC3-SHA:!AES128-SHA:RC4-SHA:RC4-MD5:ALL

This could even make you eligible for a VISA/Mastercard PCI certification if need be. This disables all known vulnerable block ciphers and said compression. On top of that, make sure that you comment out the loading of mod_deflate if not already done:

# LoadModule deflate_module modules/mod_deflate.so

Now restart your webserver and enjoy!

The same thing can of course be done for mail servers, FTP servers, IRC servers and so on. All that is required is proper configurability and compatibility with secure libraries like OpenSSL >=1.0, or at least 0.9.8y. If your server can do that, it can also be secured against these modern side-channel attacks!
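By the way, for a quick manual probe of what a server actually negotiates, OpenSSL’s own s_client will do as well, assuming an OpenSSL >=1.0 client (the host name is just the example used further below):

# Connect once, then show the negotiated protocol version and cipher:
openssl s_client -connect www.myserver.com:443 < /dev/null | grep -e Protocol -e Cipher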

If you wish to verify the safety specifically against BEAST/CRIME attack vectors, you may want to check out [this tool right here]. It’s available as a Java program, .Net/C# program and source code. For the Java version, just run it like this:

java -jar TestSSLServer.jar <server host name> <server port>

This will tell you whether your server supports deflate, which cipher suites it supports, and whether it’s BEAST or CRIME vulnerable. A nice place to start! For the client side, a similar cipher suite configuration may be possible to ensure the client won’t allow the negotiation of a vulnerable connection. Just updating your software may be the easier way in certain situations, of course. A good looking output of that tool might appear somewhat like this:

Supported versions: SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  SSLv3
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_WITH_IDEA_CBC_SHA
     RSA_WITH_CAMELLIA_128_CBC_SHA
     DHE_RSA_WITH_CAMELLIA_128_CBC_SHA
     RSA_WITH_CAMELLIA_256_CBC_SHA
     DHE_RSA_WITH_CAMELLIA_256_CBC_SHA
     TLS_RSA_WITH_SEED_CBC_SHA
     TLS_DHE_RSA_WITH_SEED_CBC_SHA
  (TLSv1.0: idem)
  (TLSv1.1: idem)
  TLSv1.2
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_WITH_IDEA_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA256
     RSA_WITH_AES_256_CBC_SHA256
     RSA_WITH_CAMELLIA_128_CBC_SHA
     DHE_RSA_WITH_CAMELLIA_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA256
     DHE_RSA_WITH_AES_256_CBC_SHA256
     RSA_WITH_CAMELLIA_256_CBC_SHA
     DHE_RSA_WITH_CAMELLIA_256_CBC_SHA
     TLS_RSA_WITH_SEED_CBC_SHA
     TLS_DHE_RSA_WITH_SEED_CBC_SHA
     TLS_RSA_WITH_AES_128_GCM_SHA256
     TLS_RSA_WITH_AES_256_GCM_SHA384
     TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
     TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
----------------------
Server certificate(s):
  2a2bf5d7cdd54df648e074343450e2942770ab6ff0: EMAILADDRESS=me@myserver.com, CN=www.myserver.com, OU=MYSERVER, O=MYSERVER.com, L=My City, ST=My County, C=COM
----------------------
Minimal encryption strength:     strong encryption (96-bit or more)
Achievable encryption strength:  strong encryption (96-bit or more)
BEAST status: protected
CRIME status: protected

Plus, as always: using open source software may give you an advantage here, as you can at least reduce the chances of a backdoor eavesdropping on your connections being invited onto your system. As for smartphones: better downgrade to Symbian or just throw them away altogether, just like your tablets (yeah, that’s not the most useful piece of advice, I know…).

Update: And here is a little something for your SSL-enabled UnrealIRCD IRC server.

This IRC server has a directive called server-cipher-list in the context set::ssl, so it’s set::ssl::server-cipher-list. Here is an example configuration, with all the non-SSL specific stuff removed:

set {
  ssl { 
    trusted-ca-file "your-ca-cert.crt";
    certificate "your-server-cert.pem";
    key "your-server-key.pem";
    renegotiate-bytes "64m";
    renegotiate-time "10h";
    server-cipher-list "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDH-RSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-GCM-SHA256:AES128-GCM-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDH-ECDSA-RC4-SHA:RC4-SHA:RC4-MD5:PSK-RC4-SHA";
  };    
};

Update 2: And some more, from the Gene6 FTP server, which is not open source, but still extremely configurable. Just drop in OpenSSL >=1.0 (libeay32.dll, ssleay32.dll, libssl32.dll) as a replacement, and add the following line to the settings.ini files of your SSL-enabled FTP domains; you can find the files in the Accounts\yourdomainname subfolders of your G6 FTP installation:


SSLCipherList=ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDH-RSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-GCM-SHA256:AES128-GCM-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDH-ECDSA-RC4-SHA:RC4-SHA:RC4-MD5:PSK-RC4-SHA

With that and those OpenSSL >=1.0 libraries, your G6 FTP server is now fully TLSv1.2 compliant and will use only safe ciphers!

Finally: as I am not the most competent user in the field of connection-oriented encryption, please just post a comment if you find incorrect or missing information, thank you!

Sep 05, 2013
 

When we migrated from CentOS Linux 5 to 6 at work, I also wanted to make our OpenLDAP clients communicate with the OpenLDAP server – named slapd – in an encrypted fashion, so nobody can grab user names or home directory locations off the wire while clients and server communicate. So yes, we are using OpenLDAP to store all user names, password hashes, IDs, home directory locations and NFS exports in a central database. Kind of like a Microsoft domain system. Only more free.

To do that, I created my own certificate authority and a server certificate with a Perl script called CA.pl. What I missed was setting the validity period of both the CA certificate and the server certificate to something really high, so it was the default: 1 year. Back then I thought it was going to be interesting when that ran out, and so it was!

All of a sudden, some users couldn’t log in anywhere anymore, while other users still worked. On some systems, a user would be recognized, but no home directory mount would occur. Courtesy of two different cache levels in mixed states on different clients (the nscd cache and the sssd cache), with the LDAP server itself already being non-functional.

The log files on clients and server didn’t tell anything. ANYthing. A mystery, even at higher log levels. So after fooling around for about two hours, I decided to fire up Wireshark and analyze the network traffic directly. And what I found was a “certificate invalid” message on the TLSv1/TCP protocol layer. I crafted something similar here for you to look at, as I don’t have a screenshot of the original problem:

Wireshark detecting TLS problems

This is a problem slightly different from the actual one; it would say “Certificate invalid” instead of “Unknown CA”. In this case I just used a wrong CA to kind of replay the issue and show you what it might look like. IP and MAC addresses have been removed from this screenshot.

Kind of crazy that I had to use Wireshark to debug the problem. So clearly, a new certificate authority (CA) and server certificate were required. I tried to do that again with CA.pl, but strangely it failed this time in the certificate creation process. One that I tried to create manually with OpenSSL, as I usually do for HTTPS certificates, didn’t work either: the clients complained that there were no ciphers available. I guess I did something wrong there.

There is a nice Perl+GTK tool however that you can use to build CA certs, go through the process of doing signing requests, sign certs with CA certs, and export the resulting files in different formats. That tool is called [TinyCA2] and is based on OpenSSL, as usual. See here:

The process is the same as with pure command line OpenSSL or CA.pl: first create your own certificate authority, then a server certificate with its corresponding request to the CA. Then sign that request with the CA certificate, and you’re all done: you can now export the CA cert, the signed server cert and the private key to *.pem files, or other encodings if necessary. And here you can make sure that both your CA cert and server cert have very long validity periods.

Please note that the CA cert should have a slightly longer validity period, as the server cert is created and signed afterwards, and having the server certificate valid for even 1 minute longer than the signing CA’s own certificate is unsupported and can result in serious problems with the software. In my case, I just gave the CA certificate 10 years + 1 day and the server certificate 10 years. Easy enough.
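For reference, the same thing can be done with plain command line OpenSSL. This is just a rough sketch with example file names and key sizes, not exactly what TinyCA2 does internally:

# CA key + self-signed CA certificate (10 years + 1 day):
openssl genrsa -out ca.key 2048
openssl req -new -x509 -days 3653 -key ca.key -out ca.pem
# Server key + certificate signing request:
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr
# Sign the request with the CA, valid for 10 years:
openssl x509 -req -days 3652 -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out server.pem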

After that, put the CA cert as well as the server cert + key on your OpenLDAP server as specified in /etc/openldap/slapd.conf, and the CA cert on your clients as specified by /etc/sssd/sssd.conf. If you’re not using Red Hat’s sssd, it would be /etc/openldap/ldap.conf and /etc/nslcd.conf instead, which are however deprecated and no longer recommended. Also make sure your LDAP server can access the files (maybe chown ldap:ldap or chown root:ldap), and furthermore please make sure nobody can steal the private key!!
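In slapd.conf, the relevant lines would look somewhat like this (the paths are examples, adjust them to wherever you put the files):

TLSCACertificateFile  /etc/openldap/certs/ca.pem
TLSCertificateFile    /etc/openldap/certs/server.pem
TLSCertificateKeyFile /etc/openldap/certs/server.key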

Then restart your LDAP server, as well as sssd on all clients, or nslcd on all clients if you’re using the old stuff. If an NFS server is also part of your infrastructure, you will also need to restart the rpcidmapd service on all systems accessing that NFS, or you’ll get a huge ownership mess when all user IDs are wrongly mapped to the unprivileged user nobody, which may happen with NFS when your idmapd is in an inconsistent state after fooling around with sssd or nslcd.

This is of course not meant to be a guide on how to set up OpenLDAP + TLS as a whole, just for replacing the certificates. So if you ever get “user does not exist” upon calling id user or su - user, and people can’t log in anymore, this might be your issue if you’re using OpenLDAP with TLS or SSL. Just never ever delete root from your /etc/passwd, otherwise even you will be locked out as soon as slapd – your LDAP server – goes down! And then you’ll need Knoppix or something similar – nobody wants to actually stand up from one’s chair and physically access the servers. ;)

Aug 09, 2013
 

I think I can say without raising too much criticism that [Filezilla] is one of the best, if not the best, cross-platform FTP client ever built. It works on many operating systems like Windows, Linux, BSD UNIX etc. Now, I’ve been stuck with an older version of Filezilla, namely 3.5.2, because of a [tedious bug] that arose when you built the client and linked it against GnuTLS 2.8.x, which is typically what you’ll have when you’re running some enterprise Linux distribution like me (CentOS 6.4 in my case).

If you’re using SSL-enhanced FTP servers which understand FTPS and FTPES (not to be confused with the SSH-based SFTP), any Filezilla version >=3.5.3 linked against any GnuTLS 2.8.x will break when connecting to such a server using SSL/TLS. So while Filezilla 3.7.3 compiled just fine on my system, it broke at runtime when doing that. See here:

Broken Filezilla: Connection log

(click to enlarge)

Here is the dreaded GnuTLS -50 error. From what I have read, it’s a problem that comes from a bug in the way the SSL handshake is done. It appears that if a server supports low-security ciphers in its cipher suite, the negotiation will break, as Filezilla only allows high-security ciphers since 3.5.3. This is somehow linked to GnuTLS 2.8.x. So even if a server supports high-security ciphers, it still breaks if low-security ones are also supported. That’s my current assumption at least. The only way out seems to be to completely remove low-security ciphers on the server side, which is sometimes impossible. But then… why does it work on Windows? Let’s first see what the latest Filezilla 3.7.3 reports on CentOS 6 Linux when asked for version numbers (under Help / About…):

Broken Filezilla: Version info

Aha. Ok. So: wxWidgets 2.8.12, which I have installed myself (the packages are called wxGTK and wxGTK-devel!), SQLite 3.6.20, which comes with CentOS 6.4, and GnuTLS 2.8.5, which is also shipped with this Linux distribution. All stock then. Let’s compare that to the Windows version, which works just fine:

Working Filezilla on Windows: Version info

So the Filezilla guys used the same wxWidgets GUI libraries as I did, a slightly newer SQLite, and most importantly a far newer major release of GnuTLS. Now, I have tried GnuTLS 2.8.6 too, the latest stable release of the 2.8.x branch, and I even tried to build and link against an unstable 2.9.9, all bugged in the same way. So the only way is to install a GnuTLS >=3.0. However, GnuTLS 3.x requires [nettle], a low-level crypto library, which CentOS doesn’t have.

On top of that, the latest version 2.7 of nettle won’t compile on CentOS 6, you’ll just get a buttload of really nasty compiler errors. But the latest GnuTLS 3.1.13 only requires nettle 2.5, so we’ll go with that. You can still get that version from [here].

Download it, and from there it’s as easy as this (you can leave out the -O3 -ffast-math in case you’re not comfortable with aggressive compiler optimizations or you run into stability issues):

tar -xzvf ./nettle-2.5.tar.gz
cd ./nettle-2.5/
export CFLAGS="-march=native -O3 -ffast-math -pipe"
export CXXFLAGS="-march=native -O3 -ffast-math -pipe"
./configure
make
su
make install

Now we can build GnuTLS 3.1.13, which can be downloaded from [here]. On my CentOS 6.4 I managed to make it link against nettle 2.5 and build like this (-O3 and -ffast-math may be omitted again):

tar -xJvf ./gnutls-3.1.13.tar.xz
cd ./gnutls-3.1.13/
export CFLAGS="-march=native -O3 -ffast-math -I/usr/local/include -L/usr/local/lib64"
export CXXFLAGS="-march=native -O3 -ffast-math -I/usr/local/include -L/usr/local/lib64"
./configure
make
su
make install

Perfect. Now please download the [latest Filezilla source code] which is currently version 3.7.3, and you should be able to link it against our brand new GnuTLS 3.1.13 in the following way (-O3 -ffast-math may be omitted again):

tar -xjvf ./FileZilla_3.7.3_src.tar.bz2
cd ./filezilla-3.7.3/
export CFLAGS="-march=native -O3 -ffast-math -pipe -I/usr/local/include -L/usr/local/lib -L/usr/local/lib64"
export CXXFLAGS="-march=native -O3 -ffast-math -pipe -I/usr/local/include -L/usr/local/lib -L/usr/local/lib64"
export LD_LIBRARY_PATH="/usr/local/lib"
./configure --with-tinyxml=builtin --with-libgnutls-prefix=/usr/local
make
su
make install

This will make sure all the linking works and doesn’t fall back to the system’s default GnuTLS 2.8.5, or any other version of the 2.8.x branch your system might have. Once that is done, just launch Filezilla and look at the version numbers under Help / About… again to verify it has been linked correctly:

Working Filezilla: Version info
Working Filezilla: Connection log

In my case, GnuTLS is now even newer than the version the developers themselves used to create their Windows build: 3.1.13 instead of 3.1.11. But in any case, it’s a version of 3.1.x, which should fix the cipher negotiation problem. Take a deep breath and try your FTPS or FTPES connection now; it should work even if the server supports low-security ciphers:

Working Filezilla: Connection log

(click to enlarge)

Space magic done! It works indeed. Note that this will probably not help you if the SSL-enhanced FTP server you want to connect to supports no high-security ciphers at all; your server will need to offer at least some of those in its cipher suite. But if it does low+high, this should fix it. And by using nettle 2.5, the latest stable version of GnuTLS can still be compiled on CentOS 6, and therefore also on Scientific Linux 6 and RedHat Enterprise Linux 6. It should be somewhat similar on Debian Squeeze and maybe SuSE Linux Enterprise Desktop.
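If you’d rather check the linkage from the shell than from the About dialog, ldd shows which GnuTLS library the binary actually picked up (using the install path from this guide):

# List the GnuTLS library the freshly built binary is linked against:
ldd /usr/local/bin/filezilla | grep -i gnutls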

Finally, a problem solved. Now I can use the latest version of Filezilla across all my operating systems again! Funny that nobody suggested this in the Filezilla forums. Hm. Maybe I should.

Edit: While testing PC-BSD 9.2 UNIX, I have now found out that GnuTLS 2.12.x also seems to fix this, so you don’t absolutely need GnuTLS 3.x+. See this screenshot of FileZilla on PC-BSD 9.2 (based on FreeBSD 9.2), linked against GnuTLS 2.12.23:

Filezilla with GnuTLS 2.12.23 on BSD

So yes, this also works flawlessly!

Update 2014-04-30: And finally, I found a way to build the latest stuff on CentOS 6.5 Linux, opening up the way to newer versions of Filezilla and, more importantly, GnuTLS. That would include [nettle 2.7.1], [GnuTLS 3.2.13] and [Filezilla 3.8.0] as of the time of writing. Compilation and getting the program to work at runtime were tricky though. I am not perfectly sure my documentation is complete (if you try this and fail, just post a comment!), but I think this should get you going. I’m skipping the easy commands like unpacking the archives etc. here, and leaving out non-essential flags like “-march=native” or “-O3”; you can still add those of course. Let’s start with nettle 2.7.1, this time disabling the OpenSSL glue code:

./configure --disable-openssl
make
su
make install

Now, GnuTLS 3.2.13. For this we’ll need a bit more trickery, especially for the pkg-config library detection that GnuTLS uses to locate nettle stuff, but also, unfortunately, LD_LIBRARY_PATH:

export CFLAGS="-I/usr/local/include -L/usr/local/lib64"
export CXXFLAGS="-I/usr/local/include -L/usr/local/lib64"
export LDFLAGS="-I/usr/local/include -L/usr/local/lib64"
export PKG_CONFIG_PATH="/usr/local/lib64/pkgconfig"
export LD_LIBRARY_PATH="/usr/local/lib64"
./configure
make
su
make install

After that, assuming you’re still sitting in the same shell you built and installed GnuTLS in, thus preserving the environment variable from above, continue with Filezilla 3.8.0 exactly like this:

unset LD_LIBRARY_PATH
export CFLAGS="-I/usr/local/include -L/usr/local/lib64 -L/usr/local/lib"
export CXXFLAGS="-I/usr/local/include -L/usr/local/lib64 -L/usr/local/lib"
export LDFLAGS="-I/usr/local/include -L/usr/local/lib64 -L/usr/local/lib"
./configure --with-tinyxml=builtin --with-libgnutls-prefix=/usr/local
make
su
make install

We’re not done yet though. Since we played around with an evil LD_LIBRARY_PATH to get this built and linked, our /usr/local/bin/filezilla binary will just crash with a symbol error, caused by GnuTLS being unable to locate certain nettle functions. I would suggest working around that by writing yourself a small launcher script that fires up Filezilla with the proper path set for its session. I’d call this script /usr/local/sbin/filezilla.sh. You can then use that to launch the program, and the code is super simple:

#!/bin/bash
export LD_LIBRARY_PATH="/usr/local/lib64"
/usr/local/bin/filezilla

Kinda dirty, but it works. And here is the result:

Filezilla 3.8.0 working with GnuTLS 3.2.13 and nettle 2.7.1 on CentOS 6.5 Linux

Besides missing a few optimization flags, this actually seems to work pretty well so far. A few tests using encrypted connections with a modern TLS v1.2 AES-256-GCM cipher suite suggest everything is in order. Great!