May 03 2014

Internet Explorer logo

As I have… well, “reported” in my feverish delirium on the 8th of April, support for Windows XP and Windows XP Professional x64 Edition ended on that very day. So how is it, exactly, that I can now look at this:

Microsoft patching an IE security flaw for Windows XP x64 SP2

Microsoft patching an IE security flaw for Windows XP x64 SP2 as reported [here] and on several other sites despite official support having ended on 2014-04-08.

So what’s it gonna be, Microsoft? We now get the “super critical” ones, or the ones that get that [very special kind of media attention] – it’s not every day that the U.S. Department of Homeland Security tells XP users to switch browsers, after all – and the others you drop because official support has ended? Sure, this flaw is critical, allowing easy remote code execution by presenting malicious websites to any version of Internet Explorer, all the way down to IE6, which by today’s standards is a completely neolithic browser. And even IE6 on XP gets the update, which is hilarious even for a die-hard conservative Windows user like me.

Well, Microsoft’s Trustworthy Computing TechNet blogger, Mr. Dustin C. Childs, [wrote on his weblog] that we shouldn’t be expecting more. Quote:

“[…] We have made the decision to issue a security update for Windows XP users. Windows XP is no longer supported by Microsoft, and we continue to encourage customers to migrate to a modern operating system, such as Windows 7 or 8.1. Additionally, customers are encouraged to upgrade to the latest version of Internet Explorer, IE 11. […]”

-Dustin C. Childs, Microsoft Trustworthy Computing

Of course they would say that… Plugging the worst of the holes while not raising any hopes is probably the right strategy from their point of view. It seems there is still too much XP out there for Microsoft to handle the problem by refusal alone.

I wonder though: will something like this happen again? Was Windows 2000 denied the fix because it’s considered too ancient compared to XP/XP x64? There is no reliable way to tell, so we’ll have to wait and see. More information and downloads follow:

  • [Download] security update KB2964358 for Windows XP x86 for offline installation.
  • [Download] security update KB2964358 for Windows XP Professional x64 Edition for offline installation.
  • Microsoft [KB2964358 knowledgebase article].
  • Microsoft TechNet [Security Bulletin MS14-021] providing more extensive information about the flaw and severity ratings for all browser versions (IE6-11) for all operating systems said to be affected, plus information on how to undo the ACL modifications that were provided as a quick fix before the real patch came out.

Of course, if you have automatic updates turned on, you don’t need to download the files above; they’re just for the distant future, after Microsoft has switched off Windows Update for XP altogether.

Oh and, as always, there is one thing that you could also do: Just don’t use Internet Explorer. There are enough other options these days.

Apr 30 2014

Heartbleed logo

Ok, before you say anything: I know I’m late! About posting this, at least, in case you know what I’m gonna talk about. But at some point, a server operator should inform the public about where his server stood in the recent incident surrounding that massive security hole in OpenSSL’s Heartbeat system, which was supposed to provide a cheap way of keeping encrypted network connections alive without having to do slowish renegotiations. Right? Naturally, this server provides several services using encryption, a part of which relies on OpenSSL, and a part of that in turn on modern OpenSSL, which was affected by this (versions 1.0.1 – 1.0.1e).
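If you run a server yourself, checking whether your OpenSSL build falls into that range is a trivial shell exercise. A minimal sketch – the version string here is hard-coded as an example; in practice you’d take it from the output of openssl version:

```shell
# classify an OpenSSL version string against the Heartbleed-affected
# range mentioned above (1.0.1 through 1.0.1e)
ver="1.0.1e"   # example value; in practice: ver=$(openssl version | awk '{print $2}')
case "$ver" in
  1.0.1|1.0.1[a-e]) echo "affected by Heartbleed" ;;
  *)                echo "not in the affected range" ;;
esac
```

For the example value above this prints “affected by Heartbleed”; a patched 1.0.1g or an old 0.9.8 build falls through to the second branch.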

Now, was this server affected by the massive OpenSSL bug dubbed “Heartbleed”, which can potentially leak confidential information out of RAM addressed by OpenSSL? Unfortunately, the answer is YES.

Oh noes! My anus is bleeding passwords and private keys!

Oh noes! My anus is bleeding passwords and private keys out of my memory!

Of the public services hosted on this server with optional encryption support using the SSL and TLS protocols, the following were actually not affected:

  • Web server (the one you’re reading this weblog on right now.)
  • Mail server

And the following two were vulnerable from 2013-09-10 up until 2014-04-08, when the bug was fixed here, users were notified and certificates recreated:

  • FTP server
  • IRC server (likely not really affected: used OpenSSL 1.0.1e, but was linked against 0.9.8 and hence refused modern protocols and ciphers. Likely no heartbeat present to exploit at all)
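For the record, the length of that window is easy to compute – a quick shell sketch using GNU date (timestamps taken in UTC to avoid any DST skew):

```shell
# days between first vulnerable deployment and the fix
start=$(date -ud 2013-09-10 +%s)
end=$(date -ud 2014-04-08 +%s)
echo "$(( (end - start) / 86400 )) days exposed"
```

That works out to 210 days exposed.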

So as you can see, the window during which this server was vulnerable was rather short. Plus, those services are partially obscure in nature due to their ancient protocols and their configuration, so attacks having happened is not impossible, but at least highly unlikely compared to web and mail servers. Additionally, this server generally keeps a low profile with comparably little content and only a handful of users, so it will never be a prime target for any serious manual attack, at least. Luckily, the IRC server didn’t have much to leak besides a possible certificate private key – if that’s even possible in this special case – and FTP users have been informed that a password change would be a wise move.

On top of that, log analysis suggests (although it cannot fully prove) that no encrypted connections were opened for the sole purpose of exploiting Heartbleed to get access to the system memory in use by OpenSSL – something like a connection negotiating TLS with a server, never actually authenticating as a user, just keeping the connection open as long as possible to eavesdrop a bit by sending heartbeat packets. It seems that stuff like that has simply not happened. Guess not too many people speak FTP+SSL or IRC+SSL anymore these days.

Lacking any real computer forensics, this is where we stand. Given the very small attack surface and the narrow time window, it is sane to assume we weren’t attacked here. Should you be a user of my IRC server or the FTP server, you’ve been informed quite some time ago anyway. ;)

Apr 28 2014

Truecrypt logo

Just recently a user from the Truecrypt forums reported a problem with Truecrypt on Linux that puzzled me at first. It took quite some digging to get to the bottom of it, which is why I am posting this here. See [this thread] (note: the original TrueCrypt project is dead by now, and its forums are offline) and my reply to it. Basically, it was all about not being able to mount more than 8 Truecrypt volumes at once.

So, Truecrypt uses the device mapper and loop device nodes to mount its encrypted volumes, after which the file systems inside said volumes are mounted. The user, called iapx86 in the Truecrypt forums, had quite a few containers lying around that he wanted mounted simultaneously, and did just that successfully until he hit a limit with the 9th volume he attempted to mount.

At first I thought this was some weird Truecrypt bug. Then I searched for the culprit within the Linux device mapper, when it finally hit me: it had to be the loop device driver. Actually, Truecrypt had even told me so already, see here:

TC loop error

Truecrypt reporting an error related to a loop device

After just a bit more research it became clear that the loop driver has this limit set. So now, the first part is to find out whether you even have a loop module, as on Arch Linux for instance. Other distributions like Red Hat Enterprise Linux and its descendants such as CentOS, or SuSE and so on, have it compiled into the Linux kernel directly. First, see whether you have the module:

lsmod | grep loop

If your system is very modern, the loop.ko module might not have been loaded automatically. In that case please try the following as root or with sudo:

modprobe loop

In case this works (more below if it doesn’t), you may adjust the loop driver’s options, one of which is the limit imposed on the number of loop device nodes that may exist at once. Note that max_loop is a module load-time parameter, so if loop is already loaded, it has to be unloaded first (which only works while no loop devices are in use). In my example, we’ll raise the limit from the default of 8 to a nice value of 128; run this as root or with sudo:

modprobe -r loop && modprobe loop max_loop=128

Now retry, and you’ll most likely find this working. To make the change permanent, you can either add loop.max_loop=128 as a kernel option to your boot loader’s kernel line – in the file /boot/grub/grub.conf for grub v1, for instance – or you can set the module option in a file under /etc/modprobe.d (module parameters are not sysctl tunables, so /etc/sysctl.conf won’t do here). In both cases the change is applied at boot time.
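For the modprobe.d route, a one-line configuration file suffices. The path and file name below follow common distribution convention and are an assumption; adjust to taste:

```
# /etc/modprobe.d/loop.conf
options loop max_loop=128
```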

If you cannot find any loop module – in case modprobe loop only tells you something like “FATAL: Module loop not found.” – the loop driver has likely been compiled directly into your Linux kernel. As mentioned above, this is common for several Linux distributions. In that case, changing the loop driver’s options at runtime becomes impossible, and you have to take the kernel parameter road. Just add the following to the kernel line in your boot loader’s configuration file, and mind the missing loop. in front: we’re no longer dealing with a kernel module option, but with a kernel option:

max_loop=128

You can add this right after all other options in the kernel line, in case there are any. On a classic grub v1 setup on CentOS 6 Linux, this may look somewhat like this in /boot/grub/grub.conf, just to give you an example:

title CentOS (2.6.32-431.11.2.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.11.2.el6.x86_64 ro root=/dev/sda1 LANG=en_US.UTF-8 crashkernel=128M rhgb quiet max_loop=128
        initrd /initramfs-2.6.32-431.11.2.el6.x86_64.img

You will need to scroll horizontally above to see all the options given to the kernel, max_loop=128 being the last one. After this, reboot, and your machine will be able to use up to 128 loop devices. In case you need even more than that, you can raise the limit further, to 256 or whatever you need. There may be some hard upper limit, but I haven’t experimented enough with this. People have reported at least 256 to work, and I guess very few people would need even more on single-user workstations with Truecrypt installed.
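By the way, whether your kernel has the loop driver built in or as a module can also be read from the kernel build configuration, on distributions that ship it under /boot (that path is an assumption; not every distribution provides the file):

```shell
# CONFIG_BLK_DEV_LOOP=y means built in (use the max_loop= kernel parameter),
# =m means it's a module (modprobe options work); fall back gracefully if absent
grep "^CONFIG_BLK_DEV_LOOP=" /boot/config-"$(uname -r)" 2>/dev/null \
  || echo "no kernel config file found"
```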

In any case, the final result should look like this. Here we can see 10 devices mounted, which is just the beginning of what is now possible when it comes to the number of volumes you may use simultaneously:

TC without loop error

With the loop driver reconfigured, Truecrypt no longer complains about running out of loop devices

And that’s that!

Update: Ah, I totally forgot that iapx86 on the Truecrypt forums also wanted to have more slots available, because Truecrypt limits the mounting slots to 64. Of course, if your loop driver can give you 128 or 256 or however many devices, we wouldn’t wanna have Truecrypt impose yet another limit on us, right? This one is tricky however, as we need to modify the Truecrypt source code and recompile it ourselves. The code we need to modify sits in Core/CoreBase.h. Look for this part:

virtual VolumeSlotNumber GetFirstFreeSlotNumber (VolumeSlotNumber startFrom = 0) const;
virtual VolumeSlotNumber GetFirstSlotNumber () const { return 1; }
virtual VolumeSlotNumber GetLastSlotNumber () const { return 64; }

What we need to alter is the constant returned by the C++ virtual function GetLastSlotNumber(). In my case I just tried 256, so I edited the code to look like this:

virtual VolumeSlotNumber GetFirstFreeSlotNumber (VolumeSlotNumber startFrom = 0) const;
virtual VolumeSlotNumber GetFirstSlotNumber () const { return 1; }
virtual VolumeSlotNumber GetLastSlotNumber () const { return 256; }

Now I recompiled Truecrypt from source with make. Keep in mind that you need to download the PKCS #11 headers first, right into the directory where the Makefile also sits:


Also, please pay attention to the dependencies listed in the Linux section of the Readme.txt that you get after unpacking the source code, just to make sure you have everything Truecrypt needs to link against. When compiled and linked successfully by running make in the directory where the Makefile is, the resulting binary will be Main/truecrypt. You can just copy that over your existing binary – which may for instance be /usr/local/bin/truecrypt – and you should get something like this:

More mounting slots for Truecrypt on Linux

More than 64 mounting slots for Truecrypt on Linux

Bingo! Looking good. Now I won’t say this is perfectly safe and that I’ve tested it inside and out, so do this strictly at your own risk! For me it seems to work fine. Your mileage may vary.

Update 2: Since this kinda fits in here, I decided to show you how I tested this. Creating many volumes to check whether the modifications worked required an automated process – after all, more than 64 volumes needed to be created and mounted! Since Truecrypt does not allow automated volume creation without manual user interaction, I used the Tcl-based expect system to interact with the Truecrypt dialogs. At first I fed Truecrypt a static 320-character string for its required entropy data. But thanks to the help of some capable and helpful coders on [StackOverflow], I can now present one of their solutions (I’m gonna pick the first one provided).

I split this up into 3 scripts, which is not overly elegant, but works for me. The first shell script just runs a loop; the user can specify the number of iterations in the script, and that number represents the number of volumes to be created. In the loop, a Tcl expect script is called once per iteration to create the actual volumes. As you can see, I started with 128 volumes, which is twice as many as the vanilla Truecrypt binary would allow you to use, or 16 times as many loop devices as the Linux kernel would usually let you create at the same time:

# Define number of volumes to create here:
n=128
for i in $(seq 1 "$n"); do
  # call the expect worker script shown below (the script name is a placeholder)
  ./ "$i"
done

And this is the worker, written in Tcl/expect, enhanced thanks to the StackOverflow users. This one is platform-independent too, unlike my crappy original that you can still see at StackOverflow and in the Truecrypt forums:

proc random_str {n} {
  for {set i 1} {$i <= $n} {incr i} {
    append data [format %c [expr {33+int(94*rand())}]]
  }
  return $data
}
set num [lindex $argv 0]
set entropyfeed [random_str 320]
puts $entropyfeed
spawn truecrypt -t --create /home/user/vol$num --encryption=AES --filesystem=none --hash=SHA-512 --password=test --size=33554432 --volume-type=normal
expect "\r\nEnter keyfile path" {
  send "\r"
}
expect "Please type at least 320 randomly chosen characters and then press Enter:" {
  send "$entropyfeed\r"
}

This part first generates a random floating-point number between 0.0 and 1.0. Then it multiplies that by 94 to reach something like 0..93.999…, truncates the fractional part to get an integer in 0..93, and adds 33, resulting in random numbers in the range 33..126. Now how does this make any sense? We want to generate random printable characters to feed to Truecrypt as entropy data!? Well, in the 7-bit ASCII table, the range of printable characters starts at number 33 and ends at number 126. And that’s what format %c does with those numbers, converting each one to the character at that position in the ASCII table. Cool, eh? I thought that was pretty cool.
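The same trick works outside of Tcl too, e.g. in plain bash, if you ever want to sanity-check the character range. This is just an illustration, not part of the scripts above:

```shell
# pick a random code point in the printable ASCII range 33..126
# ($RANDOM % 94 yields 0..93, just like int(94*rand()) in Tcl)
code=$(( 33 + RANDOM % 94 ))
# convert it to its character via an octal escape, like Tcl's [format %c ...]
printf "\\$(printf '%03o' "$code")\n"
```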

The rest is just waiting for Truecrypt’s dialogs and feeding the proper input to them: first just an <ENTER> keypress (\r), and second the entropy data plus another <ENTER>. I chose not to let Truecrypt format the containers with an actual file system, to save some time and make the process faster. Now, mounting is considerably easier:

# Define number of volumes to mount here:
n=128
for i in $(seq 1 "$n"); do
  truecrypt -t -k "" --protect-hidden=no --password=test --slot=$i --filesystem=none vol$i
done

Unfortunately, I lost more speed here than I had gained by not formatting the volumes, because I chose the expensive and strong SHA-512 hash algorithm for my containers; RIPEMD-160 would’ve been a lot faster. As it is, mounting 128 volumes took several minutes, but oh well, still not too excessive. Unmounting all of them at once is a lot faster and can be achieved by running the simple command truecrypt -t -d. Since no screenshot I can make can prove that I can truly mount 128 volumes at once (unless I forge one by concatenation), it’s better to let the command line show it:

[thrawn@linuxbox ~]$ echo && mount | grep truecrypt | wc -l

Nah, forget that, I’m gonna give you that screenshot anyway. Stitched or not, it’s too hilarious to skip. Oh, and don’t mind the strange window decorations – that’s because I opened Truecrypt on Linux via remote SSH+X11 on a Windows XP x64 machine with a GDI UI:

Mounting a lot more volumes than the Linux Kernel and/or Truecrypt would usually let us

Mounting a lot more volumes than the Linux Kernel and/or Truecrypt would usually let us!

Oh man… I guess the sky’s the limit. I just wonder why some of those limits (8 loop devices maximum in the Linux kernel, 64 mounting slots in Truecrypt) are actually in place by default…

Edit: Actually, the limit is now confirmed to be 256 volumes, due to a hard-coded limit in the loop.ko driver. Well, still better than the 8 we started with, eh?

Oct 04 2013

Razer logo

First, for all of you who do not know [Razer]: the company is known for designing and building “gaming gear” like mice, keyboards, headsets and stuff like that. While I am personally convinced that the build quality of Razer products sucks (I have a Razer Lachesis mouse myself, ultra cheap plastic), they are very expensive and somewhat well accepted and respected in the gaming community, for whatever reason.

Ok, so much for that. Now to their new software, Razer [Synapse 2.0]. Previously, your mouse/keyboard would come with a driver for multiple operating systems and a small application for Windows systems to configure the device. All of that offline, naturally.

Now, Razer Synapse 2.0 is different. The application requires an Internet connection as well as an account in the “Razer cloud”. What it does is synchronize all your input device settings to the cloud servers, so you can attach the device to another machine running Razer Synapse 2.0, log in with your account (!) and use the device with the settings you’re comfortable with. There is however one significant problem I have with that.

Without an account, you cannot use the software, as confirmed by Razer themselves. What you are presented with is this:

Razer Synapse 2.0 login screen

Razer Synapse 2.0 login screen

No account, no configuring your mouse or keyboard! And without configuration it stays dumb: no macros, no DPI settings (unless there is a hardware button for that), no firmware updates, nothing. It resembles DRM, but what would hardware need DRM for? It makes no sense! So why is Razer doing this? Let me show you a little something taken from the Razer [privacy policy], which is referred to by the [terms of service] for Synapse 2.0. It’s quite enlightening:

“We collect information by using “log data” and “cookies,” by obtaining information from your usage of our Services, and by asking for information when visitors buy a product, register for and/or use a Service, or do various other things, as described below.” – Razer privacy policy

“Aggregate Information and Non-identifying Information: We may share aggregated information that does not include Personal Information and we may otherwise disclose Non-Identifying Information and Log Data (which also do not include Personal Information) with third parties for industry analysis, demographic profiling and other purposes. Any such information shared in these contexts will not contain your Personal Information. If aggregated or Non-Identifying Information is tied to your Personal Information, it will not be disclosed except as set forth in this Privacy Policy.” – Razer privacy policy

“Advertising Networks and SNSs: We work with third party advertising networks which may collect information about your online activities through cookies and other technologies when you use the Site. The information gathered by these advertising networks is used to make predictions about you, your interests or preferences and to display advertisements across the Internet tailored to your apparent interests. We do not permit these ad networks to collect Personal Information about you on the Site. Please note that we do not have access to or control over cookies and other technologies these ad networks may use to collect information about your interests, and the information practices of these ad networks are not covered by this Privacy Policy.” – Razer privacy policy

“We may also provide SNSs with your e-mail address so that they can better customize the advertisements that are displayed to you when you use those SNSs. Please note that we do not have direct control over these SNSs and their activities are not covered by this Privacy Policy. However, if we provide your e-mail address to SNSs for this purpose, we shall have a reasonable basis to believe that the SNS shall (a) protect the security and integrity of the data while it is within the SNS’s systems; (b) guard against the accidental or unauthorized access, use, alteration or disclosure of the data within the SNS’s systems; and (c) shall not use the data if you are not a member of the SNS.” – Razer privacy policy

“To use the Services, you have to register for and maintain a Razer account in a form as required by us (“Account”). You agree to provide accurate information when you register.” – Razer Synapse 2.0 terms of service

So basically, it’s saying that it may very well be spyware. Always-on spyware, mind you. For as long as you stay logged on to Synapse 2.0 to change your mouse settings or whatever, it may collect and share data from your machine. It may listen in on your emails, your web browsing behavior, everything! I’m not saying it does serious shit like that, but what other reason could they have to write terms of service and a privacy policy like that? Synapse 2.0 doesn’t really do anything good for the user. And it most surely does collect some data from your system.

On top of that, a lot of new Razer devices don’t come with the regular older software anymore, so you’re forced to use Synapse 2.0 if you want to use the devices properly. Plus, there is a lot of luring-in going on right now. I have, for instance, signed up with Synapse 2.0 (on their website – I have not actually installed the software) to get some Razer cockpit dice for [Mechwarrior Online] (which I think is otherwise a cool game) for free. So you might actually get caught in that trap just for a pair of furry dice like these:

Razer MWO dice

Razer’s free Synapse 2.0 dice for your Battlemech cockpit in MWO

And frankly, the dice don’t even move around that nicely. Where is my PhysX? Or Havok? Or whatever CryEngine 3 uses – probably its own physics engine. And if you do install the Razer Synapse 2.0 “trojan” on your system just for that, or for some of the other ingame bonuses Razer gives away… oh well. You just sold yourself out.

So frankly, I do not know what kind of stuff Synapse 2.0 really spies on. But the terms of service indicate that it might spy on pretty much anything and sell the profiling data plus your email address to whoever pays enough. And there I was, wondering where all those recent waves of spam came from. A new kind of spam syntactically, and in greater volume than before. I hadn’t linked it to the Synapse 2.0 registration at first, but the timing is almost perfect!


Sep 10 2013

NSA logo

With all that talk about the [National Security Agency] stealing our stuff (especially our most basic freedoms), it was time to look at a few things that Mr. Snowden and others before him have found out about how the NSA actually attempts to break certain encryption ciphers present in OpenSSL’s and GnuTLS’s cipher suites. Now that it has been clearly determined that an NSA listening post has been established in Vienna, Austria (protestors are on the scene), it seems a good idea to look over a few details here – especially now that the vulnerabilities are widely known and potentially exploitable by other perpetrators.

I am no cryptologist, so I won’t try to convince you that I understand this stuff. But from what I do understand, there is a side-channel attack vulnerability in certain cipher suites using block ciphers in CBC mode, like for instance AES256-CBC-SHA or RSA-DES-CBC-SHA. I don’t know what exactly is vulnerable, but whoever listens closely on one of the endpoints (client or server) of such a connection may determine crucial information by looking at the connection’s timing information, which is the side channel. Plus, there is another vulnerability concerning the Deflate protocol compression in TLS, which you shouldn’t confuse with stuff like mod_deflate in Apache, as this “Deflate” exists within the TLS protocol itself.

As most client systems – especially mobile operating systems like Android, iOS or Blackberry OS – are compromised and backdoored, it is quite possible that somebody is listening. I’m not saying “likely”, but possible. By hardening the server, the possibility of negotiating a vulnerable encrypted connection becomes zero – hopefully at least. :roll:

Ok, I’m not going to say “this is going to protect you from the NSA completely”, as nobody can truly know what they’re capable of. But it will make you more secure, as some vulnerable connections will no longer be allowed, and compromised/vulnerable clients are secure as long as they connect to a properly configured server. Of course you may also lock down the client by updating your browser for instance, as Firefox and Chrome have been known to be affected. But for now, the server-side.

I am going to discuss this for the Apache web server specifically, but it’s equally valid for other servers, as long as they’re appropriately configurable.

First, make sure your Apache is compatible with the SSL/TLS compression option SSLCompression [on|off]. Apache web servers starting from 2.2.24 or 2.4.3 should have this directive. Also, you should use [OpenSSL >=1.0] (link goes to the Win32 version; for *nix check your distribution’s package sources) to be able to use SSLCompression and also the more modern TLSv1.1 and TLSv1.2 versions. If your server is new enough and properly SSL-enabled, please check your SSL configuration, either in httpd.conf or in a separate ssl.conf included from httpd.conf, which is what some installers use as a default. You will need to change the SSLCipherSuite directive to not allow any vulnerable block ciphers, disable SSL/TLS protocol compression, and a few things more. Also make sure NOT to load mod_deflate, as this opens up loopholes similar to those the SSL/TLS protocol defaults themselves do!

Edit: Please note that mixing Win32 builds of OpenSSL >=1.0 with the standard Apache Win32 distribution will cause trouble, so a drop-in replacement is not recommended for several reasons, two being that that Apache build is linked against OpenSSL 0.9.8* (breaking TLS v1.1/1.2) and also built with a VC6 compiler, whereas OpenSSL >=1.0 is built with at least a VC9 compiler. Trying to run all-VC9 binaries (Apache+PHP+SSL) only works on NT 5.1+ (Windows XP/2003 or newer), so if you’re on Win2000 you’ll be stuck with older binaries, or you’ll need to accept stability and performance issues.

Edit 2: I have now found out that the latest version of OpenSSL 0.9.8, namely 0.9.8y, also supports switching off SSL/TLS deflate compression. That means you can somewhat safely use the 0.9.8y that is bundled with the latest Apache 2.2 release too. It won’t give you TLS v1.1/1.2, but it leaves you with a few safe ciphers at least!

See here:

SSLEngine On
SSLCertificateFile <path to your certificate>
SSLCertificateKeyFile <path to your private key>
ServerName <your server name:ssl port>
SSLCompression off
SSLHonorCipherOrder on
SSLProtocol All -SSLv2
SSLCipherSuite <your hardened cipher list>

This could even make you eligible for a VISA/Mastercard PCI certification if need be, as it disables all known vulnerable block ciphers and said compression. On top of that, make sure to comment out the loading of mod_deflate if that isn’t already done:

# LoadModule deflate_module modules/

Now restart your webserver and enjoy!
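Before restarting, you can sanity-check the config file for the directives discussed above with a couple of greps. A minimal sketch – the config is written out here only to make the example self-contained; point the greps at your real httpd.conf or ssl.conf instead:

```shell
# sample config standing in for the real httpd.conf/ssl.conf
conf="httpd-sample.conf"
cat > "$conf" <<'EOF'
SSLCompression off
SSLHonorCipherOrder on
SSLProtocol All -SSLv2
#LoadModule deflate_module modules/
EOF
# is TLS compression off, and is mod_deflate really not being loaded?
grep -q "^SSLCompression off" "$conf" && echo "TLS compression disabled"
grep -q "^LoadModule deflate_module" "$conf" || echo "mod_deflate not loaded"
```

For the sample above this prints “TLS compression disabled” and “mod_deflate not loaded”.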

The same thing can of course be done for mail servers, FTP servers, IRC servers and so on. All that is required is a proper configurability and compatibility with secure libraries like OpenSSL >=1.0 or at least 0.9.8y. If your server can do that, it can also be secured against these modern side channel attacks!

If you wish to verify the safety specifically against the BEAST/CRIME attack vectors, you may want to check out [this tool right here]. It’s available as a Java program, a .Net/C# program, and source code. For the Java version, just run it like this:

java -jar TestSSLServer.jar <server host name> <server port>

This will tell you whether your server supports deflate compression, which cipher suites it supports, and whether it’s vulnerable to BEAST or CRIME. A nice point to start! For the client side, a similar cipher suite configuration may be possible to ensure the client won’t allow the negotiation of a vulnerable connection – though just updating your software may be easier in certain situations, of course. A good-looking output of that tool might appear somewhat like this:

Supported versions: SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  (TLSv1.0: idem)
  (TLSv1.1: idem)
Server certificate(s):
  2a2bf5d7cdd54df648e074343450e2942770ab6ff0:,, OU=MYSERVER,, L=My City, ST=My County, C=COM
Minimal encryption strength:     strong encryption (96-bit or more)
Achievable encryption strength:  strong encryption (96-bit or more)
BEAST status: protected
CRIME status: protected
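If you run such checks regularly, grepping the interesting lines of the report can be scripted. A sketch in bash that parses the three relevant lines of the sample output above (embedded here so the example is self-contained; in practice you’d feed it the saved tool output):

```shell
# the three report lines we care about, as shown in the sample output above
report=$(cat <<'EOF'
Deflate compression: no
BEAST status: protected
CRIME status: protected
EOF
)
if grep -q "Deflate compression: no" <<<"$report" \
   && grep -q "BEAST status: protected" <<<"$report" \
   && grep -q "CRIME status: protected" <<<"$report"; then
  echo "server looks hardened"
else
  echo "check your configuration"
fi
```

For the embedded sample this prints “server looks hardened”.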

Plus, as always: Using open source software may give you an advantage here, as you can at least reduce the chances of inviting a backdoor eavesdropping on your connections onto your system. As for smartphones: Better downgrade to Symbian or just throw them away altogether, just like your tablets (yeah, that’s not the most useful piece of advice, I know…).

Update: And here’s a little something for your SSL-enabled UnrealIRCD IRC server.

UnrealIRCD logo

This IRC server has a directive called server-cipher-list in the context set::ssl, so it’s set::ssl::server-cipher-list. Here’s an example configuration; all the non-SSL-specific stuff has been removed:

set {
  ssl {
    trusted-ca-file "your-ca-cert.crt";
    certificate "your-server-cert.pem";
    key "your-server-key.pem";
    renegotiate-bytes "64m";
    renegotiate-time "10h";
    server-cipher-list "<your hardened cipher list>";
  };
};

Update 2: And some more, for the Gene6 FTP server, which is not open source, but still extremely configurable. Just drop in OpenSSL >=1.0 (libeay32.dll, ssleay32.dll, libssl32.dll) as a replacement, and add the following line to the settings.ini files of your SSL-enabled FTP domains; you can find the files in the Accounts\yourdomainname subfolders of your G6 FTP installation:

Gene6 FTP server logo


With that and those OpenSSL >=1.0 libraries, your G6 FTP server is now fully TLSv1.2 compliant and will use only safe ciphers!

Finally: As I am not the most competent user in the field of connection-oriented encryption, please just post a comment if you find some incorrect or missing information, thank you!

Sep 05 2013

OpenLDAP logo

When we migrated from CentOS Linux 5 to 6 at work, I also wanted to make our OpenLDAP clients communicate with the OpenLDAP server – named slapd – in an encrypted fashion, so nobody can grab user names or home directory locations off the wire while clients and server communicate. So yes, we are using OpenLDAP to store all user names, password hashes, IDs, home directory locations and NFS exports in a central database. Kind of like a Microsoft domain system, only more free.

To do that, I created my own certificate authority and a server certificate with a Perl script called What I missed was to set the validity period of both the CA certificate and the server certificate to something really high. So it stayed at the default: 1 year. Back then I thought it was going to be interesting when that ran out, and so it was!

All of a sudden, some users couldn’t log in anywhere anymore, while others still could. On some systems a user would be recognized, but no home directory mount would occur. Courtesy of two different cache levels in mixed states on different clients (nscd cache and sssd cache), with the LDAP server itself already being non-functional.

So, the log files on clients and server didn’t say anything. ANYthing. A mystery, even at higher log levels. After fooling around for about two hours I decided to fire up Wireshark and analyze the network traffic directly. What I found was a “certificate invalid” message on the TLSv1/TCP protocol layer. I crafted something similar here for you to look at, as I don’t have a screenshot of the original problem:

Wireshark detecting TLS problems

This is a problem slightly different from the actual one: the real one would have said “Certificate invalid” instead of “Unknown CA”. In this case I just used a wrong CA to kind of replay the issue and show you what it might look like. IP and MAC addresses have been removed from this screenshot.

Kind of crazy that I had to use Wireshark to debug the problem. So clearly, a new certificate authority (CA) and server certificate were required. I tried to do that again with, but it strangely failed this time in the certificate creation process. One certificate that I tried to create manually with OpenSSL, as I usually do with HTTPS certificates, didn’t work either: the clients complained that no ciphers were available. I guess I did something wrong there.

There is however a nice Perl+GTK tool that you can use to build CA certs, go through the process of creating signing requests, sign those requests with your CA cert, and export the resulting files in different formats. That tool is called [TinyCA2] and is based on OpenSSL, as usual.

The process is the same as with pure command line OpenSSL or First, create your own certificate authority, then a server certificate with its corresponding signing request to the CA. Then sign that request with the CA certificate and you’re done: you can now export the CA cert, the signed server cert and the private key to *.pem files or other encodings if necessary. Here you can also make sure that both your CA cert and server cert get very long validity periods.

Please note that the CA cert should have a slightly longer validity period, as the server cert is created and signed afterwards, and a server certificate that stays valid even just 1 minute longer than the signing CA’s own certificate is unsupported and can result in serious problems with the software. In my case, I just gave the CA certificate 10 years + 1 day and the server certificate 10 years. Easy enough.
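For those who prefer plain OpenSSL over the GUI route, the whole dance can be sketched on the command line like this. File names and subject strings are just examples, and the -days values mirror my 10 years + 1 day choice:

```shell
# CA certificate and key, valid 10 years + 1 day (3651 days):
openssl req -x509 -newkey rsa:4096 -nodes -keyout ca-key.pem -out ca-cert.pem \
  -days 3651 -subj "/CN=Example LDAP CA"
# Server key plus certificate signing request:
openssl req -newkey rsa:4096 -nodes -keyout server-key.pem -out server-req.pem \
  -subj "/CN=ldap.example.com"
# Sign the request with the CA, valid 10 years (3650 days):
openssl x509 -req -in server-req.pem -CA ca-cert.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -days 3650
# Sanity check: the server cert must verify against the CA cert.
openssl verify -CAfile ca-cert.pem server-cert.pem
```

The final verify step is a cheap way to catch a broken chain before you ever touch slapd.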

After that, put the CA cert as well as the server cert + key on your OpenLDAP server as specified in /etc/openldap/slapd.conf, and the CA cert on your clients as specified by /etc/sssd/sssd.conf (or, if you’re not using Red Hat’s sssd, /etc/openldap/ldap.conf and /etc/nslcd.conf, which are however deprecated and no longer recommended). Also make sure your LDAP server can access the files (maybe chown ldap:ldap or chown root:ldap), and furthermore please make sure nobody can steal the private key!
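On the sssd side, the relevant bits of sssd.conf might look roughly like this (domain name, server URI and CA file path are assumptions for illustration):

```ini
# /etc/sssd/sssd.conf fragment, hypothetical values
[domain/default]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_id_use_start_tls = true
ldap_tls_cacert = /etc/openldap/cacerts/ca-cert.pem
ldap_tls_reqcert = demand
```

With ldap_tls_reqcert set to demand, sssd refuses to talk to the server at all when the certificate chain is broken, which is exactly the silent failure mode described above.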

Then, restart your LDAP server as well as sssd on all clients, or nslcd on all clients if you’re using the old stuff. If an NFS server is also part of your infrastructure, you will also need to restart the rpcidmapd service on all systems accessing that NFS or you’ll get a huge ownership mess when all user ids are being wrongly mapped to the unprivileged user nobody, which may happen with NFS when your idmapd is in an inconsistent state after fooling around with sssd or nslcd.

That’s of course not meant to be a guide on how to set up OpenLDAP + TLS as a whole, just on replacing the certificates. So if you ever get “user does not exist” upon calling id user or su - user and people can’t log in anymore, this might be your issue if you’re using OpenLDAP with TLS or SSL. And never ever delete root from your /etc/passwd, otherwise even you will be locked out as soon as slapd – your LDAP server – goes down! Then you’ll need Knoppix or something similar, and nobody wants to actually stand up from one’s chair and physically access the servers. ;)

Aug 062013

WebDAV logoRecently I was reading through my web server’s log file, only to find some rather alarming lines that I didn’t immediately understand: HTTP PUT requests obviously trying to implant PHP scripts onto my server via WebDAV. A few years back I had implemented WebDAV on my web server so users could run Subversion repositories for software development, via HTTPS+SVN.

So I did a little bit of research and found out that there were some Apache web server distributions like XAMPP that had WebDAV switched on by default, and that included a default user and password for – now hold on – full access!

I never allowed anonymous access to the WebDAV repositories, which were isolated from the usual web directories anyway. Still, there were some lines that made me feel a bit uncomfortable, like this one, which seems to have been an attempt at an anonymous, unauthenticated PUT:

 - - [24/Dec/2011:01:53:20 +0100] "PUT /webdav/sPriiNt...php HTTP/1.1" 200 346

Now, the part that raised my eyebrows was that my web server seems to have responded with HTTP 200 to this PUT request, which means that an existing resource was modified, or in other words overwritten, even though my server never had a path “/webdav/”. For the creation of a fresh new file it should’ve responded with HTTP 201 instead. There were many such PUT lines in the log file, but I was unable to locate the initial uploads, i.e. lines where my server would’ve responded with “PUT /webdav/sPriiNt…php HTTP/1.1” 201 346.

In the meantime, WebDAV has long been switched off and my server now responds with 405 (“Method Not Allowed”) to all PUT attempts. But I kept the data of all directories and could still not locate any of those PHP files, nor any DELETE method calls in the log file. So did my server really accept the PUT or not?! It seems not, especially since those scripts were never accessed via HTTP according to the logs. Well, at least that was all before I had to reinstall the server anyway, because of a major storage fuckup.
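Getting to that state is just a few Apache directives. A sketch in Apache 2.4 syntax (the path is an example, and the exact response codes depend on your module setup) might be:

```apacheconf
# Hypothetical hardening fragment for a tree that no longer serves WebDAV
<Directory "/var/www/html">
    # No DAV handling here anymore; PUT then gets rejected outright
    Dav Off
    # Belt and braces: deny every method except plain read/POST traffic
    <LimitExcept GET POST HEAD OPTIONS>
        Require all denied
    </LimitExcept>
</Directory>
```

Whether a blocked PUT comes back as 405 or 403 depends on which of the two layers catches it first, but either way nothing gets written.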

Still, I went ahead and googled some of the PHP script names, and I found some really interesting shit: dubious lists of hundreds of web servers with both IP addresses and hostnames and the corresponding paths to said uploaded PHP scripts. Most of them were obsolete already, but I managed to find some vulnerable servers where the PHP script was still accessible and executable.

What I found was a DDoS script that enabled you to specify a target IP/host and a time and duration of attack. That means, all those scripts are uploaded via what I assume to be a botnet to a wide range of vulnerable web servers, and then lists would be created for the human attackers to use and attack arbitrary targets in a timed fashion.

So I continued to search around and finally found some already modified full source code plus a WebDAV vulnerability exploit, all packed together in a nice RAR archive, even including a tutorial written for what seems to be “non-hackers”. With that it was extremely easy to reconstruct an even more dangerous variant of the attack procedure, which was targeted at Windows servers specifically:

  • Let the bundled WebDavLinkCrawler scan for WebDAV-capable servers for as long as you please, then press “stop”.
  • Copy all IP addresses collected by the crawler and presented to the user, save them in a TXT file.
  • Open the IP scanner exploit and load that TXT file. Let it scan all WebDAV capable servers for exploitable ones.
  • After that’s done, save the list of vulnerable servers.
  • Verify an arbitrary server by surfing to http://<ip>/webdav and looking for a “WebDAV testpage”. If it’s there, it worked.
  • Install the bundled BitKinex tool, press CTRL+2, enter an arbitrary name, then enter the server’s IP address. Use “wampp” as the user name and “xampp” as the password (those are said default credentials with full access). Enter “/webdav” as the data source.
  • Click on the Http/WebDAV tab, click browse, and upload your PHP exploit!
  • Access http://<ip>/webdav/yournicephpscript.php. A shell will come up if system calls are allowed.
  • Run the following commands in that shell: net user <username> /add, then net user <username> <newpassword>, then net localgroup “Administrators” “<username>” /add. Only works if XAMPP is running with administrative privileges of course, but on Windows machines it often does.
  • Connect to the IP via RDP (remote desktop protocol, built into pretty much any Windows OS). Welcome to your free server!

Well, this variant was even more dangerous but also less likely to succeed, yet I can imagine there were lots of successful attacks like that. And maybe even more that aimed at providing the DDoS interface I was looking for in the beginning. Good thing that my web server runs under a user without any administrative privileges, eh?

What surprised me most is that the exploit is not just targeted at Windows servers, but also seemingly meant to be used by regular Windows operators. And there I was, thinking that people would usually attack from Linux or BSD machines. But na-ah. All EXE files, RAR archives and easy tutorials. As if the black hatters had a large “audience” willingly helping them pull off their attacks.

An interesting thought, don’t you think?

Also funny that parts of the tutorials were written in German, more precisely the part about the WebDavLinkCrawler, which seems to be a German tool.

This does offer some interesting insights into how DDoS and server hijacking attempts are pulled off. How or why people are making those exploits newbie-proof I still do not fully understand. Maybe there is some sort of institutionalized cracking going on? I simply do not know enough to be able to tell.

But it sure doesn’t look like what I would have expected. No UNIX stuff there at all. And I still wonder whether I ever was exploitable or not, given how I had WebDAV locked down. Who knows.

Jul 072013

HardeningHardening WordPress – at least it’s easier than hardening steel alloys! It seems the base installation of a WordPress web log (like this one here) is not very well prepared for attacks from the web, especially brute force break-in attempts. After the last attack, I simply did a few file and folder renaming tricks to block people from accessing the corresponding scripts. This was not fully effective, as there are obviously multiple attack vectors, not just wp-login.php or wp-admin.

The renaming still worked for a while, but recently, new attacks happened, slowing down the server and even crashing some cached PHP byte code, which essentially meant crashing the whole web server, as PHP is loaded as a module for performance reasons. Needless to say, this was bad. Really bad.

However, a quick search amongst available plugins quickly revealed several solutions, mostly revolving around security by obscurity, which is in this case actually a very valid approach, masking all critical scripts and folders with required ID/URL strings. Wrong string? You get relayed to some arbitrary location. I decided to relay the attacker to his own localhost. With some luck it hits an open port on his or her own machine. ;)

Also, there are other useful plugins, like tarpit-style ones that simply ban IP addresses that fail too many login attempts, in case somebody does actually guess the necessary URL strings to access the real login page. There are even ones that you can tie to the well-known fail2ban scripts on Linux operating systems, working together with iptables.
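To sketch the fail2ban route (jail name, log path, limits and the regex are all assumptions here; the regex in particular depends on how your WordPress and plugins log failed logins):

```ini
# /etc/fail2ban/jail.local fragment, hypothetical values
[wordpress-login]
enabled  = true
port     = http,https
filter   = wordpress-login
logpath  = /var/log/httpd/access_log
maxretry = 5
bantime  = 3600

# /etc/fail2ban/filter.d/wordpress-login.conf
[Definition]
failregex = ^<HOST> .* "POST /wp-login\.php
ignoreregex =
```

fail2ban then inserts an iptables rule banning the offending IP for an hour after five matching lines, which takes the load off PHP entirely.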


Bottom line is: even if your web log is small, with very few visitors – like mine – you should still lock certain areas down properly and harden the site. Otherwise there will most definitely be shit happening. You may want to look for login page hardening scripts and also ones that can limit login attempts. A masking script and a tarpit-style one complement each other nicely, like maybe [Stealth Login Page] and [Limit Login Attempts], which are confirmed to work together very well. There are also others that you could look into, like [Login Security Solution], [Secure Hidden Login] or [Simple Login Lockdown] and many more.

For now, this seems to handle the problem very well!

May 302013

Hack logoTonight at around 10:30PM, this server came under attack by what I assume to be a larger bot network. As a result, all CPU-intensive web applications on my server, including this weblog, forums, CMS etc., went offline, because the attacks caused excessive PHP CPU usage and made all scripts time out. It seems the primary target of the attacks was this weblog’s login page, but the largely inactive forum was obviously also a target. Log file analysis suggests that this was a distributed brute force attack to break into web services (and probably a few thousand other web services on other servers). The attacking hosts were many machines spread across multiple IP pools, so I have to assume that this is some major attack to break into multiple web services at once, for purposes currently unknown to me.

Amusingly, the attack failed on this server while still in its infancy. As a security measure, the runtime of PHP scripts is limited on this server. The attack was so massively parallelized that it simply overloaded everything, causing all scripts to return HTTP 503 errors to almost all attacking hosts. In layman’s terms: this server was simply too SLOW for the attackers’ scripts to ever succeed. I can only laugh about that simple fact. This server was just too old and crappy for this shit. ;)
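The runtime limit in question is just PHP’s standard max_execution_time setting; the values below are examples, not necessarily what this server uses:

```ini
; php.ini fragment, example values
max_execution_time = 30   ; kill any script running longer than 30 seconds
memory_limit = 64M        ; optionally also cap per-request memory
```

Once enough parallel requests pile up against that wall, the remaining ones get 503s instead of ever reaching the login logic.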

Well, still, for that reason, this weblog and most other web services on this machine went offline for around 2½ hours. For now, everything is back online, and hopefully the brute force attacks have ceased, as my monitoring would suggest. We’ll see.

Feb 012012

Tux is dying. Sometimes.There has been a recent [security advisory] about an exploit that affects most Linux kernels v2.6.39-rc1 and newer, provided they contain the upstream patch [198214a7]. The flaw came with a re-enabled mem_write() function and enhancements made to the handling of the /proc/pid/mem permission checks when writing to a running program’s memory via that facility. It is possible to use this to escalate a user’s privileges, supposedly all the way up to root. That alone is bad enough, but the enhancement has also been back-ported by some distributors like Red Hat, and hence has hit CentOS and Scientific Linux as well, even though their kernel versions are <2.6.39-rc1. By now there are kernel updates available to fix the problem. In the meantime, there is some sample code available to show whether your specific machine is vulnerable to this kind of /proc/pid/mem hijacking or not. See the source code:

/*
 * Simple reproducer that tests whether we have commit
 * 198214a7ee50375fa71a65e518341980cfd4b2f0 or not.
 */

#define _LARGEFILE64_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

char *s  = "not vulnerable";
char *s2 = "vulnerable";

int main(int argc, char **argv)
{
  int fd;

  /* Open our own memory via the /proc facility. */
  fd = open("/proc/self/mem", O_RDWR);
  if(fd < 0) {
    perror("open: ");
    goto end;
  }

  /* Seek to the address of the pointer s... */
  if(lseek64(fd, (off64_t) &s, SEEK_SET) < 0) {
    perror("lseek64: ");
    goto end;
  }

  /* ...and try to overwrite it so it points at "vulnerable". */
  if(write(fd, &s2, sizeof(s2)) < 0) {
    perror("write: ");
  }

end:
  close(fd);
  printf("%s\n", s);
  return 0;
}

You can compile and test the code with GCC:

$ gcc test.c -o test
$ ./test

In case your system is vulnerable, the code will just output “vulnerable”, otherwise it will say “write: : Invalid argument” followed by “not vulnerable”. In the latter case your kernel is either too old to have the enhancement or has already been fixed. If your system is vulnerable, updating your kernel to a fixed version, or even downgrading in case none is available for your distribution, is highly recommended.