Jul 28 2017
 

1.) Introduction

Of course you could say: “If you’re going to use Russian software, that’s what you’d have to expect!”. But yeah. I’ve actually used tools written by Russian developers before, and they used to be very slim and fast, so I thought, why not give it a shot. Background is that I’ve finally ditched my ancient Nokia E72 “smart phone” based on Symbian 9.2 / S60 3rd, which has become almost unusable because of its lack of modern SSL ciphers (most websites won’t let you connect anymore) and because of its Skype and ICQ clients being banned from their respective servers.

So I finally went ahead and got myself an Android 7.1.1 device, the BlackBerry KEYone, my second attempt at using the OS (the first was a Motorola Milestone 2 with Android 2.1, a failure for many reasons).

Anyway, I had to find an email app that would let me do two things:

  1. Display and send everything as plain text (I hate HTML mails and find them pretty insulting to be honest)
  2. Allow me to connect to mail servers which support only older SSL/TLS protocols and ciphers (I’ve got no choice here)

2.) The Mail.ru email client on Android

2a.) The app itself

So, I tested a lot of clients, one of which was [Mail.ru], a pretty high-ranked email app (4.6/5) with more than 10 million installs out there. Superficially, it looks just like pretty much any other email client, because there are likely readily available Android libraries for implementing email clients:

Mail.ru client ad

An image directly from the Google Play Store, showing the app's GUI (click to enlarge)

So they advertise it with slogans like “ideal application for any mail” and “add all your email boxes in one application”. Actually, it’s ideal for just one thing: handing over all your email accounts and emails to a Russian company, and with it to the Russian government – because in Russia, companies have to yield to the government and grant it full access to user accounts and data by default.

I guess free Russian developers and actual Russian software companies have to be treated very differently!

What I did was to enter my own email account credentials in the Mail.ru app to be able to fetch my emails via IMAP. I found that the client does not meet my personal requirements (no way to force plain text email), so after my quick test, I just uninstalled the app.

2b.) What the app does without you noticing

However, by that time, the Mail.ru app had already leaked my account credentials to certain mail.ru and my.com servers (my.com is a part of the bigger Mail.ru group), which had now started to log into my account from Russia – periodically checking all my email boxes and downloading every single message stored on my own server. Let’s have a look at the logs!

Here is their first connection attempt, coming from 5.61.237.44 (sapif30.m.smailru.net) as well as the second one from 94.100.185.215 (rimap21.i.mail.ru):

Tue 2017-07-25 14:59:27: Session 5554; child 3; thread 1232
Tue 2017-07-25 14:59:26: Accepting IMAP connection from [5.61.237.44:42273]
Tue 2017-07-25 14:59:27: SSL negotiation successful (♡)
Tue 2017-07-25 14:59:27: --> * OK ♡ IMAP4rev1 ♡ ready
Tue 2017-07-25 14:59:27:  1 OK LOGIN completed
Tue 2017-07-25 14:59:27:  1 OK LIST completed
Tue 2017-07-25 14:59:27:  * BYE IMAP engine signing off (no errors)
Tue 2017-07-25 14:59:27: --> . OK LOGOUT completed
Tue 2017-07-25 14:59:27: IMAP session complete, (2654 bytes)
Tue 2017-07-25 14:59:27: ----------
Tue 2017-07-25 15:00:04: ---------- Partial transcript, remainder will follow.
Tue 2017-07-25 15:00:04: Session 5556; child 4; thread 3588
Tue 2017-07-25 14:59:28: Accepting IMAP connection from [94.100.185.215:53424]
Tue 2017-07-25 14:59:28: SSL negotiation successful (♡)
Tue 2017-07-25 14:59:28: --> * OK ♡ IMAP4rev1 ♡ ready
Tue 2017-07-25 14:59:28:  1 OK LOGIN completed
Tue 2017-07-25 14:59:28:  * CAPABILITY ♡
Tue 2017-07-25 14:59:28: --> 2 OK CAPABILITY completed

You might have guessed it, the ♡ marks things I cut from the logs for privacy reasons. Guess I got a bit too creative. ;) Anyway, this was only the beginning. Later, some mail collector servers from the IP range 185.30.17*.** (collector*.my.com) started to log in and download all my emails from all my folders! Here’s just a small excerpt from the commands issued with one of my archive folders serving as an example – most of the stuff has been cut out to make it more concise:

Tue 2017-07-25 14:59:29: <-- 3 LIST "" "*"
Tue 2017-07-25 14:59:31: <-- 23 FETCH 22:* (UID FLAGS)
Tue 2017-07-25 14:59:52: <-- 49 STATUS "Archives" (UIDNEXT MESSAGES UNSEEN UIDVALIDITY)
Tue 2017-07-25 14:59:52: <-- 49 STATUS "Archives" (UIDNEXT MESSAGES UNSEEN UIDVALIDITY)
Tue 2017-07-25 14:59:53: <-- 50 STATUS "Archives/2013" (UIDNEXT MESSAGES UNSEEN UIDVALIDITY)
Tue 2017-07-25 14:59:53: <-- 51 SELECT "Archives/2013"
Tue 2017-07-25 14:59:53: <-- 52 FETCH 1:* (UID FLAGS)
Tue 2017-07-25 14:59:53: <-- 53 UID FETCH 7 (RFC822.SIZE BODY.PEEK[] INTERNALDATE)

All of those are just the remote commands issued to my server. Note that in IMAP4, the UID FETCH <UID> … BODY.PEEK[] … command at the bottom is an actual message download. Needless to say, there were thousands of those going unchecked, because it took me 3 days to discover the leak, and even then only by coincidence. So by that time they had long downloaded all my emails from my own server to Russia. If you're not running your own mail server, you wouldn't even notice this.

So if you just happened to enter your AOL, Yahoo, gmail or Hotmail accounts, you'd never see those Russian servers accessing those accounts remotely!
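By the way, if you do run your own mail server, a quick grep over the logs is enough to spot this kind of remote access. A minimal sketch, assuming a plain-text mail log at /var/log/maillog (path and format will differ per server), matching the mail.ru / my.com address ranges seen above:

# Look for IMAP connections coming from the ranges observed in this article:
grep -E '(5\.61\.237\.|94\.100\.185\.|185\.30\.17)' /var/log/maillog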

3.) This can't be ok, can it?

This behavior is completely unacceptable and has been reported to Google, as it at the very least borders on violating Google's own privacy policy:

Privacy Policy & Secure Transmission

If your app handles personal or sensitive user data (including personally identifiable information, financial and payment information, authentication information, phonebook or contact data, microphone and camera sensor data, and sensitive device data) then your app must:

  • Post a privacy policy in both the designated field in the Play Console and from within the Play distributed app itself.
  • Handle the user data securely, including transmitting it using modern cryptography (for example, over HTTPS).

 

The privacy policy must, together with any in-app disclosures, comprehensively disclose how your app collects, uses and shares user data, including the types of parties with whom it’s shared.

Prominent Disclosure Requirement

If your app collects and transmits personal or sensitive user data unrelated to functionality described prominently in the app’s listing on Google Play or in the app interface, then prior to the collection and transmission, it must prominently highlight how the user data will be used and have the user provide affirmative consent for such use.

First of all, when asked to, the mail collectors drop down to cryptographic ciphers that even I wouldn't use anymore. I know it sounds hypocritical coming from me (I'm actually using very old ciphers too, as I'm out of options on my ancient server), but they do fall back to what's by no means "modern cryptography". Also, the leaking of account credentials and data to Russian servers, and the continued use of said data even after the user has stopped using Mail.ru services, is not mentioned anywhere while installing or using the app, at least not anywhere I could see.
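If you want to check for yourself which old ciphers your own IMAP server still accepts, a plain OpenSSL probe can help. A minimal sketch, where the host name is a placeholder and DES-CBC3-SHA is OpenSSL's name for an old 3DES-CBC suite:

# Offer only an ancient 3DES-CBC cipher and see whether the IMAPS server takes it:
openssl s_client -connect mail.example.com:993 -cipher 'DES-CBC3-SHA'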

I most definitely didn't give my consent to having the app use my data like this: I wasn't presented with an EULA during the installation or use of the software. The (Russian…) email they sent me after the accounts had been set up in the app didn't contain an EULA or privacy statement either. It's even worse considering [Mail.ru's history] in terms of handling that kind of information.

None of this is new either, see e.g. [this Reddit] (MyMail is from my.com - as said, a part of the Mail.ru Group).

Well, I started to look around and found a Mail.ru [user agreement] online. The interesting part is point 4.1.3:

4.1.3 In addition to the registration procedure on the Internet Service specified in clause 4.1. the user may be granted the right to register through using its data (login and password) of the e-mail box registered at the third person’s resource.

Irrespective of using any method of registration on the Internet Service the User’s password used to visit the Internet Service shall be beyond the reach of Mail.Ru.

Now that part is a bit problematic. The "third person's resource" is clearly your own mail account on some other server, like my email account on my own server. The question is, what exactly does it mean when they say that the user's password shall be "beyond the reach of Mail.Ru"? I guess they'd mean my actual plain text password, right?

Well, no matter whether they log in using a hashed challenge-response (<-- 2 authenticate CRAM-MD5) or just plain text (<-- 1 LOGIN ♡@♡.♡ ♡♡♡♡♡♡), they must have my password stored away on their servers in clear text or at least in a recoverable form, since a CRAM-MD5 response can only be computed from the actual password (probably on some encrypted file system? But still.). I wouldn't call that "beyond the reach of Mail.ru" anymore.

I guess I could have misread the user agreement (that I wasn't even presented with!) somewhere, but it doesn't seem to me as if they'd be following their own rules regarding privacy?!

If you're using the Mail.ru app, I can only advise you to uninstall it if you haven't done so already, and to change all account passwords ever entered in the application, to stop the Russian collector servers from logging into your accounts and "stealing" your email even after the app has been removed.

On a side note: Since K-9 Mail isn't exactly right for me either, I settled on [R2Mail2], which is being developed in Austria by the company [RundQuadrat]. I've been talking with its developer over the last few days, and he seems like a nice family guy. I do like the client, as it has an impressive feature list; let's just name a few points:

  • Manually configurable SSL/TLS cipher list, you can pick which ciphers you want or don't want to use, including the option to support a few deprecated ciphers.
  • Data oriented encryption with either S/MIME, or even PGP and PGP/MIME for emails and also arbitrary files (a small tool for file encryption is embedded in the client).
  • Support for Microsoft Exchange servers
  • Option to stop syncing in the background, so a full shutdown of the app is possible with ease.
  • Full plain text support, so you can force all messages to be displayed and sent in plain text only.
  • The client itself can be password protected and can be instructed to store all local data in encrypted form.
  • Extremely configurable: Reply/Forward Prefixes, host name use in EHLO command, notification LED color ( :roll: ), IPv4/IPv6 preference, Certificate store access & configuration, peak day/time option to boost synchronization, sync grouping with other apps to save battery, local email pruning and many, many other things.

It does come at a price though, as it costs 4.80€. But if you want a seriously powerful and, I'd say, more trustworthy email application for Android, you might want to give this a shot. Otherwise, maybe just go with the free K-9 Mail app if you want plain text and don't need to rely on mail servers with antiquated SSL/TLS implementations.

But no matter what, stay away from Mail.ru and MyMail!

Jun 29 2017
 

Recently, I ran into another issue on my old Windows XP x64 machines, and on regular XP and Windows Vista boxes as well. Microsoft's Security Essentials software – let's just call it MSSE – stopped updating itself. Even more problematic was the fact that manual updates wouldn't work anymore either. It would download the new definitions, but not install them. With no error messages to be found anywhere, I had no idea what to do. Ok, on XP x64, MSSE was never supported to begin with (the last 64-bit version 4.4.304.0 for Vista works though), but the problem also showed up on supported systems, maybe because of their EoL status.

Strangely though, sometimes it would work out of the blue, but mostly it seems to be broken. This is especially bad right now, because very recently, the Microsoft Malware Protection Engine that MSSE and many other Microsoft security products are based on has had some critical bugs resulting in potential remote code execution exploits (at the highest privilege levels). By just scanning a file – e.g. an attachment of an email – the code inside would be run and evaluated by the MMPE, and during that phase, the code could "break out" into the system, infecting it with god knows what.

So, updates are really important right now, or the security tool you may rely upon to protect you at least a little bit may become the most dangerous thing on your old XP x64, POSReady 2009 XP or Vista box, and not just there, but on more modern systems as well (Windows Defender on Windows 7+ uses the same MMPE).

What I did was to download the full update package mpam-fe.exe from Microsoft [here] and install it manually. Interestingly, this worked just fine. Based on that, I wrote a very simple little batch script that automates the process. Only drawback: It relies on one external tool, namely wget.exe, needed to download the package from Microsoft. Sadly, Windows doesn't seem to have a command line tool to do that on its own. You can get wget by installing [GNU on Windows], a collection of free UNIX command line tools built for Windows.

Once wget.exe is in your user's search path, you can use the task scheduler to automate the launch of the following updater script (just save it as a .bat file somewhere):

@ECHO OFF
:: Fetch the most current AV definitions (%TEMP% doesn't need
:: to be quoted, because it returns a short path anyway):
wget.exe --no-check-certificate -O %TEMP%\mpam-fe.exe "https://go.microsoft.com/fwlink/?LinkID=121721&arch=x64"
:: Install them, wait for 120 seconds, then delete the installer:
%TEMP%\mpam-fe.exe
CHOICE /C:AB /D:A /T:120 >NUL 2>&1
DEL /F %TEMP%\mpam-fe.exe
:: And we're done.
EXIT

I have no idea why the regular way of updating MSSE breaks on some systems, but now that I've been running the above script on my machines every night, MSSE is staying up to date pretty nicely. Ah yes, one thing: In case your system is 32-bit and not 64-bit, you need to change the URL being fetched by wget. Just replace the URL parameter arch=x64 with arch=x86 in that script, and it'll download the 32-bit version of mpam-fe.exe!
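To run the script automatically every night, the built-in task scheduler can also be driven from the command line. This is only a hedged example: the task name and path are placeholders, and while XP's schtasks expects the start time as HH:MM:SS, newer Windows versions want just HH:MM:

SCHTASKS /Create /TN "MSSE definition update" /TR "C:\Scripts\msse-update.bat" /SC DAILY /ST 03:00:00 /RU SYSTEM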

Also note that you can actually abort the 120 second wait by accidentally pressing either A or B while the updater script above is running, because the delay is implemented in a somewhat weird way using CHOICE, since Windows XP doesn't have a native sleep or wait command. If you want to prevent such accidents, you can make CHOICE use stranger characters that you'd never be able to enter accidentally, like this for example:

CHOICE /C:©® /D:® /T:120 >NUL 2>&1

And with that you can keep older machines using MSSE a tiny little bit more secure in case the auto-update breaks for you as well.

Nov 22 2016
 

And yet another FreeBSD-related post: After [updating] the IBM ServeRAID manager on my old Windows 2000 server, I wanted to run the management software on any possible client. Given it's Java stuff, that shouldn't be too hard, right? Turned out not to be too easy either. Just copying the .jar file over to Linux and UNIX and running it like $ java -jar RaidMan.jar wouldn't do the trick; I got nothing but some exception I didn't understand. I wanted to have it work on XP x64 (easy, just use the installer) and Linux (also easy) as well as FreeBSD. But there is no version for FreeBSD?!

The ServeRAID v9.30.21 manager only supports the following operating systems:

  • SCO OpenServer 5 & 6
  • SCO Unixware 7.1.3 & 7.1.4
  • Oracle Solaris 10
  • Novell NetWare 6.5
  • Linux (only certain older distributions)
  • Windows (2000 or newer)

I started by installing the Linux version on my CentOS 6.8 machine. It does come with some platform-specific libraries as well, but those are for the actual RAID controller management agent, which interfaces with the driver on the machine hosting the ServeRAID controller. But I only needed the user space client program, which is 100% Java stuff. All I needed was the proper invocation to run it! By studying IBM's RaidMan.sh, I came up with a very simple way of launching the manager on FreeBSD, using this script I called serveraid.sh (Java is required, naturally):

#!/bin/sh

# ServeRAID Manager launcher script for FreeBSD UNIX
# written by GAT. http://www.xin.at/archives/3967
# Requirements: An X11 environment and java/openjdk8-jre

curDir="$(pwd)"
baseDir="$(dirname "$0")/"

mkdir ~/.serveraid 2>/dev/null
cd ~/.serveraid/

java -Xms64m -Xmx128m -cp "$baseDir"RaidMan.jar com.ibm.sysmgt.raidmgr.mgtGUI.Launch \
-jar "$baseDir"RaidMan.jar "$@" < /dev/null >> RaidMan_StartUp.log 2>&1

mv ~/RaidAgnt.pps ~/RaidGUI.pps ~/.serveraid/
cd "$curDir"

Now with that you probably still can't run everything locally (= on a FreeBSD machine with a ServeRAID SCSI controller) because of the Linux libraries. I haven't tried running those components under the linuxulator (FreeBSD's Linux binary compatibility layer), nor do I care to. But what I can do is launch the ServeRAID manager and connect to a remote agent running on Linux or Windows or whatever else is supported.

Now, since this server/client traffic probably isn't secured at all (no SSL/TLS, I think), I'm running it through an SSH tunnel. However, the manager refuses to connect to a local port, because "localhost" and "127.0.0.1" make it think you want to connect to an actual local RAID controller. It would refuse to add such a host, because an undeletable "local machine" entry is always set up to begin with, and that one won't work with an SSH tunnel, as it's probably not talking over TCP/IP. This can be circumvented easily though!

Open /etc/hosts as root and enter an additional fantasy host name for 127.0.0.1. I did it like that with “xin”:

::1			localhost localhost.my.domain xin
127.0.0.1		localhost localhost.my.domain xin

Now I had a new host "xin" that the ServeRAID manager wouldn't complain about. Next, set up the SSH tunnel to the target machine; I put that part into a script /usr/local/sbin/serveraidtunnel.sh. Here's an example, where 34571 is the ServeRAID agent's default TCP listen port and 10.20.15.1 shall be the LAN IP of the remote machine hosting the ServeRAID array:

#!/bin/bash
ssh -fN -p22 -L34571:10.20.15.1:34571 mysshuser@www.myserver.com

You’d also need to replace “mysshuser” with your user name on the remote machine, and “www.myserver.com” with the Internet host name of the server via which you can access the ServeRAID machine. Might be the same machine or a port forward to some box within the remote LAN.

Now you can open the ServeRAID manager and connect to the made-up host “xin” (or whichever name you chose), piping traffic to and from the ServeRAID manager through a strongly encrypted SSH tunnel:

IBM ServeRAID Manager on FreeBSD

It even detects the local system's operating system "FreeBSD" correctly!

And:

IBM ServeRAID Manager on FreeBSD

Accessing a remote Windows 2000 server with a ServeRAID II controller through an SSH tunnel, coming from FreeBSD 11.0 UNIX

IBM should’ve just given people the RaidMan.jar file with a few launcher scripts to be able to run it on any operating system with a Java runtime environment, whether Windows, or some obscure UNIX flavor or something else entirely, just for the client side. Well, as it stands, it ain’t as straight-forward as it may be on Linux or Windows, but this FreeBSD solution should work similarly on other systems as well, like e.g. Apple MacOS X or HP-UX and others. I tested this with the Sun JRE 1.6.0_32, Oracle JRE 1.8.0_112 and OpenJDK 1.8.0_102 for now, and even though it was originally built for Java 1.4.2, it still works just fine.

Actually, it works even better than with the original JRE bundled with RaidMan.jar, at least on MS Windows (no more GUI glitches).

And for the easy way, here’s the [package]! Unpack it wherever you like, maybe in /usr/local/. On FreeBSD, you need [archivers/p7zip] to unpack it and a preferably modern Java version, like [java/openjdk8-jre], as well as X11 to run the GUI. For easy binary installation: # pkg install p7zip openjdk8-jre. To run the manager, you don’t need any root privileges, you can execute it as a normal user, maybe like this:

$ /usr/local/RaidMan/serveraid.sh

Please note that my script will create your ServeRAID configuration in ~/.serveraid/, so if you want to run it as a different user or on a different machine later on, you should recursively copy that directory to the new user/machine. That’ll retain the local client configuration.
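For example, something along these lines would carry that configuration over to another machine (user and host are placeholders):

$ scp -r ~/.serveraid otheruser@otherhost:~/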

That should do it! :)

Dec 25 2015
 

And here we have another NFC-enabled banking card for contactless payment, this time a VISA credit card issued by [Card Complete Austria]. Might I add, this is one of the few companies which still leisurely print the full credit card number on their invoices, which are then sent out via regular postal mail, in envelopes that clearly show what kind of letter is inside. Good job, Card Complete, and especially your reply regarding the matter, telling me that "it's okay, no need to change that"!

In any case, I’ve written about this whole NFC thing before in much greater detail, [see here]! That article will show you the risks involved when working with potentially exploitable NFC cards, which mostly pose a data leaking problem. But you may wish to read the comments there as well, especially [this one here]. This shows that VISA cards tend(ed?) to be extremely vulnerable up to the point where a person walking by close enough to your wallet could actually draw money from your credit card without you noticing.

I have no idea whether this security hole still exists in today's NFC-enabled VISA cards, but I'm no longer gonna take that risk. Especially not since I don't even need the NFC tech in that (or any other) banking card. Once again, we're just gonna physically destroy the induction coil that powers the NFC chip in the card and also acts as its antenna. Since I'm lazy, I didn't do the "poor man's X-ray" photos this time (you can see how to do that in the first NFC post), just before and after pictures:

So, before:

As you can see, there are already signs of use on this card. Yeah, I've been lazy, running around with an NFC-enabled VISA for too long already. Time to deal with it then:

Finding a good spot for my slightly oversized drilling head wasn't so easy on this card, because the card number, signature field, magnetic strip and other parts like the hologram were in the way. But in the end I managed to drill a hole in about the right spot. Because of the heavy image editing you can't see it, but I did slightly damage the first digit of my credit card number; it's just a bit of paint that came off though, so it should still be OK.

So, once again: You’re not welcome in my wallet, so bye bye, NFC!

Jun 30 2015
 

NFC – or "near-field communication" – is a now-booming technology for sending and receiving small chunks of data over very short distances. You may have heard about modern cellphones supporting the system to read information from small tags – in essence chips you stick onto something to provide small, local pieces of information. This can serve augmented reality purposes for instance, in a sense at least, providing metadata about objects anywhere in the world.

It’s nowadays also being used for payment though, both in conjunction with smartphones and their active NFC chips as well as debit/credit banking cards and their integrated, passive NFC circuitry.

Index:

  1. NFC basics
  2. NFC-capable banking cards
  3. Using a modern Android phone to fetch data from a banking card
  4. The theft issue
  5. Modern cards may be more close-lipped
  6. Killing NFC for good

1.) NFC basics

So there are connections between active chips (say: phone to phone) as well as active-passive ones, in which case the active side (a phone, an electronic cashier) will talk to the passive one. In the latter case, the active chip will generate an electromagnetic field which reaches a copper coil embedded in the passive device or tag, creating enough inductive voltage to power that passive NFC chip.

According to information that can be found on the web and in some specifications, the range should be about 20cm, with data transfer rates of 106kbit/s, 212kbit/s or 424kbit/s, and in some non-standard cases 848kbit/s. That'd be 13.25kB/s, 26.5kB/s, 53kB/s or 106kB/s respectively. The time to build up a connection is around one tenth of a second. There are NFC range extenders [like this one] for active chips however, which can boost the range up to almost 1 meter! And that's where the alarms start ringing in my head.

Now, why is any of that dangerous to begin with? Because it’s being used for payments and because there may be a significant information leaking issue with some of those banking cards.

2.) NFC-capable banking cards

First of all, I'd like to thank two of my colleagues, who shall remain anonymous, for providing a.) a fully affected debit card and b.) an NFC-capable Android smartphone.

Let’s take a look at our affected card (click to enlarge images, as usual):

A PayPass-based NFC-capable debit card

A PayPass-based NFC-capable debit card, see that PayPass logo?

Now this is not my own card, so I didn't have unlimited access to it. Since my own cards – both debit and credit – were not NFC-capable yet, I simply ordered a new one from my bank. There are other people on the web who used CT or X-ray imaging, like [here] or [here], to visualize the internals of such cards, but I wanted a cheap solution that every layman can copy easily. As a matter of fact, any bright light (even a cellphone's LED flash, when used as a torch) is sufficient, see here:

NFC coil visualized by normal light

The NFC coil on my new card, visualized by normal light, in this case a Sigma EVO X halogen lamp used for riding mountain bikes at night. This is a stitched image assembled from 11 individual photographs. And yes, I left my given name in the clear there. ;)

For more clarity, see the next image:

Here I emphasized the coil a bit, so you'd know what to look for

Here I emphasized the coil a bit, so you would know what to look for

Now this coil has two functions: First – as mentioned above – it provides inductive voltage and with it up to 15mA of current to run the NFC chip and potentially some flash memory. Second, it also serves as the NFC chip's antenna to properly receive the signal in NFC's 13.56MHz radio frequency spectrum. So, how about we talk to that chip a little ourselves, shall we?

3.) Using a modern Android phone to fetch data from a banking card

A Frenchman named [Julien Millau] luckily has developed an Android app called “Banking card reader NFC (EMV)”, which you can find on [Google Play] for free, including the source code as it’s licensed under the [Apache License, v2.0]. There are other apps too, tailored towards cards with local features (I’ll get to those later), but this is a good, generic one.

So what you’ll need is an NFC-capable Android smartphone, that app, and some banking card with NFC enabled. If you’ve got a chatty one on top of things, you can do this:

The basic card info might not look like much, as it's supposed to show only the card's serial number. Some cards – like this one here – however give you the bank account number instead! Nice one. So this is our information leak #1.

As you can see in the other two images, the card also features some flash memory holding a very interesting transaction log. By sending hexadecimal commands of the form 00 B2 NN 5C 00 to the card, where NN equals the log entry number, we can get a nice transaction log including the amounts paid. So 00 B2 01 5C 00 would get log entry #1, 00 B2 08 5C 00 gets #8, 00 B2 0E 5C 00 gets #14 and so on. After decoding, you get the date and amount of money spent for each transaction, and that includes both NFC transactions and normal full-contact transactions, where you put your card into a real chip reader and enter your PIN.

So no matter how you pay, it will be logged on such cards. And that log can be read. Given that NFC is completely PIN-less, we can just fetch such data without any authentication or encryption holding us back! That's leak #2. Again, keep in mind that there are those range boosters for active NFC chips! If I put a powered NFC patch kit on my Android phone, in a worst-case scenario I could just walk past you and potentially fetch your transaction log and bank account number!

Now that did raise a few eyebrows, which is why some banks have reacted to the issue, including my own. But first, on to another problem:

4.) The theft issue

Besides leaking information, there is another problem: As said, NFC access is PIN-less. It's mostly used for micropayments of up to 25€, which limits the damage somewhat. Typically, you'll get 3-5 payments before you have to plug the card back into an ATM or electronic cashier and re-authenticate it using the PIN, after which you'll get another 3-5 contactless payments activated. So with 5 usable payments, you can lose 125€, should your card be stolen. But it doesn't end there.

In my own country, Austria, we also have an offline cash replacement technology called [Quick]. With that, you can basically charge your banking card and carry the charge around like real cash. It's being used for machines where online connections are economically unfeasible, like cigarette vending machines or pay-and-display machines, where you buy tickets for car parking. The maximum charge for Quick amounts to 400€ in total.

Thing is, should you ever choose to charge the full amount, this triggers an activation of Quick-over-NFC! This is actually intentional; it's what you have to do to get to that feature, contactless offline payments. The real problem is that with Quick-over-NFC, all limits are gone, which is confirmed [here]. So a thief could just waste the entire charge of the card to his heart's content, upping the potential worst-case loss to a full 525€ (125€ + 400€)! Holy hell, that does actually hurt already! Even if you call your bank and get the card locked due to theft, that money is still gone due to the offline nature of Quick. Just like real cash. So better hold on to your card if you've already got that feature activated and money charged onto it!

But let’s get back to the data leak issue again:

5.) Modern cards may be more close-lipped

Banks aren't entirely ignorant of the problem and the related criticism received, so some of them actually did try to improve the situation. When trying to read my brand-new card from Bank Austria for instance, what we get is this:

First of all, this newer card doesn’t give away my bank account number, but really just the serial number. That takes care of leak #1 to at least some degree. Secondly, the card doesn’t seem to have a transaction log anymore. At least it doesn’t hand one out using known commands. It can of course still be used for NFC payments using [PayWave] or, as it is in my case, [PayPass] and Quick, if activated. But yes, this is more secure, at least when considering the info leak.

But what if I just want to lock it down for good, once and for all?

We can never be sure that there really is no transaction log after all. Maybe we just don’t know the necessary commands. Plus, there still is the micropayment issue.

Now, some banks give you the option to deactivate the feature at your local branch, sometimes for free. Volksbank here does this, for instance. Not sure how this works and whether it's really final though. Others may offer to send you an NFC-free card, as my bank does. That is, if you know about it and proactively order one for 14€… By default they'd just send you a fully NFC-capable one before the old one expires.

Some banks do neither of the two. Which is why you may want to handle things yourself.

6.) Killing NFC for good

Remember that poor man's X-ray from above? All we need to do is cut the copper coil to fully disable all NFC functionality. I used a microdrill for this, which may be slightly dangerous for the chip due to fast static charge buildup, but it worked fine in my case. You can also use a manual drill or even melt your way through with a soldering iron. Just make sure not to pick a spot that sits within the card's magnetic strip! In any case, we mark the spot first:

A red X above the NFC "wave" logo marks the spot

A red X above the NFC “wave” logo marks the spot. Notice that this card shows both the PayPass and that NFC wave logo.

A few seconds later, my cards’ NFC feature has effectively been dealt with. Tests with both Android phones and actual electronic cashiers have shown that yes, it’s truly gone. All the other full-contact functions like cash withdrawal and payments have also been tested and still work absolutely fine!

Universal Solution™: If it bugs you, just drill a hole in it!

Universal Solution™: If it bugs you, just drill holes in it ’till it’s dead!

So that's it, no more contactless payments, no more reading information out of the card wirelessly, no more Quick-over-NFC (which only concerns Austrian people anyway, but yeah). Just make sure that the edge of the hole is properly deburred, so your card won't get stuck in any ATMs or whatever.

So, all of the good things are still there, and all of what I consider to be the bad things are now gone! Finally, I can take my tin foil hat off again.

Ah yes, tin foil! Before I forget it, another colleague of mine also tried to shield his card using tin foil instead. And indeed, that seems to be sufficient too, in case you don’t wanna physically modify your card. You can even buy readily-made shielded card sleeves to protect you from unauthorized NFC accesses, like [this one here].

I do prefer the permanent solution, but it's up to you; the option to do it temporarily is there as well.

So, stay safe! :)

Jul 17 2014
 

The first release of XViewer is now available, providing TK-IP101 users with a way to still manage their installations using modern Java versions and operating systems without any blocker bugs and crashes. I have created a static page about it [here] including downloads and the statements required by TRENDnet. You can also see it on the top right of this weblog. This is the first fruition of TRENDnet allowing me to release my modified version of their original KViewer under the GPLv3 license.

As requested, all traces of TRENDnet and their TK-IP101 box have been removed from the code (not that there were many anyway, as the code was reverse-engineered from the byte code) on top of the rename to XViewer. In time, I will also provide my own documentation for the tool.

Since I am no Java developer, you shouldn't expect any miracles though. Also, if anyone is willing to fork it into yet another, even better version of the program, you're of course welcome to do so!

Happy remote monitoring & managing to you all! :)

Edit: Proper documentation for SSL certificate creation using a modern version of [XCA] (The X certificate and key management tool) and about setting up and using XViewer & XImpcert has now also been made [available]!

Jul 16 2014
 

In my [last post] I talked about the older TRENDnet TK-IP101 KVM-over-IP box I got to manage my server over the network, even in conditions where the server itself is no longer reachable (kernel crash, BIOS, etc.).

I also stated that the client software to access the box is in a rather desolate state, which led me to the extreme step of decompiling the Java-based Viewer developed by TRENDnet called KViewer.jar and its companion tool for SSL certificate imports, Impcert.jar.

Usually, software decompilation is a rather shady business, but I did this as a TRENDnet support representative could not help me out any further. After reverse-engineering the software, making it compatible with modern Java Runtime environments and fixing a blocker bug in the crypto code, I sent my code and the binary back to TRENDnet for evaluation, asking them to publish the fixed versions. They refused, stating that the product was end-of-life.

In a second attempt, I asked for permission to release my version of KViewer including the source code, and also asked which license I could use (GPL? BSD? MIT?). To my enormous surprise, the support representative conferred with the persons in charge and told me that it had been decided to grant me permission to release KViewer under the GNU General Public License (GPL), as long as all mention of TRENDnet and related products is removed from the source code and program.

To further distinguish the new program from the original, I renamed it to "XViewer" and its companion tool to "XImpcert", as an homage to my server, XIN.at.

KVM host:port

The former KViewer by TRENDnet, which works up to Java 1.6u27

XViewer

XViewer, usable on JRE 1.7 and 1.8

Now, I am no Java developer, I don't know ANYthing about Java, but what I did manage to do is fix all errors and warnings currently reported on the source code by the Eclipse Luna development environment and the Java Development Kit 1.7u60. While my version no longer supports Java 1.6, it does run fine on Java 1.7u60 and 1.8u5, tested on Windows XP Professional x64 Edition and CentOS 6.5 Linux x86_64. A window-closing bug has been fixed by my friend Cosmonate, and I got rid of a few more myself. In addition to that, new buttons have been added for an embedded "About" window and an embedded GPLv3 license, as suggested by TRENDnet.

On top of that, I hereby state that I am not affiliated with TRENDnet and that TRENDnet of course cannot be held liable for any damage or any problems resulting from the use of the modified Java viewer now known as XViewer or its companion tool XImpcert. That shall be said even before the release, as suggested to TRENDnet by myself and subsequently confirmed to be a statement required by the company.

In the very near future, I will create a dedicated site about XViewer on this weblog, maybe tomorrow or the day after tomorrow.

Oh and of course: Thanks fly out to Albert from TRENDnet and the people there who decided to grant me permission to re-release their viewer under the GPL! This is not something that we can usually take for granted, so kudos to TRENDnet for that one!

Jul 11 2014
 

Attention please: This article contains some pretty harsh criticism of the software shipped with the TRENDnet TK-IP101 KVM-over-IP product. While I will not remove what I have written, I have to say that TRENDnet went to great lengths to support me in getting things done, including allowing me to decompile and re-release their Java software in a fixed form under the free GNU General Public License. Please [read this] to learn more. This is extremely nice, and so it shall be stated before you read anything bad about this product, so you can see things in perspective! And no, TRENDnet has not asked me to post this paragraph, those are my own words entirely.

I thought that being able to manage my server out-of-band would be a good idea. It does sound good, right? Being able to remotely control it even if the kernel has crashed, and being able to remotely access everything down to the BIOS level. A job for a KVM-over-IP switch. So I got this slightly old [TK-IP101] from TRENDnet. Turns out that wasn't the smartest move, and it's actually a 400€ piece of hardware. The box itself seems pretty ok at first, connecting to your KVM switch fabric or a single server via PS/2, USB and VGA. Plus, you can hook up a local PS/2 keyboard and mouse too. Offering what was supposed to be highly secure SSL PKI authentication via server and client certificates, so that only clients with the proper certificate may connect, plus a web interface, this sounded really good!

TRENDnet TK-IP101

TRENDnet TK-IP101

It all breaks down when it comes to the software though. First of all, the guide for certificate creation that is supposed to be found on the CD that comes with the box is just not there. The XCA software TRENDnet suggests one should use was also missing. Not good. Luckily, that software is open source and can be downloaded from the [XCA SourceForge project]. It's basically a graphical OpenSSL front end. Create a PEM-encoded root certificate, a PEM-encoded server certificate and a PKCS#12 client certificate, the latter signed by the root cert. So much for that. Oh, and I uploaded that TRENDnet XCA guide for you in case it's missing on your CD too; it's a bit different for the newer version of XCA, but just keep in mind to create keys beforehand and to use certificate requests instead of certificates. You then need to sign the requests with the root certificate. With that information plus the guide you should be able to manage certificate creation:
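If you'd rather skip XCA, the same three pieces can also be produced with plain OpenSSL on the command line. This is only a rough sketch under my own assumptions: file names, validity periods and subject fields are placeholders, and the keys are left unencrypted here.

# Self-signed root certificate (PEM) plus its key:
openssl req -new -x509 -nodes -days 3650 -newkey rsa:2048 -keyout root.key -out root.pem -subj "/CN=My KVM Root CA"
# Server certificate: create a request, then sign it with the root:
openssl req -new -nodes -newkey rsa:2048 -keyout server.key -out server.csr -subj "/CN=kvm.example.com"
openssl x509 -req -days 3650 -in server.csr -CA root.pem -CAkey root.key -CAcreateserial -out server.pem
# Client certificate: request, sign with the root, then pack key + cert into PKCS#12:
openssl req -new -nodes -newkey rsa:2048 -keyout client.key -out client.csr -subj "/CN=kvm-client"
openssl x509 -req -days 3650 -in client.csr -CA root.pem -CAkey root.key -CAcreateserial -out client.pem
openssl pkcs12 -export -in client.pem -inkey client.key -certfile root.pem -out client.p12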

But it doesn't end there. First I tried the Windows-based viewer utility that comes with its own certificate import tool. The import works, but the tool will not do client+server authentication. What it WILL do before terminating itself is this:

TK-IP101 IPViewer.exe bug

TK-IP101 IPViewer.exe bug

I really tried to fix this. I even ran it on Linux with Wine just to do an strace on it, looking for failing open() calls. Nothing. So I thought… why not try the second option, the Java viewer that goes by the name of KViewer.jar? Usually I don't install Java, but why not try it out with Oracle's Java 1.7u60, eh? Well:

So yeah. What the hell happened there? It took me days to determine the exact cause, but I’ll cut to the chase: With Java 1.6u29, Oracle introduced multiple changes in the way SSL/TLS worked, also due to the disclosure of the BEAST vulnerability. When testing, I found that the software would work fine when run with JRE 1.6u27, but not with later versions. Since Java code is pretty easily decompiled (thanks fly out to Martin A. for pointing that out) and the Viewer just came as a JAR file, I thought I’d embark on the adventure of decompiling Java code using the [Java Decompiler]:

Java Decompiler decompiling KViewer.jar's Classes

Java Decompiler decompiling KViewer.jar’s Classes

This results in surprisingly readable code. That is, if you’re into Java. Which I am not. But yeah. The Java Decompiler is pretty convenient as it allows you to decompile all classes within a JAR and to extract all other resources along with the generated *.java files. And those I imported into a Java development environment I knew, Eclipse Luna.

Eclipse Luna

Eclipse Luna

Eclipse Luna (using JDK 7u60) immediately complained about 15 or 16 errors and about 60 warnings. Mostly these were missing primitive declarations and other smaller things that even I managed to fix; I even got rid of the warnings. But the SSL bug persisted in my Java 7 build just as it did before. See the following two traces of SSL and handshaking errors, one working fine on JRE 1.6u27, and one broken on JRE 1.7u60:

So first I got some ideas from stuff [posted here at Oracle], and added the following two system properties in varying combinations directly in the Main class of KViewer.java:

public static void main(String[] paramArrayOfString)
{
  /* Added by the GAT from http://wp.xin.at                        */
  /* This enables insecure TLS renegotiation as per CVE-2009-3555  */
  /* in interoperable mode.                                        */
  java.lang.System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "false");
  java.lang.System.setProperty("sun.security.ssl.allowLegacyHelloMessages", "true");
  /* ------------------------------------------------------------- */
  KViewer localKViewer = new KViewer();
  localKViewer.mainArgs = paramArrayOfString;
  localKViewer.init();
}

This didn’t really do any good though, especially since “interoperable” mode should work anyway and is being set as the default. But today I found [this information on an IBM site]!

It seems that Oracle fixed the BEAST vulnerability in Java 1.6u29, amongst other things. They seem to have done this by disallowing renegotiations for affected implementations of CBC (cipher block chaining) ciphers. Now, this KVM switch can negotiate only a single cipher: SSL_RSA_WITH_3DES_EDE_CBC_SHA. See that "CBC" in there? Yeah, right. And it got blocked, because the implementation in that aged KVM box is no longer considered safe. Since you can't just switch to a stream-based RC4 cipher, Java has no other choice but to drop the connection! Unless… you do this:

public static void main(String[] paramArrayOfString)
{
  /* Added by the GAT from http://wp.xin.at                             */
  /* This disables CBC protection, thus re-opening the connections'     */
  /* BEAST vulnerability. No way around this due to a highly restricted */
  /* KLE ciphersuite. Without this fix, TLS connections with client     */
  /* certificates and PKI authentication will fail!                     */
  java.lang.System.setProperty("jsse.enableCBCProtection", "false");
  /* ------------------------------------------------------------------ */
  /* Added by the GAT from http://wp.xin.at                        */
  /* This enables insecure TLS renegotiation as per CVE-2009-3555  */
  /* in interoperable mode.                                        */
  java.lang.System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "false");
  java.lang.System.setProperty("sun.security.ssl.allowLegacyHelloMessages", "true");
  /* ------------------------------------------------------------- */
  KViewer localKViewer = new KViewer();
  localKViewer.mainArgs = paramArrayOfString;
  localKViewer.init();
}

Setting the jsse.enableCBCProtection property to false before the negotiation / handshake will make your code tolerate CBC ciphers vulnerable to BEAST attacks. Recompiling KViewer with all the code fixes including this one makes it work fine with 2-way PKI authentication using a client certificate on both Java 1.7u60 and even Java 1.8u5. I have tested this using the 64-bit x86 Java VMs on CentOS 6.5 Linux as well as on Windows XP Professional x64 Edition and Windows 7 Professional SP1 x64.
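As a side note, the same JSSE property can presumably also be passed on the java command line instead of being hard-coded. I haven't tested that here, and it obviously doesn't address the viewer's other bugs:

java -Djsse.enableCBCProtection=false -jar KViewer.jar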

De-/Recompiled & "fixed" KViewer connecting to a machine much older even than its own crappy code

De-/Recompiled & “fixed” KViewer.jar connecting to a machine much older even than its own crappy code

I fear I cannot give you the modified source code, as TRENDnet would probably hunt me down, but I’ll give you the compiled byte code at least, the JAR file, so you can use it yourself. If you wanna check out the code, you could just decompile it yourself, losing only my added comments: [KViewer.jar]. (ZIPped, fixed / modified to work on Java 1.7+)

Both the modified code and the byte code “binary” JAR have been returned to TRENDnet in the context of my open support ticket there. I hope they’ll welcome it with open arms instead of suing me for decompiling their Java viewer.

In reality, even this solution is nowhere near perfect. While it does at least allow you to run modern Java runtime environments instead of highly insecure older ones, plus using pretty secure PKI auth, it still doesn't fix the man-in-the-middle attack issues at hand. TRENDnet should fix their KVM firmware, enable it to run the TLSv1.2 protocol with AES256 Galois/Counter Mode (GCM) ciphers and fix the many, many problems in their viewer clients. The TK-IP101 being an end-of-life product means that this is likely never gonna happen though.

It does say a lot when the consumer has to hack up the software of a supposedly high-security 400€ piece of networking hardware himself, just to make it work properly.

I do still hope that TRENDnet will react positively to this, as they do not offer a modern replacement product to supersede the TK-IP101.

May 29 2014
 

Just recently, I was happily hacking away at the Truecrypt 7.1a source code to enhance its abilities under Linux, and everybody was eagerly awaiting the next version of the open source disk encryption software, since the developers had told me they were working on "UEFI+GPT booting". And now: BOOM. Truecrypt website gone, forum gone, all former versions' downloads gone. Replaced by a redirection to Truecrypt's SourceForge site, showing a very primitive page telling users to migrate to Bitlocker on Windows and Filevault on MacOS X. And told to just "install some crypto stuff on Linux and follow the documentation".

Seriously, what the fuck?

Just look at this shit (a snippet from the OSX part):

The Truecrypt website now

The Truecrypt website now

Further up they say the same thing, warning the user that it is not secure, with the following addition: "as it may contain unfixed security issues".

There is also a new Truecrypt version 7.2, stripped of most of the functionality. It can only be used to decrypt and mount volumes anymore, so this is their "migration version". Funny thing is, the GPG signatures and keys seem to check out. It's truly the Truecrypt developers' keys that were used for signing the binaries.

When trying to get a screenshot of the old website for comparison from the WayBackMachine, you get this:

Can't fetch http://www.truecrypt.org from the WayBackMachine

Can’t fetch http://www.truecrypt.org from the WayBackMachine. Access denied.

Now, before I give you the related links, let me sum up the current theories as to what might have occurred here:

  • http://www.truecrypt.org has been attacked and compromised, along with the SourceForge Account (denied by SourceForge administrators atm) and the signing keys.
  • A 3-letter agency has put pressure on the Truecrypt foundation, forcing them to implement a back door. The devs burn the project instead.
  • The Truecrypt developers had enough of the pretty lacking donation support from the community and just let it die.
  • The crowdfunded Truecrypt Audit project found something very nasty (seems not to be the case according to auditors).
  • Truecrypt was an NSA project all along, and maintenance has become tedious. So they tell people to migrate to NSA-compromised solutions that are less work, as they don’t have to write the code themselves (Bitlocker, Filevault). Or, maybe an unannounced NSA backdoor was discovered after all. Of course, any compromise of commercial products stands unproven.

Here are some links from around the world, including statements by cryptographers who are members of the Truecrypt audit project:

If this is legit, it’s really, really, really bad. One of the worst things that could’ve happened. Ever. I pray that this is just a hack/deface and nothing more, but it sure as hell ain’t looking good!

There is no real cross-platform alternative, Bitlocker is not available to all Windows users, and we may be left with nothing but a big question mark over our heads. I hope that more official statements will come, but given the clandestine nature of the TC developers, this might never happen…

Update: This starts to look more and more legit. So if this is truly the end, I will dearly miss the Truecrypt forum. Such a great community with good, capable people. I learned a lot there. So Dan, nkro, xtxfw, catBot/booBot, BeardedBlunder and all you many others whose nicks my failing brain can not remember: I will likely never find you guys again on the web, but thanks for all your contributions!

Update 2: Recently, a man called Steve Barnhart, who had been in contact with Truecrypt auditor Matthew Green, said that a Truecrypt developer named "David" had told him via email that whichever developers were still left had lost interest in the project. The conversation can be read [here]!

I once got a reply from a Truecrypt developer in early 2013, when asking about the state of UEFI+GPT bootloader code too…

I just dug up that email from my archive, and the address contained the full name of the sender. And yes, it was a “David”. This could very well be the nail in the coffin. Sounds as if it was truly not the NSA this time around.

May 24 2014
 

Just when things had gotten crazy enough with my backporting of Server 2003 updates to Windows XP Pro x64 Edition, here comes the next "bomb"! User [MasterOf486er] on the [Voodooalert forums] posted a link to the well-known German website WinFuture, which focuses primarily on all things Windows. They describe a way of hacking up 32-bit Windows XP to act like a Windows Embedded POSReady 2009 system, [see here]! Those so-called "POS" or "Point of Service" systems are typically airport terminals, train/subway ticket vending machines, ATMs and other systems running in kiosk mode.

And Windows XP based POSReady 2009 systems are supported until [2019-04-09]!

The hack is rather simple: all you need to do to make your 32-bit Windows XP act as an Embedded POSReady 2009 machine is to add the following to your system's registry:

Windows Registry Editor Version 5.00 

[HKEY_LOCAL_MACHINE\SYSTEM\WPA\PosReady]
"Installed"=dword:00000001

I have prepared a .reg file for your enjoyment that you can just download, unpack and then double-click as Administrator:
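Alternatively, the same value can be set from a command prompt with the built-in reg.exe; a minimal sketch, writing the exact key and value shown above:

reg add "HKLM\SYSTEM\WPA\PosReady" /v Installed /t REG_DWORD /d 1 /f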

After entering the data into your registry, re-check Windows Update, and you should be getting the goods! As always, you'll have to do this at your own risk, no guarantees for anything from my side. But for now it seems to be working for people on 32-bit XP!

Please note that you might be violating Microsoft's Windows XP EULA by applying this hack, so you've been warned!

Edit: We now have an official statement by a Microsoft spokesperson regarding the POSReady hack. As always, take it with a grain of salt. [Source]:

“We recently became aware of a hack that purportedly aims to provide security updates to Windows XP customers. The security updates that could be installed are intended for Windows Embedded and Windows Server 2003 customers and do not fully protect Windows XP customers. Windows XP customers also run a significant risk of functionality issues with their machines if they install these updates, as they are not tested against Windows XP. The best way for Windows XP customers to protect their systems is to upgrade to a more modern operating system, like Windows 7 or Windows 8.1.”

They do have a point there though. While we got an IE8 and .NET update, and even the lightweight shell library update, there is no guarantee that every hole will be plugged, as POSReady 2009 systems are reduced-feature-set XPs after all. Also, the updates are naturally untested on regular XP machines, so there is some risk. Still, I consider running XP in "POSReady 2009" mode a better option than just leaving it frozen in its "8th of April, 2014" state.