Oct 22, 2014
 

XIN.at has been running an IRC chat server for some time now, but the problem has always been that people need some client software to use it, like X-Chat or Nettalk or whatever.

People usually just don't want to install yet another chat client, no matter how old and well-established IRC itself may be. Alternatively, they can use some other untrusted web interface to connect to either the plain-text [irc://www.xin.at:6666] or the encrypted [irc+ssl://www.xin.at:6697] server via a browser, but this isn't optimal either. Since JavaScript cannot open TCP sockets on its own, and hence cannot connect to an IRC server directly, there are only two kinds of solutions:

  • Purely client-based, as a Java applet or Adobe Flash applet, neither of which is a very good option.
  • JavaScript client + server backend for handling the actual communication with the IRC server.
    • Server backends exist in JavaScript/Node.js, Perl, Python, PHP etc.

Since I cannot run [Node.js] and [cgi:irc] is unportable due to its reliance on UNIX sockets, only Python and PHP remained. PHP was easier for me, so I tried the old [WebChat2] software developed by Chris Chabot. To get connection-oriented encryption, I wrapped SSL/TLS around WebChat2's otherwise unencrypted PHP socket server. You can achieve this with cross-platform software like [stunnel], which can wrap SSL around almost any server's connection (with the complex FTP protocol perhaps being the exception). While WebChat2's back end is based on PHP, the front end uses JavaScript/Comet. This is what it looks like:
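As for the stunnel part: wrapping TLS around a plain-text backend boils down to a tiny service definition. The following is only a minimal sketch with made-up certificate paths and port numbers (WebChat2's backend port in particular depends on your own configuration), not my actual setup:

; Terminate SSL/TLS on a public port and forward the decrypted
; traffic to the plain-text WebChat2 socket server on localhost.
; Certificate, key and port numbers are placeholders.
cert = /etc/stunnel/webchat.pem
key  = /etc/stunnel/webchat.key

[webchat2-ssl]
accept  = 8081
connect = 127.0.0.1:8080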

So that should do away with the “I don’t wanna install some chat client software” problem, especially when considering that most people these days don’t even know what Internet Relay Chat is anymore. ;) It also allows anonymous visitors on this web log to contact me directly, while allowing for a more tap-proof conversation when compared with what typical commercial solutions would give you (think WhatsApp, Skype and the likes). Well, it’s actually not more tap-proof considering the server operator can still read all communication at will, but I would like to believe that I am a more trustworthy server operator than certain big corporations. ;)

Oh, and if you finally do find it in yourself to use some good client software, check out [XChat] on Linux/UNIX and its fork [HexChat] on Windows, or [LimeChat] on MacOS X. There are mobile clients too, like for Android ([AndroIRC], [AndChat]), iOS ([SIRCL], [TurboIRC]), Windows Phone 8 ([IRC Free], [IRC Chat]), Symbian 9.x S60 ([mIRGGI]) and others.

So, it's all made easy now, whether via client software or just a web browser! Ah, and before I forget, here's the link of course:

Edit: Currently, only the following browsers are known to work with the chat (older versions may sometimes work, but are untested):

  • Mozilla Firefox 31+
  • Chromium (incl. Chrome/SRWare Iron) 30+
  • Opera 25+
  • Apple Safari 5.1.7+
  • KDE Konqueror 4.3.4+

The following browsers are known to either completely break or to make the interface practically unusable:

  • Internet Explorer <=11
  • Opera <=12.17
Jul 17, 2014
 

The first release of XViewer is now available, providing TK-IP101 users with a way to keep managing their installations with modern Java versions and operating systems, free of blocker bugs and crashes. I have created a static page about it [here], including downloads and the statements required by TRENDnet. You can also see it at the top right of this weblog. This is the first fruit of TRENDnet allowing me to release my modified version of their original KViewer under the GPLv3 license.

As requested, all traces of TRENDnet and their TK-IP101 box have been removed from the code (not that there were many anyway, as the code was reverse-engineered from the byte code) on top of the rename to XViewer. In time, I will also provide my own documentation for the tool.

Since I am no Java developer, you shouldn’t expect any miracles though. Also, if anyone would be willing to fork it into yet another, even better version of the program, you’re of course welcome to do so!

Happy remote monitoring & managing to you all! :)

Edit: Proper documentation for SSL certificate creation using a modern version of [XCA] (The X certificate and key management tool) and about setting up and using XViewer & XImpcert has now also been made [available]!

Jul 16, 2014
 

In my [last post] I talked about the older TRENDnet TK-IP101 KVM-over-IP box I got to manage my server over the network even in conditions where the server itself is no longer reachable (kernel crash, BIOS, etc.).

I also stated that the client software to access the box is in a rather desolate state, which led me to the extreme step of decompiling the Java-based Viewer developed by TRENDnet called KViewer.jar and its companion tool for SSL certificate imports, Impcert.jar.

Usually, software decompilation is a rather shady business, but I did this because a TRENDnet support representative could not help me any further. After reverse-engineering the software, making it compatible with modern Java Runtime Environments and fixing a blocker bug in the crypto code, I sent my code and the binary back to TRENDnet for evaluation, asking them to publish the fixed versions. They refused, stating that the product was end-of-life.

In a second attempt, I asked the representative for permission to release my version of KViewer including the source code, and also asked which license I could use (GPL? BSD? MIT?). To my enormous surprise, he conferred with the persons in charge and told me that it had been decided to grant me permission to release KViewer under the GNU General Public License (GPL), as long as all mention of TRENDnet and related products was removed from the source code and program.

To further distinguish the new program from the original, I renamed it to "XViewer", and its companion tool to "XImpcert", as an homage to my server, XIN.at.

The former KViewer by TRENDnet, which works up to Java 1.6u27

XViewer, usable on JRE 1.7 and 1.8

Now, I am no Java developer and I don't know ANYthing about Java, but what I did manage to do is fix all errors and warnings currently reported for the source code by the Eclipse Luna development environment and the Java Development Kit 1.7u60. While my version no longer supports Java 1.6, it runs fine on Java 1.7u60 and 1.8u5, tested on Windows XP Professional x64 Edition and CentOS 6.5 Linux x86_64. A window-closing bug was fixed by my friend Cosmonate, and I got rid of a few more myself. In addition, new buttons have been added for an embedded "About" window and an embedded GPLv3 license, as suggested by TRENDnet.

On top of that, I hereby state that I am not affiliated with TRENDnet and that TRENDnet of course cannot be held liable for any damage or any problems resulting from the use of the modified Java viewer now known as XViewer or its companion tool XImpcert. This needed to be said even before the release; I suggested it to TRENDnet myself, and they subsequently confirmed it to be a statement required by the company.

In the very near future, I will create a dedicated site about XViewer on this weblog, maybe tomorrow or the day after tomorrow.

Oh and of course: Thanks fly out to Albert from TRENDnet and the people there who decided to grant me permission to re-release their viewer under the GPL! This is not something that we can usually take for granted, so kudos to TRENDnet for that one!

Jul 11, 2014
 

Attention please: This article contains some pretty harsh words about the software shipped with the TRENDnet TK-IP101 KVM-over-IP product. While I will not remove what I have written, I have to say that TRENDnet went to great lengths to support me in getting things done, including allowing me to decompile and re-release their Java software in a fixed form under the free GNU General Public License. Please [read this] to learn more. This is extremely nice, and so it shall be stated before you read anything bad about this product, so you can see things in perspective! And no, TRENDnet has not asked me to post this paragraph, those are my own words entirely.

I thought that being able to manage my server out-of-band would be a good idea. It does sound good, right? Being able to remotely control it even if the kernel has crashed, and being able to remotely access everything down to the BIOS level. A job for a KVM-over-IP switch. So I got this slightly old [TK-IP101] from TRENDnet. Turns out that wasn't the smartest move, and it's actually a 400€ piece of hardware. The box itself seems pretty ok at first, connecting to your KVM switch fabric or a single server via PS/2, USB and VGA. Plus, you can hook up a local PS/2 keyboard and mouse too. It offers what was supposed to be highly secure SSL PKI authentication via server and client certificates, so that only clients with the proper certificate may connect, plus a web interface. This sounded really good!

TRENDnet TK-IP101

It all breaks down when it comes to the software though. First of all, the guide for certificate creation that is supposed to be on the CD that comes with the box is just not there. The XCA software TRENDnet suggests one should use was missing as well. Not good. Luckily, that software is open source and can be downloaded from the [XCA SourceForge project]. It's basically a graphical OpenSSL front end. Create a PEM-encoded root certificate, a PEM-encoded server certificate and a PKCS#12 client certificate, the latter signed by the root cert. So much for that. Oh, and I uploaded that TRENDnet XCA guide for you in case it's missing on your CD too. It's a bit different for the newer version of XCA, but just keep in mind to create keys beforehand and to use certificate requests instead of certificates. You then need to sign the requests with the root certificate. With that information plus the guide you should be able to manage certificate creation:

But it doesn't end there. First I tried the Windows-based viewer utility that comes with its own certificate import tool. The import works, but the tool will not do client+server authentication. What it WILL do before terminating itself is this:

TK-IP101 IPViewer.exe bug

I really tried to fix this. I even ran it on Linux with Wine just to do an strace on it, looking for failing open() calls. Nothing. So I thought… Why not try the second option, the Java viewer that goes by the name of KViewer.jar? Usually I don't install Java, but why not try it out with Oracle's Java 1.7u60, eh? Well:

So yeah. What the hell happened there? It took me days to determine the exact cause, but I'll cut to the chase: with Java 1.6u29, Oracle introduced multiple changes to the way SSL/TLS works, partly due to the disclosure of the BEAST vulnerability. When testing, I found that the software would work fine when run with JRE 1.6u27, but not with later versions. Since Java code is pretty easily decompiled (thanks fly out to Martin A. for pointing that out) and the viewer just came as a JAR file, I thought I'd embark on the adventure of decompiling Java code using the [Java Decompiler]:

Java Decompiler decompiling KViewer.jar’s Classes

This results in surprisingly readable code. That is, if you’re into Java. Which I am not. But yeah. The Java Decompiler is pretty convenient as it allows you to decompile all classes within a JAR and to extract all other resources along with the generated *.java files. And those I imported into a Java development environment I knew, Eclipse Luna.

Eclipse Luna

Eclipse Luna (using a JDK 7u60) immediately complained about 15 or 16 errors and about 60 warnings. Mostly these were missing primitive declarations and other smaller things that even I managed to fix; I even got rid of the warnings. But the SSL bug persisted in my Java 7 build just as it had before. See the following two traces of SSL and handshake errors, one working ok on JRE 1.6u27, and one broken on JRE 1.7u60:
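As an aside, traces like these can be produced with the JRE's built-in JSSE debugging, either by passing -Djavax.net.debug=ssl:handshake on the command line or by setting the property programmatically before any SSL activity. The following is just a hedged, standalone sketch to illustrate this; it is not part of the original KViewer code:

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class HandshakeTraceDemo
{
  public static void main(String[] args) throws Exception
  {
    /* Illustration only, not original KViewer code: enable JSSE     */
    /* debug output, equivalent to -Djavax.net.debug=ssl:handshake.  */
    /* Must be set before the first SSL class is loaded.             */
    java.lang.System.setProperty("javax.net.debug", "ssl:handshake");

    /* Open any TLS connection; the handshake trace goes to stdout.  */
    SSLSocket s = (SSLSocket) SSLSocketFactory.getDefault()
        .createSocket("www.example.org", 443);
    s.startHandshake();
    System.out.println("Negotiated: " + s.getSession().getCipherSuite());
    s.close();
  }
}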

So first I got some ideas from stuff [posted here at Oracle], and added the following two system properties in varying combinations directly in the Main class of KViewer.java:

public static void main(String[] paramArrayOfString)
{
  /* Added by the GAT from http://wp.xin.at                        */
  /* This configures TLS renegotiation (CVE-2009-3555) in          */
  /* "interoperable" mode: legacy hellos allowed, unsafe           */
  /* renegotiation still disallowed.                               */
  java.lang.System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "false");
  java.lang.System.setProperty("sun.security.ssl.allowLegacyHelloMessages", "true");
  /* ------------------------------------------------------------- */
  KViewer localKViewer = new KViewer();
  localKViewer.mainArgs = paramArrayOfString;
  localKViewer.init();
}

This didn’t really do any good though, especially since “interoperable” mode should work anyway and is being set as the default. But today I found [this information on an IBM site]!

It seems that Oracle fixed the BEAST vulnerability in Java 1.6u29, amongst other things. They seem to have done this by disallowing renegotiation for affected CBC (cipher block chaining) implementations. Now, this KVM switch can negotiate only a single cipher suite: SSL_RSA_WITH_3DES_EDE_CBC_SHA. See that "CBC" in there? Yeah, right. And it got blocked, because the implementation in that aged KVM box is no longer considered safe. Since you can't just switch to a stream-based RC4 cipher, Java has no other choice but to drop the connection! Unless… you do this:

public static void main(String[] paramArrayOfString)
{
  /* Added by the GAT from http://wp.xin.at                             */
  /* This disables CBC protection, thus re-opening the connections'     */
  /* BEAST vulnerability. No way around this due to a highly restricted */
  /* KLE ciphersuite. Without this fix, TLS connections with client     */
  /* certificates and PKI authentication will fail!                     */
  java.lang.System.setProperty("jsse.enableCBCProtection", "false");
  /* ------------------------------------------------------------------ */
  /* Added by the GAT from http://wp.xin.at                        */
  /* This configures TLS renegotiation (CVE-2009-3555) in          */
  /* "interoperable" mode: legacy hellos allowed, unsafe           */
  /* renegotiation still disallowed.                               */
  java.lang.System.setProperty("sun.security.ssl.allowUnsafeRenegotiation", "false");
  java.lang.System.setProperty("sun.security.ssl.allowLegacyHelloMessages", "true");
  /* ------------------------------------------------------------- */
  KViewer localKViewer = new KViewer();
  localKViewer.mainArgs = paramArrayOfString;
  localKViewer.init();
}

Setting the jsse.enableCBCProtection property to false before the negotiation / handshake makes your code tolerate CBC ciphers vulnerable to BEAST attacks. Recompiling KViewer with all the code fixes, including this one, makes it work fine with 2-way PKI authentication using a client certificate on both Java 1.7u60 and even Java 1.8u5. I have tested this using the 64-bit x86 JVMs on CentOS 6.5 Linux as well as on Windows XP Professional x64 Edition and Windows 7 Professional SP1 x64.

De-/Recompiled & “fixed” KViewer.jar connecting to a machine much older even than its own crappy code

I fear I cannot give you the modified source code, as TRENDnet would probably hunt me down, but I’ll give you the compiled byte code at least, the JAR file, so you can use it yourself. If you wanna check out the code, you could just decompile it yourself, losing only my added comments: [KViewer.jar]. (ZIPped, fixed / modified to work on Java 1.7+)

Both the modified code and the byte code “binary” JAR have been returned to TRENDnet in the context of my open support ticket there. I hope they’ll welcome it with open arms instead of suing me for decompiling their Java viewer.

In reality, even this solution is nowhere near perfect. While it does at least allow you to run modern Java runtime environments instead of highly insecure older ones, while keeping pretty secure PKI auth, it still doesn't fix the man-in-the-middle attack issues at hand. TRENDnet should fix their KVM firmware, enable it to speak the TLSv1.2 protocol with AES-256 Galois/Counter Mode (GCM) ciphers, and fix the many, many problems in their viewer clients. The TK-IP101 being an end-of-life product means that this is likely never gonna happen though.

It does say a lot when the consumer has to hack up the software of a supposedly high-security 400€ piece of networking hardware himself, just to make it work properly.

I do still hope that TRENDnet will react positively to this, as they do not offer a modern replacement product to supersede the TK-IP101.

May 28, 2014
 

Just recently I published my [vision for our networked society], claiming that freedom, self-determination and independence can be reached through decentralization, putting control over our services and data back into the hands of the users. My idea was to use distributed hash tables on a lower level to power search engines or social networks, distributed across a wide field of user-operated home servers. I thought my idea was pure utopia. Something we'd need to work hard for years to accomplish.

After I published it, users approached me via various channels, pointing out already existing software that deals with the centralization problems and dangers of today, for instance the decentralized social network [Diaspora*] or, even more significantly, [YaCy], a DHT-based search engine just like the one I envisioned.

Let me show you the simple way the YaCy developers chose to illustrate what exactly they are doing. If you've read my article about decentralization linked at the beginning, you'll immediately recognize what's going on here (images taken from YaCy):

So you can see where this is going? In the right direction is where! And how is it implemented? Basically, it's a Java server built on top of the [Apache Solr] / [Lucene] full-text search engine well known in the enterprise world, with a web interface on top. The web interface can be used for administration and as a simple web search, like we know it already. The Java code works with both Oracle Java 1.7 (Windows, MacOS X) and OpenJDK 1.7, the latter being preferred on Linux. I haven't tested it, but I presume it might also work on BSD UNIX, as some BSD systems support OpenJDK 1.7 too. It could also work on OpenSolaris I guess, and it can run with user privileges.

If you want to go the Oracle route on Linux, this also seems to work, at least for me, despite the YaCy developers asking for OpenJDK. But then again, if you wanna steer clear of any even remotely fishy software licenses, just go with OpenJDK!

In case you haven't noticed yet, I have already joined the YaCy DHT network as a node, and a search using my node as an entry point into the DHT superstructure is embedded in this weblog; look at the top right and you'll find it! Mind you, it ain't the fastest thing on the track, and the quality of its results won't match Google or anything yet, but we're getting there! I may also choose to embed it at [http://www.xin.at], not just here. But we'll see about that.
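If you would rather query a node programmatically than through the embedded search box, YaCy peers also expose their search over plain HTTP. The following is a minimal, hedged sketch assuming a peer on localhost with the default port 8090 and the standard yacysearch.json servlet; adapt host, port and query to your own installation:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class YaCySearchDemo
{
  public static void main(String[] args) throws Exception
  {
    /* Assumption: a YaCy peer on localhost, default port 8090.      */
    String query = URLEncoder.encode("decentralization", "UTF-8");
    URL url = new URL("http://localhost:8090/yacysearch.json?query=" + query);

    /* Dump the raw JSON response; a real client would parse it.     */
    BufferedReader in = new BufferedReader(
        new InputStreamReader(url.openStream(), "UTF-8"));
    String line;
    while ((line = in.readLine()) != null)
      System.out.println(line);
    in.close();
  }
}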

Also, the web interface has a few nice monitoring gadgets, let’s have a look at those for my own fresh node, too:

Now, YaCy doesn't provide data all by itself. Like in my original idea, the network needs to pull in data from outside the superstructure, from the regular Internet, and make it searchable. For that, YaCy's administration web interface features a simple crawler that you can send off to index everything on a certain domain via HTTP/HTTPS, like "http://wp.xin.at/", or from a Samba/Windows share server, or from local files, or FTP etc. There is also a more complex, extremely configurable and powerful crawler, but I've only used the simple one so far. And it also visualizes what it does, look here:

So the web interface is pretty cool, and it actually works, too! The Crawler also has parsers for tons of file types, like PDF, Office documents (Microsoft and Open-/LibreOffice), media files, ZIP files etc., so it can index the contents and/or meta data of such files too, full text!

While I do not need it, you may actually also disconnect YaCy from the “freeworld” network entirely and use it as an intranet search engine. There is even professional support for that if you’d like to use it in that context within your company.

So there we go, a free, decentralized search engine that lies not in the hands of some opaque megacorporation, but in our very own hands. How could I've missed this?! I have no idea. But it seems even I have walked the world in blindness, for the three pillars of my vision are more real than I'd have thought: Independence, Self-determination, Freedom. It's all right there!

And you don’t even need a home server for that. You can just run it on your desktop or laptop too. Not perfect, but this works due to the massively fail-proof nature of the DHT network, as described in my earlier publication.

Seriously, this is pretty damn awesome! We’re far closer to my “stage II” than I would’ve believed just 2 weeks ago. All we need to do now is to solve the social problem and make people actually migrate to freedom! Yeah, it’s the hardest of problems, but at least we have the tech sitting there, ready to be used.

So now it's your turn (and mine actually): Let's inform people, let's educate whomever we can, and help ourselves lose the chains!!

May 08, 2014
 

Abstract: This is an article about social-technological development, the rise of certain concepts of controlling people's digital identities and the data linked to them, and about freedom and self-determination. About how we lost most of our freedom to large corporations using the power of the global network, how we may lose all of it, and how we may actually start to fight back and win this. This is a proposal however, a collection of rough ideas of a simple layman, not a guaranteed recipe, please try and do keep that in mind. It is also a slight bit utopian. Somebody has to come up with something though.


1. Preface:

Before I even start with this, I am going to say that while I believe the concepts I am "dreaming" about here are very much feasible, economically viable and practicable, my hopes of anything like this actually happening are extremely low. If you read this, you will learn why. Also, in the course of this article, you may be confronted with some pathos, just so you know in advance.

1a. An ancient concept reborn:

Over the many years I have seen fly by in the world of computers and networks, I have watched many trends arise; some would stay, others would fall. However, before my own time, back in the 60s and 70s, there was one trend nobody thought would ever come back later: the concept of the thin client and the mainframe.

The concept of complete centralization. Originally, mainframes were large computers with large data storage systems, printers and so on. To work on them, you would use a "dumb" networked thin client, barely more than a lightweight empty shell: monitor and keyboard. Locally, you had no computational power, no storage. No data. You had no physical control over what you were working with. All the power lay with the mainframe and its operators.

1b. A brief period of relative freedom:

Later, fat clients – PCs basically – conquered the world, and people ran their software locally, stored most of their data locally, played games locally etc. Also, since large corporations had not yet perfected their "predatory" skills, users were mostly in control of the Internet too. It looked like we were free and like we had control over everything. If somebody had told me back then that 10 years later we'd buy most software components using microtransactions on online stores (I'm talking about stuff like Google Play or Apple's App Store here) and that corporations ruled the world and the Internet, I would've just laughed out loud. And yet…

2. Todays problem:

Now, nobody is laughing about that anymore. In essence, we have reached George Orwell's [1984 dystopia], only that we don't clearly see it anymore. Our data is centralized on massive cloud servers / data warehouse clusters, and we are losing control over the services we provide (images or posts we wanna share, think Facebook, Amazon cloud services) or consume (e.g.: YouTube, AppStore). We're losing control over the software we run, as the market is making its transition from a pay-to-own concept to a rental system, see Office 365 or the former game streaming service [OnLive]. More and more applications are transformed to become web apps – a field where Google is strong – shifting power from us to their cloud. Clients are growing thinner and less controllable, like phones and tablets, resembling the powerless thin clients of old. Well, data mining is a big thing now, and it works perfectly with centralized systems. Experts are analyzing the desires, character traits, social networks, behavioral patterns, etc. of every member of every part of society that is now their market, down to the youngest of children. And all of it is to a great extent powered by a new, more and more centralized Internet of rich web applications. So no, I'm not overly scared of the NSA. A bit, yes, but that's not the prime danger. Never was.

Google is coming to eat your digital lives for breakfast!

And, on top of all that, every new generation grows up accepting the current situation as normal, no matter how dangerous it may be. They would call the older ones – I'm talking about 25-40 year olds now – conservative (which we can be, I surely am :( ) and not give a shit about our ravings, I guess. They will just believe the new way is the right way, in all aspects. The negative part just disappears from the manipulated perception of the wider public.

The very concept of objective truth is fading out of the world. Lies will pass into history.

– George Orwell (1903-1950)

Now, all of that we know. We know Google is mining data from every search we submit, every click we make in every application of theirs, etc. We know Facebook is tracking us via their "Like" buttons on every website, profiling us in the process and presumably selling that data out to the highest bidder. And then, influencing us through adaptive advertisements we may not even be able to notice, AdBlockPlus or not.

We know that centralization also centralizes power. The power over us. In history, centralized power has slowly been replaced by slightly more distributed power, as we progressed from monarchies and dictatorships to our still far from well-working democracies. It was a long and bloody path though. A digital revolution giving power back to the people could achieve something similar, only without all the bleeding and dying.

3. The reasons:

So, why do we not act? Because after decades of experience, the "predators" have perfected their skills. They know what we think, what we feel and why we do so. They know who we talk with, and when, and potentially even about what. They can take all our data and meta data, whether chat logs, art we may have created, personal networks we may have built or whatever, and just use it for themselves or sell it out.

But!

None of this hurts.

Which is why there is no counter-evolution by us, the “prey”. We’ve been buttered up by convenient services.

The world we think we live in

The world we actually live in

We’re sitting ducks, because we don’t feel the pain. In fact, the corporations of today have found a way to “enslave” us without ever having to crack the whip. They’re buying us with convenience. Note that, whenever a product is free, you’re not the consumer. You’re the product. What you may have thought was the product is in fact just bait. And don’t even try and say anything about any government and stricter regulations for data privacy here. Our glorious leaders, degenerated to be companies’ bootlickers, nah, forget about them.

This entire system works like a very devious disease. You can’t feel it, you just keep walking, thinking that everything’s gonna be fine and then one day, you gasp in sudden disbelief! And then you just drop dead like a fly. That’s the day when the wrong person gets access to all that centralized power.

When it hurts, it will be too late already!

4. The solution, stage I:

Where is the key to freedom and independence? And why are we all orange? I guess because the artist, Mr. Rian Hughes decided so when he drew the comic strip “The Key” [1], which he actually did in the name of freedom. How appropriate.

4a. What needs to be done:

The first step out of this misery is to take back control of our services and data to the full extent. So basically, we need to serve everything ourselves, we need to store and provide data we’d like to share from machines that are fully under our control. Now while I’d love to say “Everybody should just run their own home server”, I know this is unrealizable bullshit. Most people are caught up in work, in founding their families, building a house or dealing with some other trouble, their attention being required elsewhere.

Most of them do not have the time. Most of them do not have the skills or resources to spare. And the rest is lazy and likes convenient life or just doesn’t care despite the clear threat. And yet, we need those home servers!

4b. Who needs to act first:

Thus, it is the freaks and nerds who have to act and take on responsibility for all of us. I assume that in what I’d call the first world, every circle of friends has at least one nerd in it. One person who is capable of and willing to run a home server, 100% under his or her control. Such a person could and should start with simple things, like a web server with FTP access, maybe virtualization of operating systems for friends with remote desktop access, or with a mail server.

This was an ancient version of XIN.at, the second incarnation. (1st: single Pentium PRO, 2nd: dual, 3rd: quad). This was actually sitting under my bed. Now it surely doesn’t have to look like this and it probably really shouldn’t either.

If such an operator can convince his or her family and friends that migrating away from the big players to that small local box gives them a higher level of self-determination and freedom, they may follow such advice. They may actually listen.

And I do have a proof of concept for that, and that is this here server, my very own box, that family and friends are using partially for these very reasons. You can sensitize people to matters of security and especially privacy, and I was surprised that they would actually carefully listen despite having zero background in computer science. I was even more surprised that they were willing to actually act upon it, and at least partially migrate. Full migration is not possible yet, as shown below in stage II. But still, for what a single home server can provide, success is very much a practicable reality.

Starting to loosen the chains.

It is because they trust their fellow nerd. And while abuse of trust is not unthinkable in such a setup, it is highly unlikely. Most operators would feel proud to secure their friends' and families' digital lives and to free them from prying eyes looking for profit in tracking their behavioral patterns. Now, while trust is always a dangerous thing, it is probably several orders of magnitude more justified in this setup than in what we currently have: trusting opaque mega-corporations fully, with far more than we should ever want to.

5. Feasibility, economical and ecological considerations of stage I:

5a. The economical side:

Now how affordable is this? One person would need to buy a server, for stability reasons probably with redundant power supplies and storage backends. Also, an uninterruptible power supply would be a good idea to ensure availability. In certain countries, an expensive business Internet connection might be required to be allowed to host services and reach a high enough upstream bandwidth. Operating systems might cost money if the operator chooses to work with Microsoft or Apple. There are alternatives with Linux, BSD UNIX, Solaris and others though, so we’ll assume zero cost for the software.

The cost for the machine and equipment can range anywhere between maybe 200€ and 2000€, unless you go extremely cheap and use a [Raspberry Pi] for this. This depends on the services to be run and the number of users, of course. Even a cheap modern machine should be able to host web sites, email and maybe OwnCloud for about 20 users though.

Then there are electricity and Internet costs.

For all this, I can see two ways of funding, of which I am currently walking the first one with some ambition to tap into the second one for the next UPS purchase:

  1. The operator pays up for it out of ideological reasons, getting respect and pride as rewards.
  2. The operator asks each of his/her users for a small amount of money to finance the entire operation.

Both would work. The more expensive running a single home server gets, the more likely the operator will need to choose path 2 to at least some extent.

5b. The ecological side:

Is there any ecological impact to this?

While moving services out of housing centers would mean we can save some energy by not needing air conditioning systems, it also means we're throwing away service consolidation. As such, the density of services per server, and thus the utilization of the available hardware per watt, would likely be lower. That would mean that in the end we might have more machines running and consuming more energy for the same amount of services provided. It is hard to estimate the outcome of this, but despite the more obvious savings in infrastructure and the great power saving capabilities of modern machines, I would fear that we might consume more power by running a decentralized network of many micro servers. This is something we would need to be aware of.

6. Other major challenges:

I would expect that propaganda would be problematic. Advertising this kind of setup from a single point like here is completely and utterly pointless. This has to be passed from operator to operator like a viral campaign, thus covering large areas fast enough to make an impact. Also, convincing people (operators and regular users alike) might pose a problem in general. This requires work for the operators and a change in thinking for everybody, especially common users. Naturally, this is extremely hard as people may fail to accept the significance and importance of the whole concept. It gets a lot harder in stage II though, as you’ll see now.

Working together is the key though.

7. The solution, stage II:

7a. Decentralization alone is not enough:

Now let us assume we have reached a stage where society has been pervaded by a loosely connected but dense enough network of home servers. While users are happily using them, taking away a few breadcrumbs from the larger, more exploitative services hosted by Google, Microsoft, Facebook and many others, it is still a chaotic, disorganized system. It cannot replace larger, centralized services as-is. So we would still depend on Google/Bing/whatever search engines, YouTube, Facebook, Twitter, etc.

Now in your neighborhood too!

To solve this, we need proper software, and that software needs proper algorithms. Now bear with me, most of this stuff already exists! It’s just not being used in that very context. What I’m talking about here – only in part – could be a service the likes of “another Facebook”, just decentralized! Let’s call it “SkyNet2” for now (Haha, I know what you’re thinking, but it’ll become clearer soon! Plus, I just can’t help myself but think it sounds awesome!).

7b. How to disperse arbitrary centralized services using distributed hash tables, boosting net neutrality in the process:

In computer science, there is an algorithm, or rather a data structure, known as the DHT or distributed hash table. Basically, it's a huge table of keys (file names or resource identifiers) and values (the contents, meta data or a URL) distributed amongst nodes (= our home servers). DHTs can be designed in such a way that they represent a large, dynamic network of nodes where each node holds a small amount of data. Depending on topology – like Chord [2], or binary trees – they can serve certain purposes well, e.g. you can optimize them for certain search profiles. Also, if you choose your topology well, you can make this setup support the concept of [net neutrality]: by not introducing a hierarchical structure that would serve certain content at consistently lower latency and potentially higher bandwidth, and by fragmenting the network enough to make hurting net neutrality at the Internet service provider level impossible. While it has been suggested [3] that using a DHT for "Google style" searching is quite hard, I believe it can still be done. What we want is equality, neutrality and freedom:

A DHT network using the widely implemented chord topology. Shown in a distributed software development lecture [4]. Every node or home server is treated as an equal in this setup. This DHT graph also shows two typical search paths. Finger tables explained in the next diagram.

DHTs exist primarily in distributed file sharing networks like those using the Gnutella system or, in recent times, BitTorrent. They're also used in certain clustered database servers to optimize for very fast searching. In such contexts it has been clearly shown that such a network can grow or shrink at any given time by an almost arbitrary number of member nodes without failing.

If you want to learn more about distributed hash tables and associated algorithms and ideas, there are several documents and scientific papers on the topic referenced [on the Wikipedia], which may be a good place to start, should you wish to dig in deeper.
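To make the lookup idea a little more concrete, here is a very small, hedged sketch of the core principle behind a Chord-style DHT: node names and keys are hashed onto the same identifier ring, and a key is stored on (and found at) its successor node, i.e. the first node whose ID follows the key's hash on the ring. This is an illustration only, not code from any of the projects mentioned here; a real implementation adds finger tables for efficient routing, replication and node join/leave handling:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

/* Toy Chord-style ring: node IDs and keys share one hash space;    */
/* a key is owned by its successor node on the ring.                */
public class ToyChordRing
{
  private final TreeMap<Long, String> ring = new TreeMap<Long, String>();

  /* Hash a string onto the identifier ring (first 8 bytes of SHA-1). */
  private static long hash(String s) throws Exception
  {
    byte[] d = MessageDigest.getInstance("SHA-1")
        .digest(s.getBytes(StandardCharsets.UTF_8));
    long h = 0;
    for (int i = 0; i < 8; i++)
      h = (h << 8) | (d[i] & 0xFF);
    return h;
  }

  public void addNode(String nodeName) throws Exception
  {
    ring.put(hash(nodeName), nodeName);
  }

  /* Successor lookup: the first node clockwise from the key's hash. */
  public String lookup(String key) throws Exception
  {
    SortedMap<Long, String> tail = ring.tailMap(hash(key));
    return tail.isEmpty() ? ring.firstEntry().getValue()
                          : tail.get(tail.firstKey());
  }

  public static void main(String[] args) throws Exception
  {
    ToyChordRing dht = new ToyChordRing();
    dht.addNode("chucks-home-server");   // hypothetical node names
    dht.addNode("my-home-server");
    dht.addNode("another-node");
    System.out.println("\"Lucys page\" lives on: " + dht.lookup("Lucys page"));
  }
}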

7c. Application for a hypothetical distributed social networking system:

Now say we create a DHT backend network for web servers, controlled by server-side configuration as well as scripting languages like PHP or Perl. Maybe even Java, if you're into that. The frontend would just be a website. The backend is any web server and scripting language just like today, with data stored in conventional SQL servers. Each home server holds a local set of small webs, fractions of the superstructure. Say Chuck is running his own local SkyNet2 node, and it holds the profile pages of Max and Lucy, who belong to the SkyNet2 "Facebook ripoff". The data served by those profiles is stored in a local SQL server and on local file storage for images, videos etc.

“Hey. Yeah, I’m at Max’ place right now, looking for his stuff. You..  What? No, it’s NOT here, there isn’t even any fuckin’ light down here, I can’t see a damn thing, it’s all dark! I guess it’s still all on freaking Facebook! You know what, Chuck, get him outta there before it’s too late!!”

But it's no local-only system. When you access it, you can see and access millions of profiles, posts, images, whatever, just like on current, centralized social networking sites. Because in the background, every resource you have clicked on, every search you submit and every link leads to one or more resources located by a decentralized DHT protocol network! You don't need static links that break all the time anyway, only a key called "Max' profile page" with some meta data like "this is a SkyNet2 user profile site" or "this is an image", and the site's URL as the resource that is being accessed as soon as its location has been derived from the search across the DHT topology. Even Chuck's server node – the machine itself! – can be found by using a node hash. Chuck's domain name or IP address may change; the hash stays, and the node can be found again as soon as Chuck's home server (re)joins the larger superstructure.

Let me try and visualize it a bit:

Searching for Lucy's page on SkyNet2 works without Google or any centralized search engine, just by traversing the DHT structure consisting of home server nodes. As soon as the target holding the data (Chuck's server) is located, its current hostname/IP plus the relative URL of Lucy's page will be returned to me. My local web browser can then fetch the page via regular HTTP or HTTPS! The search itself can either be launched by my own "SkyNet2"-DHT-enhanced web browser, or by any other node using a portal website.
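To illustrate what such a lookup could hand back, here is a hedged sketch of the kind of value the DHT might store for "Lucy's page": a content type class, a short description, and the owning node's current address plus a relative URL, which the browser then fetches over plain HTTP/HTTPS. All names and fields are made up for illustration and are not part of any existing protocol:

/* Hypothetical DHT value for a key like "Lucys page".              */
public class DhtResourceRecord
{
  public final String contentClass;  // e.g. "website", "image", "server node"
  public final String description;   // e.g. "SkyNet2 user profile site"
  public final String currentHost;   // the node's current hostname or IP
  public final String relativeUrl;   // the path on that node

  public DhtResourceRecord(String contentClass, String description,
                           String currentHost, String relativeUrl)
  {
    this.contentClass = contentClass;
    this.description = description;
    this.currentHost = currentHost;
    this.relativeUrl = relativeUrl;
  }

  /* Resolve the record into a URL a normal browser can fetch. */
  public String toUrl()
  {
    return "https://" + currentHost + relativeUrl;
  }

  public static void main(String[] args)
  {
    /* In reality this would come out of a DHT lookup like the one   */
    /* sketched earlier; here it is constructed by hand.             */
    DhtResourceRecord lucy = new DhtResourceRecord(
        "website", "SkyNet2 user profile site",
        "chucks-home-server.example.org", "/skynet2/profiles/lucy");
    System.out.println("Fetch Lucys page from: " + lucy.toUrl());
  }
}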

7d. Flexibility and robustness:

Like the original evil SkyNet, "SkyNet2" cannot be stopped once alive. You cannot pull the plug and switch it off, and why? Because, like its namesake, it's sitting on no single server. It's literally everywhere! Just like BitTorrent or good old Gnutella.

When done right, this can be enhanced and expanded easily to support multiple higher-level network protocols, not just HTTP or HTTPS. Basically, the backend could return pretty much anything: an ftp+ssl:// URL to access a file server would be just as doable as an svn+ssh:// URL to access a Subversion software repository via SSH, or maybe magnet:// to download stuff from the BitTorrent network. It could also return SMTP/POP3/IMAP email server URLs or the contents of a public mailing list, whatever. Since we can make machines, URLs and even content itself addressable, this is extremely flexible. When searching for an image, I don't really care which server it comes from. But with a DHT, I can search for just the file name. Even changing file names are no problem for certain document types. Even content itself can be hashed and made fully searchable in a DHT structure.

By adding content type classes like "image", "video", "website", "archive file" or "server node" to the keys, we can also allow users to narrow their search down to classes of objects, pretty much like today's centralized systems offer it to us.

So, it sounds like one fat layer of glue code between networks and systems that already exist, right? Right. True. But it has the potential to put every service and one of the most powerful tools ever – the search engine – fully back in our own hands.

8. Feasibility and major challenges of stage II:

8a. The software:

At first, this depends on how we can deal with the software development side of things. For the Apache web server for instance, this could be solved with a loadable module maybe called mod_dht.so/mod_dht.dll.  But enabling web servers with DHT would require someone to actually write the freaking code. Luckily, the free software world does have some developers who can deal with DHT, such as the guys who built [qBittorrent] for instance. They also have cross-platform experience, which is always a bonus. Also, the guys who built [OwnCloud] or developers similar to them would be nice. As a matter of fact, a cloud storage service that you can host yourself (but don’t have to) with clients for Windows, MacOS X, Linux, BSD UNIX as well as Android and iOS? OwnCloud would be perfect for integration into such a system.

OwnCloud is a great inspiration – turning a modern exploitation concept into a truly free one! And it actually works, phone and tablet support included.

So, we need the developers, and we need a web superstructure (in fact probably a set of templates for a "Facebook-like" system) that can easily be integrated. Also, while each user's profile/blog page needs to enable its user to show some artistic individuality, the system should always be streamlined enough so that you can always tell "I'm on SkyNet2, no doubt". So no matter whether you visit Max' profile, or Lucy's, or mine, they may all look different, but there needs to be enough similarity to allow both for the users to properly express themselves and for the system to show off what it is. Like on BitTorrent, where you may download a Linux distribution's ISO file, or a free movie, or music, but you always know "this is the BitTorrent system". So we need the DHT code + glue code and also some content to put on top of it.

It shan’t all be SkyNet2 though, but let’s just list a few services that any such system would gradually need to replace:

  • Facebook
  • Twitter
  • Google's search engine and gradually all other Google services, which are enormously powerful and thus hard to replace
  • eBay (most definitely not including anything like PayPal, which can never lie in users' hands)

What it cannot and likely should not replace are services that, by design and purpose, cannot be in "regular people's" hands, such as the following examples:

  • Amazon
  • Play.com
  • PayPal (as shown above)
  • Steam, Origin, UPlay, etc.

This is actually not impossible. If OwnCloud was doable, then so would this be. But there is that other problem that I talked about above, and that's acceptance. Convincing people is the single greatest challenge of all. One that I still have no clear concept of how to really master in practice. I have pulled off stage I in my own local environment, yes. But on a global scale? Can viral propaganda do it? Maybe, but it's a massive risk. But there is also a lot to be gained in case of success.

8b. Topology issues:

I have mentioned latencies before, and one of the reasons for them becoming too high is that at least the Chord topology is just a logical ring / mesh. It does not take physical properties into account, so it has no location awareness. This might be problematic. Imagine I would like to search for Lucy's page as shown above; in that scenario I have to query 3 nodes before reaching the correct one. Now if I am in Europe, and the first node is in Brazil, the second in Japan and the third in Russia, well, you see where this is going.

The efficiency of the protocol and its extensions (see below) might improve greatly if the topology were formed taking geographical proximity into account. Using existing GeoIP databases may be a way to build the network more efficiently with regard to geographical parameters. After all, people don't like it when they have to wait for their search results. Also, the Chord topology might not be the best one to begin with; it's just one that caught my attention due to its inherent fairness.
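As a toy illustration of such location awareness, here is a hedged sketch that picks, among a handful of candidate nodes with known GeoIP coordinates, the one closest to the searching node using the haversine great-circle distance. The node names and coordinates are invented, and a real implementation would have to weave this into the DHT routing itself rather than bolt it on afterwards:

/* Toy location-aware next-hop selection using the haversine distance. */
public class NearestNodePicker
{
  static double haversineKm(double lat1, double lon1, double lat2, double lon2)
  {
    double dLat = Math.toRadians(lat2 - lat1);
    double dLon = Math.toRadians(lon2 - lon1);
    double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
             + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
             * Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return 6371.0 * 2.0 * Math.asin(Math.sqrt(a)); // Earth radius ~6371 km
  }

  public static void main(String[] args)
  {
    /* Invented candidate nodes: { name, latitude, longitude }.       */
    Object[][] candidates = {
        { "node-brazil", -23.55, -46.63 },
        { "node-japan",   35.68, 139.69 },
        { "node-austria", 48.21,  16.37 },
    };
    double myLat = 48.21, myLon = 16.37;  // the searching node

    String best = null;
    double bestKm = Double.MAX_VALUE;
    for (Object[] c : candidates) {
      double km = haversineKm(myLat, myLon, (Double) c[1], (Double) c[2]);
      if (km < bestKm) { bestKm = km; best = (String) c[0]; }
    }
    System.out.println("Prefer next hop: " + best
        + " (~" + Math.round(bestKm) + " km away)");
  }
}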

But, there are developers many times smarter than me with lots of experience in designing DHTs, and if there is anything considerably smarter than this, they’ll find a way!

8c. Avoiding isolation:

While complete independence and freedom of digital private life is the ultimate goal, intercommunication is and always will be essential. This is especially true during the phase of initial growth, but cannot be ignored later either. While people would be well advised to abandon certain services on the Internet, shutting them out by force is not a good solution. To avoid having to rely on external services for critical operations, at least the search engine could be enhanced to look for data on outside servers that are not members of the DHT superstructure. We can just let individual nodes crawl the outside web, index external servers and their content, and thus pull in outside meta data and make it fully searchable. That way, we do not necessarily need to rely on Google/Bing/etc. to search the outside world:

Connecting to the outside world without having to rely on it – is it possible?

This does come at a cost though, and that’s system memory and storage capacity. As shown in the image description, load balancing is critical. This requires a high enough density of home servers. If my home server needs to crawl through and index my entire state, this ain’t gonna end very well. Again, distribution amongst many is key!

Of course – and I would like to emphasize this clearly – the proposed network is not really isolated in any way at all. It’s just systems connected to the existing Internet after all. So with any regular computer you can use the web just as you always did. The idea here is once again not to need to rely on services we may consider untrustworthy, without having to sacrifice too much. So if this kind of crawling, indexing of and interaction with “outside” servers does not exist at some stage of development, you can still just use any other conventional service on the Net, as usual. This is no firewall after all.

When it comes down to it, the freedom to use services that impact your freedom lies with you, the individual user. ;)

8d. The social factor:

I have mentioned this several times already, but it needs to be said again, for stage II just as much as for stage I, if not a lot more so: writing the software and defining revised, optimized algorithms and all that is comparatively easy when seen in perspective. The largest challenge of all, I think, is the social one. People believe posting stuff on Facebook with their phones is cool and awesome. They depend on the feedback of their social peers, like humans always did, to define themselves.

People are using this kind of service lightheartedly. And Google has become something like the air you just breathe in. Always there, and only a problem when it’s gone or so it seems in the status quo.

People would probably just frown when hearing about a replacement technology focusing on freedom, independence and self-determination. They already consider themselves free, no matter if it’s just a golden cage in reality. Even conquering mobile devices with proper software does not mean guaranteed success, even if it’s well written and designed.

I was previously proposing to let the operators or nerds convince their friends and family. This is a process I know can take anything from weeks to years. There are surely people who would just shake their heads, considering the whole thing to be just silly and stupid, or quixotic at best.

So, to awaken people is the single greatest challenge. For a revolutionary technology to succeed, people need to be convinced of it, or even the perfect technology isn't worth its weight in crap. Since we surely can't rely on any marketing experts to help there, I wonder who might come up with the perfect concept? Maybe people like [Oliver Stone] would; he did support this too:

Let’s continue:

9. The benefits:

We were and are carrying the key with us, at all times! We just need to remember that! We can free ourselves and our fellow friends and family!

  • All services and all data we use lie in the hands of people we can ultimately trust. And if they fuck with us, we can effectively confront them personally!
  • Less tracking, less stalking, less being monetized, less exploitation of the common user.
  • Nobody can easily attack your host physically, and if it does happen (e.g. somebody breaks into your operator's home), you'll know immediately. Physical control means fewer possible attack vectors and thus more security.
  • More people with higher skill, and thus a technologically more educated society, as each user is close to his/her operator and may thus learn a lot about security, data backups, possible attacks, and general concepts of freedom, privacy and security in a networked society. That includes the home server operators themselves, as they will – through building, setting up and running those servers – gain knowledge and skill.
  • With proper adaptations, the system could be used for routing and anonymization of any Internet traffic, like the [Tor] network. A node class "INet gateway" with an attribute "approximate location" or a function "location of proxy may not match location of client" could be introduced to mark home server nodes that allow you to use them as proxies for any Internet access, ensuring that the proxy sits outside your local jurisdiction's sphere of influence where necessary. But I guess the Tor developers have smarter ideas for that. Simply using their service in conjunction with the system proposed here? Why not.
  • Enhanced security by obscurity and service fragmentation. A decentralized network built on top of Linux, BSD UNIX, Solaris, MacOS X, Windows, maybe even exotic systems such as [OpenVMS] is so heterogeneous, so diverse, that no attack – zero-day or not – will be able to affect every single node. Even if one day all Windows machines are blown to shreds by some malware, SkyNet2 will be able to survive with ease.
  • Securing performance by load balancing of content across many nodes.
  • A formidable buildup of our understanding of democracy, as such a system would empower each single user and teach us that changes can be achieved in a relatively non-violent, initiated-by-the-masses way. It might inspire us to further change!
  • Independence.
  • Self-determination.
  • Freedom.

10. The danger:

Mr. White is the danger. Are we potentially too?

As much as I hate talking about this part, there is one issue that cannot be fully denied: Jobs. Now if a system like this succeeds at large, no matter how unlikely it may seem, it has the potential to destroy jobs. While I guess a lot more hardware will be sold for some time, larger organizations may begin to suffer naturally, as they’re running out of “slaves” to squeeze anything out of. While the more ingenious of them may just start tapping into the DHT network to secure their data mining business, some may run out of options, such as larger commercial web hosters, root server providers, companies which have invested in cloud storage solutions etc., basically all those people that provided what people can now provide themselves.

There may be a significant boost for the Internet service provider business though, as a lot more people would require high upstream connections to run their servers. Again, just like with the environmental concerns, it is hard for me to estimate the outcome of this, but the potential, the danger to harm at least certain businesses (whether they deserve to die or not) is real!

11. The subjective Truth:

Could we pull this stunt off? Hell yeah we could! Will we? Well, I want something like this SO MUCH, but while looking at the world and at how things work, I am losing hope. Not that hope is a good or stable thing to rely on to begin with, but at least it gives you some motivation to actually do something good!

The challenges posed are partially technical, partially economical, but mostly social. We are living in a world where most individuals have internalized the very concept of capitalism and of becoming a consumer that buys and a product that is being sold, defining our personalities through such superficial processes. Now, I'm not against all forms of capitalism, no, I'm not. If you have a brilliant idea to make the world a better place, for instance by making engines consume less fuel or by developing an idea to make nuclear fusion actually work with a positive energy balance, then by all means, you shall be rewarded with tons of our money! Or say you build cars, or houses, or pencils for god's sake, shoes, T-shirts, toothbrushes, whatever, then YES, take my money for it! The same goes for services like the barber's, any stand-up comedian, or the plumber who makes your toilet flush again.

The real economy actually delivers something for your hard-earned cash, and it won’t betray you like crazy. And something like SkyNet2 will not hurt the classical real economy, not a bit.

But there are limits as to what shall be capitalized. And those limits have started to slowly disappear, not just in the fiery nine hells of the stock market, but even at home! When the very understanding of freedom and its significance starts to fade away, and brand recognition starts to replace individuality and character, we’re dealing with a situation that might deteriorate too fast to be able to pull this off.

And of course, nobody would believe in this. If I tell you now "let's start SkyNet2!" (or FreeWeb, or FreeNet or whatever you'd wanna call it) and then I show you some ideas, of which maybe not even half are usable, are you gonna say "Oh yeah, let's go!"? Nah, you aren't. We're lazy. We are adapted to a system that doesn't let us off the leash, but at least feeds us and lets us sleep on a soft bed, metaphorically speaking. In essence, we're all selling ourselves out, either to whomever provides us with the easiest, most convenient way to reach what we were told we need by some shady advertisements or other more subliminal methods of talking us into buying something, or to whomever gives us the free stuff, for whatever price it may cost us in terms of privacy or self-determination.

Even I am no exception. While I’m trying to fight as well as I believe I’m able to, I am still sitting in a Microsoft vendor lock-in (even though that lock is starting to crumble), I am using Steam, even Origin for fuck’s sake, I am using the ICQ network now controlled by the Russians plus the occasional Skype when playing multiplayer games, and I am not fully anonymizing my Internet communication. While I’ve been playing around a lot with Linux (10+ years of experience now), BSD UNIX, Solaris and Haiku, even Syllable Desktop, my private machines are still in a locked-up state, at least partially.

Nonetheless, I somehow still have hope that one day…

Lose the chains

…we can lose the chains…


Jan 132014
 

CentOS logo[Red Hat Enterprise Linux], being one of the commercially most successful Linux systems, naturally attracts a lot of attention and forks. There is [Scientific Linux] on one side, catering to the needs of the academic community, then the quickly developing [Fedora Linux], which is where Red Hat basically gets upcoming software tested, and then, finally, [CentOS], the “Community Enterprise Operating System”, which is essentially the desktop+server version of Red Hat Enterprise Linux (RHEL) with all Red Hat logos etc. removed and replaced. Fedora has long been under Red Hat’s wing, and now the CentOS project is too: according to [this announcement] from the 7th of January, CentOS is joining Red Hat as its free desktop & server Linux sibling.

The CentOS project itself is advertising this major change using the slogan “New Look – New CentOS”, which is also reflected in their radically reworked web site design:

The new look of the CentOS web site


Let me quote their own announcement on what changes we can expect now.

What won’t change:

  • The CentOS Linux platform isn’t changing. The process and methods built up around the platform however are going to become more open, more inclusive and transparent.
  • The sponsor driven content network that has been central to the success of the CentOS efforts over the years stays intact.
  • The bugs, issues, and incident handling process stays as it has been with more opportunities for community members to get involved at various stages of the process.
  • The Red Hat Enterprise Linux to CentOS firewall will also remain. Members and contributors to the CentOS efforts are still isolated from the RHEL Groups inside Red Hat, with the only interface being srpm / source path tracking, no sooner than is considered released. In summary:  we retain an upstream.

What will change:

  • Some of us now work for Red Hat, but not RHEL. This should not have any impact to our ability to do what we have done in the past, it should facilitate a more rapid pace of development and evolution for our work on the community platform.
  • Red Hat is offering to sponsor some of the buildsystem and initial content delivery resources – how we are able to consume these and when we are able to make use of this is to be decided.
  • Sources that we consume, in the platform, in the addons, or the parallel stacks such as Xen4CentOS will become easier to consume with a git.centos.org being setup, with the scripts and rpm metadata needed to create binaries being published there. The Board also aims to put together a plan to allow groups to come together within the CentOS ecosystem as a Special Interest Group (SIG) and build CentOS Variants on our resources, as officially endorsed. You can read about the proposal at [http://www.centos.org/variants/].
  • Because we are now able to work with the Red Hat legal teams, some of the contraints that resulted in efforts like CentOS-QA being behind closed doors, now go away and we hope to have the entire build, test, and delivery chain open to anyone who wishes to come and join the effort.
  • The changes we make are going to be community inclusive, and promoted, proposed, formalised, and actioned in an open community centric manner on the centos-devel mailing list. And I highly encourage everyone to come along and participate.

So there you have it. Of course there are a lot more fine details to all of this, but if you want to learn about those, I would like to point you to the CentOS community instead:

Sep 102013
 

NSA logoWith all that talk about the [National Security Agency] stealing our stuff (especially our most basic freedoms), it was time to look at a few things that Mr. Snowden and others before him have found out about how the NSA actually attempts to break certain encryption ciphers that are present in OpenSSL’s and GnuTLS’s cipher suites. Now that it has been clearly determined that an NSA listening post has been established in Vienna, Austria (protesters are on the scene), it seems like a good time to look over a few details here. Especially now that the vulnerabilities are widely known and potentially exploitable by other perpetrators.

I am no cryptologist, so I won’t try to convince you that I understand this stuff. But from what I do understand, there is a side-channel attack vulnerability in certain block ciphers like, for instance, AES256-CBC-SHA or RSA-DES-CBC-SHA. I don’t know what exactly is vulnerable, but whoever listens closely on one of the endpoints (client or server) of such a connection may derive crucial information from the connection’s timing, which is the side channel. Plus, there is another vulnerability concerning the Deflate protocol compression in TLS, which you shouldn’t confuse with things like mod_deflate in Apache, as this “Deflate” exists within the TLS protocol itself.

As most client systems – especially mobile operating systems like Android, iOS or Blackberry OS – are compromised and backdoored, it is quite possible that somebody is listening. I’m not saying “likely”, but possible. By hardening the server, the possibility of negotiating a vulnerable encrypted connection becomes zero – hopefully at least. :roll:

Ok, I’m not going to say “this is going to protect you from the NSA completely”, as nobody can truly know what they’re capable of. But it will make you more secure, as some vulnerable connections will no longer be allowed, and compromised/vulnerable clients are secure as long as they connect to a properly configured server. Of course you may also lock down the client by updating your browser for instance, as Firefox and Chrome have been known to be affected. But for now, the server-side.

I am going to discuss this for the Apache web server specifically, but it’s equally valid for other servers, as long as they’re appropriately configurable.

Big Apache web server logo

First, make sure your Apache supports the SSL/TLS compression option SSLCompression [on|off]. Apache web servers starting from 2.2.24 or 2.4.3 should have this directive. Also, you should use [OpenSSL >=1.0] (link goes to the Win32 version, for *nix check your distribution’s package sources) to be able to use SSLCompression as well as the more modern TLSv1.1 and TLSv1.2 protocol versions. If your server is new enough and properly SSL-enabled, please check your SSL configuration either in httpd.conf or in a separate ssl.conf included from httpd.conf, which is what some installers use as a default. You will need to change the SSLCipherSuite directive to disallow any vulnerable block ciphers, disable SSL/TLS protocol compression, and a few more things. Also make sure NOT to load mod_deflate, as this opens up loopholes similar to the ones the SSL/TLS protocol defaults do!
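To quickly check what you’re actually working with before touching the configuration, something like this helps (on Windows the binary may be called httpd.exe or apache.exe, and note that the openssl command line tool is not necessarily the same build your Apache is linked against):

httpd -v           # Apache version: 2.2.24+ or 2.4.3+ is needed for SSLCompression
openssl version    # OpenSSL version: 1.0.1+ for TLS v1.1/1.2, or at least 0.9.8y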

Edit: Please note that mixing Win32 versions of OpenSSL >=1.0 with the standard Apache build from www.apache.org will cause trouble, so a drop-in replacement is not recommended for several reasons, two being that that Apache build is linked against OpenSSL 0.9.8* (breaking TLS v1.1/1.2) and is also built with a VC6 compiler, whereas OpenSSL >=1.0 is built with at least a VC9 compiler. Running all-VC9 binaries (Apache+PHP+SSL) only works on NT 5.1+ (Windows XP/2003 or newer), so if you’re on Win2000 you’ll be stuck with older binaries, or you’ll need to accept stability and performance issues.

Edit 2: I have now found out that the latest version of OpenSSL 0.9.8, namely 0.9.8y, also supports switching off SSL/TLS deflate compression. That means you can somewhat safely use 0.9.8y, which is bundled with the latest Apache 2.2 release anyway. It won’t give you TLS v1.1/1.2, but it leaves you with a few safe ciphers at least!

See here:

SSLEngine On
SSLCertificateFile <path to your certificate>
SSLCertificateKeyFile <path to your private key>
ServerName <your server name:ssl port>
SSLCompression off
SSLHonorCipherOrder on
SSLProtocol All -SSLv2
SSLCipherSuite !aNULL:!eNULL:!EXPORT:!DSS:!DES:!DHE-RSA-AES256-SHA:!AES256-SHA:!DHE-RSA-AES128-SHA:!EDH-RSA-DES-CBC3-SHA:!DES-CBC3-SHA:!DHE-RSA-AES128-SHA:!DES-CBC3-SHA:!AES128-SHA:RC4-SHA:RC4-MD5:ALL

This could even make you eligible for a VISA/Mastercard PCI certification if need be. This disables all known vulnerable block ciphers and said compression. On top of that, make sure that you comment out the loading of mod_deflate if not already done:

# LoadModule deflate_module modules/mod_deflate.so

Now restart your webserver and enjoy!
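If you want a quick sanity check without any extra tools, OpenSSL’s own s_client will do; the host name below is just a placeholder, and the -tls1_2 switch requires an OpenSSL 1.0.1+ client:

openssl s_client -connect www.example.com:443 -tls1_2
# In the SSL-Session block, "Compression: NONE" (if your build prints that line
# at all) means SSL/TLS compression is off, and "Cipher :" shows what was negotiated.

openssl s_client -connect www.example.com:443 -cipher DES-CBC3-SHA
# This one should fail with a handshake error, since DES-CBC3-SHA is on the blacklist above.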

The same thing can of course be done for mail servers, FTP servers, IRC servers and so on. All that is required is proper configurability and compatibility with reasonably secure libraries like OpenSSL >=1.0 or at least 0.9.8y. If your server can do that, it can also be secured against these modern side-channel attacks!

If you wish to verify the safety specifically against BEAST/CRIME attack vectors, you may want to check out [this tool right here]. It’s available as a Java program, .Net/C# program and source code. For the Java version, just run it like this:

java -jar TestSSLServer.jar <server host name> <server port>

This will tell you whether your server supports deflate, which cipher suites it supports, and whether it’s BEAST or CRIME vulnerable. A nice place to start! For the client side, a similar cipher suite configuration may be possible to ensure the client won’t allow the negotiation of a vulnerable connection. Just updating your software may of course be an easier way in certain situations. Good-looking output from that tool might appear somewhat like this:

Supported versions: SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  SSLv3
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_WITH_IDEA_CBC_SHA
     RSA_WITH_CAMELLIA_128_CBC_SHA
     DHE_RSA_WITH_CAMELLIA_128_CBC_SHA
     RSA_WITH_CAMELLIA_256_CBC_SHA
     DHE_RSA_WITH_CAMELLIA_256_CBC_SHA
     TLS_RSA_WITH_SEED_CBC_SHA
     TLS_DHE_RSA_WITH_SEED_CBC_SHA
  (TLSv1.0: idem)
  (TLSv1.1: idem)
  TLSv1.2
     RSA_WITH_RC4_128_MD5
     RSA_WITH_RC4_128_SHA
     RSA_WITH_IDEA_CBC_SHA
     RSA_WITH_AES_128_CBC_SHA256
     RSA_WITH_AES_256_CBC_SHA256
     RSA_WITH_CAMELLIA_128_CBC_SHA
     DHE_RSA_WITH_CAMELLIA_128_CBC_SHA
     DHE_RSA_WITH_AES_128_CBC_SHA256
     DHE_RSA_WITH_AES_256_CBC_SHA256
     RSA_WITH_CAMELLIA_256_CBC_SHA
     DHE_RSA_WITH_CAMELLIA_256_CBC_SHA
     TLS_RSA_WITH_SEED_CBC_SHA
     TLS_DHE_RSA_WITH_SEED_CBC_SHA
     TLS_RSA_WITH_AES_128_GCM_SHA256
     TLS_RSA_WITH_AES_256_GCM_SHA384
     TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
     TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
----------------------
Server certificate(s):
  2a2bf5d7cdd54df648e074343450e2942770ab6ff0: EMAILADDRESS=me@myserver.com, CN=www.myserver.com, OU=MYSERVER, O=MYSERVER.com, L=My City, ST=My County, C=COM
----------------------
Minimal encryption strength:     strong encryption (96-bit or more)
Achievable encryption strength:  strong encryption (96-bit or more)
BEAST status: protected
CRIME status: protected

Plus, as always: Using open source software may give you an advantage here, as you can at least reduce the chances of inviting a backdoor eavesdropping on your connections onto your system. As for smartphones: Better downgrade to Symbian or just throw them away altogether, just like your tablets (yeah, that’s not the most useful piece of advice, I know…).

Update: And here’s a little something for your SSL-enabled UnrealIRCD IRC server.

UnrealIRCD logoThis IRC server has a directive called server-cipher-list in the context set::ssl, so it’s set::ssl::server-cipher-list. Here’s an example configuration; all the non-SSL-specific stuff has been removed:

set {
  ssl { 
    trusted-ca-file "your-ca-cert.crt";
    certificate "your-server-cert.pem";
    key "your-server-key.pem";
    renegotiate-bytes "64m";
    renegotiate-time "10h";
    server-cipher-list "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDH-RSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-GCM-SHA256:AES128-GCM-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDH-ECDSA-RC4-SHA:RC4-SHA:RC4-MD5:PSK-RC4-SHA";
  };    
};
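If a /rehash doesn’t seem to pick up the SSL changes, do a full server restart to be sure. Afterwards you can check what actually gets negotiated on the SSL port (the host name is a placeholder):

openssl s_client -connect irc.example.com:6697
# The "Cipher :" line of the SSL-Session block should show one of the suites
# from your server-cipher-list; if it doesn't, the new config isn't active yet.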

Update 2: And some more from the Gene6 FTP server, which is not open source, but still extremely configurable. Just drop in OpenSSL >=1.0 (libeay32.dll, ssleay32.dll, libssl32.dll) as a replacement, and add the following line to the settings.ini files of your SSL-enabled FTP domains; you can find those files in the Accounts\yourdomainname subfolders of your G6 FTP installation:

Gene6 FTP server logo

SSLCipherList=ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDH-RSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-GCM-SHA256:AES128-GCM-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDH-ECDSA-RC4-SHA:RC4-SHA:RC4-MD5:PSK-RC4-SHA

With that and those OpenSSL >=1.0 libraries, your G6 FTP server is now fully TLSv1.2 compliant and will use only safe ciphers!
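To confirm that explicit FTPS (AUTH TLS) on the control port really negotiates one of those ciphers, something like this should do; the host name is a placeholder, and for implicit FTPS you’d connect to port 990 without the -starttls option instead:

openssl s_client -connect ftp.example.com:21 -starttls ftp
# Again, check the "Protocol :" and "Cipher :" lines of the SSL-Session block.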

Finally: As I am not the most competent user in the field of connection-oriented encryption, please just post a comment if you find some incorrect or missing information, thank you!

Sep 052013
 

OpenLDAP logoWhen we migrated from CentOS Linux 5 to 6 at work, I also wanted to make our OpenLDAP clients communicate with the OpenLDAP server – named slapd – in an encrypted fashion, so nobody can grab user names or home directory locations off the wire when clients and server communicate. So yes, we are using OpenLDAP to store all user names, password hashes, IDs, home directory locations and NFS exports in a central database. Kind of like a Microsoft domain system. Only more free.

To do that, I created my own certificate authority and a server certificate with a Perl script called CA.pl. What I missed was setting the validity period of both the CA certificate and the server certificate to something really high, so both stayed at the default: 1 year. Back then I thought it was going to be interesting when that ran out, and so it was!

All of a sudden, some users couldn’t log in anywhere anymore, while others still could. On some systems, a user would be recognized, but no home directory mount would occur. Courtesy of two different cache levels in mixed states on different clients (the nscd cache and the sssd cache), with the LDAP server itself already being non-functional.

So, the log files on clients and server didn’t tell me anything. ANYthing. A mystery, even at higher log levels. After fooling around for like two hours I decided to fire up Wireshark and analyze the network traffic directly. And what I found was a “certificate invalid” message on the TLSv1/TCP protocol layer. I crafted something similar here for you to look at, as I don’t have a screenshot of the original problem:

Wireshark detecting TLS problems

This is a problem slightly different from the actual one; the original would say “Certificate invalid” instead of “Unknown CA”. Here I just used a wrong CA to replay the issue and show you what it might look like. IP and MAC addresses have been removed from this screenshot.

Kind of crazy that I had to use Wireshark to debug the problem. So clearly, a new certificate authority (CA) and server certificate were required. I tried to do that again with CA.pl, but it strangely failed this time in the certificate creation process. A certificate that I tried to create manually with OpenSSL, as I usually do for HTTPS, didn’t work either, as the clients complained that there were no ciphers available. I guess I did something wrong there.

There is, however, a nice Perl+GTK tool that you can use to build CA certificates, create signing requests, sign them with your CA certificate and export the resulting files in different formats. That tool is called [TinyCA2] and is based on OpenSSL, as usual. See here:

The process is the same as with plain command-line OpenSSL or CA.pl: first create your own certificate authority, then a server certificate with its corresponding signing request to the CA. Then sign that request with the CA certificate and you’re done; you can now export the CA cert, the signed server cert and the private key to *.pem files or other encodings if necessary. And here you can make sure that both your CA cert and your server cert get very long validity periods.

Please note that the CA cert should have a slightly longer validity period, as the server cert is created and signed afterwards, and a server certificate that is valid even one minute longer than the signing CA’s own certificate is unsupported and can result in serious problems with the software. In my case, I just gave the CA certificate 10 years + 1 day and the server certificate 10 years. Easy enough.
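For reference, the rough command-line equivalent of what TinyCA2 does would look something like this; the file names and the ten-year periods are just examples, and the CN of the server certificate has to match the host name your clients connect to:

# CA key + self-signed CA certificate, valid 10 years + 1 day
openssl req -new -x509 -days 3654 -keyout ca.key -out ca.crt

# Server key (unencrypted) + certificate signing request
openssl req -new -nodes -keyout server.key -out server.csr

# Sign the request with the CA, valid 10 years
openssl x509 -req -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt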

After that, put the CA cert as well as the server cert + key on your OpenLDAP server as specified in /etc/openldap/slapd.conf, and the CA cert on your clients as specified by /etc/sssd/sssd.conf, or, if you’re not using Red Hat’s sssd, by /etc/openldap/ldap.conf and /etc/nslcd.conf, which are however deprecated and not recommended anymore. Also make sure your LDAP server can access the files (maybe chown ldap:ldap or chown root:ldap), and please make sure nobody can steal the private key!
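As a rough example of where those paths end up (the directive names are the real ones, the paths are made up, and I’m assuming StartTLS rather than ldaps:// on the client side):

# /etc/openldap/slapd.conf on the server:
TLSCACertificateFile  /etc/openldap/certs/ca.crt
TLSCertificateFile    /etc/openldap/certs/server.crt
TLSCertificateKeyFile /etc/openldap/certs/server.key

# /etc/sssd/sssd.conf on the clients, in the [domain/...] section:
ldap_id_use_start_tls = true
ldap_tls_cacert = /etc/openldap/certs/ca.crt
ldap_tls_reqcert = demand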

Then restart your LDAP server as well as sssd on all clients, or nslcd on all clients if you’re using the old stuff. If an NFS server is also part of your infrastructure, you will additionally need to restart the rpcidmapd service on all systems accessing that NFS, or you’ll get a huge ownership mess with all user IDs being wrongly mapped to the unprivileged user nobody, which can happen with NFS when your idmapd is in an inconsistent state after fooling around with sssd or nslcd.
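On CentOS 6 that boils down to something like this (service names may differ on other distributions):

service slapd restart        # on the LDAP server
service sssd restart         # on every client (or: service nslcd restart)
service rpcidmapd restart    # on the NFS boxes with confused ID mapping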

That’s of course not meant to be guide on how to set up OpenLDAP + TLS as a whole, just for replacing the certificates. So if you ever get “user does not exist” upon calling id user or su – user and people can’t log in anymore, that might be your issue if you’re using OpenLDAP with TLS or SSL. Just never ever delete root from your /etc/passwd. Otherwise even you will be locked out as soon as slapd – your LDAP server – goes down! And then you’ll need Knoppix or something similar – nobody wants to actually stand up from ones chair and to need to physically access the servers. ;)

Aug 062013
 

WebDAV logoRecently I was reading through my web server’s log file, only to find some rather alarming lines that I didn’t immediately understand: HTTP PUT requests obviously trying to implant PHP scripts onto my server via WebDAV. A few years back I had WebDAV enabled on my web server so users could run Subversion repositories for software development on it, via HTTPS+SVN.

So I did a little bit of research and found out that there were some Apache web server distributions, like XAMPP, that had WebDAV switched on by default, and that shipped with a default user and password granting – now hold on – full access!

I never allowed anonymous access to the WebDAV repositories, which were isolated from the usual web directories anyway. Still, there were some lines that made me feel a bit uncomfortable, like this one, which seemed to be an attempt at doing an anonymous, unauthenticated PUT:

199.167.133.124 - - [24/Dec/2011:01:53:20 +0100] "PUT /webdav/sPriiNt...php HTTP/1.1" 200 346

Now, the part that made my eyebrows rise was that my web server seems to have responded with HTTP 200 to that PUT right there, which would mean that an existing resource was modified, or in other words overwritten, even though my server never had a path “/webdav/”. For the creation of a fresh new file it should have responded with HTTP 201 instead. There were many such PUT lines in the log file, but I was unable to locate the initial uploads, i.e. the ones where my server should’ve responded with “PUT /webdav/sPriiNt…php HTTP/1.1” 201 346.
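If you want to audit your own logs for this kind of thing, a quick one-liner over the access log does the job; the log path and the standard combined log format are assumptions:

grep '"PUT ' /var/log/httpd/access_log | awk '{print $9}' | sort | uniq -c
# Counts the HTTP status codes returned for all PUT requests (200, 201, 403, 405, ...)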

In the meantime, WebDAV has long been switched off and my server now responds with 405 (“Method Not Allowed”) to all PUT attempts. But I kept all the data of all directories and could still not locate any of those PHP files, nor any DELETE method calls in the log file. Now did my server really accept the PUT or not?! It seems not, especially since those scripts were never accessed via HTTP according to the logs. Well, at least that was all before I had to reinstall the server anyway, because of a major storage fuckup.
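For the record, locking this down on an Apache 2.2 box amounts to not loading the DAV modules and, if you want to be extra sure, refusing the methods outright; the latter returns 403 rather than 405, but keeps them out just the same. A minimal sketch:

# Don't load WebDAV at all:
# LoadModule dav_module modules/mod_dav.so
# LoadModule dav_fs_module modules/mod_dav_fs.so

# Optionally refuse everything except the usual read/post methods (Apache 2.2 syntax):
<Location />
    <LimitExcept GET POST HEAD OPTIONS>
        Order deny,allow
        Deny from all
    </LimitExcept>
</Location>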

Still, I went ahead and googled some of the PHP script names, and I found some really interesting shit. Looking for them, I found dubious lists of hundreds of web servers with IP addresses, hostnames and the corresponding paths to said uploaded PHP scripts. Most of them were obsolete already, but I managed to find some vulnerable servers where the PHP script was still accessible and executable.

What I found was a DDoS script that lets you specify a target IP/host as well as a time and duration of attack. That means all those scripts are uploaded, via what I assume to be a botnet, to a wide range of vulnerable web servers, and lists are then created for the human attackers to use for attacking arbitrary targets in a timed fashion.

So I continued to search around and finally found some already modified full source code plus a WebDAV vulnerability exploit, all packed together in a nice RAR archive, even including a tutorial written for what seem to be “non-hackers”. With that it was extremely easy to reconstruct an even more dangerous variety of the attack procedure, targeted at Windows servers specifically:

  • Let the bundled WebDavLinkCrawler scan for WebDAV-capable servers for as long as you please, then press “stop”.
  • Copy all IP addresses collected by the crawler and presented to the user, save them in a TXT file.
  • Open the IP scanner exploit and load that TXT file. Let it scan all WebDAV capable servers for exploitable ones.
  • After that’s done, save the list of vulnerable servers.
  • Verify an arbitrary server by surfing to http://<ip>/webdav, looking for a “WebDAV testpage”. If it’s there, it worked.
  • Install the bundled BitKinex tool, press CTRL+2, enter an arbitrary name, then enter the server’s IP address. Use “wampp” as the user name and “xampp” as the password (those are the aforementioned default credentials with full access). Enter “/webdav” as the data source.
  • Click on the Http/WebDAV tab, click browse, and upload your PHP exploit!
  • Access http://<ip>/webdav/yournicephpscript.php. A shell will come up if system calls are allowed.
  • Run the following commands in that shell: net user <username> /add, then net user <username> <newpassword>, then net localgroup “Administrators” “<username>” /add. Only works if XAMPP is running with administrative privileges of course, but on Windows machines it often does.
  • Connect to the IP via RDP (remote desktop protocol, built into pretty much any Windows OS). Welcome to your free server!

Well, now this was even more dangerous and less likely to succeed, but I can imagine there were lots of successful attacks like that. And maybe even more that aimed at providing the DDoS interface I was looking into in the beginning. Good thing that my web server runs under a user without any administrative privileges, eh?

What surprised me most is how much the exploit is not just targeted at Windows servers, but also seemingly meant to be used from regular Windows desktops. And there I was, thinking that people would usually attack from Linux or BSD machines. But na-ah. All EXE files, RAR archives and easy tutorials. As if the black hats had a large “audience” willingly helping them pull off their attacks.

An interesting thought, don’t you think?

Also funny that parts of the tutorials were written in German, more precisely the part about the WebDavLinkCrawler, which seems to be a German tool.

This does offer some interesting insights into how DDoS and server hijacking attempts are pulled off. How or why people are making those exploits newbie-proof I still do not fully understand. Maybe this is also some sort of institutionalized cracking going on? I simply do not know enough to be able to tell.

But it sure doesn’t look like what I would have expected. No UNIX stuff there at all. And I still wonder whether I ever was exploitable or not, given how I had WebDAV locked down. Who knows.