May 08, 2014

A vision of freedom

Abstract: This is an article about social-technological development, about the rise of certain concepts of controlling people’s digital identities and the data linked to them, and about freedom and self-determination. About how we lost most of our freedom to large corporations using the power of the global network, how we may lose all of it, and how we may actually start to fight back and win. This is a proposal however, a collection of rough ideas by a simple layman, not a guaranteed recipe, so please do keep that in mind. It is also a slight bit utopian. Somebody has to come up with something though.


1. Preface:

Before I even start with this, I am going to say that while I believe the concepts I am “dreaming” about here are very much feasible, economically viable and practicable, my hopes for anything like it actually happening are extremely low. If you read on, you will learn why. Also, in the course of this article you may be confronted with some pathos, just so you know in advance.

1a. An ancient concept reborn:

In the many years I have seen fly by in the world of computers and networks, I have watched many trends arise; some would stay, others would fall. However, before my own time, back in the 60s and 70s, there was one trend nobody thought would ever come back later: the concept of the thin client and the mainframe.

The concept of complete centralization. Originally, mainframes were large computers with large data storage systems, printers and so on. To work on them, you would use a “dumb” networked thin client, barely more than a lightweight empty shell: monitor and keyboard. Locally, you had no computational power, no storage. No data. You had no physical control over what you were working with. All the power lay with the mainframe and its operators.

1b. A brief period of relative freedom:

Later, fat clients – PCs basically – conquered the world, and people ran their software locally, stored most of their data locally, played games locally, etc. Also, since large corporations had not yet perfected their “predatory” skills, users were mostly in control of the Internet too. It looked like we were free and in control of everything. If somebody had told me back then that 10 years later we’d buy most software components using microtransactions on online stores (I’m talking about stuff like Google Play or Apple’s App Store here) and that corporations ruled the world and the Internet, I would’ve just laughed out loud. And yet…

2. Today’s problem: Centralization

Now, nobody is laughing about that anymore. In essence, we have reached George Orwell’s [1984 dystopia], only we don’t clearly see it anymore. Our data is centralized on massive cloud servers / data warehouse clusters, and we are losing control over the services we provide (images or posts we want to share, think Facebook, Amazon cloud services) or consume (e.g. YouTube, the App Store). We’re losing control over the software we run, as the market makes its transition from a pay-to-own concept to a rental system, see Office 365 or the former game streaming service [OnLive]. More and more applications are being transformed into web apps – a field where Google is strong – shifting power from us to their cloud. Clients are growing thinner and less controllable, like phones and tablets, resembling the powerless thin clients of old. Data mining is a big thing now, and it works perfectly with centralized systems. Experts are analyzing the desires, character traits, social networks, behavioral patterns, etc. of every member of every part of society that is now their market, down to the youngest of children. And all of this is to a great extent powered by a new, ever more centralized Internet of rich web applications. So no, I’m not overly scared of the NSA. A bit, yes, but that’s not the prime danger. Never was.

Google is coming to get your data

Google is coming to eat your digital lives for breakfast!

And, on top of all that, every new generation grows up accepting the current situation as normal, no matter how dangerous it may be. They would call the older ones – I’m talking 25-40 year olds now – conservative (which we can be, I surely am :( ) and probably not give a shit about our ravings. They will just believe the new way is the right way, in all aspects. The negative part simply disappears from the manipulated perception of the wider public.

The very concept of objective truth is fading out of the world. Lies will pass into history.

– George Orwell (1903-1950)

Now, all of that we know. We know Google is mining data from every search we submit, every click we make in every application of theirs, etc. We know Facebook is tracking us via its “Like” buttons on every website, profiling us in the process and presumably selling that data to the highest bidder. And then influencing us through adaptive advertisements we may not even be able to notice, AdBlockPlus or not.

We know that centralization also centralizes power. The power over us. In history, centralized power has slowly been replaced by slightly more distributed power, as we progressed from monarchies and dictatorships to our still far from well-working democracies. It was a long and bloody path though. A digital revolution giving power back to the people could achieve something similar, only without all the bleeding and dying.

3. The reasons:

So, why do we not act? Because after decades of experience, the “predators” have perfected their skills. They know what we think, what we feel and why. They know who we talk with, and when, and potentially even about what. They can take all our data and metadata – chat logs, art we may have created, personal networks we may have built, whatever – and just use it for themselves or sell it off.


None of this hurts.

Which is why there is no counter-evolution by us, the “prey”. We’ve been buttered up by convenient services.

This is what we think

The world we think we live in

This is where we live

The world we actually live in

We’re sitting ducks, because we don’t feel the pain. In fact, the corporations of today have found a way to “enslave” us without ever having to crack the whip. They’re buying us with convenience. Note that whenever a product is free, you’re not the consumer. You’re the product. What you may have thought was the product is in fact just bait. And don’t even try to say anything about governments and stricter regulations for data privacy here. Our glorious leaders, degenerated into companies’ bootlickers – nah, forget about them.

This entire system works like a very devious disease. You can’t feel it, you just keep walking, thinking that everything’s gonna be fine and then one day, you gasp in sudden disbelief! And then you just drop dead like a fly. That’s the day when the wrong person gets access to all that centralized power.

When it hurts, it will be too late already!

4. The solution, stage I:

Where is the key to freedom and independence? Where?

Where is the key to freedom and independence? And why are we all orange? I guess because the artist, Mr. Rian Hughes decided so when he drew the comic strip “The Key” [1], which he actually did in the name of freedom. How appropriate.

4a. What needs to be done:

The first step out of this misery is to take back control of our services and data to the full extent. So basically, we need to serve everything ourselves; we need to store and provide the data we’d like to share from machines that are fully under our control. Now while I’d love to say “everybody should just run their own home server”, I know this is unrealizable bullshit. Most people are caught up in work, in starting their families, building a house or dealing with some other trouble, their attention being required elsewhere.

Most of them do not have the time. Most of them do not have the skills or resources to spare. And the rest is lazy and likes convenient life or just doesn’t care despite the clear threat. And yet, we need those home servers!

4b. Who needs to act first:

Thus, it is the freaks and nerds who have to act and take on responsibility for all of us. I assume that in what I’d call the first world, every circle of friends has at least one nerd in it. One person who is capable of and willing to run a home server, 100% under his or her control. Such a person could and should start with simple things, like a web server with FTP access, maybe virtualized operating systems with remote desktop access for friends, or a mail server.


This was an ancient version of this server, its second incarnation (1st: single Pentium PRO, 2nd: dual, 3rd: quad). It was actually sitting under my bed. Now it surely doesn’t have to look like this, and it probably shouldn’t either.

If such an operator can convince his or her family and friends that migrating away from the big players to that small local box gives them a higher level of self-determination and freedom, they may follow such advice. They may actually listen.

And I do have a proof of concept for that: this very server, my own box, which family and friends are using partially for these very reasons. You can sensitize people to matters of security and especially privacy, and I was surprised that they would actually listen carefully despite having zero background in computer science. I was even more surprised that they were willing to actually act upon it and at least partially migrate. Full migration is not possible yet, as shown below in stage II. But still, for what a single home server can provide, success is very much a practicable reality.

Loosening the chains

Starting to loosen the chains.

It is because they trust their fellow nerd. And while abuse of trust is not unthinkable in such a setup, it is highly unlikely. Most operators would feel proud to secure their friends’ and families’ digital lives and to free them from prying eyes looking for profit in their behavioral patterns. Now, while trust is always a dangerous thing, it is probably justified by several orders of magnitude more in this setup than in what we currently have: trusting opaque mega-corporations fully, with far more than we should ever want to.

5. Feasibility, economic and ecological considerations of stage I:

5a. The economic side:

Now, how affordable is this? One would need to buy a server, for stability reasons probably with redundant power supplies and storage backends. An uninterruptible power supply would also be a good idea to ensure availability. In certain countries, an expensive business Internet connection might be required to be allowed to host services and to reach high enough upstream bandwidth. The operating system might cost money if the operator chooses to work with Microsoft or Apple. There are free alternatives with Linux, BSD UNIX, Solaris and others though, so we’ll assume zero cost for the software.

The cost for the machine and equipment can range anywhere from maybe 200 up to maybe 2000 € or $, unless you go extremely cheap and use a [Raspberry Pi] for this. This depends on the services to be run and the number of users, of course. Even a cheap modern machine should be able to host web sites, email and maybe OwnCloud for about 20 users though.

Then there are electricity and Internet costs.

For all this, I can see two ways of funding, of which I am currently walking the first one with some ambition to tap into the second one for the next UPS purchase:

  1. The operator pays up for it out of ideological reasons, getting respect and pride as rewards.
  2. The operator asks each of his/her users for a small amount of money to finance the entire operation.

Both would work. The more expensive running a single home server gets, the more likely the operator will need to choose path 2 to at least some extent.
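To get a feeling for funding path 2, here’s a back-of-the-envelope calculation in Python. All the numbers are assumptions of mine (a 600 € machine amortized over 5 years, 10 € of electricity and 20 € of connectivity per month, 20 users), not measurements:

```python
# Hypothetical monthly cost of a small home server, split among its users.
hardware_eur = 600.0        # one-time purchase, assumed
years_of_service = 5
power_eur_month = 10.0      # assumed electricity cost
internet_eur_month = 20.0   # assumed share of the connection fee
users = 20

# Amortize the hardware over its service life, add the running costs:
monthly_total = (hardware_eur / (years_of_service * 12)
                 + power_eur_month + internet_eur_month)
per_user = monthly_total / users

print(f"{monthly_total:.2f} EUR/month total, {per_user:.2f} EUR per user")
```

With these made-up numbers, each user would chip in about the price of a coffee per month – which is exactly why path 2 looks realistic to me.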

5b. The ecological side:

Ecological impact

Is there any ecological impact to this?

While moving services out of hosting centers would mean we can save some energy by not needing air conditioning systems, it also means we’re throwing away service consolidation. The density of services per server, and thus hardware utilization per watt, would likely be lower. In the end we might have more machines running, consuming more energy for the same amount of services provided. It is hard to estimate the outcome, but despite the more obvious savings in infrastructure and the great power saving capabilities of modern machines, I would fear that a decentralized network of many micro servers might consume more power overall. This we would need to be aware of.

6. Other major challenges:

I would expect propaganda to be problematic. Advertising this kind of setup from a single point like this one is completely and utterly pointless. It has to be passed from operator to operator like a viral campaign, covering large areas fast enough to make an impact. Also, convincing people (operators and regular users alike) might pose a problem in general. It requires work from the operators and a change in thinking for everybody, especially common users. Naturally, this is extremely hard, as people may fail to accept the significance and importance of the whole concept. It gets a lot harder in stage II though, as you’ll see now.

Working together

Working together is the key though.

7. The solution, stage II:

7a. Decentralization alone is not enough:

Now let us assume we have reached a stage where society has been pervaded by a loosely connected but dense enough network of home servers. While users are happily using them, taking away a few breadcrumbs from the larger, more exploitative services hosted by Google, Microsoft, Facebook and many others, it is still a chaotic, disorganized system. It cannot replace larger, centralized services as-is. So we would still depend on Google/Bing/whatever search engines, YouTube, Facebook, Twitter, etc.

Now in your neighborhood!

Now in your neighborhood too!

To solve this, we need proper software, and that software needs proper algorithms. Now bear with me, most of this stuff already exists! It’s just not being used in that very context. What I’m talking about here – only in part – could be a service the likes of “another Facebook”, just decentralized! Let’s call it “SkyNet2” for now (Haha, I know what you’re thinking, but it’ll become clearer soon! Plus, I just can’t help myself but think it sounds awesome!).

7b. How to disperse arbitrary centralized services using distributed hash tables, boosting net neutrality in the process:

In computer science, there is an algorithm, or rather a data structure, known as the DHT or distributed hash table. Basically, it’s a huge table of keys (file names, or resource identifiers) and values (the contents, metadata or a URL) distributed among nodes (= our home servers). DHTs can be designed in such a way that they represent a large, dynamic network of nodes where each node holds a small amount of data. Depending on topology – like Chord [2], or binary trees – they can serve certain purposes well; you can optimize them for certain search profiles. Also, if you choose your topology well, you can make this setup support the concept of [net neutrality]: by not introducing a hierarchical structure that would serve certain content at consistently lower latency and potentially higher bandwidth, and by fragmenting the network enough to make hurting net neutrality at the Internet service provider level impossible. While it has been suggested [3] that using a DHT for “Google style” searching is quite hard, I believe it can still be done. What we want is equality, neutrality and freedom:

DHT protocol, Chord topology

A DHT network using the widely implemented Chord topology, as shown in a distributed software development lecture [4]. Every node or home server is treated as an equal in this setup. This DHT graph also shows two typical search paths. Finger tables are explained in the next diagram.
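To make the idea a bit more tangible, here’s a tiny Python sketch of the core Chord principle, strictly as illustration: node IDs and resource keys are hashed into one circular identifier space, and the node responsible for a key is simply its clockwise successor on the ring. The server names and the toy 16-bit ring are my own inventions; real Chord uses 160-bit SHA-1 identifiers and the finger tables mentioned above to find the successor in O(log n) hops instead of scanning a list:

```python
import hashlib

M = 16  # toy identifier space of 2^16 slots, for readability only

def chord_id(name: str) -> int:
    """Map a node name or resource key onto the ring."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** M)

class Ring:
    def __init__(self, node_names):
        self.nodes = sorted(chord_id(n) for n in node_names)

    def successor(self, key_id: int) -> int:
        """The node responsible for a key is the first node ID at or
        after the key's position, wrapping around at the top."""
        for node in self.nodes:
            if node >= key_id:
                return node
        return self.nodes[0]  # wrap around the ring

# Four hypothetical home servers and one resource key:
ring = Ring(["chucks-server", "my-server", "node-three", "node-four"])
owner = ring.successor(chord_id("Lucy's profile page"))
```

Whichever server ID comes up as `owner` is the one that stores (or points to) Lucy’s page; when servers join or leave, only the keys between the affected node and its neighbor have to move, which is exactly why such a network can grow and shrink so gracefully.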

DHTs are found primarily in distributed file sharing networks – most prominently in BitTorrent’s trackerless “Mainline” DHT, which is based on Kademlia – and also in certain clustered database systems, where they optimize for very fast searching. In such contexts it has clearly been shown that these networks can grow or shrink at any given time by an almost arbitrary number of member nodes without failing.

If you want to learn more about distributed hash tables and associated algorithms and ideas, there are several documents and scientific papers on the topic referenced [on the Wikipedia], which may be a good place to start, should you wish to dig in deeper.

7c. Application for a hypothetical distributed social networking system:

Now say we create a DHT backend network for web servers, controlled by server-side configuration as well as scripting languages like PHP or Perl. Maybe even Java, if you’re into that. The frontend would just be a website. The backend is any web server and scripting language just like today, with data stored in conventional SQL servers. Each home server holds a local set of small webs, fractions of the superstructure. Say Chuck is running his own local SkyNet2 node, and it holds the profile pages of Max and Lucy, which belong to the SkyNet2 “Facebook ripoff”. The data served by those profiles is stored in a local SQL server and on local file storage for images, videos etc.

Max' basement

“Hey. Yeah, I’m at Max’ place right now, looking for his stuff. You..  What? No, it’s NOT here, there isn’t even any fuckin’ light down here, I can’t see a damn thing, it’s all dark! I guess it’s still all on freaking Facebook! You know what, Chuck, get him outta there before it’s too late!!”

But it’s no local-only system. When you access it, you can see and access millions of profiles, posts, images, whatever, just like on current, centralized social networking sites. Because in the background, every resource you click on, every search you submit and every link leads to one or more resources located by a decentralized DHT protocol network! You don’t need static links, which break all the time anyway; all you need is a key called “Max’ profile page” with some metadata like “this is a SkyNet2 user profile site” or “this is an image”, and the site’s URL as the resource that is accessed as soon as its location has been derived from the search across the DHT topology. Even Chuck’s server node – the machine itself! – can be found by using a node hash. Chuck’s domain name or IP address may change, but the hash stays, and the node can be found again as soon as Chuck’s home server (re)joins the larger superstructure.

Let me try and visualize it a bit:

Searching the DHT network

Searching for Lucy’s page on SkyNet2 works without Google or any centralized search engine, just by traversing the DHT structure consisting of home server nodes. As soon as the target holding the data (Chuck’s server) is located, its current hostname/IP plus the relative URL of Lucy’s page is returned to me. My local web browser can then fetch the page via regular HTTP or HTTPS! The search itself can either be launched by my own “SkyNet2”-DHT-enhanced web browser, or by any other node using a portal website.
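In code, the resolve step from the diagram could look roughly like this. To be clear: everything here – the key scheme, the record layout, the host name – is invented by me to illustrate the flow “stable key → metadata plus current URL → ordinary HTTP fetch”; a real implementation would hop across several nodes rather than query one local table:

```python
import hashlib

def key_for(name: str) -> str:
    """A stable key derived from the resource name, independent of
    where the resource currently lives."""
    return hashlib.sha1(name.encode()).hexdigest()

# What Chuck's node would publish when it (re)joins the network:
dht = {
    key_for("Lucy's profile page"): {
        "type": "SkyNet2 user profile site",
        "url": "http://chucks-server.example/~lucy/",
    },
}

def resolve(name: str):
    """Look up a resource by name; returns its current URL or None."""
    record = dht.get(key_for(name))
    return record["url"] if record else None

url = resolve("Lucy's profile page")
# The browser would now fetch `url` via plain HTTP or HTTPS.
```

If Chuck’s IP or domain changes, only the published record changes – the key, and thus every “link” anybody ever saved, stays valid.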

7d. Flexibility and robustness:

Like the original evil SkyNet, “SkyNet2” cannot be stopped once it’s alive. You cannot pull the plug and switch it off – and why? Because, like its namesake, it’s sitting on no single server. It’s literally everywhere! Just like BitTorrent or good old Gnutella.

When done right, this can easily be enhanced and expanded to support multiple higher-level network protocols, not just HTTP or HTTPS. Basically, the backend could return pretty much anything: an ftp+ssl:// URL to access a file server would be just as doable as an svn+ssh:// URL to access a Subversion software repository via SSH, or maybe a magnet: link to download stuff from the BitTorrent network. It could also return SMTP/POP3/IMAP email server URLs or the contents of a public mailing list, whatever. Since we can make machines, URLs and even content itself addressable, this is extremely flexible. When searching for an image, I don’t really care which server it comes from. But with a DHT, I can search for just the file name. Even changing file names are not problematic for certain document types, as the content itself can be hashed and made fully searchable in a DHT structure.
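The content addressing part in particular is easy to sketch. Hashing the file’s bytes gives a stable key that survives renames and server moves; the record layout, URLs and file names below are of course invented for illustration only:

```python
import hashlib

def content_key(data: bytes) -> str:
    """Hash the file contents themselves, so the same image is
    findable no matter which server stores it or what it's called."""
    return hashlib.sha256(data).hexdigest()

image = b"...image bytes..."

# A DHT record may point at any higher-level protocol, not just HTTP:
records = {
    content_key(image): {
        "type": "image",
        "urls": [
            "https://chucks-server.example/pics/sunset.jpg",
            "ftp+ssl://my-server.example/backup/img_0042.jpg",
        ],
    },
}
```

The two URLs name the same bytes under different file names and protocols – renaming or moving the file changes nothing about its key, which is exactly the property the paragraph above is after.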

By adding content type classes like “image”, “video”, “website”, “archive file”, “server node” to the keys, we can also allow users to narrow down their search to classes of objects, pretty much like today’s centralized systems offer it to us.

So, it sounds like one fat layer of glue code between networks and systems that already exist, right? Right. True. But it has the potential to put every service and one of the most powerful tools ever – the search engine – fully back in our own hands.

8. Feasibility and major challenges of stage II:

8a. The software:

At first, this depends on how we can deal with the software development side of things. For the Apache web server, for instance, this could be solved with a loadable module. But enabling web servers with DHT would require someone to actually write the freaking code. Luckily, the free software world does have some developers who can deal with DHTs, such as the guys who built [qBittorrent]. They also have cross-platform experience, which is always a bonus. The guys who built [OwnCloud], or developers similar to them, would also be nice to have. As a matter of fact: a cloud storage service that you can host yourself (but don’t have to), with clients for Windows, MacOS X, Linux, BSD UNIX as well as Android and iOS? OwnCloud would be perfect for integration into such a system.

OwnCloud logo

OwnCloud is a great inspiration – turning a modern exploitation concept into a truly free one! And it actually works, phone and tablet support included.

So, we need the developers, and we need a web superstructure (in fact, probably a set of templates for a “Facebook-like” system) that can easily be integrated. Also, while each user’s profile/blog page needs to let its user show some artistic individuality, the system should always be streamlined enough that you can tell “I’m on SkyNet2, no doubt”. So no matter whether you visit Max’ profile, or Lucy’s, or mine, they may all look different, but they need to show enough similarity to allow both the users to properly express themselves and the system to show what it is. Like on BitTorrent, where you may download a Linux distribution’s ISO file, or a free movie or music, but you always know “this is the BitTorrent system”. So we need the DHT code plus glue code, and also some content to put on top of it.

It shan’t all be SkyNet2 though. Let’s just list a few services that any such system would gradually need to replace:

  • Facebook
  • Twitter
  • Google’s search engine, and gradually all other Google services, which are enormously powerful and thus hard to replace
  • eBay (most definitely not including anything like PayPal, which can never lie in users’ hands)

What it cannot – and likely should not – replace are services that by design and purpose cannot be in “regular people’s” hands, such as the following examples:

  • Amazon
  • PayPal (as shown above)
  • Steam, Origin, UPlay, etc.

This is actually not impossible. If OwnCloud was doable, then so is this. But there is that other problem I talked about before, and that’s acceptance. Convincing people is the single greatest challenge of all – one I still have no clear concept of how to really master in practice. I have pulled off stage I in my own local environment, yes. But on a global scale? Can viral propaganda do it? Maybe, but it’s a massive risk. Then again, there is also a lot to be gained in case of success.

8b. Topology issues:

I have mentioned latencies before, and one of the reasons for them becoming too high is that the Chord topology, at least, is just a logical ring/mesh. It does not take physical properties into account, so it has no location awareness. This might be problematic. Imagine I would like to search for Lucy’s page as shown above; in that scenario I have to query 3 nodes before reaching the correct one. Now if I am in Europe, and the first node is in Brazil, the second in Japan and the third in Russia – well, you see where this is going.

The efficiency of the protocol and its extensions (see below) might greatly improve by altering the topology so that it is formed taking geographical proximity into account. Using existing GeoIP databases may be a way to build the network more efficiently with regard to geographical parameters. After all, people don’t like to wait for their search results. Also, the Chord topology might not be the best one to begin with; it’s just one that caught my attention due to its inherent fairness.
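As an illustration of what such location awareness could mean, a hedged Python sketch: given coordinates (which a GeoIP database could provide for each node), a node could simply prefer the geographically closest candidate as its next hop. The node names and coordinates are invented, and real DHT routing would also have to keep making progress through the key space, which this toy deliberately ignores:

```python
import math

def distance_km(a, b):
    """Rough great-circle (haversine) distance between (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

me = (48.1, 11.6)  # hypothetical querying node, somewhere around Munich
candidates = {     # nodes that could all answer the next routing step
    "node-brazil": (-23.5, -46.6),
    "node-japan": (35.7, 139.7),
    "node-france": (48.9, 2.4),
}

# Prefer the geographically closest candidate as the next hop:
next_hop = min(candidates, key=lambda n: distance_km(me, candidates[n]))
```

With these made-up positions the French node wins, saving two intercontinental round trips compared to the Brazil → Japan → Russia path from the example above.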

But, there are developers many times smarter than me with lots of experience in designing DHTs, and if there is anything considerably smarter than this, they’ll find a way!

8c. Avoiding isolation:

While complete independence and freedom in our digital private lives is the ultimate goal, intercommunication is and will always be essential. This is especially true during the phase of initial growth, but it cannot be ignored later either. While people would be well advised to abandon certain services on the Internet, shutting them out by force is not a good solution. To still not have to rely on external services for critical operations, at least the search engine could be enhanced to look for data in the outside network, on servers that are not members of the DHT superstructure. We can just let individual nodes crawl the outside web, index external servers and their content, and thus pull in outside metadata and make it fully searchable. That way we do not necessarily need to rely on Google/Bing/etc. to search the outside world:

Connecting to the outside world

Connecting to the outside world without having to rely on it – is it possible?

This does come at a cost though, and that’s system memory and storage capacity. As shown in the image description, load balancing is critical. This requires a high enough density of home servers. If my home server needs to crawl through and index my entire state, this ain’t gonna end very well. Again, distribution amongst many is key!

Of course – and I would like to emphasize this clearly – the proposed network is not really isolated in any way at all. It’s just systems connected to the existing Internet, after all. So with any regular computer you can use the web just as you always did. The idea, once again, is not to need to rely on services we may consider untrustworthy, without having to sacrifice too much. So if this kind of crawling, indexing of and interaction with “outside” servers does not exist at some stage of development, you can still just use any conventional service on the Net, as usual. This is no firewall, after all.

When it comes down to it, the freedom to use services that impact your freedom lies with you, the individual user. ;)

8d. The social factor:

I have mentioned this several times already, but it needs to be said again, for stage II just as much as for stage I, if not a lot more so: writing the software and defining revised, optimized algorithms and all that is comparably easy when seen in perspective. The largest challenge of all, I think, is the social one. People believe posting stuff on Facebook with their phones is cool and awesome. They depend on the feedback of their social peers, as humans always have, to define themselves.

People are using this kind of service lightheartedly. And Google has become something like the air you breathe: always there, and only a problem when it’s gone – or so it seems in the status quo.

People would probably just frown upon hearing about a replacement technology focusing on freedom, independence and self-determination. They already consider themselves free, no matter that it’s just a golden cage in reality. Even conquering mobile devices with proper software does not guarantee success, even if it’s well written and designed.

I was previously proposing to let the operators, the nerds, convince their friends and family. This is a process that I know can take anything from weeks to years. There will surely be people who just shake their heads, considering the whole thing silly and stupid, or quixotic at best.

So, awakening people is the single greatest challenge. For a revolutionary technology to succeed, people need to be convinced of it, or even the perfect technology isn’t worth its weight in crap. Since we surely can’t rely on any marketing experts to help there, I wonder who might come up with the perfect concept. Maybe people like [Oliver Stone] would; he has supported such ideas too.


9. The benefits:

We were and are carrying the key with us, at all times!

We were and are carrying the key with us, at all times! We just need to remember that! We can free ourselves and our fellow friends and family!

  • All the services and data we use lie in the hands of people we can ultimately trust. And if they fuck with us, we can effectively confront them personally!
  • Less tracking, less stalking, less being monetized, less exploitation of the common user.
  • Nobody can easily attack your host physically, and if it does happen (e.g. somebody breaks into your operator’s home), you’ll know immediately. Physical control means fewer possible attack vectors and thus more security.
  • More people with higher skill, and thus a technologically more educated society, as each user is close to his/her operator and may thus learn a lot about security, data backups, possible attacks, and general concepts of freedom, privacy and security in a networked society. That includes the home server operators themselves, as they will – through building, setting up and running those servers – gain knowledge and skill.
  • With proper adaptations, the system could be used for routing and anonymization of any Internet traffic, like the [Tor] network. A node class “INet gateway” with an attribute “approximate location”, or a function “location of proxy may not match location of client”, could be introduced to mark home server nodes that allow you to use them as proxies for any Internet access, ensuring that the proxy sits outside your local jurisdiction’s sphere of influence where necessary. But I guess the Tor developers have smarter ideas for that. Simply using their service in conjunction with the system proposed here? Why not.
  • Enhanced security by obscurity and service fragmentation. A decentralized network built on top of Linux, BSD UNIX, Solaris, MacOS X, Windows, maybe even exotic systems such as [OpenVMS] is so heterogeneous, so diverse, that no attack – zero-day or not – will be able to affect every single node. Even if one day all Windows machines are blown to shreds by some malware, SkyNet2 will be able to survive with ease.
  • Securing performance by load balancing of content across many nodes.
  • A formidable buildup of our understanding of democracy, as such a system would empower each single user and teach us that changes can be achieved in a relatively non-violent, initiated-by-the-masses way. It might inspire us to further change!
  • Independence.
  • Self-determination.
  • Freedom.

10. The danger:

Mr. White

Mr. White is the danger. Are we potentially too?

As much as I hate talking about this part, there is one issue that cannot be denied: jobs. If a system like this succeeds at large, no matter how unlikely that may seem, it has the potential to destroy jobs. While I guess a lot more hardware would be sold for some time, larger organizations would naturally begin to suffer, as they run out of “slaves” to squeeze anything out of. While the more ingenious of them may just start tapping into the DHT network to secure their data mining business, some may run out of options: larger commercial web hosters, root server providers, companies which have invested in cloud storage solutions etc. – basically all those who provided what people can now provide themselves.

There may be a significant boost for the Internet service provider business though, as a lot more people would require high-upstream connections to run their servers. Again, just like with the environmental concerns, it is hard for me to estimate the outcome of this, but the potential to harm at least certain businesses (whether they deserve to die or not) is real!

11. The subjective Truth:

Could we pull this stunt off? Hell yeah we could! Will we? Well, I want something like this SO MUCH, but looking at the world and at how things work, I am losing hope. Not that hope is a good or stable thing to rely on to begin with, but at least it gives you some motivation to actually do something good!

The challenges posed are partially technical, partially economical, but mostly social. We are living in a world where most individuals have internalized the very concept of capitalism, of becoming a consumer that buys and a product that is being sold, defining our personalities through such superficial processes. Now, I’m not against all forms of capitalism, no, I’m not. If you have a brilliant idea to make the world a better place, for instance by making engines consume less fuel or by making nuclear fusion actually work with a positive energy balance, then by all means, you shall be rewarded with tons of our money! Or say you build cars, or houses, or pencils for God’s sake, shoes, T-shirts, toothbrushes, whatever – then YES, take my money for it! Same goes for services like the barber’s, any stand-up comedian’s, or the plumber’s who makes your toilet flush again.

Real economy

The real economy actually delivers something for your hard-earned cash, and it won’t betray you. And something like SkyNet2 would not hurt the classical real economy, not one bit.

But there are limits as to what shall be capitalized. And those limits have started to slowly disappear, not just in the fiery nine hells of the stock market, but even at home! When the very understanding of freedom and its significance starts to fade away, and brand recognition starts to replace individuality and character, we’re dealing with a situation that might deteriorate too fast for us to pull this off.

And of course, nobody would believe in this. If I told you now “let’s start SkyNet2!” (or FreeWeb, or FreeNet, or whatever you’d wanna call it) and then showed you some ideas, of which maybe not even half are usable, would you say “Oh yeah, let’s go!”? Nah, you wouldn’t. We’re lazy. We are adapted to a system that doesn’t let us off the leash, but at least feeds us and lets us sleep on a soft bed, metaphorically speaking. In essence, we’re all selling ourselves out, either to whoever provides the easiest, most convenient way to get what some shady advertisement or other, more subliminal method talked us into needing, or to whoever gives us the free stuff, whatever it may cost us in terms of privacy or self-determination.

Even I am no exception. While I’m trying to fight as well as I believe I’m able to, I am still sitting in a Microsoft vendor lock-in (even though that lock is starting to crumble), I am using Steam, even Origin for fuck’s sake, I am using the ICQ network now controlled by the Russians, plus the occasional Skype when playing multiplayer games, and I am not fully anonymizing my Internet communication. While I’ve been playing around a lot with Linux (10+ years of experience now), BSD UNIX, Solaris and Haiku, even Syllable Desktop, my private machines are still in a locked-up state, at least partially.

Nonetheless, I somehow still have hope, that one day…

Lose the chains

…we can lose the chains…


May 032014

Since I use WordPress as my weblog software, Gravatar support has come with it. Actually, I’m thinking WordPress was probably not the best choice anyway, you know: huge, heavy PHP code running at a decent speed on a quad Pentium Pro 200MHz/1MB machine? Not easily done. But I’m gonna talk about Gravatar here, not about running modern content management systems on hardware of the mid-90s. So what is Gravatar? Essentially a service that allows you to link a centralized avatar picture of yours into every blog post you make, or any other post on any other Gravatar-enabled site. As such, it gives you a small piece of ID that stays the same across sites. And as you can read on Gravatar’s own weblog, they’re [tightly interwoven with WordPress] these days.

Now why would I want to get rid of that?

Mind you, I never liked the idea of Gravatars. There is just something fishy about free stuff, especially when it’s a highly centralized service. Not free software, but free services. As a colleague of mine from Malaysia always used to say: There is no free lunch!

The first time I really noticed it (again, after my concerns had faded away) was when the Ghostery plugin reported Gravatar links (images, JavaScript, CSS etc.) pulling content from Gravatar servers into this weblog. See [Ghostery’s Gravatar report]. Reading Ghostery’s description, you’d find comforting words like these:

Data Sharing:
Data is not shared with 3rd parties.

But also more alarming ones, like:

Data Collected:
Anonymous (Browser Information, Date/Time, Demographic Data, Serving Domains)
Pseudonymous (IP Address (EU PII))

Data Retention:

Your own Gravatar picture is linked to the email address you provide to them, and when you use it, the service also logs your IP address, and with it your location, access times etc., which makes anything they may data mine PII – personally identifiable information.
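To illustrate that linking (a quick sketch in Python rather than PHP, assuming nothing beyond Gravatar’s publicly documented URL scheme): the avatar URL itself embeds an MD5 hash of your trimmed, lowercased email address, so the very same identifier can be correlated across every site that displays your Gravatar:

```python
import hashlib

# A Gravatar URL is derived from an MD5 hash of the trimmed,
# lowercased e-mail address -- the same hash therefore follows
# you across every Gravatar-enabled site out there.
email = "  Someone@Example.com "
digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
url = "https://www.gravatar.com/avatar/" + digest

print(url)  # identical for this e-mail on every embedding site
```

Anyone who knows (or guesses) your email address can compute that hash, and anyone who sees the hash on two different sites knows it is the same person.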

There is always an essential question as to why something is and can be free. Registering for a Gravatar costs nothing. But how? Writing open source software and giving it away for free is comparatively easily explained: It only costs the time of some enthusiasts who (mostly) want to make the world a better place. But hosting massive amounts of data? That requires servers, bandwidth and storage solutions, which rarely come free, whether for NGOs or commercial entities. So they need to make money. How?

As with other free services, it is quite likely that Gravatar is not really a free product. It is more likely that it turns the user – and the data about him or her and his or her networks – into its product, selling that very data to the highest bidder, just like Facebook or maybe even Google presumably do. Or many others. Naturally, there are people who are seriously concerned about privacy and data leaks regarding Gravatar, like [this guy here]. Now if even lawyers are concerned…

Plus, Gravatar still does not allow account deletion. It’s just not possible. So they’ll keep tracking your email address forever, with or without your consent…

Luckily, I found a solution provided by the PHP coder [TheDeadMedic] on [StackOverflow], which is supposed to be used in conjunction with the [Simple Local Avatars] plugin. Just to make sure, I’ll copy his code over here. The first thing is the modification of your WordPress theme’s functions.php – you can just append the code at the end – and you would need to place a new, local default avatar into your theme’s images/ directory, called default_avatar.png:

function __default_local_avatar()
{
    // this assumes default_avatar.png is in wp-content/themes/<active theme>/images
    return get_bloginfo('template_directory') . '/images/default_avatar.png';
}
add_filter( 'pre_option_avatar_default', '__default_local_avatar' );

And then, create a new plugin folder like DefaultLocalAvatar/ or whatever in your WordPress plugins folder, and copy the following into a PHP script file inside that folder:

<?php
/**
 * Plugin Name: Disable Default Avatars
 * Plugin URI:
 * Description: To be used alongside Simple Local Avatars, disabling all default avatars and falling back to a single image. Use the filter <code>local_default_avatar</code> to set the path of the image.
 * Version: 1.0
 * Author: TheDeadMedic
 * Author URI:
 */

if ( !function_exists( 'get_avatar' ) ) :
/**
 * Retrieve the avatar for a user who provided a user ID or email address.
 *
 * @since 2.5
 * @param int|string|object $id_or_email A user ID, email address, or comment object
 * @param int $size Size of the avatar image
 * @param string $default URL to a default image to use if no avatar is available
 * @param string $alt Alternate text to use in image tag. Defaults to blank
 * @return string <img> tag for the user's avatar
 */
function get_avatar( $id_or_email, $size = '96', $default = '', $alt = false ) {
    if ( ! get_option( 'show_avatars' ) )
        return false;

    static $default_url; // use static vars for a little caching
    if ( !isset( $default_url ) )
        $default_url = apply_filters( 'local_default_avatar', get_template_directory_uri() . '/images/default_avatar.png' );

    if ( false === $alt )
        $safe_alt = '';
    else
        $safe_alt = esc_attr( $alt );

    if ( !is_numeric( $size ) )
        $size = '96';

    $avatar = "<img class='avatar avatar-{$size} photo avatar-default' src='{$default_url}' alt='{$safe_alt}' width='{$size}' height='{$size}' />";
    return apply_filters( 'get_avatar', $avatar, $id_or_email, $size, $default, $alt );
}
endif;

function __limit_default_avatars_setting( $default )
{
    return 'local_default';
}
add_filter( 'pre_option_avatar_default', '__limit_default_avatars_setting' );

if ( is_admin() ) :
function __limit_default_avatars( $defaults )
{
    return array( 'local_default' => get_bloginfo( 'name' ) . ' Default' );
}
add_filter( 'avatar_defaults', '__limit_default_avatars' );
endif;
?>

After that, the only thing left is to activate the new mini-plugin in your WordPress Dashboard. When done, all Gravatar content will be gone and nothing Gravatarish will be pulled into your weblog when users come to visit. The only downside is that if you do not have user registration enabled – it’s disabled here – all users will receive the local “default_avatar.png” you put into your theme’s images/ folder. But I think that’s a small price to pay for enhanced performance (fewer connections to remote servers, less JavaScript and CSS!) and enhanced privacy.

If you are allowing anyone from the Internet to register on your weblog site, you can actually enable them to just upload their avatar to your site using the Simple Local Avatars plugin. That way, everything is perfectly decentralized (My decentralization vision is a thing I’m planning to write about in the future), and people can still use their favorite avatar, no data mining included.

As soon as all server-side and client-side caches are clear for good, this here weblog will no longer serve nor allow any Gravatar content whatsoever! Gone for good!

Dec 112013

Torrent tracker / magnet link host The Pirate Bay had used its latest domain name for quite some time. Being quasi-illegal in a wide range of countries, they’ve moved to different top level domains in the past, and now their domain has been seized again. It is not entirely clear who’s responsible, but some people seem to assume it was the Dutch authorities, as .sx is the TLD of the southern half of the Caribbean island of Saint Martin, which goes by the name of [Sint Maarten] (the northern half was colonized by the French back in the old days). When the DNS servers were removed from the domain, the site became unreachable all over the world in an instant. The web servers themselves however remain completely unaffected.

It seems there is no new permanent refuge yet, but for now the website has gotten a new domain name, [], hosted on the volcanic [Ascension Island], which happens to be UK territory. That means that this won’t go on for long though, as some collecting society is sure to press charges soon enough.

So The Pirate Bay has now had Sweden (.se), Greenland (.gl), Iceland (.is), Sint Maarten (.sx, Dutch) and Ascension Island (.ac, UK) in use – and that’s in 2013 alone! The next stop could be the Peruvian TLD .pe, where The Pirate Bay might find another domain name harbor for a longer period of time. And if that doesn’t work out, there are quite a lot of other options left, according to insiders.

Source: [].

So the war of the “content mafia” against the “pirates” seems to never come to an end!

Edit: And a day later, it’s already []…

Edit 2: And we’re back to Sweden with []. Plus, it seems that their .org domain is now always pointing at the current domain, so .org should always get you there, no matter what domain name The Pirate Bay is currently using.

Aug 272013

Ok, a quick one, just so that I don’t forget that piece of information: With my beloved old ICQ 2003a #3800, received files could be stored in folders named after the sender’s user name. Quite handy, as I also linked those against the same users with the same nick names on other services. However, file transfers with good old ICQ 2003a don’t work anymore these days, so I switched over to Miranda on Windows (and licq on Linux) quite some time ago.

Typically, Miranda stores received files in a folder named %miranda_path%\Received Files\%userid%, where %miranda_path% is some sub folder in the user’s profile directory, and %userid% is the ICQ UIN of the sender, a simple numeric value. What I wanted was to get the old ICQ behavior back: I wanted Miranda to store the files in a folder named after the sender’s nick, not the UIN. That piece of information is hard to come by if you don’t want to read the source code. Luckily, I found it on the Miranda forums.

Just use %nick% instead of %userid%. And don’t try stuff like %username%, as this is a Windows-internal variable that Miranda will also interpret on top of its in-core variables. Unfortunately there is no public list of all internal Miranda variables that you can use… but hey, that one I now know at least.
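Spelled out in the notation from above, the whole fix boils down to this one option value (a sketch using my placeholder %miranda_path% – the actual base path depends on where your Miranda profile lives):

```
%miranda_path%\Received Files\%nick%
```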

Jun 202013

Now you may think of John McAfee – founder of the McAfee AntiVirus company – whatever you want, you may even say he’s a criminal or a murderer, but I can’t help thinking the man’s my personal fucking hero!

Accused of illegal gun possession, drug use and alleged murder, he has come a long way since he left the software business. For many years he lived in the small country of Belize, allegedly dealing with local crime lords, sitting in his house with tons of guns, drugs and young girls serving all his sexual desires (just google him, you’ll find some stuff). Suffice it to say the drug thing and the sex thing went hand in hand, or so they say.

It seems he pissed off the corrupt authorities of Belize one too many times, as they started raiding his estate a bit too intensely. So after his neighbor was murdered, he fled and got caught in Guatemala. He released a bunch of YouTube videos while on the run, but I’ve actually never seen any of them, even though I was already fascinated with this crazy asshole. From Guatemala he managed to get deported to the USA instead of back to Belize, by some dubious means. But now he has released a new video, dedicated to actual McAfee AntiVirus users, and it’s so fucking hilarious, just go ahead and watch:

Now somehow I wish the software business had more people like that. Sure, it’d all be chaos then, and it’d be more crime than anything else, but it’s so hilarious – how on earth can I not love this shit? ;) Awesome, John! Crazy and illegal maybe, morally dubious most likely, but damn, awesome!

Oh, and UK residents beware, as rumors are indicating he may be heading your way, especially since he’s got both US and UK citizenship already. ;)

Update: And it seems he even got his [own site], where this has of course already been published! ;)

May 302013

Tonight at around 10:30PM, this server came under attack by what I assume to be a larger botnet. As a result, all CPU-intensive web applications on my server, including this weblog, forums, CMS’ etc. went offline, because the attacks caused excessive PHP CPU cycle usage and timeouts of all scripts. It seems the primary target was this weblog’s login page, but the largely inactive forum was obviously also a target. Log file analysis suggests that this was a distributed brute force attack to break into web services here (and probably a few thousand other web services on other servers). The attacking hosts were many machines spread across multiple IP pools, so I have to assume that this is some major attack to break into multiple web services at once, for purposes currently unknown to me.

Amusingly, the attack failed on this server while still in its infancy. As a security measure, the runtime of PHP scripts is limited on this server. The attack was so massively parallelized that it simply overloaded everything, causing all scripts to return HTTP 503 errors to almost all attacking hosts. In layman’s terms: This server was simply too SLOW for the attackers’ scripts to ever succeed. I can only laugh about that simple fact. This server was just too old and crappy for this shit. ;)

Well, still, for that reason, this weblog and most of this server’s web services went offline for around 2½ hours. For now, everything is back online, and hopefully the brute force attacks have ceased, as my monitoring would suggest. We’ll see.

Apr 162013

There is this guy who one day just wrote me an eMail (I don’t even remember how it started anymore, the correspondence is already really long) and who is also very much into exotic systems, running his own servers at home. Actually even rackmount stuff, pretty crazy. The guy’s called Hans-Jürgen, and he is now finally doing his own weblog!

According to him, he will be writing about technology (x86 and PPC machines, networking, audio engineering) as well as socio-political topics. Note that the blog will be in German though. HJ is also very much into open source software, so you can expect to read about Linux and maybe some server-side software too.

Since I consider him a valuable source of information and a good friend, his blog will also be added to my link list. Oh yes, the link, of course, here it is:

Mar 132013

As I’m sure I have already mentioned more than once, I am often made fun of for sticking to ancient stuff like Windows XP Pro x64 Edition, my favored workstation operating system from the Microsoft corner. Just recently, a guy I know – who also helped me by writing 95% of the PHP code behind the [x264 benchmark results list] – did the same thing by mocking me, designing the “Windows XP bouncer” and posting it in one of my threads in a forum. I actually found that so hilarious that I even copied a small version into this weblog’s banner. ;)

Ironically, that person may very well find himself in my own position if all my prophecies come true and Microsoft decides to go all the way with the Metro UI. For he now swears to stay on Windows 7 if nothing better comes along in the future. Hah, I’ll be laughing my ass off if Windows 9, 10, whatever, all get even crappier. ;)

Well, in the meantime, enjoy the Windows XP bouncer (using one of many silly “dancing” pics that were floating around the internet a few years ago as its base picture):

Windows XP bouncer

Hahahaaoooh… man. Yeah, I guess it’s going to stay an insider joke. ;)