Jan 27 2016

Just yesterday I showed you how to modify and compile the [HakuNeko] Manga Ripper so it can work on FreeBSD 10 UNIX, [see here] for some background info. I also mentioned that I couldn’t get it to build on CentOS 6 Linux, something I chose to investigate today. After flying blind for a while trying to fix include paths and other things, I finally got to the core of the problem, which lies in HakuNeko’s src/CurlRequest.cpp, where the code uses CURLOPT_ACCEPT_ENCODING from cURL’s typecheck-gcc.h. This is critical stuff, as [cURL] is the software library needed for actually connecting to websites and downloading the image files of Manga/Comics.

It turns out that CURLOPT_ACCEPT_ENCODING wasn’t always called that. In cURL version 7.21.6 the former CURLOPT_ENCODING was renamed to CURLOPT_ACCEPT_ENCODING, as you can read [here]. And well, CentOS 6 ships with cURL 7.19.7…
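To quickly check which libcurl your machine actually has, the following should do – curl-config comes with the libcurl-devel package that you need for building anyway:

  • $ curl-config --version
  • $ rpm -q libcurl libcurl-devel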

When running $ ./configure && make from the unpacked HakuNeko source tree without fixing anything, you’ll run into this problem:

g++ -c -Wall -O2 -I/usr/lib64/wx/include/gtk2-unicode-release-2.8 -I/usr/include/wx-2.8 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -D__WXGTK__ -pthread -o obj/CurlRequest.cpp.o src/CurlRequest.cpp
src/CurlRequest.cpp: In member function ‘void CurlRequest::SetCompression(wxString)’:
src/CurlRequest.cpp:122: error: ‘CURLOPT_ACCEPT_ENCODING’ was not declared in this scope
make: *** [obj/CurlRequest.cpp.o] Error 1

So you’ll have to fix the call in src/CurlRequest.cpp! Look for this part:

void CurlRequest::SetCompression(wxString Compression)
{
    if(curl)
    {
        curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)Compression.mb_str(wxConvUTF8));
        //curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)memcpy(new wxByte[Compression.Len()], Compression.mb_str(wxConvUTF8).data(), Compression.Len()));
    }
}

Change CURLOPT_ACCEPT_ENCODING to CURLOPT_ENCODING. The rest can stay the same, as the name is all that has really changed here. It’s functionally identical as far as I can tell. So it should look like this:

void CurlRequest::SetCompression(wxString Compression)
{
    if(curl)
    {
        curl_easy_setopt(curl, CURLOPT_ENCODING, (const char*)Compression.mb_str(wxConvUTF8));
        //curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, (const char*)memcpy(new wxByte[Compression.Len()], Compression.mb_str(wxConvUTF8).data(), Compression.Len()));
    }
}
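If you’d rather keep the source buildable against both old and new cURL releases, a small preprocessor guard placed right after the existing includes of src/CurlRequest.cpp should also do the trick. This is just a sketch on my part, not something the HakuNeko developers ship:

// Compatibility shim: libcurl 7.21.6 renamed CURLOPT_ENCODING to
// CURLOPT_ACCEPT_ENCODING. Older releases (like the 7.19.7 on CentOS 6)
// only know the old name, so map the new one back onto it.
// 0x071506 is version 7.21.6 encoded as LIBCURL_VERSION_NUM.
#include <curl/curl.h>

#if LIBCURL_VERSION_NUM < 0x071506
#define CURLOPT_ACCEPT_ENCODING CURLOPT_ENCODING
#endif

With that in place you could even leave the original CURLOPT_ACCEPT_ENCODING call untouched, as it would then compile against both cURL generations.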

Save the file, go back to the main source tree and you can do:

  • $ ./configure && make
  • # make install

And done! Works like a charm:

HakuNeko fetching Haiyore! Nyaruko-san on CentOS 6.7 Linux!

And now, for your convenience, I’ve also fixed up the Makefile and rpm/SPECS/specfile.spec a little bit to build proper rpm packages. I can provide them for CentOS 6.x Linux in both 32-bit and 64-bit x86 flavors:

You need to unzip these first, because I was too lazy to allow the rpm file type in my blogging software.

The naked rpms have also been submitted to the HakuNeko developers as a comment to their [More Linux Packages] support ticket, which is what you’re supposed to use for that purpose, so you can get them from there as well. Not sure if the developers will add the files to the project’s official downloads.
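In case you’d rather build the package yourself from the patched source instead of grabbing my files, the usual rpmbuild workflow should do – rpmbuild comes with the rpm-build package, and the tarball name below is only a placeholder that has to match whatever the Source0 line in the spec file expects:

  • $ tar -czf ~/rpmbuild/SOURCES/&lt;source-tarball-name&gt;.tar.gz &lt;hakuneko-source-directory&gt;/
  • $ rpmbuild -bb rpm/SPECS/specfile.spec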

This build of HakuNeko has been linked against the wxWidgets 2.8.12 GUI libraries, which come from the official CentOS 6.7 package repositories. So you’ll need to install wxGTK to be able to use the white kitty:

  • # yum install wxGTK

After that you can install the .rpm package of your choice. For a 64-bit system for instance, enter the folder where the hakuneko_1.3.12_el6_x86_64.rpm file is, run # yum localinstall ./hakuneko_1.3.12_el6_x86_64.rpm and confirm the installation.

Now it’s time to have fun using HakuNeko on your Enterprise Linux system! Totally what RedHat intended you to use it for! ;)

Sep 05 2013

When we migrated from CentOS Linux 5 to 6 at work, I also wanted to make our OpenLDAP clients communicate with the OpenLDAP server – named slapd – in an encrypted fashion, so nobody can grab user names or home directory locations off the wire when clients and server communicate. So yes, we are using OpenLDAP to store all user names, password hashes, IDs, home directory locations and NFS exports in a central database. Kind of like a Microsoft domain system. Only more free.

To do that, I created my own certificate authority and a server certificate with a Perl script called CA.pl. What I missed was setting the validity period of both the CA certificate and the server certificate to something really high, so both got the default – 1 year. Back then I thought it was going to be interesting when that ran out, and so it was!

All of a sudden, some users couldn’t log in anywhere anymore, while others still could. On some systems a user would be recognized, but no home directory mount would occur. Courtesy of two different cache levels in mixed states on different clients (the nscd cache and the sssd cache), with the LDAP server itself already being non-functional.
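If you ever chase similarly inconsistent symptoms, it can help to flush both cache layers on a client while debugging. On CentOS 6 something like the following should do; sss_cache comes with the sssd packages, and if your version doesn’t have it, stopping sssd, deleting the cache files under /var/lib/sss/db/ and starting it again achieves the same:

  • # nscd -i passwd && nscd -i group
  • # sss_cache -E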

So, the log files on the clients and the server didn’t tell me anything. ANYthing. A mystery, even at higher log levels. After fooling around for about two hours I decided to fire up Wireshark and analyze the network traffic directly. And what I found was a “certificate invalid” message on the TLSv1/TCP protocol layer. I crafted something similar here for you to look at, as I don’t have a screenshot of the original problem:

Wireshark detecting TLS problems

This screenshot shows a slightly different problem from the one I actually had: here it says “Unknown CA”, whereas my original error was “Certificate invalid”. I just used a wrong CA to replay the issue and show you what it might look like. IP and MAC addresses have been removed from the screenshot.
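If you want to take such a capture yourself directly on the LDAP server, something along these lines should work – the interface name is an assumption, and the ports are the usual ones for plain LDAP/StartTLS (389) and LDAPS (636). The resulting file can then be opened in Wireshark:

  • # tcpdump -i eth0 -s 0 -w ldap.pcap port 389 or port 636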

Kind of crazy that I had to use Wireshark to debug the problem. So clearly, a new certificate authority (CA) and server certificate were required. I tried to do that again with CA.pl, but strangely it failed in the certificate creation process this time. A certificate I then tried to create manually with OpenSSL, as I usually do for HTTPS, didn’t work either: the clients complained that no ciphers were available. I guess I did something wrong there.

However, there is a nice Perl+GTK tool that you can use to build CA certificates, create signing requests, sign them with your CA and export the resulting files in different formats. That tool is called [TinyCA2] and is, as usual, based on OpenSSL. See here:

The process is the same as with plain command-line OpenSSL or CA.pl: first create your own certificate authority, then a server certificate with its corresponding signing request for the CA. Sign that request with the CA certificate, and you’re done; you can now export the CA cert, the signed server cert and the private key to *.pem files or other encodings if necessary. And this time you can make sure that both your CA cert and your server cert get very long validity periods.

Please note that the CA cert should have a slightly longer validity period, as the server cert is created and signed afterwards; a server certificate that stays valid even one minute longer than the signing CA’s own certificate is unsupported and can result in serious problems with the software. In my case I simply gave the CA certificate 10 years + 1 day and the server certificate 10 years. Easy enough.
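For reference, the equivalent with plain command-line OpenSSL should look roughly like this – the file names are made up, 3651 and 3650 days are roughly 10 years + 1 day and 10 years, and the server certificate’s Common Name has to match the host name your clients actually connect to:

  • $ openssl req -new -x509 -days 3651 -keyout ca.key -out ca.crt
  • $ openssl req -new -nodes -keyout server.key -out server.csr
  • $ openssl x509 -req -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt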

After that, put the CA cert as well as the server cert + key on your OpenLDAP server as specified in /etc/openldap/slapd.conf, and the CA cert on your clients as specified in /etc/sssd/sssd.conf – or, if you’re not using RedHat’s sssd, in /etc/openldap/ldap.conf and /etc/nslcd.conf, which are however deprecated and no longer recommended. Also make sure your LDAP server can access the files (maybe chown ldap:ldap or chown root:ldap), and please make sure nobody can steal the private key!
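Just as a sketch of what that can look like – the directive names are the standard slapd.conf and sssd.conf ones, but the paths are purely my assumption, so adjust them to wherever you actually put the files:

# /etc/openldap/slapd.conf on the server:
TLSCACertificateFile    /etc/openldap/certs/ca.crt
TLSCertificateFile      /etc/openldap/certs/server.crt
TLSCertificateKeyFile   /etc/openldap/certs/server.key

# [domain/...] section of /etc/sssd/sssd.conf on each client:
ldap_id_use_start_tls = true
ldap_tls_cacert = /etc/openldap/certs/ca.crt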

Then restart your LDAP server as well as sssd on all clients (or nslcd on all clients if you’re using the old stuff). If an NFS server is also part of your infrastructure, you will need to restart the rpcidmapd service on all systems accessing that NFS as well, or you’ll get a huge ownership mess with all user IDs wrongly mapped to the unprivileged user nobody – which can happen with NFS when your idmapd is in an inconsistent state after fooling around with sssd or nslcd.
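On CentOS 6 with its SysV init scripts that boils down to something like this (service names as shipped with the distribution’s packages):

  • # service slapd restart (on the LDAP server)
  • # service sssd restart (on every client)
  • # service rpcidmapd restart (on every machine involved with that NFS)

Afterwards, a quick $ id someuser on a client – with one of your actual LDAP user names in place of someuser – should tell you whether lookups work again.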

That’s of course not meant to be a guide on how to set up OpenLDAP + TLS as a whole, just on replacing the certificates. So if you ever get “user does not exist” upon calling id user or su – user and people can’t log in anymore, that might be your issue if you’re using OpenLDAP with TLS or SSL. Just never ever delete root from your /etc/passwd, otherwise even you will be locked out as soon as slapd – your LDAP server – goes down! And then you’ll need Knoppix or something similar, and nobody wants to actually get up from their chair and physically access the servers. ;)