04 Aug 2015, 11:47


KpxMerge - Merging KeePassX Databases

If you happen to run into the unpleasant situation of having multiple diverged KeePassX files, have a look at KpxMerge.

This is a small utility I've hacked together to help with merging several KeePassX XML exports.

03 Aug 2015, 11:47

Going static

Moving from WordPress to Hugo

This blog started as a place to collect and share technical tips and tricks about 10 years ago.

All that time it was powered by WordPress. But since it was constantly giving me headaches over security issues, and because I was using virtually no WordPress features, I decided to put an end to that.

What I really wanted was a way to easily write and edit small pieces of text, preferably without HTML, and a way to publish them to the web.

Static site generators (like Jekyll or Hugo) give me everything I need for that. By keeping the source files in Git I can easily edit the content anywhere I have a clone of the repo.

A simple post-receive hook will trigger a rebuild of the site. So publishing is as easy as pushing to Git.

The post-receive hook is reproduced below. Put it in REPO_PATH/hooks (or custom_hooks for GitLab) and make it executable to enable it. For debugging, set the QUIET flag to an empty string.


#!/bin/sh
QUIET="-q"                      # set to "" for debugging
REPO_PATH="/path/to/repo.git"   # TODO change me

while read oldrev newrev refname; do
        branch=$(git rev-parse --symbolic --abbrev-ref $refname)
        unset GIT_DIR
        if [ "x$branch" != "xmaster" ]; then
                echo "Ignoring push to non-master branches"
                exit 0
        fi
        satdir="/tmp/REPO-NAME-TEMP"  # TODO change me
        if [ -d "$satdir/.git" ]; then
                echo "Updating existing satellite"
                cd $satdir
                git pull $QUIET
                git checkout $QUIET master
                git submodule init $QUIET
                git submodule update $QUIET
        else
                echo "Cloning new satellite"
                mkdir -p $satdir
                git clone $QUIET $REPO_PATH $satdir
                cd $satdir
                git checkout $QUIET master
                git submodule init $QUIET
                git submodule update $QUIET
        fi
        cd $satdir
        rsync $QUIET -Haxe ssh $satdir/public/ SERVER:/path/to/docroot/ # TODO change me
done

This setup is performant and flexible, and it also allows for easy collaboration. Collaborators can send in their articles as text (Markdown) or be given access to the Git repository. Since I am using GitLab for repository access management, this is very easy, as is editing articles using the GitLab web interface.

22 Dec 2014, 17:36

rsync: include a subdirectory from an excluded directory

Playing with rsync's filter options tends to get a little bit messy.

Imagine you want to back up some machine holding a set of nested, rotated MySQL backups in paths like this:
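
For example (an illustrative layout, the numbered directories being the rotated generations):

/srv/backup/mysql/localhost/daily/0/
/srv/backup/mysql/localhost/daily/1/
/srv/backup/mysql/localhost/daily/2/
/srv/backup/mysql/localhost/weekly/0/
/srv/backup/mysql/localhost/monthly/0/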

If you were to back up everything you'd get a lot of noise in your backups, since the numbered directories at the end get rotated daily, and the weekly and monthly directories with lower frequency.

I certainly did not want that, so I was looking for an rsync exclude/filter rule that would exclude the whole /srv/backup/mysql folder but still include the most recent directory (daily/0). Since I've done that more than once now, I thought it would be good to write it down.

The following list will achieve that. Please note that it's important to include each directory while excluding the unwanted contents.

+ srv/backup/mysql/localhost/daily/0/
+ srv/backup/mysql/localhost/daily/
- srv/backup/mysql/localhost/daily/*
+ srv/backup/mysql/localhost/
- srv/backup/mysql/localhost/*
+ srv/backup/mysql/
- srv/backup/mysql/*
- srv/backup/*
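
To apply these rules, put them in a file and hand it to rsync as a merge filter; a minimal sketch (file name, source and destination are made up):

rsync -a --filter='merge /etc/rsync-backup.rules' / backuphost:/srv/backup/machine/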

22 Dec 2014, 09:26


Ntimed by phk

02 Feb 2014, 21:20


I just gave Camlistore a try. My first impression: uber-cool, but still very rough and suuuper slow.

But I have no doubt that the guys behind that project will make it fly. They already did some other pretty cool stuff, like memcached and DJabberd.

Camlistore is a personal “Content-Management-System”, similar to git-annex.

01 Sep 2013, 17:03

Replace a failed HDD in an SW RAID

This post describes how to replace a failed HDD in a Linux software RAID and fix GRUB afterwards. This is mostly for my own reference, but it's posted here in the hope that someone may find it useful.

If one of the HDDs in your software RAID has failed and your system is still running, you should first mark all partitions of this device as failed. Let's assume that /dev/sdb failed.

cat /proc/mdstat # check RAID status
# mark the devices as failed, necessary to remove them
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md2 --fail /dev/sdb3
cat /proc/mdstat # check RAID status
# remove the failed devices
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm --manage /dev/md2 --remove /dev/sdb3

Now you should replace the HDD. Usually this means shutting the system down - unless you have a very rare setup which allows for hot-plug of non-HW-RAID disks.

If the failed disk was part of the OS partitions you'll probably need to boot into some kind of rescue system first to perform the following steps.

After the system is back up you have to copy the partition table from the remaining disk to the new one. This used to be done w/ sfdisk; however, since HDDs are getting too big for MBR to handle, many systems are switching over to GPT, which isn't handled by the classic UNIX/Linux HDD tools. Thus we'll be using sgdisk (from the GPT fdisk package).

sgdisk -R=/dev/sdb /dev/sda # copy sda's partition table onto sdb, be sure to get the devices right
sgdisk -G /dev/sdb # randomize GUID to avoid conflicts w/ first disk

Now you can re-add the partitions to the RAID devices:

mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb2
mdadm --manage /dev/md2 --add /dev/sdb3

You should let the system run until the RAID arrays have re-synced. You can check the status in /proc/mdstat.

Perhaps your bootloader got corrupted as well; just in case, these are the steps necessary to re-install GRUB.

mount /dev/md1 /mnt # mount your /
mount /dev/md0 /mnt/boot # mount your /boot
mount -o bind /dev /mnt/dev # mount /dev, needed by grub-install
mount -o bind /sys /mnt/sys # mount /sys, needed by grub-install
mount -t proc /proc /mnt/proc
cp /proc/mounts /mnt/etc/mtab
chroot /mnt /bin/bash
grub-install --recheck /dev/sda
grub-install --recheck /dev/sdb
exit # leave chroot

Now you should wait until all MD Arrays are back in shape (check /proc/mdstat), then reboot.

reboot # reboot into your fixed system

29 Aug 2013, 12:37

Deploying Perl Web-Apps with Puppet

Since I was asked, I'd like to show one way to deploy your Perl web apps using Debian packages and Puppet.

This post assumes that you want to install some Plack webapps, e.g. App::Standby, Monitoring::Spooler and Zabbix::Reporter.

Those are pretty straightforward Plack apps built using Dist::Zilla:

cd Monitoring-Spooler
dzil build

I've built Debian packages of these CPAN modules. Using dh-make-perl this is reasonably easy, although you'll need some switches to make dh-make-perl a little less picky:

dh-make-perl --vcs rcs --source-format "3.0 (native)" Monitoring-Spooler-0.04
After building and uploading these packages to my repository, I use Puppet to install and configure them.

For each of these packages there is a custom Puppet module, e.g. app_standby for managing App::Standby. Such a module creates all necessary directories, manages the configuration and installs the package from my repository. It does not, however, set up any kind of webserver or runtime wrapper. This is done by another module: starman.

For running my Plack apps I use Starman, which is managed by its own Puppet module. It will install the starman package, manage its config and create an Apache config. The interesting part is that I use Puppet to automatically create an Apache config and a simple Plack/PSGI multiplexer app that allows me to handle requests to all three (or more) of these apps using a single Starman instance.

If these apps were busier I'd use a separate Starman instance for each one, but as it is I'm more than happy to avoid the overhead of running multiple Starman instances.

My starman module is invoked from one of my service classes like this:

  class { 'starman':
    configure => {
      mounts    => {
        'zreporter'     => '/usr/bin/zreporter-web.psgi',
        'monspooler'    => '/usr/bin/mon-spooler.psgi',
        'monapi'        => '/usr/bin/mon-spooler-api.psgi',
        'standby'       => '/usr/bin/standby-mgm.psgi',
      },
    },
  }

The starman module itself is pretty straightforward, but below are the templates for the Apache config and the PSGI multiplexer.


<IfModule mod_proxy.c>
<% if @configure.has_key?("mounts") -%>
<% @configure["mounts"].keys.sort.each do |mount| -%>
   <Location /<%= mount %>>
      Allow from all
      ProxyPass           http://localhost:<%= @configure.has_key?('starman_port') ? @configure['starman_port'] : '5001' %>/<%= mount %>
      ProxyPassReverse    http://localhost:<%= @configure.has_key?('starman_port') ? @configure['starman_port'] : '5001' %>/<%= mount %>
   </Location>
<% end -%>
<% end -%>
</IfModule>

use strict;
use warnings;

use Plack::Util;
use Plack::Builder;

my $app = builder {
<% if @configure.has_key?("mounts") -%>
<% @configure["mounts"].keys.sort.each do |mount| -%>
    mount '/<%= mount %>' => Plack::Util::load_psgi('<%= @configure["mounts"][mount] %>');
<% end -%>
<% end -%>
    mount '/' => builder {
        enable 'Plack::Middleware::Static', path => qr#/.+#, root => '/var/www';
        my $app = sub { return [ 302, [ 'Location', '/index.html' ], [] ]; };
        $app;
    };
};


06 Jan 2013, 21:17

scp over IP6 link local address

When trying to copy some large files between two PCs I was annoyed by the slow wifi and connected both with a direct LAN cable. But since I didn't want to manually configure IPs, I remembered that IPv6 gives each interface a unique link-local address immediately. So I went on to figure out how to correctly pass this address to ssh/scp.

Let's assume a link-local address of fe80::f4de:ff1f:fa5e:feee.

First try:

ssh user@fe80::f4de:ff1f:fa5e:feee

This won’t work because these link-local addresses are only unique on a given link. So you need to specify the interface as well.

Second try:

ssh user@fe80::f4de:ff1f:fa5e:feee%en0

Of course you have to change the interface identifier according to your system. On Linux this would most probably be eth0.

This was fine for ssh, but for scp I did also need a remote source directory.

Third try:

scp -6 user@fe80::f4de:ff1f:fa5e:feee%en0:/some/path/

This won’t work because the colons confuse ssh. You need to be more explicit by using square brackets.

Fourth and final try:

scp -6 user@[fe80::f4de:ff1f:fa5e:feee%en0]:/some/path/ .

Figuring out why this is not an auto-generated link-local address is left as an exercise for the reader (but it's not relevant for this post).

04 Dec 2012, 18:04

GitLab behind a Reverse-Proxy

Running GitLab behind a (second) Reverse-Proxy over https?

In that case you should undefine the

proxy_set_header   X-Forwarded-Proto

header in the backend nginx and set it in the frontend nginx. Otherwise GitLab/RoR will redirect you to http on various occasions. You should also set https in config/gitlab.yml.
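
In nginx terms that boils down to something like this sketch (the upstream name is illustrative; the frontend terminates https):

# frontend nginx
location / {
    proxy_pass         http://gitlab-backend;
    proxy_set_header   X-Forwarded-Proto https;
}
# backend nginx: do not set X-Forwarded-Proto at all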

15 Oct 2012, 07:47

Escaping Filenames with Whitespaces for rsync

If you need to sync from a source path containing whitespace you'll need to escape it for the remote shell as well as for the local shell, so your command may look like this:

  rsync -ae ssh 'user@host.tld:/path/with\ some/gaps/' '/another/one with/ a gap/'

Don't escape the whitespace on the local side twice or you'll end up with weird filenames!


08 Oct 2012, 18:44

git: Rebase vs. Rebase

For some time I've wondered about git rebase. At some point I realised that there is not one use for git rebase but (at least) two distinct ones.

  • On the one hand git rebase is used to rebase a branch onto an updated source branch (source like in origin, not in source code).
  • On the other hand it’s used to rewrite commit history.

What’s rebase?

A rebase makes git rewind your commit history up to a certain point and then re-apply your patches onto another starting point. The starting point depends on the exact type of rebase you do and what you tell git.

Rebase a branch

Why would you want to rebase a branch onto another one and what does it do?

If you have a feature branch and want it merged into your master branch you could of course merge it. If you do so and your feature branch is not based on the last commit in master, git will create a new merge commit since it has to merge two branches of a tree. If you rebase your feature branch onto the tip of master first, you get a linear history without a merge commit instead.

Rewrite a branch

Why would you want to rewrite your commit history? What if you make mistakes while crafting a new branch? If you're following common VCS advice you'll probably commit often, probably committing between some mistake you've made and the time you corrected it.

Now you’d have your shiny new feature commit and two or more commits w/ bugfixes to your initial commit in this feature branch. Some people prefer to keep the history of their mistakes. Those could just merge this new feature branch into their master branch.

Others prefer to keep their commit history clean (you wouldn't release a book that includes all the mistakes and corrections you've made, would you?). Here git rebase -i comes to the rescue. This rebases the current branch onto itself. That may sound a bit silly, but it allows you to drop commits or, more importantly, to combine (squash) commits!
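
Both flavours in command form (branch names are examples):

git checkout feature
git rebase master     # replay the feature branch onto the tip of master

git rebase -i HEAD~3  # interactively reorder, drop or squash the last three commits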

Where to go from here?

This post wasn't meant to give a complete introduction to git rebasing. There are plenty of great tutorials out there. I just wanted to highlight some points which were important to me.

08 Oct 2012, 17:16

Watching 3D Movies in 2D using Mplayer

In case you've got a 3D movie but no 3D-capable output device, try mplayer w/ the

-vo gl:stereo=3

switch to get it in 2D. It works at least on Linux w/ an Nvidia Card.
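
A complete invocation would then look like this (the filename is a placeholder):

mplayer -vo gl:stereo=3 some-3d-movie.mkv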

21 Aug 2012, 16:12

Perl Tutorials

Looking for Perl Tutorials? Go ahead: http://perl-tutorial.org/.

16 Jul 2012, 14:24

Options for running Linux-Vserver on Debian wheezy


If you're running Debian squeeze with a Linux-Vserver kernel you'll soon have to face the fact that support for non-mainline virtualization patches will be dropped from Debian stable.

The Debian kernel team has stated very clearly that they won't continue to provide custom patched kernel packages. In general I think that is a very good decision. Taking the workload for the team into account, and the unwillingness of the Linux-Vserver and OpenVZ maintainers to cooperate with Debian, it is very understandable.

So what do you do now if vservers run your business?

These are the options I could think of so far; feel free to suggest more:

  • Stay with Squeeze
  • LXC
  • KVM w/ Squeeze VM
  • VMWare ESXi w/ Squeeze VM
  • Custom patched Kernel
  • Xen w/ Squeeze domU

Staying with Squeeze

If you plan to stay with squeeze you're good to go for quite a while. Of course squeeze security updates will end some time after the wheezy release, but what do you do about newer hardware which is not supported by squeeze? So, not an option I think.


LXC

Linux Containers (LXC) are the preferred contextualization from wheezy on. They are maintained within the mainline kernel and are said to have very good integration with it. The biggest drawback, however, are the userspace tools. While the team developing those used to be quite active, it has slowed down a bit recently without having brought the tools anywhere close to util-vserver - which isn't perfect either.

KVM w/ Squeeze VM and Vserver Kernel

You could run wheezy or squeeze w/ a backports kernel on your host and run a squeeze vserver kernel inside KVM. That sounds ugly and means having to set up a network bridge on your host.


Migrating to KVM

Of course you could also turn all your vservers into KVM VMs. This is a lot of work and means completely migrating to an entirely different virtualization architecture. Not very nice.


VMWare ESXi w/ Squeeze VM

Long story short: the management of an ESXi is a PITA.

Xen w/ Squeeze VM and Vserver Kernel

Same as KVM w/ Squeeze kernel. See above.


Migrating to Xen

Same as KVM. See above.

Custom patched Kernel

While the Linux-Vserver team isn't always thrilled about Debian, they are still very active and continue to provide patches for recent kernels. The biggest drawbacks here are that you have to take care of security updates yourself and that you need to build a custom set of util-vserver tools. Older versions from squeeze won't work with newer kernels.

20 May 2012, 16:02

Migrating to PSGI/Plack

Have you heard of PSGI/Plack?

It's that awesome Perl meta-webframework you've been looking for. Unfortunately it's not that easy to get started with, because the documentation is a bit too euphoric. They talk about "superglue" and "duct tape for the web" but fall a bit short of explaining how to get started with it.

Shortcut: If you're using one of the supported frameworks (like Catalyst or CGI::Application 4.5+) you shouldn't need to worry about any of this. How to get those running w/ PSGI is explained in sufficient detail in various docs.

This post is for those who need to do it w/o a framework or want to know the internals. Well, I'm not going into great detail in this post, only the essentials.

  • What is PSGI? PSGI is a specification of a protocol spoken between a PSGI-compliant server and the web app. The web app talks PSGI to its executing server, and this server talks whatever it likes to its downstream (e.g. HTTP, FCGI, …).
  • What is Plack? Plack is a set of tools implementing PSGI. It provides some PSGI-compliant server implementations on its own and helps others build their PSGI servers. It's also some kind of meta-framework ("middleware") that implements things like session management, authentication and compression. You don't have to use those features! You don't even need to know that they are there, but if you want you can have a look.
  • What is $env? This is a HashRef containing the environment for your request. Why env, you ask? Well, probably because CGI gets its parameters passed via the OS environment, and they just adopted that for PSGI.
  • What is the PSGI-triplet? Well, that's just the name I gave to the data structure PSGI expects to get returned after a request has been processed. It contains the HTTP status code, an ArrayRef with the HTTP header pairs and an ArrayRef containing the body. A minimal example is shown below.

So, how do I migrate my framework-less Perl web application to being PSGI compliant?

  • Throw out CGI, CGI::Carp and possibly FCGI. You won't be using them anymore. You'll be using Plack::Request instead.
  • Make your app "persistence-compatible". That means you'll have to abandon any global (class) variables that are only valid for one request. Every class variable must be valid throughout the entire runtime of your server (because your class is instantiated only once, at the PSGI server's startup; there are exceptions to this, but keep it as a rule of thumb). Every piece of information that is only valid for one request must be passed between the methods. If you have much to pass around, put it into a custom request class or a HashRef.
  • Make sure your class has a method handling the request. You probably have that already, but you should name it 'run'. It will get passed the $env. Use that directly or create your custom per-request data structure from there.
  • Remove any direct output to STDOUT. Make your app return the PSGI-triplet to the caller. Everywhere.
  • Create a webapp.psgi, see below for its content, and possibly a webapp.pl if you want to support plain old CGI.
  • plackup webapp.psgi
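
For reference, a minimal PSGI app returning such a triplet looks like this (a bare sketch, no framework involved):

my $app = sub {
    my $env = shift;    # the PSGI environment HashRef
    return [ 200, [ 'Content-Type' => 'text/plain' ], ['Hello PSGI'] ];
};

The webapp.psgi wrapping your frontend class then looks like this: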

use strict;
use warnings;

use lib '../lib';

use MyApp::Web::Frontend;

# The important part here is to instantiate your WebApp Class before
# the closure is defined. Everything inside the sub is executed on each
# request. If you instantiate your class inside you'll loose any benefit
# you get by not using CGI.
my $Frontend = MyApp::Web::Frontend::->new();
my $app = sub {
    my $env = shift;

    return $Frontend->run($env);
};


use strict;
use warnings;

# Warning: This script will only work properly when invoked with
# the correct environment. Plack::Loader tries to autodetect the
# proper server and will use CGI only if certain CGI environment
# variables are set. It will most specifically not work properly
# when run from the commandline.
use Plack::Loader;
use Plack::Util;

my $app = Plack::Util::load_psgi("webapp.psgi");

# hand the app to an auto-detected server backend (CGI in a CGI environment)
Plack::Loader->auto->run($app);

02 May 2012, 13:20

Stop writing useless programs


It’s kinda rude, but they aren’t entirely wrong …

13 Apr 2012, 08:00

PHP is broken

PHP: a fractal of bad design via Fefe.

08 Apr 2012, 08:00

No CACert in Ubuntu?

It looks like the CACert root certificate is not included in Ubuntu by default. Why did they choose not to support this authority?

This wiki page lists some pointers, but it’s annoying anyway.

04 Apr 2012, 08:00


Debugging DBI with DBI_TRACE

If you happen to encounter strange DBI-connection-lost issues, like me, it's worth trying out DBI_TRACE:

DBI_TRACE=2=dbitrace.log perl yourdbiscript.pl

This will write a very, very detailed and helpful logfile to the current directory. It helped me to identify a nasty fork()-related bug.

For the record: I was experiencing some strange "connection to MySQL server lost" issues with a single-threaded script (no forking either). After some time a colleague pointed me to the DBI_TRACE documentation and I found out that I was incorrectly using a library that did some work in a fork(). The problems began when this fork finished, since it closed all filehandles, including the one to the database server.

03 Apr 2012, 19:38

Ubuntu on ThinkPad X220

Recently I’ve got my hands on a new ThinkPad X220 (not mine, though :( ) and tried Ubuntu on it.

For various reasons I didn't want to get rid of the pre-installed Windows, so I've decided to go for dual-boot. When I got the ThinkPad it had three partitions: one Windows partition, one recovery partition and, I think, one EFI partition.

I was just about to boot it with a GParted boot disk when I discovered that Windows is able to resize its own partition. So I shrunk the Windows NTFS partition to about 50% of its initial size and made space for Ubuntu. Afterwards I launched the Precise Pangolin (12.04) installer and created a small boot partition and a large, encrypted LVM partition. I did so using a TFTP-netboot installer image, since the graphical installer doesn't support this. I learned that when I installed Ubuntu on an X220 the last time (also not mine, bad luck).

After the installation was finished I booted into the new system and got … nothing. Only a blank screen. However, the disk was busy, so I suspected that there was some issue w/ the graphics. Since the X220 is equipped w/ an Intel HD3000 chipset (i915 kernel module), I thought that was unlikely. It turned out that I was able to switch to another VT using CTRL+ALT+F1, log in and install "Ubuntu Desktop" using "tasksel". It seems the netboot installer didn't pull in this task. Afterwards I was able to log in to Gnome.

The last step was to tweak the power usage a little; head over to this post for the details on how to do it using powertop. However, powertop so far has no way of storing the tunables permanently, so you should install the package "laptop-mode-tools" which does most of the tweaks for you.

However, this has one major drawback so far (tested on X220 and X220i with 11.10 and 12.04, same results): shutdown is broken. Every time I shut the ThinkPad down it immediately reboots. This is annoying and I'm still investigating this issue. Update: It seems to be an ACPI issue in the kernel. See Ubuntu Bug #972722 for more details.

If you're going to buy a ThinkPad, be very, very alert about the little differences. Lenovo sells entirely different hardware under the ThinkPad brand with only minor variations in the naming. The ThinkPad X220 and X220i are basically the same; the X220i just has a Core i3 CPU which isn't available for the regular X220. But the X220 Tablet is totally different, and so are the other so-called "ThinkPads". Some are what you expect from a "ThinkPad" (if you're used to IBM ThinkPads), but some are of really, really poor quality. I've seen one which didn't deserve the name ThinkPad at all. If you ask me, Lenovo is hurting itself by weakening this really strong brand, but apparently they are happy with it …

Update: I'd suggest getting the Mini Dock 3 instead of the UltraBase. The Mini Dock comes with the 90W power supply and two DVI outlets.

Update: If you wonder how to insert the SIM card for the WWAN module, see this page.

19 Mar 2012, 19:18

Pepper SCM

I've been looking for a decent SCM statistics tool for some time. Recently I've stumbled across Pepper.

Having a strong aversion against non-packaged installations, I've created a Debian package.

You can grab my packages from packages.gauner.org (see the VBoxAdm download page for information on how to set the repo up).

apt-get install pepper
It will pull in the git, mercurial-common, asciidoc and xmlto packages.

19 Feb 2012, 22:08

DJabberd for Debian

DJabberd is a nice, small Jabber daemon written in Perl. After being fed up with ejabberd I gave DJabberd a try and it was well worth it.

OOTB it was a bit difficult to set up, but after creating proper Debian packages it works pretty well.

You can grab my packages from packages.gauner.org (see the VBoxAdm download page for information on how to set the repo up).

apt-get install djabberd libdjabberd-authen-htdigest-perl libdjabberd-rosterstorage-sqlite-perl
You can find my modifications at GitHub: https://github.com/dominikschulz

13 Feb 2012, 11:29

Installing Ubuntu 11.10+ alternate via USB

If you're not satisfied with the options the standard Ubuntu installer provides and want more flexibility, e.g. installing on LVM and/or crypto devices, spare yourself the hassle I had and do it this way:

  • get UNetBootIn
  • attach the USB stick
  • select "Ubuntu" -> "11.10_NetInstall_x64"
  • boot the target system from the stick

Everything else, esp. using the HDMedia option, does not work, as documented in several bug reports. The NetInst method, in contrast, seems to work pretty flawlessly.

06 Dec 2011, 11:07

Debian Package Dependency Graphing

While cleaning up some package dependencies I've stumbled upon debtree. Have a look, it's worth it. It generates pretty pictures. The policy manual may come in handy as well.

02 Oct 2011, 16:13

VBoxAdm 0.1.15

I've just uploaded VBoxAdm 0.1.15. It includes another set of major refactorings. Please be careful when upgrading and watch your logfiles closely for any errors that may occur. The Vacation module in particular was refactored.

The time when the project will leave its alpha stage is drawing closer. VBoxAdm is now running on several largish sites under my direct administrative control, so I've got plenty of possibilities for some real-world testing. Several other migrations/installations are planned for the near future. Once it has proven sufficiently stable on these mailservers I'll announce the end of the alpha phase and enter beta testing.

Stay tuned!

17 Jul 2011, 13:01

Squid - HTTPS broken w/o tcp_outgoing_address

If you happen to run a squid inside a Linux-Vserver you should make sure that you've set tcp_outgoing_address to the primary IP of the vserver, or you'll encounter strange issues resulting in https not working from within the vserver:

1310903352.277      0 TCP_MISS/500 0 CONNECT bugzilla.redhat.com:443 user NONE/- -
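
The fix is a single line in squid.conf (the address is an example):

tcp_outgoing_address 192.0.2.10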

10 Jul 2011, 16:57

GIT: Rewriting commit history - change author and email

Quicklink: How to change author and email in the whole commit History: http://theelitist.net/git-change-revision-author/

10 Jul 2011, 16:56

Memcache Sessions may cause zend_mm_head corrupted in PHP5

PHP5 (5.3.3 in this case) may break in very surprising ways if the memcached configured as a session handler goes awry.

In my case there was a webserver (Apache2 + mod_php5) w/ two memcached instances configured as session handlers. One of those memcached got stuck and didn't properly reply to requests. This shouldn't happen, but what's even worse was that PHP5 just "died" with the following error in the syslog:

vs-www-s01 suhosin[32504]: ALERT - zend_mm_head corrupted at 0x7fc2f41a2090 (attacker '', file '/var/www/index.php')

This resulted in empty pages delivered to the browser.

After fixing the memcached everything was fine again.

30 Jun 2011, 08:00

Perl UTF-8 Checklist aka Surviving the Perl Unicode Madness

Some time ago, when I wrote the first version of this post, I thought I had mastered UTF-8/Unicode with Perl and MySQL. Sadly I was very, very wrong. So I had to revisit the topic, and I'd like to share my findings in the hope that they can save some coders from going nuts.

First you should read "Why does modern Perl avoid UTF-8 by default?" on Stackoverflow, especially the top-voted answer. It is the best resource on UTF-8 and Perl I've found so far.

The next stop would be "UTF8, Mysql, Perl and PHP" on gyford.com. Pay special attention to the "utf8::decode( $var ) unless utf8::is_utf8( $var );" part. However, I'd suggest using Encode::decode and Encode::is_utf8 instead. The important lesson to take away here is that you may still need to "decode" the bytes coming from the database into Perl's internal UTF-8 representation. Once Perl knows it's dealing with UTF-8 it will probably handle the strings correctly. Unfortunately, sometimes the conditional decode doesn't work … in those cases you can try to decode the data w/o checking whether it is already UTF-8 first. Brave new world …
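
In code the conditional decode looks roughly like this (a sketch; $dbh, the table and the column are made up):

use Encode ();

# fetch raw bytes from the DB ...
my ($subject) = $dbh->selectrow_array('SELECT subject FROM mails WHERE id = 1');

# ... and decode them into Perl's internal representation,
# but only if Perl doesn't consider them decoded already
$subject = Encode::decode('UTF-8', $subject) unless Encode::is_utf8($subject);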

If you still need more advice I suggest the following links, in this order:

29 Jun 2011, 08:00

Number of Host headers in a TCPDump

One-liner: get the number of Host: headers from a tcpdump capture:

ngrep -I /tmp/tcpdump | grep "Host:" | perl -e'while(<>){if(m/\.\.Host: (.*?)\.\./){$h{$1}++}};for $h(keys%h){print"$h - $h{$h}\n";}'

28 Jun 2011, 08:00

VMWare ESXi - Commandline Tools

Rebooting VMWare ESXi VMs from the Hypervisor Shell:

vim-cmd vmsvc/getallvms          # list all VMs and their IDs
vim-cmd vmsvc/power.reboot NN    # reboot the VM with ID NN

27 Jun 2011, 08:00

Mount KVM Images

Want to mount a KVM image?

losetup /dev/loop0 foo.img
kpartx -av /dev/loop0
mount /dev/mapper/loop0p1 /mnt
umount /mnt
kpartx -dv /dev/loop0
losetup -d /dev/loop0


16 Apr 2011, 13:14

VBoxAdm: Mailinglist and API

A short status update regarding VBoxAdm.


Mailinglist

Finally I've created a mailinglist: http://www.vboxadm.net/support.html#mailinglist


API

I've been refactoring the code for a while to turn it into more of an MVC shape. This means separating the model from the controller (former VBoxAdm::Frontend, now VBoxAdm::Controller::Frontend). The ultimate goal of this work is to support code reuse and multiple ways to manipulate the data. Once the model classes are stable I'll finish the command line interface as well as the HTTP API. This will provide three ways to modify the underlying data:
  • Web Frontend
  • HTTP-API (no REST for now, maybe later)
  • CLI
The Mailarchive is postponed for the time being.

Auto Configuration

Most mail clients, like Outlook, Thunderbird and KMail, support a way of client auto-configuration. When setting up a new mail account they request a certain URL derived from the mail address, and if they find an XML document with the expected information there, they'll use it to set the correct username, mailserver and protocols. Support for this was added recently. There is even support for the weird way MS Outlook does this. However, Outlook support is, so far, based solely on the documentation on Technet. Due to the lack of an Outlook license I wasn't able to test it. Please provide feedback.

Future Work

After the refactoring, API and CLI are finished I'm going to look into the mail archive again. After that I'll look into quota support, Mailman integration, and I'd like to find a way to get the Postfix logs into the database to ease support work. Having the log in the database in a parsed format - no raw syslog to DB - would make support requests easier to handle. No more need to log into the server and grep through the mail.log.

Further feature requests are always welcome. Please direct any ideas and comments to the mailinglist at vboxadm@vboxadm.net.

10 Mar 2011, 06:37

Postfix: partial relaying to Exchange

Recently I’ve tried to migrate an Exim4 mailserver to Postfix. This was pretty straightforward, however there was one issue that took me some time to figure out.

The Exim mailserver handled several domains, and one of those had mailboxes both locally and on some remote Exchange mailserver. Exim accepted known local mailboxes and performed recipient callout verification against the Exchange for the others; if the Exchange accepted a recipient, Exim did so as well.

The friendly guys at the postfix.user mailing list tried to help with that, but either I did not make my problem clear or I missed some information. However, they couldn't help me with this.

After reading a while in the (German) Postfix Buch, I found a setup that led me to the solution in the end: define the (partially) local domain as a relay domain and let the transport map decide whether the mail gets relayed to some remote host or to a local transport.

That worked out pretty well, and this is how I did it.

First I included the domain in question, let's say 'domain.tld', in my relay_domains:

relay_domains = domain.tld

Then I added a transport map in main.cf, pointing to an SQL file:

transport_maps = mysql:/etc/postfix/virtual_transport_maps.cf

In this file I defined a rather complex SQL query that returns my local transport (dovecot) for known mailboxes and defaults to the remote Exchange host for unknown mailboxes.

query = SELECT DISTINCT IFNULL((SELECT 'dovecot:' FROM domains AS d LEFT JOIN  mailboxes AS m ON m.domain_id = d.id WHERE d.name = '%d' AND  m.local_part = '%u' AND d.is_active AND  m.is_active),'smtp:[exchange.domain.tld]') FROM domains WHERE  'domain.tld' = '%d' AND NOT LOCATE('+','%u');

Then I added reject_unverified_recipient to smtpd_recipient_restrictions. This rejects any recipient which can not be verified at the Exchange server.
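
In main.cf this amounts to something like the following (the surrounding restrictions are just an example):

smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_unverified_recipient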

It also works with VBoxAdm.

Update: I've added the AND NOT LOCATE('+','%u') part at the end of the query to avoid getting false positives if recipient_delimiter=+ is set. You should set the '+' in the query to whatever your recipient delimiter is, or just drop this part of the WHERE clause if you haven't set recipient_delimiter.

19 Feb 2011, 12:47

Dell iDRAC6 - Reset Password

If you happen to forget your iDRAC password you can change it from within the running OS. First you need to install Dell OpenManage; then you can use this command to change your password:

/opt/dell/srvadmin/bin/idracadm config -g cfgUserAdmin -i 2 -o cfgUserAdminPassword "NEWPASSWORD"

06 Feb 2011, 09:51

Debian Squeeze Highlights

My personal highlights of this great Debian release are the availability of a special non-free netinst CD including the firmware needed by all those nasty Broadcom NICs, and the possibility to just cat any CD image onto a USB stick for installation. No more messing about with "hd-media".
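
The latter means creating an install stick is as simple as this (device name is an example; double-check it before overwriting a disk):

cat debian-6.0.0-amd64-netinst.iso > /dev/sdX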

05 Feb 2011, 10:30

VBoxAdm Setup

The latest issue of the German Linux Magazin had a nice article about VBoxAdm.

They criticized that the installation on OpenSuSE is painful, and I think they are right. The development of VBoxAdm is done on Debian and I have little knowledge about other distributions. If someone were able to help out with packaging for OpenSuSE and other distributions, that'd be great.

Meanwhile I'm working on an improved setup script that will handle most of the setup and make installing and setting up VBoxAdm a breeze.

05 Feb 2011, 09:54

Debian Squeeze Release in Progress

Thanks to the liveblogging at identi.ca/debian everyone can follow the release progress during this weekend. Too bad there's no release party nearby …

24 Jan 2011, 21:38

Linux IProute - Source based routing

Source-based routing is useful if you want to divert your traffic to different outgoing network interfaces based on its source IP. Of course this is only useful if your system has more than one IP address and network interface.

The key to source-based routing is the concept of multiple routing tables. Each of these routing tables has its own set of routes, including lo and a default gateway. These tables are populated using the ip route command with a table NR suffix. The packets enter these tables if they are directed there by rules created with the ip rule command.

If you want to handle all traffic from sourceip/netmask via interface ethX, you need two rules for this traffic and some default rules for the remaining traffic, as shown below.

ip rule flush

ip rule add prio 200 from <sourceip/netmask> lookup 250
ip rule add prio 32700 from all lookup main
ip rule add prio 32750 from all lookup default

ip route add 127.0.0.0/8 dev lo table 250
ip route add <destip/net> dev ethX table 250
ip route add default via <defaultgw> table 250
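
You can inspect the result with:

ip rule show
ip route show table 250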

01 Jan 2011, 14:50

IPTables Passive FTP Connection Tracking on non-standard ports

Ever tried to run a Linux FTP server behind an IPTables firewall on non-standard ports, i.e. not on port 21?

The problem is that the FTP connection tracking module nf_conntrack_ftp only watches port 21. If you want to use other ports, the module must be loaded with the ports parameter, e.g.

modprobe nf_conntrack_ftp ports=21,5367

if you want to run an ftp server on port 21 and one on port 5367. The usual other iptables rules apply, too.

22 Dec 2010, 19:07

Tutorial: ISP Mail Server mit VBoxAdm

As documentation for VBoxAdm, and as the successor to my ISP mail server tutorial, I've written a new tutorial based on Postfix, Dovecot, VBoxAdm and Debian squeeze. A first draft can be found at "ISP Mail Server mit VBoxAdm, MySQL, SpamAssassin, Postfix und Dovecot auf Debian GNU/Linux Squeeze".

For the moment it's only available in German; at some point maybe in English as well.

19 Dec 2010, 17:38

Handling salted passwords in Perl

While working on VBoxAdm I came to a point where I wanted to support salted passwords. Salted passwords, however, are a bit tricky, since you need to extract the salt from the existing password hash, hash the user-supplied password together with it, and compare the result. You can't simply hash the supplied password with a random salt; that would produce a different hash.

If you're not into salted hashes, I'll try to explain how salted hashes are generated, why they are used and how you can handle them.

What are salted hashes and why do you need them?

The simplest way to store passwords in a database is plaintext. Let's assume for the remainder of this post that we'll use the password 'password' and the salt 'salt'. With plaintext you just store the string 'password' in your database. That is easy to implement, since you don't need to do anything with the password, and it gives you much flexibility, since you can do anything with the password, e.g. support challenge-response authentication schemes. However, this has a major drawback: the (database) administrator, and perhaps others, as well as any intruders that gain access to your database, can see all users' passwords. In a perfect world this wouldn't be a big problem, since all users should use a unique password for every service they use. Yet in the real world there is a small but important problem: many users will re-use their passwords for other services. This would allow an intruder to abuse the passwords to access (maybe) much more important accounts of your users, causing much trouble for them and possibly bad PR for you.

It'd be much better if the passwords were encrypted in some way so the cleartext is not directly visible in the database. This would help with security and give your users more privacy. At least I would not want anybody to know what passwords I use; not even the administrator of the site I use the password for. This is where hashing comes into play. Hashes are cryptographic one-way functions that produce an (almost) unique output for a given input. The same input always produces the same output, and it is a very rare occasion that two different inputs produce the same output (which would be called a collision). This solves much of the initial problem, but some issues remain: if two users use the same password, this is immediately visible, because they will have the same hashes. Another issue is that you can easily look up the cleartext for a given hash in a rainbow table. I'm sure that at least intelligence services will have huge rainbow tables.

Luckily someone came up with the idea of salted hashes. A salted hash is generated by creating a random salt for every hashing operation and appending this salt onto the cleartext. Since there must be a way to compare hashes later the salt is usually appended to the resulting hash and can be extracted from there for subsequent hashing and comparison. This makes salted hashes pretty much safe against a wide area of attacks and meets many privacy requirements.

How to deal with these salted hashes in Perl?

I'll use the salted hashes created by Dovecot's dovecotpw utility as an example, since these were the hashes of my concern and they aren't the most simple ones. These hashes are created by generating a random salt, appending it to the cleartext, hashing that, and appending the salt to the hash:

SSHA(password, salt) = SHA1(password + salt) + salt

The SHA1 hash uses 20 bytes of (binary) data for the hash. The salt is variable length and consists of binary data. This is important: the salt is not just ASCII (7-bit) or UTF-8 (variable length, limited value range) but uses the full eight bits of each byte to allow for maximum entropy. If you doubt that this matters, get a good book on cryptography and read up on randomness, entropy and cryptanalysis. The problem here is that Perl is tailored to deal with text data and numbers, but not binary data. In C this would be an easy task, but Perl needs special care to handle these requirements.

Generating a salted hash is pretty easy: generate a number of valid byte values (0-254) and pack() them into a binary string, e.g.:

sub make_salt {
    my $len   = 8 + int( rand(8) );
    my @bytes = ();
    for my $i ( 1 .. $len ) {
        push( @bytes, rand(255) );
    }
    return pack( 'C*', @bytes );
}
However, if you want to compare a user-supplied password with the salted hash in your database, you have to take some more steps. You first need to retrieve the old salted hash from the database, extract the salt, and hash the user-supplied password together with this salt. If you used another salt the resulting hashes would differ, and the user would not be able to log in although the passwords match.

In my implementation I separated the code into a split_pass method that dissects the old hash and extracts the salt (and possibly the password scheme used), and a make_pass method that takes a cleartext, a password scheme and a salt to generate a hashed password. The resulting hash is then compared with the stored hash, and if they match, the user may log in or do whatever the authorization was requested for.

The split_pass method basically strips off the password scheme stored in front of the hash between curly brackets, decodes the Base64-encoded hash, unpack()s it and packs everything after the hash again into a binary string using pack('C*', @salt). The pack() template C represents a one-byte char, which is what we need here.

For the actual implementation I suggest you look at the Perl Module VBoxAdm::DovecotPW distributed with VBoxAdm.

package VBoxAdm::DovecotPW;

use strict;
use warnings;

use MIME::Base64;
use Digest::MD5;
use Digest::SHA;

# raw digest length in bytes for each salted scheme
my %hashlen = (
    'smd5'    => 16,
    'ssha'    => 20,
    'ssha256' => 32,
    'ssha512' => 64,
);

# Usage      : my $hash = VBoxAdm::DovecotPW::plain_md5('pwclear');
# Purpose    : hash the given cleartext password
# Returns    : the hash, prefixed with the scheme name
# Parameters : the cleartext password
# Throws     : no exceptions
# Comments   : none
# See Also   : http://wiki.dovecot.org/Authentication/PasswordSchemes
sub plain_md5 {
    my $pw = shift;
    return "{PLAIN-MD5}" . Digest::MD5::md5_hex($pw);
}

sub ldap_md5 {
    my $pw = shift;
    return "{LDAP-MD5}" . pad_base64( Digest::MD5::md5_base64($pw) );
}

sub smd5 {
    my $pw = shift;
    my $salt = shift || &make_salt();
    return "{SMD5}" . pad_base64( MIME::Base64::encode( Digest::MD5::md5( $pw . $salt ) . $salt, '' ) );
}

sub sha {
    my $pw = shift;
    return "{SHA}" . MIME::Base64::encode( Digest::SHA::sha1($pw), '' );
}

sub ssha {
    my $pw = shift;
    my $salt = shift || &make_salt();
    return "{SSHA}" . MIME::Base64::encode( Digest::SHA::sha1( $pw . $salt ) . $salt, '' );
}

sub sha256 {
    my $pw = shift;
    return "{SHA256}" . MIME::Base64::encode( Digest::SHA::sha256($pw), '' );
}

sub ssha256 {
    my $pw = shift;
    my $salt = shift || &make_salt();
    return "{SSHA256}" . MIME::Base64::encode( Digest::SHA::sha256( $pw . $salt ) . $salt, '' );
}

sub sha512 {
    my $pw = shift;
    return "{SHA512}" . MIME::Base64::encode( Digest::SHA::sha512($pw), '' );
}

sub ssha512 {
    my $pw = shift;
    my $salt = shift || &make_salt();
    return "{SSHA512}" . MIME::Base64::encode( Digest::SHA::sha512( $pw . $salt ) . $salt, '' );
}

sub make_pass {
    my $pw     = shift;
    my $scheme = shift;
    my $salt   = shift || &make_salt();
    if ( $scheme eq 'ldap_md5' ) {
        return &ldap_md5($pw);
    }
    elsif ( $scheme eq 'plain_md5' ) {
        return &plain_md5($pw);
    }
    elsif ( $scheme eq 'sha' ) {
        return &sha($pw);
    }
    elsif ( $scheme eq 'sha256' ) {
        return &sha256($pw);
    }
    elsif ( $scheme eq 'sha512' ) {
        return &sha512($pw);
    }
    elsif ( $scheme eq 'smd5' ) {
        return &smd5( $pw, $salt );
    }
    elsif ( $scheme eq 'ssha' ) {
        return &ssha( $pw, $salt );
    }
    elsif ( $scheme eq 'ssha256' ) {
        return &ssha256( $pw, $salt );
    }
    elsif ( $scheme eq 'ssha512' ) {
        return &ssha512( $pw, $salt );
    }
    else {
        return "{CLEARTEXT}" . $pw;
    }
}

sub make_salt {
    my $len   = 8 + int( rand(8) );
    my @bytes = ();
    for my $i ( 1 .. $len ) {
        push( @bytes, rand(255) );
    }
    return pack( 'C*', @bytes );
}

# this method was copied from some module on CPAN, I just don't remember which one right now
sub pad_base64 {
    my $b64_digest = shift;
    while ( length($b64_digest) % 4 ) {
        $b64_digest .= '=';
    }
    return $b64_digest;
}

sub verify_pass {

    # cleartext password
    my $pass = shift;

    # hashed pw from db
    my $pwentry = shift;

    my ( $pwscheme, undef, $salt ) = &split_pass($pwentry);

    my $passh = &make_pass( $pass, $pwscheme, $salt );

    if ( $pwentry eq $passh ) {
        return 1;
    }
    else {
        return;
    }
}

sub split_pass {
    my $pw       = shift;
    my $pwscheme = 'cleartext';

    # get used password scheme and remove leading block
    if ( $pw =~ s/^\{([^}]+)\}// ) {
        $pwscheme = lc($1);

        # turn - into _ so we can feed pwscheme to make_pass
        $pwscheme =~ s/-/_/g;
    }

    # We have 3 major cases:
    # 1 - cleartext pw, return pw and empty salt
    # 2 - hashed pw, no salt
    # 3 - hashed pw with salt
    if ( !$pwscheme || $pwscheme eq 'cleartext' || $pwscheme eq 'plain' ) {
        return ( 'cleartext', $pw, '' );
    }
    elsif ( $pwscheme =~ m/^(plain_md5|ldap_md5|md5|sha|sha256|sha512)$/i ) {
        $pw = MIME::Base64::decode($pw);
        return ( $pwscheme, $pw, '' );
    }
    elsif ( $pwscheme =~ m/^(smd5|ssha|ssha256|ssha512)/ ) {

        # now get hashed pass and salt
        # hashlen can be computed by doing
        # $hashlen = length(Digest::*::digest('string'));
        my $hashlen = $hashlen{$pwscheme};

        # pwscheme could also specify an encoding
        # like hex or base64, but right now we assume its b64
        $pw = MIME::Base64::decode($pw);

        # unpack byte-by-byte, the hash uses the full eight bit of each byte,
        # the salt may do so, too.
        my @tmp  = unpack( 'C*', $pw );
        my $i    = 0;
        my @hash = ();

        # the salted hash has the form: $saltedhash.$salt,
        # so the first bytes (# $hashlen) are the hash, the rest
        # is the variable length salt
        while ( $i < $hashlen ) {
            push( @hash, shift(@tmp) );
            $i++;
        }

        # as I've said: the rest is the salt
        my @salt = ();
        foreach my $ele (@tmp) {
            push( @salt, $ele );
        }

        # pack it again, byte-by-byte
        my $pw   = pack( 'C' . $hashlen, @hash );
        my $salt = pack( 'C*',           @salt );

        return ( $pwscheme, $pw, $salt );
    }
    else {

        # unknown pw scheme
        return;
    }
}

1;

07 Dec 2010, 10:44

VBoxAdm 0.0.16: Debian package and translation updates

The latest release of VBoxAdm features Debian packaging and translation updates. Several new translations are now available. Please note that most of these are machine translations, so I'd gladly accept any suggestions for improvement.

06 Dec 2010, 19:56

debsign: clearsign failed: secret key not available

Have you ever had the problem that you could not build and sign a Debian package because gpg/debsign/dpkg-buildpackage claimed that your secret key was not available, although the key was there, you used the -k option to tell dpkg which key to use, and the environment variables DEBFULLNAME and DEBEMAIL were set?

Well, dpkg does something very stupid: it takes the name and email from the last changelog entry (OK so far) and does a full string match (ouch!). Why is this stupid? Because my key contains an alias, and if you're reading this, yours probably does, too.

I don’t want my alias in the changelog entry, but until now this is the only solution I’ve found for this issue.

So, if you get errors like this:

Now signing changes and any dsc files...
signfile package_0.1-1.dsc Firstname Lastname <user@domain.tld>
gpg: skipped "Firstname Lastname <user@domain.tld>": secret key not available
gpg: /tmp/debsign.XdvV0Yi2/package_0.1-1.dsc: clearsign failed: secret key not available
debsign: gpg error occurred!  Aborting....
debuild: fatal error at line 1246:
running debsign failed
debuild -i -I returned 29
Couldn't run 'debuild -i -I'

Then you should look at the output of gpg -K and the last debian/changelog entry:

sec   2048D/DEADBEEF 2010-01-01
uid                  Firstname Lastname (nickname) <firstname.lastname@domain.tld>

package (0.0.1-1) unstable; urgency=low

* Initial release

-- Firstname Lastname <user@domain.tld>  Mon, 06 Dec 2010 18:22:40 +0100

The problem here was the last line of the latest changelog entry. After changing it to

-- Firstname Lastname (nickname) <user@domain.tld>  Mon, 06 Dec 2010 18:22:40 +0100

everything worked.

If you ask me, this is a bug in dpkg which should be fixed.

30 Nov 2010, 11:58

VBoxAdm - Management-GUI for Postfix and Dovecot

Last weekend I’ve released a new web-based management GUI for Mailservers running Postfix and Dovecot. It is called VBoxAdm.

Its features:

  • All-in-one mailserver solution
  • written in Perl (apart from some tiny bits of PHP for the Roundcube integration)
  • MySQL Backend
  • Sane Database schema, w/ normalized tables
  • Roundcube integration which allows users to change their vacation messages and passwords
  • ships with its own Anti-Spam Proxy (no need for AMAViS, SpamPD or others)
  • and vacation auto-responder (RFC 3834 compliant)
You can grab it directly from here or go to its page for more details and some more screenshots.

Please beware: this is ALPHA quality code. Don't use it in production yet. Some parts of the application have barely been tested. But the code is more or less complete, so besides testing and minor fixes it is in pretty good shape.

There are still some issues on my todo list, most importantly the password issue, as well as localization and the ability to export the data to CSV and/or XML.

Before anyone yells at me: the design (CSS) is a complete ripoff of Postfix.Admin, but since both apps are open source (GPL2+) and I give full credit to Postfix.Admin, I think that'll be OK. The reason is that, while investigating web-based management solutions for Postfix, I stumbled over Postfix.Admin but was rather unsatisfied with some of its properties (the language it's written in, the database layout). So I've started my own. Since I was very happy with their design and I'm pretty bad at webdesign, I've just borrowed most of their CSS.

Trivia: Somehow the (German) Admin-Magazin wrote about it before I did. Kudos.

11 Nov 2010, 10:17

Installing Dell OpenManage on Debian squeeze

Recently Dell made their own OMSA packages available for Ubuntu. Unfortunately they didn't provide a version for Debian, and the dependencies can not be fulfilled with Debian packages alone. To be able to install OMSA from Dell you need to follow the steps on their page, but before you proceed with aptitude install srvadmin-all you need to download two packages from Ubuntu and install them.

Afterwards you'll be able to install and use OMSA on Debian squeeze. Not on lenny, sorry; the binary versions don't add up.

28 Sep 2010, 21:25

Resize a Xen disk image

It's as easy as appending zeros to the disk image.

Here I append roughly 10 GB to disk.img. Please note that resize2fs will, of course, only work if your disk image contains an ext2/3 filesystem.

cd /var/lib/xend/domains/domain/
dd if=/dev/zero bs=1024 count=10000000 >> disk.img
resize2fs -f disk.img

25 Aug 2010, 18:16

Lightning strikes

Lightning sucks. A lightning bolt struck our house and toasted my DSL splitter and my router. Did you ever want to know what a splitter that has been hit by lightning looks like? Here are the pictures.

07 Jul 2010, 11:39

mini-buildd and Linux-Vserver

After discovering mini-buildd, a tiny Debian buildd, I've tried to set it up inside some vservers. mini-buildd uses LVM snapshots to avoid duplicate work when creating build chroots. It will set up a base Debian chroot once and create a snapshot each time a build chroot is needed. This, however, is where the fun begins. The problem is that Linux-Vserver prevents its guests from performing most of the syscalls and ioctls needed by lvm2.

I didn’t manage to get the setup fully working so far but I wanted to share my experience in case anyone tries the same!

The basic setup of mini-buildd is quite easy. I've decided on a setup with three vservers: one for the repository, one buildd for amd64 and one buildd for i386.

vs-buildd-rep -
vs-buildd-amd64 -
vs-buildd-i386 -
First we need to create these vservers:
newvserver --ip --domain localdomain --hostname vs-buildd-rep --vsroot /srv/vserver/ --mirror http://ftp.de.debian.org/debian --interface eth0 --dist lenny --arch amd64
newvserver --ip --domain localdomain --hostname vs-buildd-amd64 --vsroot /srv/vserver/ --mirror http://ftp.de.debian.org/debian --interface eth0 --dist lenny --arch amd64
newvserver --ip --domain localdomain --hostname vs-buildd-i386 --vsroot /srv/vserver/ --mirror http://ftp.de.debian.org/debian --interface eth0 --dist lenny --arch i386
After installing the vservers we need to make some adjustments for LVM and set some capabilities:
cp -a /dev/loop* /srv/vserver/vs-buildd-amd64/dev/
cp -a /dev/mapper/control /srv/vserver/vs-buildd-amd64/dev/mapper/
echo "SECURE_MOUNT" >> /etc/vserver/vs-buildd-amd64/ccapabilities
echo "SECURE_REMOUNT" >> /etc/vserver/vs-buildd-amd64/ccapabilities
echo "ADMIN_CLOOP" >> /etc/vserver/vs-buildd-amd64/ccapabilities
echo "ADMIN_MAPPER" >> /etc/vserver/vs-buildd-amd64/ccapabilities
echo "MKNOD" >> /etc/vserver/vs-buildd-amd64/bcapabilities
echo "SYS_RESOURCE" >> /etc/vserver/vs-buildd-amd64/bcapabilities
echo "SYS_ADMIN" >> /etc/vserver/vs-buildd-amd64/bcapabilities
cp -a /dev/loop* /srv/vserver/vs-buildd-i386/dev/
cp -a /dev/mapper/control /srv/vserver/vs-buildd-i386/dev/mapper/
echo "SECURE_MOUNT" >> /etc/vserver/vs-buildd-i386/ccapabilities
echo "SECURE_REMOUNT" >> /etc/vserver/vs-buildd-i386/ccapabilities
echo "ADMIN_CLOOP" >> /etc/vserver/vs-buildd-i386/ccapabilities
echo "ADMIN_MAPPER" >> /etc/vserver/vs-buildd-i386/ccapabilities
echo "MKNOD" >> /etc/vserver/vs-buildd-i386/bcapabilities
echo "SYS_RESOURCE" >> /etc/vserver/vs-buildd-i386/bcapabilities
echo "SYS_ADMIN" >> /etc/vserver/vs-buildd-i386/bcapabilities
Please note that these capabilities give the vs-buildd guests pretty much control over the host, so be careful how far you trust your buildds!

Now start the vservers, fix some broken defaults, upgrade to squeeze and install the required packages.

When editing the sources.list you should replace all occurrences of lenny with squeeze, as shown below.
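
A sed one-liner does the trick (a sketch, assuming a stock sources.list with no other occurrences of the word "lenny"):

sed -i 's/lenny/squeeze/g' /etc/apt/sources.list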

The reason for not installing squeeze directly with newvserver is that, at the time of this writing, newvserver can not handle squeeze.

For the repository vserver:

vserver vs-buildd-rep start
vserver vs-buildd-rep enter
update-rc.d cron defaults; /etc/init.d/cron start; vi /etc/apt/sources.list; aptitude update && aptitude install aptitude apt && aptitude upgrade; aptitude dist-upgrade; aptitude install mini-buildd-rep vim htop screen ssh exim4
a2enmod userdir
dpkg-reconfigure tzdata
dpkg-reconfigure locales
dpkg-reconfigure exim4-config
The vservers need some way of sending mail, so it’s best to set up a mailserver somewhere, e.g. on the host, and then configure it as a smarthost inside the vservers.

The next one is the amd64 buildd:

vserver vs-buildd-amd64 start
vserver vs-buildd-amd64 enter
dpkg-reconfigure locales; update-rc.d cron defaults; /etc/init.d/cron start; vi /etc/apt/sources.list; aptitude update && aptitude install aptitude apt && aptitude upgrade; aptitude dist-upgrade; aptitude install mini-buildd-bld vim htop screen ssh exim4
a2enmod userdir
dpkg-reconfigure tzdata
dpkg-reconfigure exim4-config
And the same for i386:
vserver vs-buildd-i386 start
vserver vs-buildd-i386 enter
dpkg-reconfigure locales; update-rc.d cron defaults; /etc/init.d/cron start; vi /etc/apt/sources.list; aptitude update && aptitude install aptitude apt && aptitude upgrade; aptitude dist-upgrade; aptitude install mini-buildd-bld vim htop screen ssh exim4
a2enmod userdir
dpkg-reconfigure tzdata
dpkg-reconfigure exim4-config
The setup of mini-buildd is a bit tricky since the repository and the buildds depend on each other, so after performing the above steps you’ll probably need to reconfigure each:
  • in vs-buildd-rep: dpkg-reconfigure mini-buildd-rep
  • in vs-buildd-amd64 and vs-buildd-i386: dpkg-reconfigure mini-buildd-bld

To be continued!

That’s it for now. However, so far I didn’t manage to get the buildds working. There are still some issues with the LVs. I’ll update this post as soon as I figure out how to fix this.

29 Jun 2010, 14:56

Perl Best Practices

Recently I’ve read a really interesting book, one every Perl developer should read. At least have a look at Appendix B, which lists all guidelines in a brief summary.

23 Jun 2010, 11:51

Speedport W722V - Features? We ain't need no Features!

Dear Deutsche Telekom, I’ve just got to love your great CPE products. The Speedport W722V is a great product, an impressive piece of German engineering! It provides lots and lots of useful features. For example you can use it as a doorstop, a paperweight or to prettify your home.

But the point is: it is absolutely useless as an Internet router for me! It doesn’t allow incoming ICMP (ping), it has no internal S0 (ISDN) bus and it doesn’t allow VPN passthrough (GRE, IP protocol 47). I even suspect it of having severely broken QoS, but I can’t prove that right now. And this is only after a few days of playing around with this device. Not to think of what I’d find if I gave it some more time.

The configuration of our Speedports is trimmed for simplicity. A wealth of adjustable functions and configuration options is not the goal of development; rather a reduction to the essentials.
[Telekom Team @ T-Online Foren, translated from German]

The quote says that they aim for simplicity and not for features, and they’re good at it. Very good. This device is so simplistic that it is basically useless for all but the most basic users.

I’ll look for a better CPE and try to return this device as soon as possible. Maybe they find somebody who can make better use of it than me.

Update: It looks like the Speedport is killing long running connections after a while (a few hours at most). I’ve heard about that one … that’s very disappointing when working over SSH.

12 May 2010, 16:38


Denic fails to maintain the german nameservers

This is what it looks like when most of the German nameservers are down. This image shows the traffic of a popular .de site. The drop isn’t remotely as sharp as I thought it would be.

29 Apr 2010, 15:16

Perl: Wide character in print

Want to get rid of the annoying “Wide character in print” warnings Perl gives you sometimes when dealing with Unicode/UTF-8?


Just call

binmode(STDOUT, ":utf8");

on STDOUT or the appropriate filehandle, and Perl will treat it as UTF-8 capable.

You could also use the "-CSDA" command line option to tell Perl that.


A script along these lines triggers the warning (the print line is an illustrative stand-in; any wide character will do):

#!/usr/bin/perl -w
use charnames ':full';
print "\N{GREEK SMALL LETTER LAMBDA}\n";

Wide character in print at ./wide_char.pl line 9.


With binmode the warning is gone:

#!/usr/bin/perl -w
use charnames ':full';
binmode(STDOUT, ":utf8");
print "\N{GREEK SMALL LETTER LAMBDA}\n";

01 Apr 2010, 21:15

Threads in Perl are broken

Ok, for most experienced perl programmers this is not new, but let me repeat it:

Threads in Perl are broken. Really, really severely broken. Do not use threads with Perl.
Thread async is probably OK for smaller computations, but for anything else use fork().

Not only are IPC signals really dangerous with threads and DBI unable to work with them, but the memory usage is also magnitudes higher with threads than with fork(). At the moment I’m hacking on a Perl app that uses a lot of resources, and I was impressed how fast I could kill my system with a fair amount of concurrent threads. From Java I was used to threads being very lightweight, but with Perl it is the other way round. My app has its core part modularized and I wrote it once using threads and once using fork(). The threads version uses about 300 MB RSS while the fork()ing version uses no more than 30 MB RSS for the same workload. Quite a difference.

16 Mar 2010, 18:09

virt-manager: Error starting domain

Using KVM/virt-manager on Debian sid is interesting. You’ll get nice and fresh errors from time to time. KVM is constantly improving, but you have to deal with unexpected changes which tend to break existing VMs.

With the latest version I’ve got this error:

Error starting domain: internal error unable to reserve PCI address 0:0:3

The long text:

Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/engine.py", line 589, in run_domain
File "/usr/share/virt-manager/virtManager/domain.py", line 1208, in startup
File "/usr/lib/python2.5/site-packages/libvirt.py", line 317, in create
if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error unable to reserve PCI address 0:0:3

The solution was to edit the /etc/libvirt/qemu/<domain>.xml and change the conflicting PCI id. The line looked like this before:

<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>

and like that after the change:

<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>

Don’t forget to reload libvirt after this change.
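
On Debian, reloading would be something like this (a sketch, assuming the libvirt-bin package of that time):

/etc/init.d/libvirt-bin restart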

14 Mar 2010, 12:44

KDE4.4: Getting rid of Akonadi-MySQL Startup Errors

Since upgrading to KDE4.4 I’ve got startup errors each time Akonadi was started due to some missing MySQL system tables.

It’s easy to fix this:

akonadictl stop
mysql_install_db --datadir=$HOME/.local/share/akonadi/db_data
akonadictl start

And don’t forget to install akonadi-kde-resource-googledata. Thanks to Trumpton.

14 Mar 2010, 12:22

udev Notes

udevinfo was renamed to/replaced by udevadm in Debian sid. Most tutorials still refer to udevinfo.
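
The typical query translates almost one-to-one; for example (a sketch, /dev/ttyUSB0 being whatever device node your hardware gets):

udevadm info -a -p $(udevadm info -q path -n /dev/ttyUSB0)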

A udev rule that works on sid without warnings would be something like this for a Garmin GPS device:

# cat /etc/udev/rules.d/51-garmin.rules
ATTR{idVendor}=="091e", ATTR{idProduct}=="0003", MODE="666", SYMLINK+="GarminGPS"

25 Feb 2010, 20:02

Voyage Linux on an ALIX.2D13

I’ve just spent far too much time trying to install Voyage Linux on my new ALIX.2D13. Everything was fine, the only problem was that I tried to use GRUB, and that wasn’t working. After changing to LILO it works like a charm. The problem is probably caused by a huge version gap between etch and sid: etch ships something like GRUB 0.9x and sid 1.9x. I thought that the Voyage installer would use the shipped GRUB inside a chroot. Anyway, LILO works and this is fine. I have no special requirements for this box’s bootmanager. As soon as everything is set up and tested the box is going to be deployed.

The installation of Voyage Linux itself is covered in detail in the Getting Started guide.

Very useful information can be found at networksoul and this chaos wiki.

I recommend picocom to connect to the serial console:

picocom --baud 38400 --flow n --databits 8 /dev/ttyUSB0

If your computer doesn’t have a serial port anymore, like mine, I recommend the LogiLink “USB2.0 to Serial Adapter” (UA0043 v.2.0). It’s cheap and works flawlessly. Another great LogiLink product I can recommend in this context is the LogiLink “USB2.0 Aluminum All-in-one Card Reader” (CR0001B v.2.0). Why do I mention these two here? I find it hard to find cheap adapters of which I know that they work on Linux, so here is the information I would have liked to have before I bought them. The USB serial adapter is recognized as “Prolific Technology, Inc. PL2303 Serial Port”. The card reader shows up as four separate drives.

24 Feb 2010, 21:50

DS18S20: CRC Error

I just got me some DS18S20s (1-wire temperature sensors) and a DS2490 (1-wire-to-USB adapter). The first two worked like a charm, but the third one gave me CRC errors.

CRC Failed. CRC is 63 instead of 0x00

The reason was simply that, after running the first two for a while, I disconnected them and attached the third one. My mistake was not deleting/re-initializing the .digitemprc. After moving the .digitemprc out of the way and re-initializing it, the third one worked as well.

Show all devices on the 1-wire bus:

digitemp_DS2490 -sUSB -w

Initialize the .digitemprc:

digitemp_DS2490 -sUSB -i

Read all sensors:

digitemp -sUSB -a -r750
Thanks to Marc for the hint.

Here are some pictures of my 1-wire bus:

22 Feb 2010, 23:35

Unpacking initramfs

Quick-Note: cat /boot/initrd.img | gzip -d | cpio -i -H newc
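
The reverse direction, repacking from inside the unpacked tree (a sketch; adjust the output path):

find . | cpio -o -H newc | gzip -9 > /boot/initrd.img.new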

18 Feb 2010, 12:35

Qt 4.6.2 ready to upload

It looks like Qt 4.6.2 is ready for upload. That means the upload of KDE 4.4 to Debian unstable should be very close.

12 Feb 2010, 21:25


Is the EMV PIN check really so bad, eh I mean BAD (Broken as Designed)?

Perhaps I should lock up my cards somewhere safe …

09 Feb 2010, 21:05

Moving Root-FS to Crypto-Raid

Have you ever tried to move your Debian root filesystem to a RAID? OK, no problem so far. What about LVM-on-RAID? Still no trouble? Then what about Root-on-LVM-on-Crypto-on-RAID? Sounds funny. Debian has several helper scripts which are able to create a suitable initrd for this kind of setup. This is good and bad at the same time. The good thing is that they can detect a correct setup and create an appropriate initrd. The bad thing is that this won’t work if you’re just moving your system to this kind of setup. Imagine you’re still on an ordinary partition without all this fancy crypto, RAID and LVM stuff. If you just execute update-initramfs -k <kernel> -u/-c, the initramfs tools and the supplied hook scripts won’t know about your intentions. So you’ll have to create a fully equipped chroot, set everything up like it would be on a real Root-on-LVM-on-Crypto-on-RAID system and run update-initramfs there. Of course you could build the initrd by hand, but I’m not going down this Gentoo way.

So, what do you have to do? First you’ll have to create your RAID, LUKS volume and LVM on top of each other. See the Gentoo tutorial above for these steps; this should be pretty straightforward. The interesting part starts as soon as you try to boot from your new root. If you followed the tutorial you should have a working GRUB, but it won’t be able to boot your system since it can’t unlock your root fs.

So, after you’re back in your good ol’ system, set up the chroot. This includes assembling the RAID, unlocking your LUKS volume and mounting the LV. These are the steps, assuming sane defaults for folders, partitions and device names:

mdadm --assemble /dev/md2 /dev/sdc2 /dev/sdd2
cryptsetup luksOpen /dev/md2 cryptoroot
vgchange -ay vg
mount -t ext3 /dev/mapper/vg-root /mnt
mount -t ext2 /dev/sdc1 /mnt/boot
mount -t proc proc /mnt/proc
mount --bind /dev /mnt/dev
LANG=C chroot /mnt /bin/bash

Now you’re inside your proper chroot. You could just run update-initramfs, but that’ll probably fail. You need to set up mdadm first and create your crypttab.

Your /etc/mdadm/mdadm.conf should at least contain the partitions to scan and your array.

The command mdadm --detail --scan >> /etc/mdadm/mdadm.conf should do it. But verify the file yourself!

Next you have to tell the mdadm-initramfs-script to start this array on boot. This is set in the file /etc/default/mdadm. Insert the full name of your array (e.g. /dev/md2) into the INITRDSTART variable in this file.
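
For example (a sketch; the file is sourced as shell, so keep the quoting):

INITRDSTART='/dev/md2'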

Now define a proper crypttab and you should be ready to create a working initrd. Make your crypttab look something like this:

cryptoroot   /dev/md2   none   luks,tries=3

Just generate a new initramfs, update GRUB (if necessary) and reboot:

update-initramfs -k all -u
In case you encounter any errors, let me know; I’ll try to help.

04 Feb 2010, 20:16

KDE 4.4 is ready!

A few minutes ago KDE 4.4 was tagged in SVN. Soon the package maintainers will start building binary packages and hopefully they will arrive in Debian sid soon. I can’t wait to try out the new Akonadi-powered KDE-PIM packages and the improved Plasma shell.

Let’s see which other features will be presented in the release notes.

Update: The Announcement on dot.kde.org is out:  KDE Software Compilation 4.4.0 Introduces Netbook Interface, Window Tabbing and Authentication Framework

04 Feb 2010, 13:04

Opt-Out from Ad-Networks

Just two links where you can opt out of popular advertising networks: Google and NAI.

03 Feb 2010, 16:46

mount.nfs: Operation not permitted on Debian sid

When I tried to mount an NFSv3 share from a Debian etch host I got the error “mount.nfs: Operation not permitted”.

The solution was to force mount.nfs to NFSv3: mount -t nfs -o nfsvers=3 server:/share /mnt

Thanks to this post.

03 Feb 2010, 13:34

Linux Containers vs. Linux Vservers

Since Linux-Vserver seems to be having a hard time in Debian, and the Vserver maintainers probably aren’t going to go the same way as the OpenVZ maintainers, who promised to get OpenVZ in shape for Debian, it’s time to look for alternatives. If you want to stay with contextualization, a lightweight form of virtualization, there is only a limited set of options. According to KernelNewbies’ TechComparison of virtualization techniques there are only a mere three approaches which go for contextualization (also called containers).

These three are Linux-Vserver, OpenVZ and LXC.

Since I have objections to OpenVZ which, despite its cool features like live migration, keep me away from it, it’s time to look at LXC.

The one killer argument for LXC is that it is mainline, meaning that it has been submitted to and accepted into the official Linux kernel tree and doesn’t need any patches. So you can expect LXC to be fully usable starting with kernel 2.6.29, which should be available in most stable distributions by now. To make full use of LXC you’ll need the userland tools as well. They are available from Sourceforge and as a Debian package in squeeze (currently testing). Backporting them to lenny (currently stable) shouldn’t be hard, since lenny fulfills all dependencies and it should only be a matter of installing the package from squeeze by hand.

So far LXC looks very promising, but still a bit rough around the edges. I’m not going to present a more detailed howto here yet; please have a look at this five minute guide to LXC instead.

I’m working on the lxc-debian tools to improve them; have a look at my git repository. I’m planning to write a Vserver-to-LXC conversion tool. Hopefully I can push my work upstream sometime; I’d really like to concentrate the work into one coordinated project.

If you’re curious about the development of LXC, you should subscribe to the LXC mailing lists lxc-devel and lxc-users at sf.net.

Update: Two more links regarding LXC. The LXC HOWTO and “LXC containers or extremely fast virtualization”.

03 Feb 2010, 12:00

Setting up Gitweb on Debian

There are many blog posts and howtos about setting up gitweb on the web. Unfortunately none of them worked for me: either checkout via HTTP was broken or it just didn’t work at all.

Here is my configuration, which works for me on Debian lenny. If you follow these instructions you’ll get a working gitweb web interface, checkouts via HTTP, and git:// URLs.

First create a new directory to hold the git repositories. I’ll use the FHS compliant /srv/git:

mkdir /srv/git
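
To have something to serve later on, you can create a bare repository right away (example.git is just a made-up name):

git init --bare /srv/git/example.git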

Then create a new Apache 2 Vhost:

<VirtualHost *:80>
ServerName git.example.net
ServerAdmin you@example.net
SetEnv  GITWEB_CONFIG   /etc/gitweb.conf
DocumentRoot    /srv/git
ErrorLog /var/log/apache2/git.example.net-error.log
CustomLog /var/log/apache2/git.example.net-access.log combined
HostnameLookups On
UseCanonicalName Off
ServerSignature Off

Alias /gitweb.css /usr/share/gitweb/gitweb.css
Alias /git-favicon.png /usr/share/gitweb/git-favicon.png
Alias /git-logo.png /usr/share/gitweb/git-logo.png
Alias /git /srv/git

ScriptAlias /gitweb.cgi /usr/lib/cgi-bin/gitweb.cgi
DirectoryIndex gitweb.cgi
<Directory /srv/git>
AllowOverride None
Options Indexes FollowSymlinks
Order Allow,Deny
Allow From All

RewriteEngine On
RewriteCond %{REQUEST_FILENAME}         !-f
#RewriteCond %{REQUEST_FILENAME}        !-d
RewriteRule ^.* /gitweb.cgi/$0          [L,PT]
# for debugging rewrite rules
#RewriteLog /srv/www/git.gauner.org/logs/rewrite.log
#RewriteLogLevel 9
</Directory>
</VirtualHost>

In most tutorials you’re told to also include a RewriteCond matching directories (!-d), but for me that broke pretty URLs, so I commented it out.

If you use lighttpd, check out this FAQ entry.

Now you’ll need to make some adjustments to /etc/gitweb.conf:

$projectroot = "/srv/git";

# turn off potentially CPU-intensive features
$feature{'search'}{'default'} = [undef];
$feature{'blame'}{'default'} = [undef];
$feature{'pickaxe'}{'default'} = [undef];
$feature{'grep'}{'default'} = [undef];
$feature{'snapshot'}{'default'} = [undef];

# nicer-looking URLs (req. apache rewrite rules)
$feature{'pathinfo'}{'default'} = [1];

$site_name = "git.example.net";
$my_uri = "http://git.example.net";

# target of the home link on top of all pages
$home_link = $my_uri || "/";

@git_base_url_list = ("git://git.example.net");

Restart Apache and check whether your vhost works.

Finally you can set up the OpenBSD inetd; in /etc/inetd.conf add this line:

git     stream  tcp     nowait  nobody  /usr/bin/git-daemon git-daemon --inetd --verbose --base-path=/srv/git /srv/git

Now you’re done. Of course you need to install apache2, gitweb and openbsd-inetd, but that should be clear ;)
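
To smoke-test the git-daemon export, mark a repository for export and clone it via the git protocol (example.git as created above):

touch /srv/git/example.git/git-daemon-export-ok
git clone git://git.example.net/example.git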

Go here or here for more information.

02 Feb 2010, 10:52

A380 at the Matterhorn

The Swiss Luftwaffe has released some impressive photos of the A380 in front of the Matterhorn. Here’s a teaser:

If you click on the image you’ll get a larger version. On the far right is the well known Matterhorn. In front is the A380, just above an F/A-18C Hornet of the Swiss Luftwaffe. In the background you see a part of the ski region Zermatt-Cervinia. Just off the image on the left side would be the Klein-Matterhorn and the Gobba di Rollin, the starting point of the Theodul glacier. Nearby you can see some ski lifts and ski tracks leading to and from the Plateau Rosa/Testa Grigia, which is the border to Italy.

23 Jan 2010, 20:02

Resize a LUKS Partition on LVM

Resizing LVM LVs is great, but how to do this with an encrypted partition?

umount /myfs
fsck.ext3 -C 0 -f /dev/mapper/myfs
cryptsetup luksClose myfs
lvextend -l+10 /dev/myvg/mylv
cryptsetup luksOpen /dev/myvg/mylv myfs
cryptsetup --verbose resize myfs
mount  /dev/mapper/myfs /myfs
umount /myfs
fsck.ext3 -f /dev/mapper/myfs
resize2fs /dev/mapper/myfs
fsck.ext2 -f /dev/mapper/myfs
mount /dev/mapper/myfs /myfs

via linux.kernel.device-mapper.dm-crypt.

30 Nov 2009, 13:41

Groupware with Kontact

Kontact, a part of the Kolab project, has some very nice Groupware features that were presented on the MK09.

Fortunately most of these are very well usable even without a Kolab server.

Unfortunately these are not so well documented (or the documentation is not very easy to find).

When trying out these features I noticed that the groupware features will only work properly (at least with Kontact from KDE 4.3) if you access your mailbox via “Disconnected IMAP”. If you fail to do so you’ll probably get a “Write access denied” error.

Please note that Kontact is sometimes very unstable, but again, this depends heavily on the version of Kontact/KDE you use. My experience is based on KDE 4.3 from Debian unstable.

Right, it is called unstable for a reason …

20 Oct 2009, 17:32

OpenStreetMap on Garmin with mkgmap

For simplicity I’ve always used pre-compiled OSM maps for my Garmin. Since they are only available for certain pre-defined areas there is a slight lack of comfort: I needed to change the whole gmapsupp.img every time I moved out of the loaded area. Today I tried mkgmap to compile my own set of tiles into a gmapsupp.img and it worked great. I took a map of Germany and added some areas I also wanted to visit. At first I couldn’t believe how fast mkgmap had processed the tiles, but the output was a valid gmapsupp.img and after transferring it to the Garmin it worked flawlessly.

How to create your own gmapsupp.img:
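
With mkgmap.jar and the downloaded tiles in one directory, a minimal invocation could look like this (a sketch; 63240001.osm.gz and 63240002.osm.gz are placeholder tile names):

java -jar mkgmap.jar --gmapsupp 63240001.osm.gz 63240002.osm.gz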

This will result in a gmapsupp.img file that you can transfer to your Garmin.

23 Sep 2009, 10:01

Upgrading Debian from i386 to AMD64

I just stumbled across a really nice tutorial on how to upgrade Debian from i386 to AMD64. Go, read it.

12 Sep 2009, 12:36

IPv6 Revisited

I’ve been following the IPv6 development for a while now and have looked at most IPv6 stuff. Today I took a look at the Teredo protocol and I’m impressed how well it works. Really nice. Try out Miredo on Linux or BSD.
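
On Debian, trying it out is a matter of minutes (a sketch; www.kame.net is just a well-known IPv6-enabled host):

aptitude install miredo
ping6 www.kame.net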

11 Jul 2009, 18:16

Fonic, UMTS and Debian lenny on the EeePC

Recently I got an EeePC 1000HE and lately also a UMTS stick, the Fonic Surf-Stick (a Huawei E160). I’ve read before that many people got this UMTS stick running on Linux, some even with Fonic, but none of them - it seems - used Debian lenny. Most were using Ubuntu, which ships Network-Manager > 0.7, which is capable of connecting to 3G networks. Unfortunately this version of Network-Manager is not available for Debian stable. There are backports, but they are not available for i386 at the time of this writing.

There are several ways of getting your netbook “on the air”:

  • pppd / Kppp / wvdial - I didn’t succeed with those. The best choice seems to be wvdial, which almost got me connected. PPPd failed without giving any helpful output.
  • UMTSMon - This looks like a great tool, but it didn’t manage to get the connection running. It seemed to be PPPd’s fault, though.
  • Network-Manager 0.7 and newer - Many people report no problems whatsoever with this application, but it was not available for my system. Update: I did just verify my assumption: the stick works flawlessly on Ubuntu with the Network Manager. It took less than 30 seconds to get a working connection.
  • Vodafone Mobile Connect - This is a very nice tool which works great. Not only is it able to detect the E160 stick, it also has a very nice UI.
Here is a quick step-by-step instruction how to get online with your “Fonic Surf-Stick” on Debian lenny:
  • Boot your system, don’t connect the stick yet.
  • When logged in, connect the stick.
  • Start Vodafone Mobile Connect; grab it from betavine first.
  • It will recognize your stick as a Huawei E220 although it is an E160, but this doesn’t seem to matter.
  • Enter pinternet.interkom.de as APN. Username and password can be left empty, or use “fonic” for both. You should set the preferred connection type to “3G preferred” and use static DNS servers.
  • Mobile Connect will ask for your PIN if it is needed.
  • Wait for sufficient signal strength and click “Connect” in the UI. You should be online within a few seconds. If the connection fails, first try to get a better signal (just move the UMTS stick around a bit).
Please note that I’m running a custom kernel (2.6.30) and thus didn’t need to use usb_modeswitch. If you’re running an older kernel you’ll probably need to install and configure usb_modeswitch to get access to the modem part of the stick.

30 Jun 2009, 08:24

Perl DBI is not thread-safe!?

Recently I encountered the warning “Use of uninitialized value in null operation during global destruction” when working with DBI and threads. Although DBI was not used inside the threaded code, it apparently had side effects. The solution was to disconnect from the DB before entering the threaded code and to reconnect after all threads were launched.

15 Jun 2009, 08:00

VMWare on Debian (64bit)

Since VMWare made their Server available at no cost, it has become a highly interesting alternative for virtualisation. Anyone can download it from their website and get serials for free. It is highly mature software and rather easy to install. However, on 64-bit systems there are a few caveats. If you happen to have the problem that VMWare won’t accept your serials, then you have to install the ia32-libs package. Also see the comments on this howto.

Once you have successfully installed VMWare, you probably want to provide your VMs with network connectivity. There are several ways to achieve this and the approach depends on how you use your servers.

Interface Aliases: http://www.derkeiler.com/Mailing-Lists/securityfocus/focus-linux/2002-01/0094.html

Routing: Use Host-only network

iptables -t nat -A POSTROUTING -s <vm-subnet> -o eth0 -j MASQUERADE
iptables -A INPUT -i vmnet1 -s <vm-subnet> -j ACCEPT
iptables -A FORWARD -i eth0 -o vmnet1 -j ACCEPT
iptables -A FORWARD -o eth0 -i vmnet1 -j ACCEPT
iptables -t nat -I PREROUTING -p tcp -d <host-ip> -i eth0 --dport <port> -j DNAT --to <vm-ip>:<port>

(Replace the <...> placeholders with your host-only subnet, the host’s public IP and the ports you want to forward.)
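
One thing the rules above don’t show: for the routing approach the host must also have IP forwarding enabled, e.g.:

echo 1 > /proc/sys/net/ipv4/ip_forward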

14 Jun 2009, 08:00

Connection Tracking Stats

Get a list of ips with the most tracked connections:

cat /proc/net/ip_conntrack | awk '{ print $4; }' | sort | uniq -c -d | sort -n -r

11 Jun 2009, 20:29

Sum the size of your logfiles

Find out how much space is occupied by your logfiles (here: those larger than 1 MB). Useful in /var/log et al. Note that du reports 1 KB blocks, so the sum below is printed in GB.

find . -type f -name "*.log" -size +2048 -exec du -s {} \; | awk '{ SUM += $1; } END { print SUM/1024/1024 }'

01 Jun 2009, 17:55

Synchronizing Amarok and Rhythmbox Metadata

Sometimes the world is cruel. Although most of the applications I use from day to day are open source, it’s not easy to exchange metadata between those programs. Sometimes I switch my music player, usually between Amarok and Rhythmbox. But since I heavily rely on the metadata, especially my song ratings, this can get very frustrating. Most of the metadata is stored within the MP3 files, but the most important bits are not. After some searching around it looked like there is no proper solution to synchronize the metadata of my favourite music players. There are some approaches to import the music library from iTunes, but it looks like nobody ever wanted to do a two-way synchronization between Amarok and Rhythmbox.

After looking at the metadata formats (XML for Rhythmbox, MySQL for Amarok) I started my own synchronization script. Right now it is working, but it needs some more polishing or it could eat your kittens.

Grab the script from Bifröst.

01 Jun 2009, 17:48

A day with GIT

After using CVS for a short time and SVN for several years I finally seem to have found a proper SCM/RCS. I’ve spent some hours reading the Git User’s Manual, the Git tutorial, the Git SVN Crash Course as well as several man pages. It really looks like I’ve found an SCM that is capable of what I expect from it. Of course, time has to prove whether it is less problematic than SVN, which at least sometimes is.

22 Apr 2009, 11:09

Beware the Four Horsemen

Beware the Four Horsemen of the Information Apocalypse: terrorists, drug dealers, kidnappers, and child pornographers. Seems like you can scare any public into allowing the government to do anything with those four.
[1] - Bruce Schneier

19 Apr 2009, 00:01

tcpdump and IPv6

To monitor IPv6 traffic using tcpdump you can use the filter expression “ip6”:

tcpdump -i ethX ip6

18 Apr 2009, 08:00

reStructured Text

I’ve discovered reStructured Text some time ago and have been using it since then for several tutorials, like the ISP Mailserver or the Rootserver ones.


I’d suggest having a look at those if LaTeX would be too much of an overkill.

17 Apr 2009, 12:00

Prevent Debian from corrupting your manual Nvidia driver installation

Upgrades of the xserver packages have been corrupting my manual Nvidia driver installation (the proprietary blob version) for a long time. It looks like there is a feasible alternative: dpkg-divert. I’d have saved myself much hassle if I’d RTFM, but that’s how life goes.

These two commands should help:

dpkg-divert --local --add /usr/lib/libGL.so
dpkg-divert --local --add /usr/lib/libGL.so.1

17 Apr 2009, 08:00

rm Inode

Some files are hard to “catch” because of strange filenames. In these cases it’s best to remove them by their inodes.

Get the inode with “ls -i” and remove the file with this command:

find . -inum [inode-number] -exec rm -i {} \;

16 Apr 2009, 18:02

gpsbabel on kernel 2.6.28

It seems that gpsbabel was broken with recent Linux kernels for a while. Luckily, they did fix it.

08 Apr 2009, 11:58

T-Online Hijacking

It seems like T-Online is hijacking unused domains and providing its users with a “Navigationshilfe” (navigation aid).


24 Mar 2009, 09:00

Linux-Vserver: Restore lost context

To restore the context of a vserver, in case it is lost, use this command:

vuname --xid <XID> -s -t context="/etc/vservers/"

Replace <XID> with a valid context id.

16 Mar 2009, 22:52

This Blog via IPv6

This blog should be available via IPv6 again, thanks to HE and lighttpd.

09 Mar 2009, 11:46

Heise.de via IPv6

I’ve just discovered that heise.de is available over IPv6 at www.six.heise.de. Really nice.

Update: Now they’ve officially announced this feature.

26 Feb 2009, 12:38

KDE4.2 and Nepomuk on Debian

Since the features of KDE4 were announced I was eager to try out the promised semantic desktop. Having worked on ontologies once, I know that it is a great idea, but sometimes difficult to implement. So I wanted to see how the KDE guys implemented this. Sadly it didn’t work very well on Debian, since the Debian packagers did not - for some very good reasons - package the sesame2 backend for nepomuk, which is required to get the most out of strigi/nepomuk. Today I was thinking about nepomuk again and was lucky to find a very good explanation of how to get sesame2 for nepomuk running on Debian sid with KDE4.2 from experimental. The post I found is in German, so I’ll give a quick summary of what I’ve done here:

  • First you should get your system up-to-date: apt-get update && apt-get upgrade
  • Then install the required packages: apt-get install libqt4-dev build-essential cmake subversion qmake strigi-utils sun-java6-jdk sun-java6-jre
  • I prefer sun-java6, but you could also use openjdk-6-jre and openjdk-6-jdk instead.
  • Now checkout the kdesupport sources: svn co svn://anonsvn.kde.org/home/kde/tags/kdesupport-for-4.2/kdesupport
  • Change to the source directory: cd kdesupport/soprano
  • Create the build directory for cmake: mkdir build
  • Go into the build dir: cd build
  • Now you have to properly set JAVA_HOME: export JAVA_HOME=/usr/lib/jvm/java-6-sun, use /usr/lib/jvm/java-6-openjdk for openjdk
  • Create the makefiles: cmake ..
  • This should emit something like this:

    Soprano Components that will be built:
     * Sesame2 storage backend (java-based)

    Soprano Components that will NOT be built:
     * Redland storage backend
     * Raptor RDF parser
     * Raptor RDF serializer
     * The CLucene-based full-text search index library

    -- Configuring done
    -- Generating done

  • Now compile using make: make
  • And install: sudo make install. Please note that this will install the backend into /usr/local.
  • Then you should (re-)move your old redland repository: rm -rf ~/.kde4/share/apps/nepomuk. You’ll lose all your existing metadata (tags, ratings, etc.) this way, so maybe you want to back up this information instead.
  • You should stop strigi and nepomuk now in the system settings dialog of KDE.
  • Then edit the configuration file ~/.kde4/share/config/nepomukserverrc with your favourite editor and change it like this:

    [Basic Settings]
    Configured repositories=main
    Start Nepomuk=true

    [Service-nepomukmigration1]
    autostart=false

    [Service-nepomukstrigiservice]
    autostart=true

    [main Settings]
    Storage Dir[$e]=$HOME/.kde4/share/apps/nepomuk/repository/main/
    Used Soprano Backend=sesame2
    rebuilt index for type indexing=true

  • Now you can enable nepomuk and strigi again. Strigi should display an icon in the system tray area and you should see /usr/bin/nepomukservicestub nepomukstorage eating up a lot of resources.
Of course you can use aptitude instead of apt-get if you like.

24 Feb 2009, 17:39

iotop on custom kernels

You need to enable these options to be able to run iotop on custom-built Linux kernels:

CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y

See the Arch Linux Forum for more details.
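
To check whether your running kernel already has them (assuming you keep the config file in /boot, as Debian kernels do):

grep TASK /boot/config-$(uname -r)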

23 Feb 2009, 20:49

Broadband funding only for IPv6-enabled Connections? Yes, please!

Finally some good ideas. Some guys from the Hasso-Plattner-Institut in Potsdam suggest that ISPs should only receive funding from the federal government if they offer IPv6-enabled connections to their customers. Now that is really a great idea. Since Windows Vista every major operating system supports IPv6, as does the majority of the internet, except for the end-customer connections, which are not capable of IPv6. If the government provides a financial incentive, there will be IPv6 connections very, very soon.

So, valued politicians, please do the right thing: support this proposal. It sounds great to me.

23 Feb 2009, 20:42

Google sponsors OpenStreetMaps

I was quite astonished to hear that Google, yeah those “Google Maps” guys, supported OpenStreetMap with £5000. I thought they were competitors? Perhaps Google understands what Microsoft doesn’t: you can’t compete with open source - you can only adapt to it.

23 Feb 2009, 20:42

GTA4 - Story completed

Did I mention that I’m playing GTA 4 from time to time? Yesterday I managed to complete the last mission, which I had to try again and again because Niko didn’t manage to pull himself into the helicopter. After the third or fourth attempt I did some googling and found the hint to “mash” the space bar. I’d never heard that word before, but in the end I really did what it means: hit your space bar as fast as possible, repeatedly, for some seconds until he managed to get into this forsaken helicopter. The rest of the mission was pretty straightforward. Follow the boat, get taken down by the AA missile, land hard on Liberty Island and catch the villain - mission completed. All in all a really great game.

16 Feb 2009, 22:13

On the effect of cabling

For a very long time I underestimated the effect of cabling on sound. I’ve never been an audiophile person, but today I spent a (very small) fortune on cabling and I’m very impressed how good these old speakers sound. Who needs subwoofers when he’s got real stereo speakers? Long live Hi-Fi!

16 Feb 2009, 20:43

Upgrading etch to lenny

Upgrading etch to lenny is as easy as typing:

apt-get update && apt-get upgrade

Modifying your sources.list and then typing:

apt-get update && apt-get install apt && apt-get upgrade; apt-get dist-upgrade

That’s it! :)

Update: Don’t forget to install a new kernel image.

02 Feb 2009, 11:49

Debian 5.0 'lenny' release planned for Mid-February

It seems as if the release cycle of the next stable release of Debian, called lenny, will soon come to an end. The planned release date is the weekend around the 14th of February.

Let’s hope everything works out as planned …

27 Jan 2009, 16:31

KDE 4.2

The next release of KDE, 4.2, should be released today. I’m already awaiting the announcement. The packages are still on their way to Debian experimental (some are stuck in NEW) and should arrive shortly.