Camlistore

I just gave Camlistore a try. My first impression: uber-cool, but still very rough and suuuper slow.

But I have no doubt that the guys behind that project will make it fly. They already did some other pretty cool stuff, like memcached and DJabberd.

Camlistore is a personal “Content-Management-System”, similar to git-annex.

Replace a failed HDD in an SW RAID

This post describes how to replace a failed HDD in a Linux software RAID while fixing GRUB. This is mostly for my own reference, but posted here in the hope that someone may find it useful.

If one of the HDDs in your software RAID has failed and your system is still running, you should first mark all partitions of this device as failed. Let’s assume that /dev/sdb failed.

cat /proc/mdstat # check RAID status
# mark the devices as failed, necessary before they can be removed
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md2 --fail /dev/sdb3
cat /proc/mdstat # check RAID status
# remove the failed devices
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm --manage /dev/md2 --remove /dev/sdb3

Now you should replace the HDD. Usually this means shutting the system down – unless you have a very rare setup which allows hot-plugging of non-HW-RAID disks.

If the failed disk was part of the OS partitions you’ll probably need to boot into some kind of rescue system first to perform the following steps.

After the system is back up you have to copy the partition table from the remaining disk to the new one. This used to be done w/ sfdisk. However, since HDDs are getting too big for MBR to handle, many systems are switching over to GPT, which isn’t handled by the classic UNIX/Linux HDD tools. Thus we’ll be using sgdisk.

sgdisk -R=/dev/sdb /dev/sda # copy the partition table from /dev/sda to /dev/sdb, be sure to get the devices right
sgdisk -G /dev/sdb # randomize GUID to avoid conflicts w/ first disk

Now you can add the new partitions back to the RAID arrays:
mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb2
mdadm --manage /dev/md2 --add /dev/sdb3

You should let the system run until the RAID arrays have re-synced. You can check the status in /proc/mdstat.
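
If you want to keep an eye on the re-sync progress, a simple sketch (assuming the watch utility is available):

watch -n 5 cat /proc/mdstat # refresh the RAID status every 5 seconds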

Perhaps your bootloader got corrupted as well; just in case, these are the steps necessary to re-install GRUB.
mount /dev/md1 /mnt # mount your /
mount /dev/md0 /mnt/boot # mount your /boot
mount -o bind /dev /mnt/dev # mount /dev, needed by grub-install
mount -o bind /sys /mnt/sys # mount /sys, needed by grub-install
mount -t proc /proc /mnt/proc # mount /proc, needed inside the chroot
cp /proc/mounts /mnt/etc/mtab # provide an up-to-date mtab inside the chroot
chroot /mnt /bin/bash
grub-install --recheck /dev/sda
grub-install --recheck /dev/sdb
update-grub
exit # leave chroot

Now you should wait until all MD Arrays are back in shape (check /proc/mdstat), then reboot.

reboot # reboot into your fixed system

Deploying Perl Web-Apps with Puppet

Since I was asked, I’d like to show one way to deploy your Perl web apps using Debian packages and Puppet.

This post assumes that you want to install some Plack webapps, e.g. App::Standby, Monitoring::Spooler and Zabbix::Reporter.

Those are pretty straightforward Plack apps built using Dist::Zilla.

cd Monitoring-Spooler
dzil build

I’ve built Debian packages of these CPAN modules. Using dh-make-perl this is reasonably easy, although you’ll need some switches to make it a little less picky:

dh-make-perl --vcs rcs --source-format "3.0 (native)" Monitoring-Spooler-0.04

After building and uploading these packages to my repository, I use Puppet to install and configure them.
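
Just as a hedged sketch of that step – assuming a reprepro-managed repository under /srv/repo and the package name dh-make-perl generates – it could look like this:

cd Monitoring-Spooler-0.04
dpkg-buildpackage -us -uc # build the unsigned .deb
reprepro -b /srv/repo includedeb wheezy ../libmonitoring-spooler-perl_0.04_all.deb # repo path and codename are assumptions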

For each of these packages there is a custom puppet module, e.g. app_standby for managing App::Standby. This module creates all necessary directories, manages the configuration and installs the package from my repository. It does not, however, set up any kind of webserver or runtime wrapper. That is done by another module: starman.

For running my Plack apps I use Starman, which is managed by its own puppet module. It installs the starman package, manages its config and creates an apache config. The interesting part is that I use puppet to automatically create an apache config and a simple Plack/PSGI multiplexer app that allows me to handle requests to all three (or more) of these apps using a single starman instance.

If these apps were rather busy I’d use a separate starman instance for each one, but as it is I’m more than happy to avoid the overhead of running multiple starman instances.

My starman module is invoked from one of my service classes like this:

  class { 'starman':
    configure => {
      mounts    => {
        'zreporter'     => '/usr/bin/zreporter-web.psgi',
        'monspooler'    => '/usr/bin/mon-spooler.psgi',
        'monapi'        => '/usr/bin/mon-spooler-api.psgi',
        'standby'       => '/usr/bin/standby-mgm.psgi',
      },
    }
  }

The starman module itself is pretty straight-forward, but below are the templates for the apache config and the PSGI multiplexer.

apache.conf.erb:

<IfModule mod_proxy.c>
<% if @configure.has_key?("mounts") -%>
<% @configure["mounts"].keys.sort.each do |mount| -%>
   <Location /<%= mount %>>
      Allow from all
      ProxyPass           http://localhost:<%= @configure.has_key?('starman_port') ? @configure["starman_port"] : '5001' %>/<%= mount %>
      ProxyPassReverse    http://localhost:<%= @configure.has_key?('starman_port') ? @configure["starman_port"] : '5001' %>/<%= mount %>
   </Location>
<% end -%>
<% end -%>
</IfModule>

mount.psgi.erb:

#!/usr/bin/perl
use strict;
use warnings;

use Plack::Util;
use Plack::Builder;

my $app = builder {
<% if @configure.has_key?("mounts") -%>
<% @configure["mounts"].keys.sort.each do |mount| -%>
   mount '/<%= mount %>'  => Plack::Util::load_psgi('<%= @configure["mounts"][mount] %>');
<% end -%>
<% end -%>
   mount '/'           => builder {
        enable 'Plack::Middleware::Static', path => qr#/.+#, root => '/var/www';
        my $app = sub { return [ 302, ['Location','/index.html'], [] ]; };
        $app;
   }
};


scp over an IPv6 link-local address

When trying to copy some large files between two PCs I was annoyed by the slow wifi, so I connected both with a direct LAN cable. Since I didn’t want to configure IPs manually, I remembered that IPv6 immediately gives each interface a unique link-local address. So I went on to figure out how to correctly pass this address to ssh/scp.

Let’s assume a link-local address of ‘fe80::f4de:ff1f:fa5e:feee’.
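
If you don’t know the link-local address yet, on Linux you can look it up like this (the interface name is just an example):

ip -6 addr show dev eth0 scope link # shows the fe80:: address of eth0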

First try: ssh user@fe80::f4de:ff1f:fa5e:feee

This won’t work because these link-local addresses are only unique on a given link. So you need to specify the interface as well.

Second try: ssh user@fe80::f4de:ff1f:fa5e:feee%en0

Of course you have to change the interface identifier according to your system. On Linux this would most probably be eth0.

This was fine for ssh, but for scp I also needed a remote source directory.

Third try: scp -6 user@fe80::f4de:ff1f:fa5e:feee%en0:/some/path/

This won’t work because the colons confuse ssh. You need to be more explicit by using square brackets.

Fourth and final try: scp -6 user@[fe80::f4de:ff1f:fa5e:feee%en0]:/some/path/ .

Figuring out why this is not an auto-generated link-local address is left as an exercise for the reader (it isn’t relevant for this post).

GitLab behind a Reverse-Proxy

Running GitLab behind a (second) Reverse-Proxy over https?

In that case you should not set the “proxy_set_header X-Forwarded-Proto” header in the backend nginx, but set it in the frontend nginx instead. Otherwise GitLab/RoR will redirect you to http on various occasions. You should also set https in config/gitlab.yml.
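
A minimal sketch of the relevant nginx fragments (server names and ports are assumptions, not from a real setup):

# frontend nginx – terminates https and sets the header
location / {
    proxy_pass http://backend.example.com;
    proxy_set_header X-Forwarded-Proto https;
}

# backend nginx – proxies on to GitLab without setting the header
location / {
    proxy_pass http://127.0.0.1:8080;
}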

Escaping Filenames with Whitespaces for rsync

If you need to sync from a source path containing whitespaces you’ll need to escape those for the remote shell as well as for the local shell, so your command may look like this:

rsync -ae ssh 'user@host.tld:/path/with\ some/gaps/' '/another/one with/ a gap/'

Don’t escape the whitespace on the local side twice or you’ll end up with weird filenames!


git: Rebase vs. Rebase

For some time I’ve wondered about git rebase. At some point I realised that there is not just one use for git rebase but (at least) two distinct ones.

  • On the one hand git rebase is used to rebase a branch onto an updated source branch (source like in origin, not in source code).
  • On the other hand it’s used to rewrite commit history.

What’s a rebase?

A rebase makes git rewind your commit history up to a certain point and then re-apply your patches onto another starting point. The starting point depends on the exact type of rebase you do and on what you tell git.

Rebase a branch

Why would you want to rebase a branch onto another one and what does it do?

If you have a feature branch and want it merged into your master branch you could of course just merge it. If you do so and your feature branch is not based on the last commit in master, git will create a new merge commit since it has to join two branches of the tree. If you rebase your feature branch onto the tip of master first, you get a linear history without a merge commit instead.
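
As a small sketch, assuming branches named master and feature:

git checkout feature
git rebase master # replay the feature commits onto the tip of master
git checkout master
git merge feature # now a plain fast-forward, no merge commit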

Rewrite a branch

Why would you want to rewrite your commit history? What if you made mistakes while crafting a new branch? If you’re following common VCS advice you’ll probably commit often – quite possibly somewhere between making a mistake and correcting it.

Now you’d have your shiny new feature commit and two or more commits w/ bugfixes to your initial commit in this feature branch. Some people prefer to keep the history of their mistakes; they could just merge this new feature branch into their master branch.

Others prefer to keep their commit history clean (you wouldn’t release a book including all the mistakes and corrections you made while writing it, would you?). Here git rebase -i comes to the rescue. It rebases the current branch onto itself. This may sound a bit silly, but it allows you to drop commits or, more importantly, to combine (squash) commits!
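
A short sketch (the commit hashes and messages are made up):

git rebase -i HEAD~3 # rework the last three commits
# git opens an editor with something like:
#   pick a1b2c3d add shiny new feature
#   pick e4f5a6b fix typo in feature
#   pick 9c8d7e6 fix off-by-one in feature
# change the last two picks to squash to fold them into the first commit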

Where to go from here?

This post wasn’t meant to give a complete introduction to git rebasing. There are plenty of great tutorials out there. I just wanted to highlight some points which were important to me.

Watching 3D Movies in 2D using Mplayer

In case you’ve got a 3D movie but no 3D-capable output device, try mplayer w/ the “-vo gl:stereo=3” switch to get it in 2D. It works at least on Linux w/ an Nvidia card.
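
For example (the filename is made up):

mplayer -vo gl:stereo=3 some-3d-movie.mkv # play the 3D movie on a 2D display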

Perl Tutorials

Looking for Perl Tutorials? Go ahead: http://perl-tutorial.org/.

Options for running Linux-Vserver on Debian wheezy


If you’re running Debian Squeeze with a Linux-Vserver kernel you’ll have to face the fact that support for non-mainline virtualization patches will soon be dropped from Debian stable.

The Debian kernel team stated very clearly that they won’t continue to provide custom patched kernel packages. In general I think that is a very good decision. Given the team’s workload and the unwillingness of the Linux-Vserver and OpenVZ maintainers to cooperate with Debian, it is very understandable.

So what do you do now if you have vservers running your business?

These are the options I could think of so far, feel free to suggest more:

  • Stay with Squeeze
  • LXC
  • KVM w/ Squeeze VM
  • VMWare ESXi w/ Squeeze VM
  • Custom patched Kernel
  • Xen w/ Squeeze domU

Staying with Squeeze

If you plan to stay with Squeeze you’re good to go for quite a while. Of course Squeeze security updates will end some time after the Wheezy release, and what do you do about newer hardware that is not supported by Squeeze? So not an option, I think.

LXC

Linux Containers (LXC) are the preferred contextualization from Wheezy on. They are maintained within the mainline kernel and are said to integrate very well with it. The biggest drawback, however, are the userspace tools. While the team developing those used to be quite active, it has slowed down a bit recently without having brought the tools anywhere close to util-vserver – which aren’t perfect either.
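
For reference, creating and starting a container with those userspace tools looks roughly like this (the container name is made up):

lxc-create -n test -t debian # create a container from the debian template
lxc-start -n test # start the container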

KVM w/ Squeeze VM and Vserver Kernel

You could run Wheezy, or Squeeze w/ a backports kernel, on your host and run a Squeeze vserver kernel inside KVM. That sounds ugly and means having to set up a network bridge on your host.

KVM

Of course you could also turn all your vservers into KVM VMs. That is a lot of work and means migrating to an entirely different virtualization architecture. Not very nice.

VMWare ESXi

Long story short: managing an ESXi is a PITA.

Xen w/ Squeeze VM and Vserver Kernel

Same as KVM w/ Squeeze kernel. See above.

Xen

Same as KVM. See above.

Custom patched Kernel

While the Linux-Vserver team isn’t always happy with Debian, they are still very active and continue to provide patches for recent kernels. The biggest drawbacks here are that you have to take care of security updates yourself and that you need to build a custom set of the util-vserver tools, since the older versions from Squeeze won’t work with newer kernels.