Tuesday, June 9, 2015

Time Machine backups to a Linux server

If your organization uses Apple laptops, you want a way to provide backups. Time Machine is the obvious choice. But what if you do not want to run Mac OS X on your server, only Linux? Then you need an AFP service. This is provided by the "netatalk" package, which is similar to "samba". Online you can find enough tutorials on how to set it up, and for a few years now it has supported Time Machine out of the box.

But it has a few things you need to fix before it can be used to back up a larger number of clients. They are caused by the fact that everything related to the Time Machine backup is set up automatically, and for this use case the defaults are wrong. Time Machine uses "sparse bundles", a sort of file container, resembling something like zip files. The first problem is that the sparse bundle that is created is set up to fill the entire partition. This means that you can only have one backup per partition. That is not what you want if you have a 4 TB disk in your server: you want to give a few hundred GB to each backup. This can be solved by using LVM. Each backup gets its own volume, and by configuring netatalk you can give a different share to each individual user.
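As a rough sketch, the per-user setup on the Linux side could look like this (the volume group, mount points, user name and netatalk 3.x share name are assumptions for illustration, not taken from my actual setup):

# create a 300 GB logical volume per user in an existing volume group
lvcreate -L 300G -n tm_alice vg_backup
mkfs.ext4 /dev/vg_backup/tm_alice
mkdir -p /srv/timemachine/alice
mount /dev/vg_backup/tm_alice /srv/timemachine/alice

# afp.conf (netatalk 3.x): one Time Machine share per user
[Alice TimeMachine]
path = /srv/timemachine/alice
time machine = yes
valid users = alice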

The second problem is caused by having the Apple sparse bundle on a Linux filesystem. By default, the sparse bundle saves the data in files ("bands") of 8 MB. With the default settings of ext4, you end up with over 40,000 files in one directory, and that generates strange error messages. There is a command to change the settings of the sparse bundle, but unfortunately it is only available on the Mac. So you have to change the "band size" of the sparse bundle from the Mac. These are the commands to use:

cd /Volumes/TimeCapsule
hdiutil convert hostname.sparsebundle -format UDSB -tgtimagekey sparse-band-size=2097152 -o new_hostname.sparsebundle
mv hostname.sparsebundle hostname.sparsebundle.old
mv new_hostname.sparsebundle hostname.sparsebundle
cp hostname.sparsebundle.old/com.apple.TimeMachine.MachineID* hostname.sparsebundle/

To be precise, before you can change the settings, you need a finished backup. You can speed this up by excluding almost all directories from the backup, which also saves time when you convert the sparse bundle. When the first backup is finished, you mount the AFP share on the Mac. Then you open a terminal window and go to the location where the share is mounted, in the "/Volumes" directory. With the "hdiutil" command you convert the sparse bundle; note that the band size is specified in 512-byte blocks, so 2097152 corresponds to a 1 GB band size. When this is finished, you rename the old bundle to a temporary name. Later you can just delete it, but make sure it does not keep the ".sparsebundle" extension. Then you rename the newly created bundle to the original filename. The last step is to copy some configuration files relating to the Time Machine backup from the old bundle to the new one.
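On the Mac you can manage the exclusions with tmutil, and afterwards check the band size of the converted bundle in its Info.plist (the paths here are just examples):

# exclude a directory from the backup, and remove the exclusion again later
sudo tmutil addexclusion -p /Users/alice/Downloads
sudo tmutil removeexclusion -p /Users/alice/Downloads

# show the band size (in bytes) of the converted bundle
grep -A1 band-size /Volumes/TimeCapsule/hostname.sparsebundle/Info.plist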

Also make sure you disable Time Machine while you convert the sparse bundle. When the conversion is finished, you can enable Time Machine again and run another backup. You can check the timestamps to see if it is now using the new sparse bundle. If this has succeeded, you can remove the excludes and run a full backup.
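One way to do this is from a terminal on the Mac client, for example:

sudo tmutil disable
# ... convert the sparse bundle on the server share ...
sudo tmutil enable
sudo tmutil startbackup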

Saturday, April 4, 2015

MySQL authentication using PAM

In certain cases you want to use the accounts that already exist on a system to log in to a MySQL or MariaDB database. You can do this by enabling PAM in the MySQL server configuration. First you need to create a PAM configuration file that looks like this:

cat /etc/pam.d/mariadb
#%PAM-1.0
# Use password-auth common PAM configuration for the daemon
auth        include     password-auth
account     include     password-auth

Depending on the distribution you use, the file might look a little different, but basically you just include the default settings. Then you change the MySQL configuration file using your favorite editor: emacs /etc/my.cnf

[mysqld]
plugin-load=auth_pam.so

You should check that this plugin is installed on your system. Then you (re)start the MySQL service:
systemctl start mysqld.service
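
One way to check that the plugin was actually loaded is to look at the plugin list from a MySQL prompt (just a quick sanity check; the PAM authentication plugin should be listed with status ACTIVE):

SHOW PLUGINS;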

Next you log in to the database as the root user (if no root password has been set yet, you can set one first with "mysqladmin -u root password"):
mysql -u root -p

Then you can grant permissions to users that are available through PAM:

GRANT ALL ON *.* TO username@localhost IDENTIFIED VIA pam USING 'mariadb';
flush privileges;

After that, the user can log in to mysql with:
mysql -u username -p

Note that the string "mariadb" after USING in the GRANT query refers to the name of the PAM configuration file you created (/etc/pam.d/mariadb).

Sunday, March 29, 2015

MonetDB docker image on Google Cloud Platform

We want to run the monetdb-r-docker image in the Google cloud. There is plenty of documentation on the Google website on how to set up your environment, so I will not cover that here in detail.
$ gcloud preview container clusters create monetdb-r-docker \
  --num-nodes 1 \
  --machine-type g1-small

Waiting for cluster creation...done.
Create cluster succeeded!
Using gcloud compute copy-files to fetch ssl certs from cluster master...
Warning: Permanently added '104.155.58.68' (ECDSA) to the list of known hosts.
kubecfg.key                                   100% 1704     1.7KB/s   00:00    
Warning: Permanently added '104.155.58.68' (ECDSA) to the list of known hosts.
kubecfg.crt                                   100% 4423     4.3KB/s   00:00    
Warning: Permanently added '104.155.58.68' (ECDSA) to the list of known hosts.
ca.crt                                        100% 1224     1.2KB/s   00:00    
clusterApiVersion: 0.13.2
containerIpv4Cidr: 10.24.0.0/14
creationTimestamp: '2015-03-29T09:27:12+00:00'
enableCloudLogging: false
endpoint: 104.155.58.68
masterAuth:
  password: ***********
  user: admin
name: monetdb-r-docker
network: default
nodeConfig:
  machineType: g1-small
  serviceAccounts:
  - email: default
    scopes:
    - https://www.googleapis.com/auth/compute
    - https://www.googleapis.com/auth/devstorage.read_only
  sourceImage: https://www.googleapis.com/compute/v1/projects/google-containers/global/images/container-vm-v20150317
nodeRoutingPrefixSize: 24
numNodes: 1
selfLink: https://www.googleapis.com/container/v1beta1/projects/123456789/zones/europe-west1-b/clusters/monetdb-r-docker
servicesIpv4Cidr: 10.27.240.0/20
status: running
zone: europe-west1-b

Next we will create the container configuration file monetdb.json:
{
  "id": "monetdb-r-docker",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "containers": [{
        "name": "monetdb",
        "image": "monetdb/monetdb-r-docker",
        "ports": [{
          "containerPort": 50000,
          "hostPort": 50000
        }]
      }]
    }
  }
}

With this file, we create the container:
gcloud preview container kubectl create -f monetdb.json
Then we can see that the container is being created:
$ gcloud preview container kubectl get pod monetdb-r-docker
POD                 IP                  CONTAINER(S)        IMAGE(S)                   HOST                                                                  LABELS              STATUS              CREATED
monetdb-r-docker    10.24.1.3           monetdb             monetdb/monetdb-r-docker   k8s-monetdb-r-docker-node-1.c.my-project-id.internal/130.211.82.116   <none>              Pending             Less than a second
It will take a few minutes before the container is created.
$ gcloud preview container kubectl get pod monetdb-r-docker
POD                 IP                  CONTAINER(S)        IMAGE(S)                   HOST                                                                  LABELS              STATUS              CREATED
monetdb-r-docker    10.24.1.3           monetdb             monetdb/monetdb-r-docker   k8s-monetdb-r-docker-node-1.c.my-project-id.internal/130.211.82.116   <none>              Running             3 minutes

Then you can log in to the node and use the docker command-line tool to check the running container:
$ gcloud compute ssh k8s-monetdb-r-docker-node-1
Warning: Permanently added '130.211.82.116' (ECDSA) to the list of known hosts.
Linux k8s-monetdb-r-docker-node-1 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt4-3~bpo70+1 (2015-02-12) x86_64

=== GCE Kubernetes node setup complete ===
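
On the node, the Docker command-line tool then shows the running container, for example:

$ sudo docker ps

which should list a container based on the monetdb/monetdb-r-docker image.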

Before we can connect to the database, we need to add a firewall rule:
$ gcloud compute firewall-rules create monetdb-r-node-50000 --allow tcp:50000 --target-tags k8s-monetdb-r-docker-node
Created [https://www.googleapis.com/compute/v1/projects/my-project-id/global/firewalls/monetdb-r-node-50000].
NAME                 NETWORK SRC_RANGES RULES     SRC_TAGS TARGET_TAGS
monetdb-r-node-50000 default 0.0.0.0/0  tcp:50000          k8s-monetdb-r-docker-node
And then we can connect to the database using the mclient tool:
$ mclient -h 130.211.82.116 -u monetdb -ddb
password:
Welcome to mclient, the MonetDB/SQL interactive terminal (unreleased)
Database: MonetDB v11.19.9 (Oct2014-SP2), 'mapi:monetdb://monetdb-r-docker:50000/db'
Type \q to quit, \? for a list of available commands
auto commit mode: on
sql>\q
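
Before quitting, a trivial query at the sql> prompt is enough to confirm the server responds (purely an illustrative check):

sql>SELECT 42;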

And once we are done using the database, we delete the cluster to prevent additional costs:
gcloud preview container clusters delete monetdb-r-docker
Waiting for cluster deletion...done.
name: operation-1427629115685-c7f2c2d7
operationType: deleteCluster
selfLink: https://www.googleapis.com/container/v1beta1/projects/123456789/zones/europe-west1-b/operations/operation-1427629115685-c7f2c2d7
status: done
target: /projects/123456789/zones/europe-west1-b/clusters/monetdb-r-docker
targetLink: https://www.googleapis.com/container/v1beta1/projects/123456789/zones/europe-west1-b/clusters/monetdb-r-docker
zone: europe-west1-b
And the last step is to remove the firewall rule as well:
gcloud compute firewall-rules delete monetdb-r-node-50000
The following firewalls will be deleted:
 - [monetdb-r-node-50000]

Do you want to continue (Y/n)?  y

Deleted [https://www.googleapis.com/compute/v1/projects/my-project-id/global/firewalls/monetdb-r-node-50000].

Thursday, May 29, 2014

Porting a Preseed file to Debian 7

I script the installation of Linux on my personal computers, and today it was time to start upgrading one of them from Debian 6 (codename "squeeze") to Debian 7 (codename "wheezy"). For this, Debian has "preseed" available. So the first task was to change the preseed file I used for Debian 6 to work with Debian 7. When you move to a new version of a distribution, it is not unreasonable that some settings in such a configuration system change. But as always, in practice it was more work than expected, and also more than needed. One reason is the lack of documentation.

The preseed file contains settings that are needed for the debconf system of the installer. In order to get an up-to-date version of the available settings, I did a manual installation. After the installation is finished, you can install the debconf-utils package, which contains some scripts to work with the debconf system.
If you run the command "debconf-get-selections --installer", you get a list of all available settings and their values. Unfortunately, the list is more than 600 lines, including comments. And it contains all settings, including the default ones and the ones you will never use. So selecting the ones you actually want to include in your preseed file is tricky.
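
A practical approach is to dump the selections to a file and browse them without the comments (the filename is just an example):

debconf-get-selections --installer > preseed-candidates.txt
debconf-get-selections >> preseed-candidates.txt
grep -v '^#' preseed-candidates.txt | less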

The simple change was replacing:

d-i console-keymaps-at/keymap select us

with:

d-i keyboard-configuration/xkb-keymap select us 

But since I did not find a complete list of available settings and their meanings, it is an educated guess. And then the real problems begin.

During the manual install I encountered one problem: missing firmware for the network card. The Debian repositories do not contain firmware that is "non-free" according to the very strict definition of the Debian project. You get the option of loading it from some media, but if you ignore that and just continue, the network card works just fine. Doing this automatically in an unattended installation, however, turned out to be a bit of a problem.

To solve these kinds of problems, you would really benefit from proper documentation. And although I really like open source software, that is lacking in a lot of situations. In the end you end up (almost) reverse engineering the system in order to determine what happens in what order. The advantage of open source software is that you actually have the source code, so you can see what is going on. But this takes a lot of time, especially if you want to do something non-trivial. And looking for answers online does not help much either: if something is not documented online, it is not indexed by search engines, so it will never end up in the search results. The only thing you can hope for is a hint in the right direction.

And I got one of these hints in this case. Someone mentioned that you could add debconf parameters to the kernel that is loaded during the automatic install. The way the unattended install works is that you use PXE boot to download the installation kernel from a server. You need to set up a TFTP server, and based on the MAC address of the machine you want to install and the TFTP configuration, the machine gets a kernel and some parameters to start it with. One of them is the location of the preseed file that has to be used.
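
With pxelinux, such a machine-specific configuration lives in a file named after the MAC address; a minimal sketch (the MAC address and paths are made up) could look like this:

# /srv/tftp/pxelinux.cfg/01-00-11-22-33-44-55  (one file per MAC address)
DEFAULT install
LABEL install
    KERNEL debian-installer/amd64/linux
    # the "append" line discussed below goes here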

This was the line in the file I used for the squeeze installation:

     append vga=788 initrd=debian-installer/amd64/initrd.gz 
     auto=true 
     url=http://{hostname_of_webserver}/d-i/squeeze/preseed.cfg 
     priority=critical -- quiet   

I added the following setting (hw-detect/load_firmware=false):

     append vga=788 initrd=debian-installer/amd64/initrd.gz 
     auto=true 
     url=http://{hostname_of_webserver}/d-i/wheezy/preseed.cfg 
     hw-detect/load_firmware=false 
     priority=critical -- quiet   

(In the pxeboot config file it should all be on one line; here I added newlines for readability.)

Adding this setting prevents an error that causes the installer to prompt for manual intervention. Since I could not find a document that contains a detailed description of the Debian installer, I had to deduce it from the logging that is generated during the installation. All the output goes to the file /var/log/installer/syslog. If you read this file you still see that it detects that the firmware is missing, but it will continue anyway. I tried adding the setting in the preseed file, but that did not work. This can of course be explained by the fact that you have to set up a network connection before you can download the preseed file. But this is speculation on my part, since I could not find exactly which programs run during the installation, in which order, what they exactly do and how you can configure that using preseed.
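
After an installation, a quick way to find the relevant messages is to search the log for firmware-related lines, for example:

grep -i firmware /var/log/installer/syslog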

The syslog file also gave the information I needed to solve the other problem with the preseed file: language and country selection. This also changed in the new version, and I kept getting the selection dialogs despite having settings in my preseed file. It turns out that there are a lot of "localechooser" settings available, but they are probably only used inside the dialogs. If you look into the syslog file, however, you see that certain other values are set. And if you add these to the preseed file, everything works as expected.
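
For reference, the wheezy example preseed file (linked below) suggests settings along these lines for an unattended language and country selection (the values shown are just an illustration, not the ones from my file):

d-i debian-installer/language string en
d-i debian-installer/country string NL
d-i debian-installer/locale string en_US.UTF-8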

When I was finished I closed most of the tabs in my browser. When searching for the solution to these types of problems, you end up with dozens of sites that are related, but in the end not relevant to the problem. One of them was http://www.debian.org/releases/squeeze/example-preseed.txt This is an example of a preseed file for the old version, the one I used to create my own version with. Feeling lucky, I decided to look at the new version, http://www.debian.org/releases/wheezy/example-preseed.txt And indeed, this had some of the changed settings and even some comments. Somehow this version had never come up in any of the search results.

Then I looked further and found another interesting document, http://www.debian.org/releases/stable/amd64/index.html.en which is the installation manual. This one also did not end up at the top of the search results. But it contains an appendix about preseeding, with some valuable information on how preseed is supposed to work. I should have read these documents first, especially now that I know what the problem was, and it is interesting that they never appeared near the top of the search results. Even so, they would not have been enough to make porting the preseed file a simple task.