Tuesday, June 9, 2015

Time Machine backups to a Linux server

If your organization uses Apple laptops, you want to provide a way to make backups. Time Machine is the obvious choice. But what if you do not want to run Mac OS X on the server, but only Linux? Then you need an AFP service. This is provided by the "netatalk" package, which is similar to "samba". Online you can find plenty of tutorials on how to set it up, and for a few years now it has supported Time Machine out of the box.
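
For example, on a Debian-based server the installation could look like the following (package and service names may differ per distribution):

apt-get install netatalk
systemctl enable netatalk
systemctl start netatalk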

But it has a few things you need to fix before it can be used to back up a larger number of clients. They are caused by the fact that everything related to the Time Machine backup is set up automatically, and for this case the defaults are wrong. Time Machine uses "sparse bundles", a sort of file container, resembling something like zip files. The first problem is that the sparse bundle that is created is set up to fill the entire partition. This means that you can only have one backup per partition. This is not what you want if you have a 4 TB disk in your server; you want to give a few hundred GB to each backup. This can be solved by using LVM: each backup gets its own logical volume, and by configuring netatalk you can give a different share to each individual user, as sketched below.
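
As a sketch, assuming a volume group called "vg0", mount points under /srv/timemachine and netatalk 3's afp.conf (the names and sizes are just examples), the per-user setup could look like this:

# create a 300 GB logical volume for one user and mount it
lvcreate -L 300G -n tm_alice vg0
mkfs.ext4 /dev/vg0/tm_alice
mkdir -p /srv/timemachine/alice
mount /dev/vg0/tm_alice /srv/timemachine/alice
chown alice /srv/timemachine/alice

# in afp.conf, export the volume as a Time Machine share for that user only
[alice Time Machine]
path = /srv/timemachine/alice
time machine = yes
valid users = alice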

The second problem is caused by having the Apple sparse bundle on a Linux filesystem. By default, the sparse bundle saves its data in files ("bands") of 8 MB. With the default settings of ext4, you end up with over 40,000 files in one directory, and that generates strange error messages. There is a command to change the settings of the sparse bundle, but unfortunately it is only available on the Mac. To reduce the number of files, you have to increase the "band size" of the sparse bundle. These are the commands to use:

cd /Volumes/TimeCapsule
hdiutil convert hostname.sparsebundle -format UDSB -tgtimagekey sparse-band-size=2097152 -o new_hostname.sparsebundle
mv hostname.sparsebundle hostname.sparsebundle.old
mv new_hostname.sparsebundle hostname.sparsebundle
cp hostname.sparsebundle.old/com.apple.TimeMachine.MachineID* hostname.sparsebundle/

To be precise, before you can change the settings, you need a finished backup. You can get one quickly by excluding almost all directories from the backup, which also saves time when you convert the sparse bundle (see the sketch after this paragraph). When the first backup is finished, you mount the AFP share on the Mac. Then you open a terminal window and go to the location where the share is mounted, in the "/Volumes" directory. With the "hdiutil" command you convert the sparse bundle. The band size is specified in blocks of 512 bytes, so 2097152 corresponds to a band size of 1 GB. When this is finished, you rename the old bundle to a temporary name; later you can just delete it, but make sure it does not keep the ".sparsebundle" extension. Then you rename the newly created bundle to the original filename. The last step is to copy some configuration files, relating to the Time Machine backup, from the old bundle to the new one.
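
The exclusions can be set on the client with "tmutil" (a sketch; the paths are just examples and tmutil needs root):

sudo tmutil addexclusion -p /Applications /Library /Users
# ... run the first backup and convert the sparse bundle ...
sudo tmutil removeexclusion -p /Applications /Library /Users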

Also make sure you disable the Time Machine service while you convert the sparse bundle. When the conversion is finished, you can enable Time Machine again and run another backup. You can check the timestamps to see whether it is now using the new sparse bundle. If this has succeeded, you can remove the excludes and run a full backup.
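
On the client, the whole sequence could look like this (assuming OS X's tmutil; the band listing is only a sanity check):

sudo tmutil disable     # stop Time Machine before converting
# ... run the hdiutil conversion on the mounted share ...
sudo tmutil enable
sudo tmutil startbackup
# after the conversion, the band files should be 1 GB instead of 8 MB
ls -lh /Volumes/TimeCapsule/hostname.sparsebundle/bands | head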

Saturday, April 4, 2015

MySQL authentication using PAM

In certain cases you want to use the accounts that already exist on a system to log in to a MySQL or MariaDB database. You can do this by enabling PAM in the MySQL server configuration. First you need to create a PAM configuration file that looks like this:

cat /etc/pam.d/mariadb
#%PAM-1.0
# Use password-auth common PAM configuration for the daemon
auth        include     password-auth
account     include     password-auth

Depending on the distribution you use, the file might be a little different, but basically you just include the default PAM settings. Then you change the MySQL configuration file using your favorite editor: emacs /etc/my.cnf

[mysqld]
plugin-load=auth_pam.so

You should check that this plugin is installed on your system (see the check below). Then you (re)start the MySQL service:
systemctl start mysqld.service
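
To verify the plugin is loaded, list the plugins from a MySQL shell; there should be a row named "pam" of type AUTHENTICATION. If it is missing, you can load it manually (plugin and library names as shipped with MariaDB):

SHOW PLUGINS;
INSTALL PLUGIN pam SONAME 'auth_pam.so';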

Next you log in to the database as the root user:
mysql -u root -p

Then you can grant permissions to users that are available through PAM:

GRANT ALL ON *.* TO username@localhost IDENTIFIED VIA pam USING 'mariadb';
FLUSH PRIVILEGES;

After that, the user can log in to MySQL with:
mysql -u username -p

Note that the string "mariadb" after USING in the grant query refers to the PAM configuration filename that you used.

Sunday, March 29, 2015

MonetDB Docker image on Google Cloud Platform

We want to run the monetdb-r-docker image in the Google cloud. There is lots of documentation on the Google website on how to set up your environment, so I will not cover that here in detail.
$ gcloud preview container clusters create monetdb-r-docker \
  --num-nodes 1 \
  --machine-type g1-small

Waiting for cluster creation...done.
Create cluster succeeded!
Using gcloud compute copy-files to fetch ssl certs from cluster master...
Warning: Permanently added '104.155.58.68' (ECDSA) to the list of known hosts.
kubecfg.key                                   100% 1704     1.7KB/s   00:00    
Warning: Permanently added '104.155.58.68' (ECDSA) to the list of known hosts.
kubecfg.crt                                   100% 4423     4.3KB/s   00:00    
Warning: Permanently added '104.155.58.68' (ECDSA) to the list of known hosts.
ca.crt                                        100% 1224     1.2KB/s   00:00    
clusterApiVersion: 0.13.2
containerIpv4Cidr: 10.24.0.0/14
creationTimestamp: '2015-03-29T09:27:12+00:00'
enableCloudLogging: false
endpoint: 104.155.58.68
masterAuth:
  password: ***********
  user: admin
name: monetdb-r-docker
network: default
nodeConfig:
  machineType: g1-small
  serviceAccounts:
  - email: default
    scopes:
    - https://www.googleapis.com/auth/compute
    - https://www.googleapis.com/auth/devstorage.read_only
  sourceImage: https://www.googleapis.com/compute/v1/projects/google-containers/global/images/container-vm-v20150317
nodeRoutingPrefixSize: 24
numNodes: 1
selfLink: https://www.googleapis.com/container/v1beta1/projects/123456789/zones/europe-west1-b/clusters/monetdb-r-docker
servicesIpv4Cidr: 10.27.240.0/20
status: running
zone: europe-west1-b

Next we will create the container configuration file monetdb.json:
{
  "id": "monetdb-r-docker",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "containers": [{
        "name": "monetdb",
        "image": "monetdb/monetdb-r-docker",
        "ports": [{
          "containerPort": 50000,
          "hostPort": 50000
        }]
      }]
    }
  }
}

With this file, we create the container:
gcloud preview container kubectl create -f monetdb.json
Then we can see that the container is being created:
$ gcloud preview container kubectl get pod monetdb-r-docker
POD                 IP                  CONTAINER(S)        IMAGE(S)                   HOST                                                                  LABELS              STATUS              CREATED
monetdb-r-docker    10.24.1.3           monetdb             monetdb/monetdb-r-docker   k8s-monetdb-r-docker-node-1.c.my-project-id.internal/130.211.82.116   <none>              Pending             Less than a second
It will take a few minutes before the container is created.
$ gcloud preview container kubectl get pod monetdb-r-docker
POD                 IP                  CONTAINER(S)        IMAGE(S)                   HOST                                                                  LABELS              STATUS              CREATED
monetdb-r-docker    10.24.1.3           monetdb             monetdb/monetdb-r-docker   k8s-monetdb-r-docker-node-1.c.my-project-id.internal/130.211.82.116   <none>              Running             3 minutes

Then you can log in to the node and use the docker command-line tool to check the running container:
$ gcloud compute ssh k8s-monetdb-r-docker-node-1
Warning: Permanently added '130.211.82.116' (ECDSA) to the list of known hosts.
Linux k8s-monetdb-r-docker-node-1 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt4-3~bpo70+1 (2015-02-12) x86_64

=== GCE Kubernetes node setup complete ===
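
For example (the docker command on the node usually needs root):

$ sudo docker ps     # the monetdb/monetdb-r-docker container should be listed here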

Before we can connect to the database, we need to add a firewall rule:
$ gcloud compute firewall-rules create monetdb-r-node-50000 --allow tcp:50000 --target-tags k8s-monetdb-r-docker-node
Created [https://www.googleapis.com/compute/v1/projects/my-project-id/global/firewalls/monetdb-r-node-50000].
NAME                 NETWORK SRC_RANGES RULES     SRC_TAGS TARGET_TAGS
monetdb-r-node-50000 default 0.0.0.0/0  tcp:50000          k8s-monetdb-r-docker-node
And then we can connect to the database using the mclient tool:
$ mclient -h 130.211.82.116 -u monetdb -ddb
password:
Welcome to mclient, the MonetDB/SQL interactive terminal (unreleased)
Database: MonetDB v11.19.9 (Oct2014-SP2), 'mapi:monetdb://monetdb-r-docker:50000/db'
Type \q to quit, \? for a list of available commands
auto commit mode: on
sql>\q
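
Once connected, a trivial query at the sql> prompt, such as SELECT 1;, confirms that the server is answering.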

And once we are done using the database, we delete the cluster to prevent additional costs:
gcloud preview container clusters delete monetdb-r-docker
Waiting for cluster deletion...done.
name: operation-1427629115685-c7f2c2d7
operationType: deleteCluster
selfLink: https://www.googleapis.com/container/v1beta1/projects/123456789/zones/europe-west1-b/operations/operation-1427629115685-c7f2c2d7
status: done
target: /projects/123456789/zones/europe-west1-b/clusters/monetdb-r-docker
targetLink: https://www.googleapis.com/container/v1beta1/projects/123456789/zones/europe-west1-b/clusters/monetdb-r-docker
zone: europe-west1-b
And the last step is to remove the firewall rule as well:
gcloud compute firewall-rules delete monetdb-r-node-50000
The following firewalls will be deleted:
 - [monetdb-r-node-50000]

Do you want to continue (Y/n)?  y

Deleted [https://www.googleapis.com/compute/v1/projects/my-project-id/global/firewalls/monetdb-r-node-50000].