
Mount Amazon S3 bucket as a local filesystem in Linux RHEL5

May 12, 2011

Hi,

The steps to mount an S3 bucket as a local filesystem are given below. This has been tested on an i386 machine running RHEL 5.6 (Tikanga). There are two restrictions which cannot be overridden:

ONE: Maximum file size is 64GB (a limit imposed by s3fs, not Amazon).
TWO: The bucket name must not contain upper-case characters.

1) Install the current latest FUSE (Filesystem in Userspace) package.

The rpmbuild command will create all the FUSE rpms inside “/root/rpm/RPMS/i386/”. Then install those packages using the rpm command:

rpm -ivh /root/rpm/RPMS/i386/fuse-2.8.5-99.vitki.01.el5.i386.rpm
rpm -ivh /root/rpm/RPMS/i386/fuse-libs-2.8.5-99.vitki.01.el5.i386.rpm
rpm -ivh /root/rpm/RPMS/i386/fuse-devel-2.8.5-99.vitki.01.el5.i386.rpm
rpm -ivh /root/rpm/RPMS/i386/fuse-debuginfo-2.8.5-99.vitki.01.el5.i386.rpm

2) Install the S3FS package

wget http://s3fs.googlecode.com/files/s3fs-1.35.tar.gz
tar -xzf s3fs-1.35.tar.gz
cd s3fs-1.35
mkdir /usr/local/s3fs
./configure --prefix=/usr/local/s3fs
make && make install

3) Create a symbolic link to the “s3fs” binary and create a mount point

ln -s /usr/local/s3fs/bin/s3fs /usr/local/bin/s3fs
mkdir /amazonbackup

4) Activate an account in S3. You will receive an access key and a secret key after activation.

You can create a new s3 account by following the url,

5) Install the S3 client for Linux. The package name is “s3cmd-1.0.0-4.1”.

$ yum install s3cmd

Alternatively, you can download it from the url pasted below:

6) Configure the s3 client using the command,

$ s3cmd --configure

It will ask for the access key and secret key that we received during account activation. This step fails if the key values are wrong. Once it completes, the configuration is stored in the file “/root/.s3cfg”.
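For reference, the saved file is a plain INI. A minimal sketch of the layout “/root/.s3cfg” ends up with (the keys are placeholders, and the sketch writes to /tmp so it is harmless to run):

```shell
# Minimal sketch of s3cmd's saved configuration (placeholder credentials;
# /tmp path used for illustration -- the real file is /root/.s3cfg)
cat > /tmp/.s3cfg.demo <<'EOF'
[default]
access_key = YOURACCESSKEY
secret_key = YOURSECRETKEY
EOF
chmod 600 /tmp/.s3cfg.demo   # keep the credentials private, as s3cmd does
grep -c '_key' /tmp/.s3cfg.demo
```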

7) We need to create a bucket in S3 before mounting it locally.

eg: creating a bucket named “dailybackup”,

$ s3cmd mb s3://dailybackup

For additional options refer the url,

8) List all buckets

$ s3cmd ls
2011-02-20 23:13 s3://backup1
2009-12-15 10:50 s3://backup2
2011-03-22 06:38 s3://dailybackup
$

9) Create the s3fs password file. The s3fs password file has this format (use this format if you have only one set of credentials):

`accessKeyId:secretAccessKey`

If you have more than one set of credentials, you can keep default credentials as specified above, and this per-bucket syntax will be recognized as well:

bucketName:accessKeyId:secretAccessKey

$ cat > /root/.s3fs.cfg
youraccesskey:yoursecretkey
$ chmod 600 /root/.s3fs.cfg
$
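Combining both formats, here is a quick sketch of a multi-credential password file (the keys are placeholders, and the sketch writes to /tmp for illustration; the real file is /root/.s3fs.cfg):

```shell
# Sketch: s3fs password file with a default credential plus a per-bucket one
# (keys are placeholders; /tmp path is for illustration only)
PASSWD=/tmp/.s3fs.cfg.demo
printf '%s\n' \
  'defaultAccessKey:defaultSecretKey' \
  'dailybackup:bucketAccessKey:bucketSecretKey' > "$PASSWD"
chmod 600 "$PASSWD"   # s3fs may refuse a password file readable by others
cat "$PASSWD"
```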

10) Mount the bucket “dailybackup” on the directory “/amazonbackup”

$ s3fs -o passwd_file=/root/.s3fs.cfg dailybackup /amazonbackup
$ df -Th /amazonbackup
Filesystem Type Size Used Avail Use% Mounted on
fuse fuse 256T 0 256T 0% /amazonbackup
$
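To make the mount persistent across reboots, the FuseOverAmazon wiki referenced at the end of this article documents an fstab syntax for s3fs. An entry along these lines should work (the options shown are illustrative, mirroring the manual mount above):

```
# /etc/fstab entry (s3fs#bucket syntax per the s3fs FuseOverAmazon wiki)
s3fs#dailybackup /amazonbackup fuse passwd_file=/root/.s3fs.cfg 0 0
```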

I configured this setup and used it for weekly cPanel backup uploads. Since the S3 bucket is mounted as a local drive, we can use rsync to move directories or files to Amazon. eg:

rsync -av --progress /backup/cpbackup/weekly /amazonbackup/

Ref:
http://s3fs.googlecode.com/svn/wiki/FuseOverAmazon.wiki
http://code.google.com/p/s3fs/wiki/FuseOverAmazon

NOTE:

1) s3fs has a caching mechanism: you can enable local file caching to minimize repeated downloads, e.g.:

$ s3fs mybucket /mnt -ouse_cache=/tmp

Categories: Amazon s3, Cpanel/WHM

Linux Cpanel Backup to Amazon S3

March 22, 2011

In this article I will explain how to take a cPanel backup to Amazon S3 (with backup rotation enabled). The step-by-step procedure is explained below.

Step1) Activate an account in S3. You will receive an access key and a secret key after activation.

You can create a new s3 account by following the url,

Step2) Install the S3 client for Linux. The package name is “s3cmd-1.0.0-4.1”.

root@heuristics:~# apt-get install s3cmd

On Red Hat or CentOS based machines (using rpm packages), you can install “s3cmd” as follows,

cd /etc/yum.repos.d
wget http://s3tools.org/repo/CentOS_5/s3tools.repo
yum install s3cmd

Alternatively, you can download it from the url pasted below:

Step3) Configure the s3 client using the command,

root@heuristics:~# s3cmd --configure

It will ask for the access key and secret key that we received during account activation. This step fails if the key values are wrong. Once it completes, the configuration is stored in the file “/root/.s3cfg”.

During configuration you will be asked whether to enable encryption. Enabling encryption improves the security of the transfer but makes uploads slightly slower.

Step4) We need to create buckets in s3 for storing the backup.

eg: creating a bucket named “Backup_daily”,

root@heuristics:~# s3cmd mb s3://Backup_daily

For additional options refer the url,

Step5) Enable daily backup from WHM. Refer the url pasted below for reference,

If backup is already configured, we can find the backup location using the command,

root@heuristics:~# grep BACKUPDIR /etc/cpbackup.conf
BACKUPDIR /backup
root@heuristics:~#

Inside “/backup” there will be another directory named “cpbackup”, which holds the daily, weekly and monthly backups. In my case,

root@heuristics:~# ls /backup/cpbackup/
./  ../  daily/  monthly/  weekly/
root@heuristics:~#

Step6) Create a directory to hold the logs,

root@heuristics:~# mkdir /var/log/backuplogs
root@heuristics:~#

Step7) Write a script to automate the backup, and save it as “/root/dailybackup.sh”. In the script pasted below, the backup rotation degree is set to 3 (“DEGREE=3”, line 16). This means backups older than 3 days are deleted automatically. You can increase this retention period by adjusting the “DEGREE” variable on line 16.

#!/bin/bash

##Notification email address
_EMAIL=your_email@domain.com

ERRORLOG=/var/log/backuplogs/backup.err`date +%F`
ACTIVITYLOG=/var/log/backuplogs/activity.log`date +%F`

##Directory which needs to be backed up
SOURCE=/backup/cpbackup/daily

##Name of the backup in bucket
DESTINATION=`date +%F`

##Backup degree
DEGREE=3

#Truncate the logs if the script is run more than once on the same day
:> ${ERRORLOG}
:> ${ACTIVITYLOG}

##Uploading the daily backup to Amazon s3
/usr/bin/s3cmd -r put ${SOURCE} s3://Backup_daily/${DESTINATION}/ 1>>${ACTIVITYLOG} 2>>${ERRORLOG}
ret2=$?

##Send an email alert
msg="BACKUP NOTIFICATION ALERT FROM `hostname`"

if [ $ret2 -eq 0 ]; then
    msg1="Amazon s3 Backup Uploaded Successfully"
else
    msg1="Amazon s3 Backup Failed!!\n Check ${ERRORLOG} for more details"
fi
echo -e "$msg1"|mail -s "$msg" ${_EMAIL}

#######################
## Delete the backup that is DEGREE days old
## from the Amazon bucket
#######################
DELETENAME=$(date --date="${DEGREE} days ago" +%F)

/usr/bin/s3cmd -r --force del s3://Backup_daily/${DELETENAME} 1>>${ACTIVITYLOG} 2>>${ERRORLOG}
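The rotation logic hinges on each upload being named by `date +%F`: the prefix to delete is simply the date DEGREE days in the past. The arithmetic in isolation (DEGREE=3 as in the script):

```shell
# Compute the bucket prefix that has aged out (GNU date's --date arithmetic,
# same +%F format used for DESTINATION in the script above)
DEGREE=3
DELETENAME=$(date --date="${DEGREE} days ago" +%F)
echo "$DELETENAME"
```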

Step8) Grant execute privilege for the script and schedule it to run everyday,

root@heuristics:~# chmod u+x /root/dailybackup.sh
root@heuristics:~# cp -p /root/dailybackup.sh /etc/cron.daily/
root@heuristics:~#

NOTE:

Or, if you wish to start the Amazon S3 backup script right after the cPanel backup process, create a cPanel post-backup hook named “/scripts/postcpbackup” with the following contents (and make it executable with chmod +x),

#!/usr/bin/perl
system("/root/dailybackup.sh");

The post backup hook will start the amazon s3 backup script right after every cpanel backup completion.

In case of disaster, we can download the backup from the bucket using the same s3cmd tool.

root@heuristics:~# mkdir restore
root@heuristics:~# s3cmd -r get s3://Backup_daily/2011-03-22 restore
Categories: Amazon s3, Backup, Cpanel/WHM