
Mount Amazon S3 bucket as a local filesystem in Linux RHEL5

Hi,

The steps to mount an S3 bucket as a local filesystem are given below. This has been tested on an i386 machine running RHEL 5.6 (Tikanga). There are two restrictions that cannot be overridden:

ONE: The maximum file size is 64GB (a limit of s3fs, not Amazon).
TWO: The bucket name shouldn't contain upper case characters.

1) Install the latest FUSE (Filesystem in Userspace) package.
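
If you are rebuilding FUSE from a source RPM, the rebuild looks roughly like this (a sketch only; the exact SRPM name and download location will vary):

rpmbuild --rebuild --define "_topdir /root/rpm" fuse-2.8.5-99.vitki.01.el5.src.rpm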

The rpmbuild command will create all the fuse RPMs inside "/root/rpm/RPMS/i386/". Then install those packages using rpm:

rpm -ivh /root/rpm/RPMS/i386/fuse-2.8.5-99.vitki.01.el5.i386.rpm
rpm -ivh /root/rpm/RPMS/i386/fuse-libs-2.8.5-99.vitki.01.el5.i386.rpm
rpm -ivh /root/rpm/RPMS/i386/fuse-devel-2.8.5-99.vitki.01.el5.i386.rpm
rpm -ivh /root/rpm/RPMS/i386/fuse-debuginfo-2.8.5-99.vitki.01.el5.i386.rpm

2) Install the S3FS package

wget http://s3fs.googlecode.com/files/s3fs-1.35.tar.gz
tar -xzf s3fs-1.35.tar.gz
cd s3fs-1.35
mkdir /usr/local/s3fs
./configure --prefix=/usr/local/s3fs
make && make install
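
If ./configure fails with missing curl, libxml2, or OpenSSL headers, install the development packages first (a suggestion based on s3fs's usual build requirements; the package names below are the stock RHEL 5 ones):

yum install gcc-c++ libcurl-devel libxml2-devel openssl-devel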

3) Create a symbolic link to the "s3fs" binary and a mount point directory

ln -s /usr/local/s3fs/bin/s3fs /usr/local/bin/s3fs
mkdir /mnt/s3drive

4) Activate an account in S3. You will get an access key and a secret key after the activation.

You can create a new S3 account on the AWS website (https://aws.amazon.com/s3/).

5) Install the s3 client for Linux. The package name is "s3cmd-1.0.0-4.1".

$ yum install s3cmd

Alternatively, you can download it from the s3cmd project site (http://s3tools.org/s3cmd).

6) Configure the s3 client using the command:

$ s3cmd --configure

It will ask for the access key and secret key that we got during the account activation. This process reports a failure if we provide wrong key values. Once this step is completed, the configuration is stored in the file "/root/.s3cfg".
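
For reference, the relevant part of the generated "/root/.s3cfg" looks roughly like this (the key values are placeholders):

[default]
access_key = youraccesskey
secret_key = yoursecretkey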

7) We need to create buckets in S3 before mounting them locally

e.g.: creating a bucket named "dailybackup":

$ s3cmd mb s3://dailybackup

For additional options, run "s3cmd --help" or refer to the s3cmd documentation.
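
For example, copying a single file to and from the bucket looks like this (the file names are only illustrative):

$ s3cmd put /etc/hosts s3://dailybackup/hosts
$ s3cmd get s3://dailybackup/hosts /tmp/hosts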

8) List all buckets

$ s3cmd ls
2011-02-20 23:13 s3://backup1
2009-12-15 10:50 s3://backup2
2011-03-22 06:38 s3://dailybackup
$

9) Create the s3fs password file. It has this format (use this format if you have only one set of credentials):

accessKeyId:secretAccessKey

If you have more than one set of credentials, you can keep default credentials as specified above, but this syntax is recognized as well:

bucketName:accessKeyId:secretAccessKey

$ cat > /root/.s3fs.cfg
youraccesskey:yoursecretkey
$ chmod 600 /root/.s3fs.cfg
$
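
If you manage more than one bucket with different keys, the same file can combine a default line with per-bucket lines, for example (all key values here are placeholders):

youraccesskey:yoursecretkey
dailybackup:dailybackupaccesskey:dailybackupsecretkey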

10) Mount the bucket "dailybackup" on the directory "/amazonbackup"

$ mkdir /amazonbackup
$ s3fs -o passwd_file=/root/.s3fs.cfg dailybackup /amazonbackup
$ df -Th /amazonbackup
Filesystem Type Size Used Avail Use% Mounted on
fuse fuse 256T 0 256T 0% /amazonbackup
$
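
To have the bucket remounted automatically after a reboot, one simple option (a sketch; adjust the paths to your setup) is to append the mount command to /etc/rc.local:

$ echo "/usr/local/bin/s3fs -o passwd_file=/root/.s3fs.cfg dailybackup /amazonbackup" >> /etc/rc.local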

I configured this setup and used it for weekly cPanel backup uploads. As the S3 bucket is mounted as a local drive, we can use rsync to move directories or files to Amazon, e.g.:

rsync -av --progress /backup/cpbackup/weekly /amazonbackup/
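
To automate the weekly upload, a root crontab entry along these lines would do (the schedule is just an example):

0 3 * * 0 rsync -a /backup/cpbackup/weekly /amazonbackup/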

Ref:

http://s3fs.googlecode.com/svn/wiki/FuseOverAmazon.wiki

http://code.google.com/p/s3fs/wiki/FuseOverAmazon

NOTE:

1) s3fs has a caching mechanism: you can enable local file caching to minimize downloads, e.g.:

$ s3fs mybucket /mnt -ouse_cache=/tmp
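
2) To unmount the bucket later, use the standard FUSE unmount command (or plain umount as root):

$ fusermount -u /amazonbackup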

Categories: Amazon s3, Cpanel/WHM
  1. Nate
    December 19, 2011 at 9:09 am | #1

    Hi, I followed your instructions and it works fine. But I am only able to access the mounted bucket as root, is that part of the limitations?

    Thanks a lot

    Nate

  2. Amit
    January 17, 2012 at 6:10 pm | #2

    No, you can mount as follows to enable access for all other users:

    s3fs -o passwd_file=/root/.s3fs.cfg -o allow_other dailybackup /amazonbackup

  3. Nate
    February 27, 2012 at 5:39 am | #3

    Nice! That did the trick. Thank you!!!

  4. March 4, 2013 at 10:42 pm | #4

    Any chance you could help me with a problem? I made it through all the steps down to #10 but when mounting I keep getting:

    Need to specify AWS_ACCESS_KEY_ID
    Need to specify AWS_SECRET_ACCESS_KEY
    Traceback (most recent call last):
      File "/usr/local/bin/s3fs", line 1532, in <module>
        main()
      File "/usr/local/bin/s3fs", line 1517, in main
        if fs.setup() == False:
      File "/usr/local/bin/s3fs", line 915, in setup
        self.blockdev = S3Drive(self.bucket, self.host)
      File "/usr/local/bin/s3fs", line 370, in __init__
        self.connection = S3Connection()
      File "/usr/lib/python2.6/site-packages/boto/s3/connection.py", line 104, in __init__
        path=path)
      File "/usr/lib/python2.6/site-packages/boto/connection.py", line 184, in __init__
        self.hmac = hmac.new(self.aws_secret_access_key, digestmod=sha)
    AttributeError: S3Connection instance has no attribute 'aws_secret_access_key'

    I have a .s3fs.cfg password file created as you said and chmodded it. I did not, however, install s3fs and the other dependencies the way you said to; I used "yum install s3fs" and "yum install fuse".

  5. May 17, 2013 at 6:39 am | #5

    maestroc, try to run this before starting s3fs:

    export AWS_ACCESS_KEY_ID=youraccesskey
    export AWS_SECRET_ACCESS_KEY=yoursecretkey

  6. lee
    September 29, 2013 at 11:31 pm | #6

    Is there any chance that it can be mounted to the /home directory so that new accounts can be created and stored on AWS S3?
