
Archive for the ‘Storage’ Category

Adjust RAID rebuild rate

May 29, 2015

 
 
Steps to adjust the hardware RAID rebuild rate using 'megacli'. After a disk replacement following a disk failure, we often want to increase the RAID rebuild rate to speed up the process; conversely, if the rebuild is causing performance issues on the host, we may need to reduce it. The commands pasted below control the rate,

 
1) Get current RAID rebuild rate,
 

host100:~# megacli -AdpGetProp RebuildRate -a0
                                     
Adapter 0: Rebuild Rate = 15%

Exit Code: 0x00
host100:~# 

 
2) Set RAID rebuild rate to 25%,
 

host100:~# megacli -AdpSetProp RebuildRate 25 -a0
                                     
Adapter 0: Set rebuild rate to 25% success.

Exit Code: 0x00
host100:~#
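
3) Optionally, monitor the rebuild progress. A quick check (the [32:2] enclosure:slot pair below is a placeholder; substitute the values MegaCli reports for the rebuilding drive):

host100:~# megacli -PDRbld -ShowProg -PhysDrv [32:2] -a0

This should print the percentage completed and the elapsed time of the rebuild on that drive.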

 
 

Categories: RAID, Storage

Create RAID 0 on Dell PERC 5/i from Linux command line using MegaCli

June 16, 2013

 

SCENARIO: The customer has a Dell PERC 5/i RAID controller with RAID 1 already configured on two drives. Their datacenter added two new drives of different sizes (a 1TB and a 2TB) to the RAID controller, but they weren't visible inside the server; the fdisk command didn't list the two new drives.

The PERC controller only exposes drives to the OS once they are configured as RAID volumes. If we want the new drives to be seen by Windows/Linux without making them part of the existing RAID 1, we can create new RAID 0 volumes using only the new drives.

 

 

SOLUTION:

 

1) Download the MegaCli and Lib_Utils RPMs to the server from the rapidshare URLs pasted below (you cannot wget them directly to the server :P),

https://rapidshare.com/files/3230206587/Lib_Utils-1.00-08.noarch.rpm
http://rapidshare.com/files/565005303/MegaCli-8.01.06-1.i386.rpm

 

2) Install the Lib_Utils and MegaCli rpm packages inside the server,

rpm -ivh Lib_Utils-1.00-08.noarch.rpm
rpm -ivh MegaCli-8.01.06-1.i386.rpm
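
To confirm the packages installed correctly and the controller is visible, you can ask MegaCli for the adapter count (the /opt/MegaRAID/MegaCli path is where the RPM installs the binaries):

cd /opt/MegaRAID/MegaCli
./MegaCli64 -adpCount

It should report at least one controller before you proceed.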

 

3) Retrieve the physical drive information using the MegaCli command,

root@jackal777[/opt/MegaRAID/MegaCli]# ./MegaCli64 -PdList -a0| egrep 'Device|Firm|Inq|Coer'
Enclosure Device ID: 8
Device Id: 0
Non Coerced Size: 1.818 TB [0xe8d088b0 Sectors]
Coerced Size: 1.818 TB [0xe8d00000 Sectors]
Firmware state: Online, Spun Up
Inquiry Data:      WD-WMC300310248WDC WD20EFRX-68AX9N0                    80.00A80
Device Speed: Unknown 
Media Type: Hard Disk Device
Enclosure Device ID: 8
Device Id: 1
Non Coerced Size: 1.818 TB [0xe8d088b0 Sectors]
Coerced Size: 1.818 TB [0xe8d00000 Sectors]
Firmware state: Online, Spun Up
Inquiry Data:      WD-WMC300410955WDC WD20EFRX-68AX9N0                    80.00A80
Device Speed: Unknown 
Media Type: Hard Disk Device
Enclosure Device ID: 8
Device Id: 4
Non Coerced Size: 931.012 GB [0x74606db0 Sectors]
Coerced Size: 931.0 GB [0x74600000 Sectors]
Firmware state: Unconfigured(good), Spun Up
Inquiry Data:      WD-WCAV5E944009WDC WD10EARS-00Y5B1                     80.00A80
Device Speed: Unknown 
Media Type: Hard Disk Device
Enclosure Device ID: 8
Device Id: 5
Non Coerced Size: 1.818 TB [0xe8d088b0 Sectors]
Coerced Size: 1.818 TB [0xe8d00000 Sectors]
Firmware state: Unconfigured(good), Spun Up
Inquiry Data:       MJ0251YMG06ZAAHitachi HUA5C3020ALA640                 ME0KR5A0
Device Speed: Unknown 
Media Type: Hard Disk Device
root@jackal777[/opt/MegaRAID/MegaCli]# 

 

4) We are going to use the last two drives to create the RAID 0 array. The firmware state of these two drives is "Unconfigured(good), Spun Up". The first two drives are already configured as RAID 1. Details of the two disks are pasted below,

Disk 1:

Enclosure Device ID: 8
Device Id: 4
Non Coerced Size: 931.012 GB [0x74606db0 Sectors]
Coerced Size: 931.0 GB [0x74600000 Sectors]
Firmware state: Unconfigured(good), Spun Up
Inquiry Data:      WD-WCAV5E944009WDC WD10EARS-00Y5B1                     80.00A80
Device Speed: Unknown 
Media Type: Hard Disk Device

Disk 2:

Enclosure Device ID: 8
Device Id: 5
Non Coerced Size: 1.818 TB [0xe8d088b0 Sectors]
Coerced Size: 1.818 TB [0xe8d00000 Sectors]
Firmware state: Unconfigured(good), Spun Up
Inquiry Data:       MJ0251YMG06ZAAHitachi HUA5C3020ALA640                 ME0KR5A0
Device Speed: Unknown 
Media Type: Hard Disk Device

 

The general format for creating a RAID 0, 1 or 5 array using MegaCli is as follows,

MegaCli -CfgLdAdd -r(0|1|5) [E:S, E:S, ...] -aN

 

Where E refers to the Enclosure Device ID and S refers to the Device Id.

Now create the RAID 0 array using drives [8:4] and [8:5] as follows,

 

root@jackal777[/opt/MegaRAID/MegaCli]# ./MegaCli64 -CfgLdAdd -r0[8:4,8:5] -a0
                                     
Adapter 0: Created VD 1

Adapter 0: Configured the Adapter!!

Exit Code: 0x00
root@jackal777[/opt/MegaRAID/MegaCli]# 
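
To verify, list the logical drives on the adapter; VD 1 (the new RAID 0) should now appear alongside the existing RAID 1 volume, and the OS should see the new disk (e.g., in fdisk -l):

root@jackal777[/opt/MegaRAID/MegaCli]# ./MegaCli64 -LDInfo -Lall -a0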

 

 

 

NOTE:

Use the MegaCli-8.01.06-1.i386.rpm from the URL above, and use the Lib_Utils-1.00-08.noarch.rpm package from 8.00.29_Linux_MegaCli.zip (downloaded from the official website). Don't use the MegaCli-8.00.29-1.i386.rpm from 8.00.29_Linux_MegaCli.zip, because the MegaCli version inside that zip doesn't support logical drive creation; we have to use version 8.01.

 

 

REFERENCES:

http://tools.rapidsoft.de/perc/perc-cheat-sheet.html
http://www.overclock.net/t/359025/perc-5-i-raid-card-tips-and-benchmarks
http://blog.nexcess.net/2010/12/28/managing-hardware-raid-with-megacli/
http://community.spiceworks.com/how_to/show/8781-configuring-virtual-disks-on-a-perc-5-6-h700-controller
http://hwraid.le-vert.net/wiki/LSIMegaRAIDSAS
http://artipc10.vub.ac.be/wordpress/2011/09/12/megacli-useful-commands/
http://preston4tw.blogspot.in/2013/03/megacli-80216-breaks-dell-perc-5i.html
https://code.google.com/p/fastvps/downloads/detail?name=MegaCli-8.01.06-1.i386.rpm&can=2&q=

Categories: Linux Command Line, RAID

Mount Amazon S3 bucket as a local filesystem in Linux RHEL5

May 12, 2011

Hi,

The steps to mount an S3 bucket as a local filesystem are given below. This has been tested on an i386 machine running RHEL 5.6 (Tikanga). There are two restrictions which cannot be overridden,

ONE: Maximum file size=64GB (limited by s3fs, not Amazon).
TWO: Bucket names shouldn't contain uppercase characters.

1) Install the latest FUSE (Filesystem in Userspace) package.

Building the FUSE source RPM with rpmbuild will create all the fuse RPMs inside "/root/rpm/RPMS/i386/". Then install those packages using the rpm command:

rpm -ivh /root/rpm/RPMS/i386/fuse-2.8.5-99.vitki.01.el5.i386.rpm
rpm -ivh /root/rpm/RPMS/i386/fuse-libs-2.8.5-99.vitki.01.el5.i386.rpm
rpm -ivh /root/rpm/RPMS/i386/fuse-devel-2.8.5-99.vitki.01.el5.i386.rpm
rpm -ivh /root/rpm/RPMS/i386/fuse-debuginfo-2.8.5-99.vitki.01.el5.i386.rpm

2) Install S3FS package

wget http://s3fs.googlecode.com/files/s3fs-1.35.tar.gz
tar -xzf s3fs-1.35.tar.gz
cd s3fs-1.35
mkdir /usr/local/s3fs
./configure --prefix=/usr/local/s3fs
make && make install

3) Create a symbolic link to the "s3fs" binary, and create the mount point directory,

ln -s /usr/local/s3fs/bin/s3fs /usr/local/bin/s3fs
mkdir /amazonbackup

4) Activate an account in s3. You will get an access key and secret_key after the activation.

You can create a new s3 account by following the url,

5) Install s3 client for linux. The package name is “s3cmd-1.0.0-4.1”.

$ yum install s3cmd

Alternatively, you can download it from the url pasted below:

6) Configure s3 client using the command,

$ s3cmd --configure

It will ask for the access key and secret key that we got during our account activation. This process reports failure if we provide the wrong key values. Once this step is completed, the configuration will be stored inside the file "/root/.s3cfg".

7) We need to create buckets in s3 for mounting it locally

eg: creating a bucket named “dailybackup”,

$ s3cmd mb s3://dailybackup

For additional options refer the url,

8) List all buckets

$ s3cmd ls
2011-02-20 23:13 s3://backup1
2009-12-15 10:50 s3://backup2
2011-03-22 06:38 s3://dailybackup
$

9) Create the s3fs password file. It has this format (use this if you have only one set of credentials):

accessKeyId:secretAccessKey

If you have more than one set of credentials, you can keep default credentials as specified above, and this per-bucket syntax will be recognized as well:

bucketName:accessKeyId:secretAccessKey

$ cat > /root/.s3fs.cfg
youraccesskey:yoursecretkey
$ chmod 600 /root/.s3fs.cfg
$
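
For example, a password file carrying a default credential plus a bucket-specific one (all keys below are placeholders) would look like:

youraccesskey:yoursecretkey
dailybackup:anotheraccesskey:anothersecretkey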

10) Mount the bucket “dailybackup” on directory “/amazonbackup”

$ s3fs -o passwd_file=/root/.s3fs.cfg dailybackup /amazonbackup
$ df -Th /amazonbackup
Filesystem Type Size Used Avail Use% Mounted on
fuse fuse 256T 0 256T 0% /amazonbackup
$
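
To have the bucket remount automatically at boot, an /etc/fstab entry along these lines should work (a sketch using the fuse-style "s3fs#bucket" notation supported by this s3fs version; adjust options to taste):

s3fs#dailybackup /amazonbackup fuse passwd_file=/root/.s3fs.cfg 0 0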

I configured this setup and used it for weekly cPanel backup uploads. As the S3 bucket is mounted as a local drive, we can use rsync to move directories or files to Amazon. eg:

rsync -av --progress /backup/cpbackup/weekly /amazonbackup/

Ref:
http://s3fs.googlecode.com/svn/wiki/FuseOverAmazon.wiki
http://code.google.com/p/s3fs/wiki/FuseOverAmazon

NOTE:

1) s3fs has a caching mechanism: you can enable local file caching to minimize downloads, e.g.:

$ s3fs mybucket /mnt -ouse_cache=/tmp

Categories: Amazon s3, Cpanel/WHM

Linux Cpanel Backup to Amazon S3

March 22, 2011

In this article I will explain how to take cPanel backups to Amazon S3 (with backup rotation enabled). The step-by-step procedure is explained below,

Step1) Activate an account in s3. You will get an access key and secret_key after the activation.

You can create a new s3 account by following the url,

Step2) Install s3 client for linux. The package name is “s3cmd-1.0.0-4.1”.

root@heuristics:~# apt-get install s3cmd

On Red Hat or CentOS based machines (using rpm packages), you can install "s3cmd" as follows,

cd /etc/yum.repos.d
wget http://s3tools.org/repo/CentOS_5/s3tools.repo
yum install s3cmd

Alternatively, you can download it from the url pasted below:

Step3) Configure s3 client using the command,

root@heuristics:~# s3cmd --configure

It will ask for the access key and secret key that we got during our account activation. This process reports failure if we provide the wrong key values. Once this step is completed, the configuration will be stored inside the file "/root/.s3cfg".

During configuration you will be asked whether to enable encryption or not. Enabling encryption will improve the security of transfer but will make the upload a little bit slower.

Step4) We need to create buckets in s3 for storing the backup.

eg: creating a bucket named “Backup_daily”,

root@heuristics:~# s3cmd mb s3://Backup_daily

For additional options refer the url,

Step5) Enable daily backup from WHM. Refer to the url pasted below for reference,

If backup is already configured, then we can know the location of the backup using the command,

root@heuristics:~# grep BACKUPDIR /etc/cpbackup.conf
BACKUPDIR /backup
root@heuristics:~#

Inside "/backup" there will be another directory named "cpbackup", which holds the daily, weekly and monthly backups. In my case,

root@heuristics:~# ls /backup/cpbackup/
./  ../  daily/  monthly/  weekly/
root@heuristics:~#

Step6) Create log directories,

root@heuristics:~# mkdir /var/log/backuplogs
root@heuristics:~#

Step7) Write a script to automate the backup, and save it as "/root/dailybackup.sh". In the script pasted below, the backup rotation degree is set to 3 ("DEGREE=3", line 16). This means that backups older than 3 days will be deleted automatically; you can increase the retention period by adjusting the "DEGREE" variable on line 16.

#!/bin/bash

##Notification email address
_EMAIL=your_email@domain.com

ERRORLOG=/var/log/backuplogs/backup.err`date +%F`
ACTIVITYLOG=/var/log/backuplogs/activity.log`date +%F`

##Directory which needs to be backed up
SOURCE=/backup/cpbackup/daily

##Name of the backup in bucket
DESTINATION=`date +%F`

##Backup degree
DEGREE=3

#Clear the logs if the script is executed second time
:> ${ERRORLOG}
:> ${ACTIVITYLOG}

##Uploading the daily backup to Amazon s3
/usr/bin/s3cmd -r put ${SOURCE} s3://Backup_daily/${DESTINATION}/ 1>>${ACTIVITYLOG} 2>>${ERRORLOG}
ret2=$?

##Send email alert
msg="BACKUP NOTIFICATION ALERT FROM `hostname`"

if [ $ret2 -eq 0 ]; then
    msg1="Amazon s3 Backup Uploaded Successfully"
else
    msg1="Amazon s3 Backup Failed!!\n Check ${ERRORLOG} for more details"
fi
echo -e "$msg1"|mail -s "$msg" ${_EMAIL}

#######################
##Delete backups older than DEGREE days
## Delete from both server and amazon
#######################
DELETENAME=$(date  --date="${DEGREE} days ago" +%F)

/usr/bin/s3cmd -r --force del s3://Backup_daily/${DELETENAME} 1>>${ACTIVITYLOG} 2>>${ERRORLOG}
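
Before scheduling it, you can run the script once by hand and confirm the upload landed in the bucket (an illustrative check):

root@heuristics:~# sh /root/dailybackup.sh
root@heuristics:~# s3cmd ls s3://Backup_daily/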

Step8) Grant execute privilege for the script and schedule it to run every day. (Note: on Debian-based systems, run-parts skips file names containing dots, so you may need to drop the ".sh" suffix when copying the script into /etc/cron.daily.)

root@heuristics:~# chmod u+x /root/dailybackup.sh
root@heuristics:~# cp -p /root/dailybackup.sh /etc/cron.daily/
root@heuristics:~#

NOTE:

Alternatively, if you wish to start the Amazon S3 backup script right after the cPanel backup process, create a cPanel post-backup hook named "/scripts/postcpbackup" with the following contents,

#!/usr/bin/perl
system("/root/dailybackup.sh");

Make the hook executable (chmod +x /scripts/postcpbackup); it will then start the Amazon S3 backup script right after every cPanel backup completes.

In case of disaster, we can download the backup from the bucket using the same s3cmd tool.

root@heuristics:~# mkdir restore
root@heuristics:~# s3cmd -r get s3://Backup_daily/2011-02-23 restore

Categories: Amazon s3, Backup, Cpanel/WHM