Create RAID 0 on Dell PERC 5/i from Linux command line using MegaCli

June 16, 2013


SCENARIO: The customer has a Dell PERC 5/i RAID controller with RAID 1 already configured on two drives. His DC added two new drives of different sizes (one 1TB and one 2TB) to the RAID controller, but they weren’t visible inside the server. The fdisk command didn’t list the two new drives.

The PERC controller only exposes drives that are configured as RAID volumes to the OS. If we want the new drives to be seen by Windows/Linux without making them part of the existing RAID 1, we can create a new RAID 0 volume with only the new drives.





1) Download the MegaCli and Lib_Utils RPMs to the server from the rapidshare urls pasted below (you cannot wget them to the server :P),


2) Install the Lib_Utils and MegaCli rpm packages inside the server,

rpm -ivh Lib_Utils-1.00-08.noarch.rpm
rpm -ivh MegaCli-8.01.06-1.i386.rpm


3) Retrieve the physical drive information using MegaCli command,

root@jackal777[/opt/MegaRAID/MegaCli]# ./MegaCli64 -PdList -a0| egrep 'Device|Firm|Inq|Coer'
Enclosure Device ID: 8
Device Id: 0
Non Coerced Size: 1.818 TB [0xe8d088b0 Sectors]
Coerced Size: 1.818 TB [0xe8d00000 Sectors]
Firmware state: Online, Spun Up
Inquiry Data:      WD-WMC300310248WDC WD20EFRX-68AX9N0                    80.00A80
Device Speed: Unknown 
Media Type: Hard Disk Device
Enclosure Device ID: 8
Device Id: 1
Non Coerced Size: 1.818 TB [0xe8d088b0 Sectors]
Coerced Size: 1.818 TB [0xe8d00000 Sectors]
Firmware state: Online, Spun Up
Inquiry Data:      WD-WMC300410955WDC WD20EFRX-68AX9N0                    80.00A80
Device Speed: Unknown 
Media Type: Hard Disk Device
Enclosure Device ID: 8
Device Id: 4
Non Coerced Size: 931.012 GB [0x74606db0 Sectors]
Coerced Size: 931.0 GB [0x74600000 Sectors]
Firmware state: Unconfigured(good), Spun Up
Inquiry Data:      WD-WCAV5E944009WDC WD10EARS-00Y5B1                     80.00A80
Device Speed: Unknown 
Media Type: Hard Disk Device
Enclosure Device ID: 8
Device Id: 5
Non Coerced Size: 1.818 TB [0xe8d088b0 Sectors]
Coerced Size: 1.818 TB [0xe8d00000 Sectors]
Firmware state: Unconfigured(good), Spun Up
Inquiry Data:       MJ0251YMG06ZAAHitachi HUA5C3020ALA640                 ME0KR5A0
Device Speed: Unknown 
Media Type: Hard Disk Device


4) We are going to use the last two drives to create the RAID 0 array. The firmware state of these two drives is “Unconfigured(good), Spun Up“. The first two drives are already configured as RAID 1. Details of the two disks are pasted below,

Disk 1:

Enclosure Device ID: 8
Device Id: 4
Non Coerced Size: 931.012 GB [0x74606db0 Sectors]
Coerced Size: 931.0 GB [0x74600000 Sectors]
Firmware state: Unconfigured(good), Spun Up
Inquiry Data:      WD-WCAV5E944009WDC WD10EARS-00Y5B1                     80.00A80
Device Speed: Unknown 
Media Type: Hard Disk Device

Disk 2:

Enclosure Device ID: 8
Device Id: 5
Non Coerced Size: 1.818 TB [0xe8d088b0 Sectors]
Coerced Size: 1.818 TB [0xe8d00000 Sectors]
Firmware state: Unconfigured(good), Spun Up
Inquiry Data:       MJ0251YMG06ZAAHitachi HUA5C3020ALA640                 ME0KR5A0
Device Speed: Unknown 
Media Type: Hard Disk Device
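Picking these drives out of a long -PdList listing by hand gets tedious. A small awk filter can pull out the E:S pairs of every drive still in the Unconfigured(good) state. This is only a sketch: the sample text below mimics the output above, and on a live system you would pipe `./MegaCli64 -PdList -a0` into the awk script instead.

```shell
# List "Enclosure:Slot" pairs of all Unconfigured(good) drives.
# Sample text standing in for real `./MegaCli64 -PdList -a0` output:
pdlist='Enclosure Device ID: 8
Device Id: 0
Firmware state: Online, Spun Up
Enclosure Device ID: 8
Device Id: 4
Firmware state: Unconfigured(good), Spun Up
Enclosure Device ID: 8
Device Id: 5
Firmware state: Unconfigured(good), Spun Up'

unconfigured=$(echo "$pdlist" | awk -F': ' '
  /^Enclosure Device ID/                  { enc = $2 }
  /^Device Id/                            { dev = $2 }
  /^Firmware state: Unconfigured\(good\)/ { print enc ":" dev }')
echo "$unconfigured"
```

For the drives above this prints 8:4 and 8:5, which are exactly the [E:S] pairs used in the -CfgLdAdd command later.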


The general format for creating a RAID 0, 1 or 5 array using MegaCli is as follows,

MegaCli -CfgLdAdd -r(0|1|5) [E:S, E:S, ...] -aN


Where E refers to the Enclosure Device ID, S refers to the Device Id (slot) of each drive, and N is the adapter number.

Now create the RAID 0 array using drives [8:4] and [8:5] as follows,


root@jackal777[/opt/MegaRAID/MegaCli]# ./MegaCli64 -CfgLdAdd -r0[8:4,8:5] -a0
Adapter 0: Created VD 1

Adapter 0: Configured the Adapter!!

Exit Code: 0x00





Use the MegaCli-8.01.06-1.i386.rpm package, and use the Lib_Utils-1.00-08.noarch.rpm package (downloaded from the official website). Don’t use MegaCli-8.00.29-1.i386.rpm, because the MegaCli version contained inside that zip file doesn’t support logical drive creation; we have to use version 8.01.




Categories: Linux Command Line, RAID

One-liners for troubleshooting Virtuozzo load issues

June 9, 2013


I wish to introduce various one-liners that can be used to troubleshoot load or performance issues on a Virtuozzo node.

To begin with, we will discuss the various situations that can degrade the performance of a Virtuozzo node. As is obvious, heavy usage of any of CPU, memory, disk or network will degrade the performance of a node. The same applies to Virtuozzo nodes too, but with an additional complexity: the load could also be the result of processes running inside a container. In that case we need to identify the problem container and deal with the processes inside it.

I have found these one-liners quite useful for finding out the problem container while troubleshooting Virtuozzo alerts.


1) Troubleshooting load issues caused by high CPU activity


=> Display the list of containers along with their load averages, and sort containers based on cpu usage,

/usr/sbin/vzlist -o ctid,laverage


/usr/sbin/vzstat -t -s cpu|awk 'NF==10{print $0}'

=> Sometimes the above one-liners will not show the actual cpu usage inside the containers (possibly due to a delay in updating the stats), but the load on the node will still be high. In this situation, running the command pasted below will help find the cpu-intensive containers.

for i in `/usr/sbin/vzlist -H -o ctid`; do echo "CTID: ${i} `/usr/sbin/vzctl exec ${i} cat /proc/loadavg`"; done
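To go one step further and rank that output, pipe it through sort. The snippet below is a sketch using canned sample lines in place of the vzctl loop above (vzlist/vzctl only exist on a Virtuozzo node); the third field is the 1-minute load average.

```shell
# Sample lines in the same "CTID: <id> <loadavg fields>" format the loop prints:
sample='CTID: 101 0.12 0.08 0.01 1/80 1234
CTID: 102 5.40 4.90 4.10 3/95 2345
CTID: 103 1.02 0.88 0.70 2/60 3456'

# Sort by the 1-minute load (field 3), highest first, and pick the worst CT:
busiest=$(echo "$sample" | sort -rn -k3 | head -1 | awk '{print $2}')
echo "$busiest"
```

On a live node you would replace `echo "$sample"` with the for-loop above and read the top few lines instead of just the first.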

=> List all containers whose status is not “OK”. This is quite helpful while troubleshooting load issues when the load average on the node is super-high (above 1000).

/usr/sbin/vzstat -t|awk '{if(NF==10 && $2!="OK" && $1!="CTID")print $0}'

=> Lists the top 10 containers based on number of processes running inside the container.

/usr/sbin/vzlist -H -o ctid,numproc|sort -r -n -k2|head


2) Troubleshooting load issues caused by n/w activity


=> Sorts containers based on socket usage

/usr/sbin/vzstat -t -s sock|awk 'NF==10{print $0}'

=> Sorts containers based on TCP sender buffer usage,

/usr/sbin/vzlist -H -o ctid,tcpsndbuf |sort -r -n -k2

=> Sorts containers based on TCP receive buffer usage,

/usr/sbin/vzlist -H -o ctid,tcprcvbuf |sort -r -n -k2

=> Sorts containers based on the highest inbound traffic(quite useful while troubleshooting n/w related attacks),

/usr/sbin/vznetstat -r |awk '$3 ~ /G/ {print $0}'|sort -r -nk3

=> Sorts containers based on the highest outbound traffic (quite useful while troubleshooting n/w related attacks),

/usr/sbin/vznetstat -r |awk '$5 ~ /G/ {print $0}'|sort -r -nk5
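The exact column layout of `vznetstat -r` can vary between Virtuozzo versions, so treat the field numbers above as something to verify on your node. As a sketch, here is the same idea run against canned sample rows, where field 3 is inbound and field 5 is outbound traffic:

```shell
# Sample rows standing in for `vznetstat -r` output (CT id in field 2);
# the column layout here is illustrative:
netstat_sample='CT 101 500M 12345 1.2G 2345
CT 102 3.4G 99999 600M 8888
CT 103 1.1G 22222 2.2G 7777'

# Keep only CTs with gigabytes of inbound traffic, worst first:
top_in=$(echo "$netstat_sample" | awk '$3 ~ /G/ {print $0}' | sort -r -nk3 | head -1 | awk '{print $2}')
echo "$top_in"
```

The same pipeline with fields 5 instead of 3 ranks the outbound side, matching the second one-liner above.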


3) Troubleshooting performance issues caused by memory utilization


The ‘dmesg‘ output lists containers that hit resource shortages. There is a possibility that such a container is the abusive one, so the processes inside that container need to be checked.

[root@virtuozzo ~]# dmesg|egrep -v '(SMTP-LOG|INPUT-DROP|LIMIT-PPS-DROP|FORWARD-DROP)'
[1101732.300833] __ratelimit: 44 callbacks suppressed
[1101732.310531] Fatal resource shortage: kmemsize, UB 12215.
[1101742.294179] Fatal resource shortage: kmemsize, UB 12215.
[1101752.277368] Fatal resource shortage: kmemsize, UB 12215.
[1101752.393226] Fatal resource shortage: kmemsize, UB 12215.
[1105092.458621] __ratelimit: 101 callbacks suppressed
[1105092.468411] Fatal resource shortage: kmemsize, UB 12215.
[root@virtuozzo ~]#
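When several containers are hitting shortages at once, counting the “UB <id>” occurrences shows which container trips most often. A sketch against sample messages in the same format as above (the privvmpages line is an invented example):

```shell
# Sample dmesg lines in the same "Fatal resource shortage" format:
dmesg_sample='[1101732.310531] Fatal resource shortage: kmemsize, UB 12215.
[1101742.294179] Fatal resource shortage: kmemsize, UB 12215.
[1105092.468411] Fatal resource shortage: privvmpages, UB 300.'

# Count shortage messages per container (UB id) and pick the noisiest one:
worst_ub=$(echo "$dmesg_sample" | grep -o 'UB [0-9]*' | sort | uniq -c | sort -rn | head -1 | awk '{print $3}')
echo "$worst_ub"
```

On a live node, feed `dmesg` itself into the pipeline instead of the sample string.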


4) Troubleshooting load issues caused by high disk I/O activity.


You can install ‘atop’ and spot problem processes at the top of the list when sorting by disk usage (‘D’). To get more information on using ‘atop’, refer to the url.

The above one-liners will help you identify either the problem CTID or the process (PID) responsible for the performance issue. In the second case, after finding the process id, you can use the ‘vzpid’ command to spot the container inside which the process is running, and then either renice or stop that process. In the first case, you can view the processes running inside the container using either the ‘vzps’ or ‘vztop’ command, the usage of which is given below,

vztop -b -c -n 1 -E <CTID>


vzps auxfww -E <CTID>

So, that is it guys. I sincerely hope you get to take away something helpful from all this.

Happy Hunting 😀

Comprehensive Analysis of /proc/user_beancounters : Virtuozzo

June 8, 2013

While troubleshooting issues related to a Virtuozzo VPS, we usually come across the ‘user_beancounters’ file in the /proc directory. This file is of importance only if we use the UBC or combined SLM+UBC memory mode for our Virtuozzo VPS. It contains the resource control information for running virtual environments. So basically, ‘/proc/user_beancounters’ represents your VPS’s allocation of various system resources (mainly memory). It is thus a main indicator of how well the VPS works, how stable it is, and whether there is a resource shortage. So, if you face any trouble while running or installing applications on your VPS, one good way to find the source of the problem is to take a look at this file.

Let’s dig deeper into the details of this file.

In the Parallels Virtuozzo Containers virtualization technology, resource control settings for each container are stored in the configuration file “/etc/vz/conf/XXX.conf” (where XXX is the ID of the given CT). These settings are loaded and applied to the container during startup or on certain events, such as execution of “vzctl set CTID”. For running containers the resource control settings are exposed via “/proc/user_beancounters”. One such file exists on the node and one inside each VPS; the file on the hardware node contains the resource control settings of all running VPSs. A pictorial representation of the file “/proc/user_beancounters” inside a VPS is shown below:


A brief description of the various columns is given below,

UID: Indicates the ID of the container. In Virtuozzo, each container is given a unique ID for ease of management.

RESOURCE: This field indicates the primary, secondary and auxiliary parameters in Virtuozzo. To get more details on these resources, refer to the url.

HELD: Indicates the current usage of the various resources.

MAXHELD: Indicates the maximum usage of the resource since VPS startup.

BARRIER & LIMIT: Give us the soft limit and hard limit values of the Virtuozzo resource controls. Resource requests above the limit get denied.

FAILCNT: Shows the number of refused (denied) resource allocations since VPS startup. A non-zero value in this column indicates a resource shortage; we need to either increase that particular resource or find the process responsible and optimize it. Otherwise it can cause weird issues with services running inside the container, e.g. unexpected service downtime, intermittent website issues, etc.

The following awk script can be used to list all containers with non-zero values in the “failcnt” column, along with the resource names and the corresponding failcnt values. Save the script as “/root/failcnt.awk” or any name that you like.


#!/usr/bin/awk -f
# /root/failcnt.awk - print every container with a non-zero failcnt,
# together with the affected resource names and their failcnt values.

# First line of a container block: "uid: resource held maxheld barrier limit failcnt"
NF==7 && index($1,":") > 0 {
    flush();
    split($1, arr1, ":");
    ctid = arr1[1];
    if ($NF != 0) { i++; vector[i] = $2 " " $NF; }
}

# Subsequent resource lines: "resource held maxheld barrier limit failcnt"
NF==6 && $NF != 0 {
    i++;
    vector[i] = $1 " " $NF;
}

END { flush(); printf "\n"; }

# Print the collected entries for the current container, if any
function flush(    j) {
    if (i > 0) {
        printf "\nCTID=%s", ctid;
        for (j=1; j<=i; j++) { printf " %s ", vector[j]; delete vector[j]; }
        i = 0;
    }
}

Now run the script from node as follows,

[root@adminahead ~]# awk -f /root/failcnt.awk /proc/user_beancounters

CTID=10592 lockedpages 13
CTID=13917 kmemsize 357 shmpages 4 physpages 5 oomguarpages 1 tcprcvbuf 755
CTID=13904 kmemsize 528 numothersock 1
CTID=13905 kmemsize 73 numothersock 1
CTID=13897 kmemsize 1 shmpages 4 tcprcvbuf 4751
CTID=10000000 numothersock 1986
CTID=10594 kmemsize 27 physpages 7 oomguarpages 1 tcpsndbuf 295136
CTID=12435 shmpages 4
CTID=12437 kmemsize 2 shmpages 2 tcprcvbuf 690
CTID=12441 shmpages 3
CTID=12438 shmpages 1 physpages 712 oomguarpages 73 tcpsndbuf 63
CTID=10651 physpages 15 oomguarpages 8
CTID=10611 physpages 24 oomguarpages 11
CTID=10623 numothersock 14
CTID=10570 physpages 6 oomguarpages 3
CTID=10578 physpages 517 oomguarpages 33
CTID=10603 physpages 49 oomguarpages 40
CTID=10633 physpages 87 oomguarpages 24
CTID=10610 numproc 71 physpages 2250 oomguarpages 472
[root@adminahead ~]#

As you can see from the above output, container “13917” shows the highest number of failcnt resources. For this VPS, “kmemsize”, “shmpages”, “physpages”, “oomguarpages” and “tcprcvbuf” show non-zero failcnt values, and among them the first four resources are related to RAM. Upgrading the RAM for that VPS is a good suggestion, but that should be considered only after finding the resource-intensive process inside the container and optimizing it.

You can use the following commands to list out the memory intensive processes inside the container.

* Lists top 3 memory intensive processes,

ps auxf | sort -nr -k 4 | head -3


wget -O /root/
python /root/ |tail -3

The “/proc/user_beancounters” file on the node can be monitored continuously to find VPSs that are short of resources, and the corresponding VPS owner can be contacted for a resource upgrade or optimization.
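One simple way to monitor it is to snapshot the file periodically and compare total failcnt values between runs; any growth means some container is still being denied allocations. A sketch with two canned snapshots (the awk sums the failcnt column for both 7-field header lines and 6-field resource lines, and the sample values are illustrative):

```shell
# Two snapshots of a (trimmed) user_beancounters taken some time apart:
snap1='12215: kmemsize 100 200 300 400 5
numothersock 1 2 3 4 0'
snap2='12215: kmemsize 120 220 300 400 9
numothersock 1 2 3 4 0'

# Sum the last field (failcnt) of every beancounter line:
total_failcnt() { echo "$1" | awk 'NF==7{s+=$NF} NF==6{s+=$NF} END{print s+0}'; }

delta=$(( $(total_failcnt "$snap2") - $(total_failcnt "$snap1") ))
echo "failcnt grew by $delta since the last snapshot"
```

In practice you would save `cat /proc/user_beancounters` to a file from cron and diff the totals of the last two files.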

Sync svn repo commits to website documentroot in Cpanel Server

June 8, 2013


SITUATION: Customer has a cPanel server with one domain hosted on a shared IP and wants to set up an svn repository for this domain in such a way that whenever an svn commit takes place, the contents of the repository are exported to the documentroot. Thus all updates to files inside the documentroot can be done over svn instead of via ftp. Also, use the ‘svnserve’ daemon for the setup and don’t use ‘mod_dav’.


ASSUMPTIONS:
1) Cpanel Server is used.
2) Domain is setup on a shared ip
3) Suphp is the php handler used
4) ‘username’ is the username of the website



1) Install subversion in cpanel server

yum install subversion.x86_64 -y

2) Create a directory named ‘repos’ inside the default documentroot of apache (ie, /usr/local/apache/htdocs/) and start the ‘svnserve’ daemon with that directory as its root. Also make sure that port 3690 is open in the firewall and that you start the service as the root user.

mkdir /usr/local/apache/htdocs/repos
svnserve -d -r /usr/local/apache/htdocs/repos

3) Create a repository named ‘username’ and import the domain’s documentroot (/home/username/public_html) into the repository,

cd /usr/local/apache/htdocs/repos
svnadmin create username
cd ~
svn import /home/username/public_html file:///usr/local/apache/htdocs/repos/username -m "username"

4) Now open up the svn repository configuration file “/usr/local/apache/htdocs/repos/username/conf/svnserve.conf” and disable anonymous access and specify the user authentication and authorization files,

anon-access = none
auth-access = write

password-db = /usr/local/apache/htdocs/repos/username/conf/passwd
authz-db = /usr/local/apache/htdocs/repos/username/conf/authz
realm = Project
logfile = /tmp/svn.log

5) Create a new user in user database file “/usr/local/apache/htdocs/repos/username/conf/passwd”

jackal777 = pnity29#@I

6) Set authorization for user created in password file via “/usr/local/apache/htdocs/repos/username/conf/authz”

jackal777 = rw

7) Now finally, create the post-commit hook inside the repository directory, “/usr/local/apache/htdocs/repos/username/hooks/post-commit”, and set execute permission on that file. Paste the following contents into it,

#!/bin/sh
svn export --force file:///usr/local/apache/htdocs/repos/username/ /home/username/public_html/
chown -R username:username /home/username/public_html/

The post-commit script exports the contents of the repository to the website documentroot and assigns the proper ownership to the directory.




Now checkout the repository to your local directory,

svn co svn:// --username=jackal777

Make modifications with the files and then commit to the repository,

cd username
svn commit

Now log in to the server and check whether the commits made to the repository show up inside “/home/username/public_html”.

That’s it 🙂

Apache proxy redirect

June 7, 2013

SITUATION: Customer has a single website with four different web applications installed under four sub-directories of the website. We need to configure apache to serve these four applications from four different ports.


ASSUMPTIONS:
1) OS – Ubuntu 11

2) Website name and documentroot,


DocumentRoot:  /home/jackal/public_html

3) Web application sub-directories and the ports going to be used,

/home/jackal/public_html/app1 : Port 7001
/home/jackal/public_html/app2 : Port 7002
/home/jackal/public_html/app3 : Port 7003
/home/jackal/public_html/app4 : Port 7004

4) Apache mod_proxy module is installed. You can install it using,

apt-get install libapache2-mod-proxy-html -y


1) Open up /etc/apache2/ports.conf and add the following directives so apache listens on the application ports as well,

Listen 80
Listen 7001
Listen 7002
Listen 7003
Listen 7004

2) Enable mod_proxy by copying the configurations from the ‘mods-available’ directory to ‘mods-enabled’,

cp -pr /etc/apache2/mods-available/*proxy* /etc/apache2/mods-enabled/

3) Create a virtualhost file under “/etc/apache2/sites-enabled/” for the website with the following contents (the ProxyPass targets point at the per-application vhosts on the local machine),

<VirtualHost *:80>
DocumentRoot /home/jackal/public_html

ProxyPass /app1/ http://127.0.0.1:7001/
ProxyPass /app2/ http://127.0.0.1:7002/
ProxyPass /app3/ http://127.0.0.1:7003/
ProxyPass /app4/ http://127.0.0.1:7004/
</VirtualHost>

<VirtualHost *:7001>
DocumentRoot /home/jackal/public_html/app1
</VirtualHost>

<VirtualHost *:7002>
DocumentRoot /home/jackal/public_html/app2
</VirtualHost>

<VirtualHost *:7003>
DocumentRoot /home/jackal/public_html/app3
</VirtualHost>

<VirtualHost *:7004>
DocumentRoot /home/jackal/public_html/app4
</VirtualHost>

4) Test configuration and gracefully restart apache.

apache2ctl -t
apache2ctl -k graceful

5) Now access the URLs,


SCOPE: Using mod_proxy, we could also forward these requests to different backend servers entirely, so that the applications run on several different machines.
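For instance, moving app1 to another machine would only change the ProxyPass target. The backend hostname below is illustrative, and ProxyPassReverse is added so redirects issued by the backend are rewritten too:

```apache
ProxyPass        /app1/ http://backend1.example.com:7001/
ProxyPassReverse /app1/ http://backend1.example.com:7001/
```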


Hope this info will be somewhat useful 🙂

Script to Monitor file creation under all cpanel users documentroot

May 11, 2013


SITUATION: Customer wants to get a list of all newly created files under every cpanel user’s documentroot (/home/*/public_html).


ASSUMPTIONS: The ‘inotifywait’ command is installed. This command comes with the inotify-tools package.


SOLUTION: The following script spawns multiple ‘inotifywait’ processes into the background, each recursively monitoring and recording file-creation events in one cpanel user’s documentroot. The names of newly created files under each user’s home directory are saved under “/root/monitor/”, in one file per user.

Save the script as “/etc/init.d/inotifywaitd” and grant execute permission to this script.





#!/bin/bash
# /etc/init.d/inotifywaitd - record file creation under cpanel documentroots
# (the inotifywait path below is an assumption; adjust to `which inotifywait`)

INOTIFY_CMD=/usr/bin/inotifywait
DESTDIR=/root/monitor

if [ $# != 1 ];then
   echo "Usage: /etc/init.d/inotifywaitd {start|stop}"
   exit 1
fi

if [ ! -d ${DESTDIR} ];then
   mkdir ${DESTDIR}
fi

case $1 in

   start)
      for i in `ls -d /home/*/public_html`
      do
         user=$(echo "${i}"|cut -d\/ -f3)
         ${INOTIFY_CMD} -m -r -e create --format '%f' ${i} > ${DESTDIR}/${user} &
      done
      ;;

   stop) pkill inotifywait ;;

   *) echo "Usage: /etc/init.d/inotifywaitd {start|stop}" ;;

esac


ERRORS: Sometimes you may get the following error while running this script,

Please increase the amount of inotify watches allowed per user via `/proc/sys/fs/inotify/max_user_watches'.

To resolve this issue, increase the inotify maximum-user-watches sysctl as follows,

1) Get the current value of max_user_watches,

# sysctl -e fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288

2) Open up /etc/sysctl.conf and set value of “fs.inotify.max_user_watches” higher than 524288.

fs.inotify.max_user_watches = 924288

3) Reload sysctl configuration,

# sysctl -p /etc/sysctl.conf
Categories: Cpanel/WHM, Scripts

Renaming Virtuozzo Container CTID

March 18, 2013



SITUATION: I want to rename a virtuozzo container’s CTID from 14383000 to 14383.


SOLUTION: Use the ‘vzmlocal‘ command to rename the container’s CTID.




[root@node ~]# /usr/sbin/vzlist -a |grep 14383
14383000 68 running
[root@node ~]# 
[root@node ~]# /usr/sbin/vzmlocal 14383000:14383
vzctl_conf_get_param(VZ_TOOLS_BCID) return 5
vzctl_conf_get_param(VZ_TOOLS_IOLIMIT) return 1048576
Moving/copying CT#14383000 -> CT#14383, [], [] ...
Moving private area '/vz/private/14383000'->'/vz/private/14383'
Copying/modifying config scripts of CT#14383000 ...
OfflineManagement CT#14383000 ...
OfflineManagement CT#14383 ...
Successfully completed
[root@node ~]# /usr/sbin/vzctl start 14383
[root@node ~]# /usr/sbin/vzlist -a |grep 14383
14383 68 running
[root@node ~]# 



Categories: Openvz and Virtuozzo