Archive for the ‘HA’ Category

Apache proxy redirect

June 7, 2013

SITUATION: A customer has a single website with four web applications installed under four sub-directories of the website. The task is to configure Apache so that each of the four applications is served from its own port.


1) OS – Ubuntu 11

2) Website name and documentroot,


DocumentRoot:  /home/jackal/public_html

3) Web application sub-directories and the ports going to be used,

/home/jackal/public_html/app1 : Port 7001
/home/jackal/public_html/app2 : Port 7002
/home/jackal/public_html/app3 : Port 7003
/home/jackal/public_html/app4 : Port 7004

4) Apache mod_proxy module is installed. You can install it using,

apt-get install libapache2-mod-proxy-html -y


1) Open /etc/apache2/ports.conf and make sure Apache listens on port 80 as well as on each application port:

Listen 80
Listen 7001
Listen 7002
Listen 7003
Listen 7004

2) Enable mod_proxy by copying the configurations from the ‘mods-available’ directory to ‘mods-enabled’:

cp -pr /etc/apache2/mods-available/*proxy* /etc/apache2/mods-enabled/
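Alternatively, on Debian/Ubuntu the same modules can be enabled with a2enmod; a minimal sketch:

# enable mod_proxy and its HTTP back-end
a2enmod proxy proxy_http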

3) Create a virtual host file under “/etc/apache2/sites-enabled/” for the website with the following contents:

<VirtualHost *:80>
DocumentRoot /home/jackal/public_html

# proxy each application sub-directory to its port
# (assuming the applications listen on localhost)
ProxyPass /app1/ http://localhost:7001/
ProxyPass /app2/ http://localhost:7002/
ProxyPass /app3/ http://localhost:7003/
ProxyPass /app4/ http://localhost:7004/
</VirtualHost>

<VirtualHost *:7001>
DocumentRoot /home/jackal/public_html/app1
</VirtualHost>

<VirtualHost *:7002>
DocumentRoot /home/jackal/public_html/app2
</VirtualHost>

<VirtualHost *:7003>
DocumentRoot /home/jackal/public_html/app3
</VirtualHost>

<VirtualHost *:7004>
DocumentRoot /home/jackal/public_html/app4
</VirtualHost>

4) Test the configuration and gracefully restart Apache.

apache2ctl -t
apache2ctl -k graceful
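A quick sanity check after the restart (a hedged example; substitute your site's hostname for localhost):

curl -I http://localhost/app1/       # should be answered by the back-end on port 7001
curl -I http://localhost:7001/       # direct request to the application vhost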

5) Now access the application URLs and verify that each application is served from its own port.


SCOPE: Using mod_proxy, we can also forward requests to back-ends running on different servers, so the applications could run on several different machines.


Hope this info will be somewhat useful 🙂


Linux: Live Sync directories in two remote machines

December 26, 2011

I came across an article at cyberciti which explains how to monitor directories for changes and take action when a new inode event occurs. The author uses “inotify” for monitoring directories. One limitation of that method is that it does not monitor sub-directories. On searching, I found a Python module named “pyinotify” which supports monitoring sub-directories recursively. This article describes the steps to keep directories on two remote machines in live sync using “pyinotify”.

Machine1 ==> Source ==>
Machine2 ==> Destination ==>
Folder to be kept in sync: “/root/testing”

1) Install the “pyinotify” Python module on the source machine. Download the master.zip archive of the seb-m/pyinotify GitHub repository to /usr/local/src/, then:

cd /usr/local/src/
unzip master
cd seb-m-pyinotify-d5d471e/
python setup.py install
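To confirm the module installed correctly (a quick, optional check):

python -c "import pyinotify"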

2) Enable SSH passwordless login from the source machine to the destination machine.

[root@user1 testing]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory ‘/root/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
[root@user1 testing]#
[root@user1 testing]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@
The authenticity of host ‘ (’ can’t be established.
RSA key fingerprint is f4:cd:cd:e9:51:08:11:68:1c:90:b5:84:9a:c4:6b:1d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘’ (RSA) to the list of known hosts.
root@’s password:
Now try logging into the machine, with "ssh 'root@'", and check in:

.ssh/authorized_keys

to make sure we haven’t added extra keys that you weren’t expecting.

[root@user1 testing]# ssh root@
Last login: Wed Dec 28 08:33:13 2011 from
[root@user2 ~]#

3) Run the monitor script to keep the source and destination directories in sync.

cd /usr/local/src/seb-m-pyinotify-d5d471e/python2/
python -v -r -s /root/testing -c "rsync -r -e \"ssh\" -v /root/testing/ root@"


-v : display verbose messages
-r : recursively monitor the directories
-s : source directory
-c : command to execute when an inode notification occurs

Use the “--delete” option in rsync to remove files/folders from the destination when they are deleted at the source.

cd /usr/local/src/seb-m-pyinotify-d5d471e/python2/
python -v -r -s /root/testing -c "rsync -r --delete -e \"ssh\" -v /root/testing/ root@"

Add the above command in /etc/rc.local to start it during system start-up.
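A hedged example of the sort of line that could go into /etc/rc.local, assuming the monitor script lives in the pyinotify source directory; <monitor-script>.py and <destination-ip> are placeholders for the script and destination host used in the commands above:

# <monitor-script>.py and <destination-ip> are placeholders, not actual names
cd /usr/local/src/seb-m-pyinotify-d5d471e/python2/ && \
nohup python <monitor-script>.py -v -r -s /root/testing \
  -c "rsync -r --delete -e \"ssh\" -v /root/testing/ root@<destination-ip>:/root/testing/" \
  > /var/log/live-sync.log 2>&1 &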

Categories: HA


DRBD setup in Ubuntu 10.10

August 13, 2011

Set up two Ubuntu 10.10 nodes. The details are pasted below:

Primary ——-> heuristics
IPs: ——-> ,
Block device: —-> /dev/sda3 (10GB size in my case)

Secondary ——-> heuristics2
IPs: ——-> ,
Block device: —-> /dev/sda3 (10GB size in my case)

1) Install drbd8-utils package on both servers.

apt-get install drbd8-utils

2) Create a configuration file named “/etc/drbd.conf” with exactly the same contents on both machines.

global { usage-count no; }
common { syncer { rate 1000M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;
                degr-wfc-timeout 60;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "secret";
        }
        on heuristics {
                device /dev/drbd0;
                disk /dev/sda3;
                meta-disk internal;
        }
        on heuristics2 {
                device /dev/drbd0;
                disk /dev/sda3;
                meta-disk internal;
        }
}
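Each “on <hostname>” section of a DRBD resource normally also carries an “address” line with that node's replication IP and port. The IPs are not shown in this post, so the lines below use placeholders; 7788 is the customary DRBD port and an assumption here:

        on heuristics {
                ...
                address <heuristics-ip>:7788;    # placeholder address
        }
        on heuristics2 {
                ...
                address <heuristics2-ip>:7788;   # placeholder address
        }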

3) In my machines, the “/dev/sda3” partition was previously being used by “/home”. So I had to unmount “/home” and then destroy the filesystem. If any important data is already present on your machines, take a backup of it before proceeding 🙂

root@heuristics2:~# dd if=/dev/zero bs=512 count=512 of=/dev/sda3
512+0 records in
512+0 records out
262144 bytes (262 kB) copied, 0.0142205 s, 18.4 MB/s

4) After destroying the filesystem, initialize the metadata storage on both servers as follows,

root@heuristics2:~# drbdadm create-md r0
Writing meta data…
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

5) Start the DRBD daemon,

root@heuristics2:~# /etc/init.d/drbd start
* Starting DRBD resources [
Found valid meta data in the expected location, 5104463872 bytes into /dev/sda3.
d(r0) s(r0) n(r0) ] [ OK ]

6) Now, on the primary server (i.e., heuristics), we need to enter the following command.

drbdadm -- --overwrite-data-of-peer primary all
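The progress of the initial synchronization can be followed from /proc/drbd (also mentioned in the notes at the end), for example:

watch -n1 cat /proc/drbd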

7) Create a filesystem on /dev/drbd0.

root@heuristics:~# mkfs.ext3 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
312000 inodes, 1246160 blocks
62308 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1279262720
39 block groups
32768 blocks per group, 32768 fragments per group
8000 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Mount it on “/home” (or any mount point you choose):

root@heuristics:~# mount /dev/drbd0 /home
root@heuristics:~# df -Th /home
Filesystem Type Size Used Avail Use% Mounted on
/dev/drbd0 ext3 5.1G 145M 4.7G 4% /home

For switching roles between primary and secondary, do the following:

1) Unmount “/dev/drbd0” on primary

root@heuristics:~# umount /dev/drbd0

2) Change current primary to secondary

root@heuristics:~# drbdadm secondary r0

3) Change the current secondary (heuristics2) to primary and mount it on “/home”

root@heuristics2:~# drbdadm primary r0
root@heuristics2:~# mount /dev/drbd0 /home
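The current role and connection state can be verified on either node with drbdadm (a quick check):

drbdadm role r0
drbdadm cstate r0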

During a node failure (of either the primary or the secondary), the surviving node detects the peer node’s failure and switches to disconnected mode. DRBD does not promote the surviving node to the primary role; it is the cluster management application’s responsibility to do so. The Linux Heartbeat package or Pacemaker would work fine as a cluster management suite; an illustrative sketch follows.
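As an illustration only (not part of the original setup), a minimal Pacemaker configuration for this DRBD resource might look roughly like the following, assuming the crm shell and the ocf:linbit:drbd and ocf:heartbeat:Filesystem resource agents; the resource names are placeholders:

# illustrative crm configure snippet - a sketch, not a tested configuration
primitive p_drbd_r0 ocf:linbit:drbd params drbd_resource="r0" op monitor interval="15s"
ms ms_drbd_r0 p_drbd_r0 meta master-max="1" clone-max="2" notify="true"
primitive p_fs_home ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/home" fstype="ext3"
colocation col_fs_with_master inf: p_fs_home ms_drbd_r0:Master
order ord_drbd_before_fs inf: ms_drbd_r0:promote p_fs_home:start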

To understand the detailed working of DRBD during a node failure, refer to the URL pasted below,


1) The DRBD status can be monitored from the file “/proc/drbd”.

2) If DRBD needs to be configured with clustered file systems like GFS or OCFS2, then the “allow-two-primaries” option must be specified in the DRBD configuration (a hedged example is shown after this list).

3) While performing the initial disk synchronization after an HDD failure, it’s important to perform the synchronization in the right direction; otherwise data loss will be the result :( . For more detailed information check the URL given below,

4) Split brain recovery.
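A minimal sketch of the dual-primary option mentioned in note 2, inside the r0 resource’s net section (illustrative only; a clustered file system such as OCFS2 is still required on top):

        net {
                cram-hmac-alg sha1;
                shared-secret "secret";
                allow-two-primaries;
        }

Both nodes can then be promoted with “drbdadm primary r0”.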


Categories: HA

OCFS2 + ISCSI Centralized storage in Ubuntu 10.10

August 10, 2011

In this article I will mention the steps to mount an iSCSI target on two Ubuntu machines and then cluster it using the Oracle Cluster File System (OCFS2). The newly mounted partition can be used as a centralized storage location in a high-availability, failover or load-balancing setup.

The step by step howto is provided below,

1) Set up an iSCSI server using Openfiler, create a SAN LUN, and assign an IP to it.

For setting up an Openfiler-based iSCSI target, you can refer to steps 1 to 8 mentioned in the URL pasted below.

2) Set up two servers with Ubuntu 10.10 installed.

server1 ==> name: heuristics –> IP
server2 ==> name: heuristics2 –> IP

3) Install the open-iscsi tools on both servers

apt-get install open-iscsi open-iscsi-utils
/etc/init.d/open-iscsi start

4) List the iSCSI targets available on both servers.

iscsiadm -m discovery -t sendtargets -p

In my case the above command produced the following output,

root@heuristics2:~# iscsiadm -m discovery -t sendtargets -p,1

5) Attach the iSCSI target (let’s call it TG57) to the local machine by logging in to it

iscsiadm -m node -T -p --login
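To log in to the target automatically after a reboot, the node record can be updated as well (a hedged example; <target-iqn> and <portal-ip> are placeholders for the values used above):

iscsiadm -m node -T <target-iqn> -p <portal-ip> --op update -n node.startup -v automatic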

6) Step 5 makes the iSCSI target TG57 available to the system as a block device.

root@heuristics# fdisk -l

Disk /dev/sdb: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003f05c

Device Boot Start End Blocks Id System
/dev/sdb1 1 1011 1047383 83 Linux

7) Install OCFS2 – Oracle Cluster File System for Linux

apt-get install ocfs2 ocfs2-tools

8) Configure OCFS2

Create a configuration file with proper indentation and copy it to both servers. In my case “ocfs2” is the cluster name.

root@heuristics:~# cat /etc/ocfs2/cluster.conf
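A minimal sketch of a two-node cluster.conf for this setup is given below; the IP addresses are placeholders, and o2cb expects the indented lines to begin with a tab:

node:
        ip_port = 7777
        ip_address = <heuristics-ip>
        number = 0
        name = heuristics
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = <heuristics2-ip>
        number = 1
        name = heuristics2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2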



If proper indentation is not provided, the following error will be shown:

Starting cluster oracle: Failed
o2cb_ctl: Unable to load cluster configuration file “/etc/ocfs2/cluster.conf”
Stopping cluster oracle: Failed
o2cb_ctl: Unable to load cluster configuration file “/etc/ocfs2/cluster.conf”

9) Start the cluster services on both machines

/etc/init.d/ocfs2 start
/etc/init.d/o2cb start
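A quick way to confirm the O2CB stack came up (output varies slightly between versions):

/etc/init.d/o2cb status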

10) Create one partition, /dev/sdb1, on the iSCSI target

fdisk /dev/sdb

11) Make an OCFS2 cluster file system using the following command (this needs to be executed on only one machine)

mkfs.ocfs2 -b 4k -C 32k -N3 -L cluster-storage /dev/sdb1

This creates a file system with a 4096-byte (4k) block size and a 32768-byte (32k) cluster size.
NOTE: -N sets the number of node slots. Here N=3 for a cluster with 2 machines; in general, for a cluster with ‘n’ machines use N=(n+1).
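If more machines join the cluster later, the number of node slots on the existing file system can be increased with tunefs.ocfs2 (a hedged example; slots can generally only be grown, not shrunk):

tunefs.ocfs2 -N 4 /dev/sdb1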

12) Update the partition table on all servers in the cluster. In this case all the servers see the iSCSI target as /dev/sdb.

We will run the following to re-read the partition:

blockdev --rereadpt /dev/sdb

Next, we will want to create a mount point on the servers for this cluster storage.

mkdir /cluster-storage

Mount the partition,

mount -L cluster-storage /cluster-storage

13) Show results and test

root@heuristics2:~# hostname
root@heuristics2:~# df -TH /cluster-storage/
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdb1 ocfs2 1.1G 330M 744M 31% /cluster-storage
root@heuristics:~# hostname
root@heuristics:~# df -TH /cluster-storage/
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdb1 ocfs2 1.1G 330M 744M 31% /cluster-storage
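To mount the volume automatically at boot once the iSCSI and O2CB services are up, an /etc/fstab entry along these lines is a reasonable sketch (the _netdev option delays mounting until networking is available):

# mount the clustered volume by label once the network is up
LABEL=cluster-storage  /cluster-storage  ocfs2  _netdev,defaults  0  0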


Categories: HA