
Archive for the ‘HA’ Category

Apache proxy redirect

June 7, 2013

SITUATION: A customer has a single website with four different web applications installed under four subdirectories of the site. Configure Apache to serve these four applications from four different ports.

ASSUMPTIONS:

1) OS – Ubuntu 11

2) Website name and DocumentRoot:

Name: jackal777.com

DocumentRoot:  /home/jackal/public_html

3) Web application subdirectories and the ports to be used:

/home/jackal/public_html/app1 : Port 7001
/home/jackal/public_html/app2 : Port 7002
/home/jackal/public_html/app3 : Port 7003
/home/jackal/public_html/app4 : Port 7004

4) The Apache mod_proxy module is enabled. mod_proxy ships with the apache2 package itself; the related mod_proxy_html filter module can be installed with,

apt-get install libapache2-mod-proxy-html -y

SOLUTION:

1) Open up /etc/apache2/ports.conf and add the following directives,

Listen 80
Listen 127.0.0.1:7001
Listen 127.0.0.1:7002
Listen 127.0.0.1:7003
Listen 127.0.0.1:7004

2) Enable mod_proxy by copying the configurations from the ‘mods-available’ directory to ‘mods-enabled’ (equivalently, run “a2enmod proxy proxy_http”),

cp -pr /etc/apache2/mods-available/*proxy* /etc/apache2/mods-enabled/

3) Create a virtualhost file “/etc/apache2/sites-enabled/jackal777.com” for the website with the following contents,

<VirtualHost *:80>
ServerName jackal777.com
DocumentRoot /home/jackal/public_html

ProxyPass /app1/ http://127.0.0.1:7001/
ProxyPass /app2/ http://127.0.0.1:7002/
ProxyPass /app3/ http://127.0.0.1:7003/
ProxyPass /app4/ http://127.0.0.1:7004/

</VirtualHost>

<VirtualHost 127.0.0.1:7001>
DocumentRoot /home/jackal/public_html/app1
</VirtualHost>

<VirtualHost 127.0.0.1:7002>
DocumentRoot /home/jackal/public_html/app2
</VirtualHost>

<VirtualHost 127.0.0.1:7003>
DocumentRoot /home/jackal/public_html/app3
</VirtualHost>

<VirtualHost 127.0.0.1:7004>
DocumentRoot /home/jackal/public_html/app4
</VirtualHost>
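When the back-end applications issue redirects, each ProxyPass is normally paired with a ProxyPassReverse directive so that Location headers coming back from the back-end are rewritten to the public path. A sketch for app1 (repeat likewise for app2 to app4):

```apache
ProxyPass        /app1/ http://127.0.0.1:7001/
ProxyPassReverse /app1/ http://127.0.0.1:7001/
```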

4) Test the configuration and gracefully restart Apache.

apache2ctl -t
apache2ctl -k graceful

5) Now access the URLs,

http://jackal777.com/app1/
http://jackal777.com/app2/
http://jackal777.com/app3/
http://jackal777.com/app4/

 

SCOPE: Using mod_proxy, we can also forward requests to back-ends on other machines, so the four applications could run on several different servers behind a single front-end.
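A minimal sketch of that idea, with each path proxied to a different machine (the back-end IPs below are hypothetical):

```apache
ProxyPass        /app1/ http://10.0.0.101:7001/
ProxyPassReverse /app1/ http://10.0.0.101:7001/
ProxyPass        /app2/ http://10.0.0.102:7002/
ProxyPassReverse /app2/ http://10.0.0.102:7002/
```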

 

Hope this info will be somewhat useful 🙂


Linux: Live Sync directories in two remote machines

December 26, 2011

I came across an article at cyberciti which explains the steps to monitor directories for changes and take action when a new inode event occurs. The author uses “inotify” for monitoring directories. One limitation of that method is that it doesn’t monitor sub-directories. On searching I found a python module named “pyinotify” which supports monitoring sub-directories recursively. This article covers the steps to keep directories on two remote machines in live sync using “pyinotify”.

Machine1 ==> Source ==> 10.0.0.236
Machine2 ==> Destination ==> 10.0.0.237
Folder to be kept in sync: “/root/testing”

1) Install the “pyinotify” python module on the source machine

cd /usr/local/src/
wget https://nodeload.github.com/seb-m/pyinotify/zipball/master
unzip master
cd seb-m-pyinotify-d5d471e/
python setup.py install

2) Enable passwordless SSH login from the source (10.0.0.236) to the destination (10.0.0.237)

[root@user1 testing]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory ‘/root/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
6e:11:82:9b:8c:2f:d6:b1:a6:29:07:a6:ea:17:e9:3f root@user.testserver.com
[root@user1 testing]#
[root@user1 testing]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.237
The authenticity of host ‘10.0.0.237 (10.0.0.237)’ can’t be established.
RSA key fingerprint is f4:cd:cd:e9:51:08:11:68:1c:90:b5:84:9a:c4:6b:1d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘10.0.0.237’ (RSA) to the list of known hosts.
root@10.0.0.237’s password:
Now try logging into the machine, with “ssh ‘root@10.0.0.237′”, and check in:

.ssh/authorized_keys

to make sure we haven’t added extra keys that you weren’t expecting.

[root@user1 testing]# ssh root@10.0.0.237
Last login: Wed Dec 28 08:33:13 2011 from 10.0.0.28
[root@user2 ~]#

3) Run “pyinotify.py” to sync the source and destination directory.

cd /usr/local/src/seb-m-pyinotify-d5d471e/python2/
python pyinotify.py -v -r -s /root/testing -c "rsync -r -e \"ssh\" -v /root/testing/ root@10.0.0.237:/root/testing"

Options:

-v : displaying verbose messages
-r : recursively monitor the directories
-s : source directory
-c : command to execute when an inode notification occurs

Use the “--delete” option in rsync to remove files/folders on the destination when they are deleted on the source.

cd /usr/local/src/seb-m-pyinotify-d5d471e/python2/
python pyinotify.py -v -r -s /root/testing -c "rsync -r --delete -e \"ssh\" -v /root/testing/ root@10.0.0.237:/root/testing"

Add the above command to /etc/rc.local to start it during system start-up.
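A sketch of what that /etc/rc.local entry could look like, backgrounded with its output sent to a log file (the log path is my own choice):

```
cd /usr/local/src/seb-m-pyinotify-d5d471e/python2/
nohup python pyinotify.py -v -r -s /root/testing \
  -c "rsync -r --delete -e \"ssh\" -v /root/testing/ root@10.0.0.237:/root/testing" \
  >> /var/log/pyinotify-sync.log 2>&1 &
```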

Categories: HA

SETUP DRBD IN UBUNTU 10.10

August 13, 2011

Set up two Ubuntu 10.10 nodes, with the details pasted below,

++++++++++++++++++
Primary -------> heuristics
IPs: -------> 192.168.1.30 , 192.168.1.31
Block device: ----> /dev/sda3 (10GB size in my case)

Secondary -------> heuristics2
IPs: -------> 192.168.1.32 , 192.168.1.33
Block device: ----> /dev/sda3 (10GB size in my case)
++++++++++++++++++

1) Install drbd8-utils package on both servers.

apt-get install drbd8-utils

2) Create a configuration file named “/etc/drbd.conf” with exactly the same contents on both the machines.

global { usage-count no; }
common { syncer { rate 1000M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;
                degr-wfc-timeout 60;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "secret";
        }
        on heuristics {
                device /dev/drbd0;
                disk /dev/sda3;
                address 192.168.1.31:7788;
                meta-disk internal;
        }
        on heuristics2 {
                device /dev/drbd0;
                disk /dev/sda3;
                address 192.168.1.33:7788;
                meta-disk internal;
        }
}

3) On my machines, the “/dev/sda3” partition was previously used by “/home”, so I had to unmount “/home” and then destroy the filesystem. If any important data is present on your machines, take a backup of it before proceeding 🙂

root@heuristics2:~# dd if=/dev/zero bs=512 count=512 of=/dev/sda3
512+0 records in
512+0 records out
262144 bytes (262 kB) copied, 0.0142205 s, 18.4 MB/s
root@heuristics2:~#

4) After destroying the filesystem, initialize the metadata storage on both servers as follows,

root@heuristics2:~# drbdadm create-md r0
Writing meta data…
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
root@heuristics2:~#

5) Start the DRBD daemon,

root@heuristics2:~# /etc/init.d/drbd start
* Starting DRBD resources [
r0
Found valid meta data in the expected location, 5104463872 bytes into /dev/sda3.
d(r0) s(r0) n(r0) ] [ OK ]
root@heuristics2:~#

6) Now on the primary server (i.e., heuristics) enter the following command.

drbdadm -- --overwrite-data-of-peer primary all

7) Create filesystem on /dev/drbd0 .

root@heuristics:~# mkfs.ext3 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
312000 inodes, 1246160 blocks
62308 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1279262720
39 block groups
32768 blocks per group, 32768 fragments per group
8000 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
root@heuristics:~#

Mount it on “/home” (or any mount point you choose)

root@heuristics:~# mount /dev/drbd0 /home
root@heuristics:~# df -Th /home
Filesystem Type Size Used Avail Use% Mounted on
/dev/drbd0 ext3 5.1G 145M 4.7G 4% /home
root@heuristics:~#

For switching roles between primary and secondary, do the following:

1) Unmount “/dev/drbd0” on primary

root@heuristics:~# umount /dev/drbd0
root@heuristics:~#

2) Change current primary to secondary

root@heuristics:~# drbdadm secondary r0
root@heuristics:~#

3) Change current secondary(heuristics2) to primary and mount it on “/home”

root@heuristics2:~# drbdadm primary r0
root@heuristics2:~# mount /dev/drbd0 /home

During node failure(of either primary or secondary), the surviving node detects the peer node’s failure, and switches to disconnected mode. DRBD does not promote the surviving node to the primary role; it is the cluster management application’s responsibility to do so. Linux Heartbeat package or Pacemaker would work fine as a cluster management suite.
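As a rough sketch of such a hand-off under Pacemaker, the DRBD resource is typically modelled with the ocf:linbit:drbd agent as a master/slave resource; the resource names below are my own, adapt them to your cluster:

```
primitive p_drbd_r0 ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="15s"
ms ms_drbd_r0 p_drbd_r0 \
        meta master-max="1" master-node-max="1" \
        clone-max="2" clone-node-max="1" notify="true"
```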

To know the detailed working of DRBD during node failure, refer to the URLs in the references below.

NOTE:

1) The DRBD status can be monitored from the file “/proc/drbd”.

2) If DRBD needs to be configured with clustered file systems like GFS or OCFS2, then the “allow-two-primaries ” option in DRBD must be specified.

3) While performing the initial disk synchronization after an HDD failure, it’s important to perform the synchronization in the right direction; otherwise data loss will be the result :( . For more detailed information check the URLs given below,

4) Split brain recovery.
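For note 2 above, “allow-two-primaries” belongs in the net section of the resource in /etc/drbd.conf, e.g.:

```
net {
        cram-hmac-alg sha1;
        shared-secret "secret";
        allow-two-primaries;
}
```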

REFERENCES:

http://www.drbd.org/users-guide/re-drbdsetup.html
https://help.ubuntu.com/10.10/serverguide/C/drbd.html


OCFS2 + ISCSI Centralized storage in Ubuntu 10.10

August 10, 2011

In this article I will mention the steps to mount an iSCSI target on two Ubuntu machines and then cluster it using Oracle Cluster File System. The newly mounted partition can be used as a centralized storage location in a high-availability, failover, or load-balancing setup.

The step by step howto is provided below,

1) Set up an iSCSI server using Openfiler, create a SAN LUN, and assign it the IP 192.168.1.11.

For setting up an Openfiler-based iSCSI target, you can refer to steps 1 to 8 mentioned in the URL pasted below.

2) Set up two servers with Ubuntu 10.10.

server1 ==> name: heuristics –> IP 192.168.1.30
server2 ==> name: heuristics2 –> IP 192.168.1.32

3) Install the open-iscsi tools on both servers

apt-get install open-iscsi open-iscsi-utils
/etc/init.d/open-iscsi start
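If the targets should be re-attached automatically after a reboot, the initiator’s default startup mode can be switched in /etc/iscsi/iscsid.conf:

```
node.startup = automatic
```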

4) List the iSCSI targets available, on both servers.

iscsiadm -m discovery -t sendtargets -p 192.168.1.11

In my case the above command produced the following output,

root@heuristics2:~# iscsiadm -m discovery -t sendtargets -p 192.168.1.11
192.168.1.11:3260,1 iqn.2006-01.com.openfiler:tsn.0d0c0c810c57
root@heuristics2:~#

5) Mount the iSCSI target “iqn.2006-01.com.openfiler:tsn.0d0c0c810c57” (let’s call it TG57) on the local machine

iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.0d0c0c810c57 -p 192.168.1.11 --login

6) Step 5 attaches the iSCSI target TG57 to the system, where it appears as a block device.

root@heuristics# fdisk -l

Disk /dev/sdb: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003f05c

Device Boot Start End Blocks Id System
/dev/sdb1 1 1011 1047383 83 Linux

7) Install OCFS2 – Oracle Cluster File System for Linux

apt-get install ocfs2 ocfs2-tools

8) Configure OCFS2

Create a configuration file with proper indentation and copy it to both servers. In my case “ocfs2” is the cluster name.

root@heuristics:~# cat /etc/ocfs2/cluster.conf
node:
    ip_port=7777
    ip_address=192.168.1.30
    number=0
    name=heuristics
    cluster=ocfs2

node:
    ip_port=7777
    ip_address=192.168.1.32
    number=1
    name=heuristics2
    cluster=ocfs2

cluster:
    node_count=2
    name=ocfs2
root@heuristics:~# 

If proper indentation is not provided the following error will be shown,

Starting cluster oracle: Failed
o2cb_ctl: Unable to load cluster configuration file “/etc/ocfs2/cluster.conf”
Stopping cluster oracle: Failed
o2cb_ctl: Unable to load cluster configuration file “/etc/ocfs2/cluster.conf”

9) Start the cluster service in both the machines

/etc/init.d/ocfs2 start
/etc/init.d/o2cb start
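For o2cb to bring the cluster online at boot, /etc/default/o2cb must enable it and name the cluster (alternatively, run “dpkg-reconfigure ocfs2-tools”); a minimal fragment matching the cluster name above:

```
O2CB_ENABLED=true
O2CB_BOOTCLUSTER=ocfs2
```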

10) Create one partition named /dev/sdb1 on the iSCSI target

fdisk /dev/sdb

11) Create the OCFS2 cluster file system using the following command (needs to be executed on only one machine)

mkfs.ocfs2 -b 4k -C 32k -N3 -L cluster-storage /dev/sdb1

This creates a file system with a 4096-byte block size and a 32768-byte (32k) cluster size.
NOTE: -N sets the number of node slots; for a cluster of ‘n’ machines use N = (n+1), hence N=3 here for two machines.

12) Update partition table on all servers in the cluster. In this case all the servers have /dev/sdb as the iSCSI target.

We will run the following to re-read the partition:

blockdev --rereadpt /dev/sdb

Next, we will want to create a mount point on the servers for this cluster.

mkdir /cluster-storage

Mount the partition,

mount -L cluster-storage /cluster-storage

13) Show results and test

root@heuristics2:~# hostname
heuristics2
root@heuristics2:~# df -TH /cluster-storage/
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdb1 ocfs2 1.1G 330M 744M 31% /cluster-storage
root@heuristics2:~#
root@heuristics:~# hostname
heuristics
root@heuristics:~# df -TH /cluster-storage/
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdb1 ocfs2 1.1G 330M 744M 31% /cluster-storage
root@heuristics:~#

REFERENCE

http://www.idevelopment.info/data/Unix/Linux/LINUX_ConnectingToAniSCSITargetWithOpen-iSCSIInitiatorUsingLinux.shtml
http://knowledgelayer.softlayer.com/questions/265/Can+I+connect+multiple+servers+to+a+single+iSCSI+LUN%3F
