SITUATION: A customer has a single website with four different web applications installed under four subdirectories of the website. The task is to configure Apache to serve these four applications on four different ports.
1) OS – Ubuntu 11
2) Website name and DocumentRoot:
Name: jackal777.com DocumentRoot: /home/jackal/public_html
3) Web application sub-directories and the ports to be used:
/home/jackal/public_html/app1 : Port 7001
/home/jackal/public_html/app2 : Port 7002
/home/jackal/public_html/app3 : Port 7003
/home/jackal/public_html/app4 : Port 7004
4) The Apache mod_proxy module must be available; it ships with the apache2 package itself. The related mod_proxy_html filter can be installed using,
apt-get install libapache2-mod-proxy-html -y
1) Open up /etc/apache2/ports.conf and add the following directives,
Listen 80
Listen 127.0.0.1:7001
Listen 127.0.0.1:7002
Listen 127.0.0.1:7003
Listen 127.0.0.1:7004
2) Enable mod_proxy by copying the configurations from the ‘mods-available’ directory to ‘mods-enabled’
cp -pr /etc/apache2/mods-available/*proxy* /etc/apache2/mods-enabled/
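Alternatively, Ubuntu's a2enmod helper enables modules by symlinking them into ‘mods-enabled’; note that ProxyPass to an http:// backend specifically needs mod_proxy_http in addition to mod_proxy. A sketch:

```shell
# Enable mod_proxy plus its HTTP backend, then reload Apache
a2enmod proxy proxy_http
apache2ctl -k graceful
```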
3) Create a virtualhost file “/etc/apache2/sites-enabled/jackal777.com” for website with the following contents,
<VirtualHost *:80>
    ServerName jackal777.com
    DocumentRoot /home/jackal/public_html

    ProxyPass /app1/ http://127.0.0.1:7001/
    ProxyPass /app2/ http://127.0.0.1:7002/
    ProxyPass /app3/ http://127.0.0.1:7003/
    ProxyPass /app4/ http://127.0.0.1:7004/
</VirtualHost>

<VirtualHost 127.0.0.1:7001>
    DocumentRoot /home/jackal/public_html/app1
</VirtualHost>

<VirtualHost 127.0.0.1:7002>
    DocumentRoot /home/jackal/public_html/app2
</VirtualHost>

<VirtualHost 127.0.0.1:7003>
    DocumentRoot /home/jackal/public_html/app3
</VirtualHost>

<VirtualHost 127.0.0.1:7004>
    DocumentRoot /home/jackal/public_html/app4
</VirtualHost>
4) Test configuration and gracefully restart apache.
apache2ctl -t
apache2ctl -k graceful
5) Now access the URLs,
http://jackal777.com/app1/
http://jackal777.com/app2/
http://jackal777.com/app3/
http://jackal777.com/app4/
SCOPE: Using mod_proxy, we can forward incoming requests to different back-end servers, which makes it possible to run the applications on several different machines.
Hope this info will be somewhat useful 🙂
I came across an article at cyberciti which explains the steps to monitor directories for changes and take action when a new inode event occurs. The author uses “inotify” for monitoring directories. One limitation of that method is that it doesn’t monitor sub-directories. On searching, I found a python module named “pyinotify” which supports monitoring sub-directories recursively. This article describes the steps to keep directories on two remote machines in live sync using “pyinotify”.
1) Install “pyinotify” python module in source machine
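On Ubuntu the module can be installed from the repositories (the package name python-pyinotify is an assumption based on the usual Ubuntu naming; installing via easy_install/pip would also work):

```shell
# Install the pyinotify python module on the source machine
apt-get install python-pyinotify -y
```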
2) Enable ssh passwordless login from the source (10.0.0.236) to the destination (10.0.0.237)
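A sketch of the usual key-based setup, run as the syncing user on the source machine:

```shell
# Generate a key pair on 10.0.0.236 (accept the defaults, empty passphrase)
ssh-keygen -t rsa
# Install the public key on the destination
ssh-copy-id root@10.0.0.237
# Verify: this should log in without a password prompt
ssh root@10.0.0.237 hostname
```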
3) Run “pyinotify.py” to sync the source and destination directory.
-v : displaying verbose messages
-r : recursively monitor the directories
-s : source directory
-c : command to execute when an inode notification occurs
Use the "--delete" option in rsync to remove files/folders in the destination when they get deleted in the source.
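The actual command isn't reproduced above; given the options listed, an invocation would look roughly like this (the script name pyinotify.py comes from the article, but the watched path and the exact rsync command are placeholders):

```shell
# Watch /home/data recursively; on every inode event, rsync it to the peer
python pyinotify.py -v -r -s /home/data \
    -c 'rsync -az --delete /home/data/ root@10.0.0.237:/home/data/'
```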
Add the above command in /etc/rc.local to start it during system start-up.
Set up two Ubuntu 10.10 nodes. The details are pasted below,
1) Install drbd8-utils package on both servers.
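A sketch, run on both servers:

```shell
# Install the DRBD userland tools
apt-get install drbd8-utils -y
```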
2) Create a configuration file named “/etc/drbd.conf” with exactly the same contents on both the machines.
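The original configuration isn't reproduced here; a minimal /etc/drbd.conf sketch for DRBD 8.x using the hostnames that appear in this setup (heuristics and heuristics2) — the resource name r0, the IP addresses, and the port are assumptions:

```
# /etc/drbd.conf -- must be identical on both nodes
global { usage-count no; }

resource r0 {
  protocol C;                      # synchronous replication
  on heuristics {                  # hostname as reported by `uname -n`
    device    /dev/drbd0;
    disk      /dev/sda3;           # backing partition (previously /home)
    address   192.168.1.101:7788;  # hypothetical IP
    meta-disk internal;
  }
  on heuristics2 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   192.168.1.102:7788;  # hypothetical IP
    meta-disk internal;
  }
}
```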
3) In my machines, the “/dev/sda3” partition was previously being used by “/home”. So, I had to unmount “/home” and then destroy the filesystem. If any important data is present on your machines, take a backup before proceeding 🙂
4) After destroying the filesystem, initialize the metadata storage on both servers as follows,
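A sketch of these two steps (the resource name r0 is an assumption; the dd here is destructive — it wipes the old filesystem signature, so back up first):

```shell
# Free the backing partition (destroys the old /home filesystem!)
umount /home
dd if=/dev/zero of=/dev/sda3 bs=1M count=128

# Initialize the DRBD metadata -- run on BOTH nodes
drbdadm create-md r0
```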
5) Start the DRBD daemon,
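A sketch, run on both nodes:

```shell
# Start the DRBD daemon
/etc/init.d/drbd start
# Both nodes typically show cs:Connected ro:Secondary/Secondary here
cat /proc/drbd
```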
6) Now on the primary server (i.e., heuristics) we need to enter the following command.
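The command isn't reproduced above; the usual way to promote the first node and force the initial full sync (resource name r0 assumed):

```shell
# Run on heuristics only: become primary, overwriting the peer's data
drbdadm -- --overwrite-data-of-peer primary r0
# Watch the initial synchronization progress
watch cat /proc/drbd
```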
7) Create a filesystem on /dev/drbd0.
Mount it on “/home” (or any partition you choose)
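A sketch of these two steps, assuming ext3 (any filesystem would do):

```shell
# On the primary only: create a filesystem on the DRBD device...
mkfs.ext3 /dev/drbd0
# ...and mount it in place of the old /home
mount /dev/drbd0 /home
```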
For switching roles between primary and secondary, do the following:
1) Unmount “/dev/drbd0” on primary
2) Change current primary to secondary
3) Change current secondary(heuristics2) to primary and mount it on “/home”
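The three steps above might look like this (resource name r0 assumed):

```shell
# 1) On the current primary (heuristics): release the device
umount /home
# 2) Demote it to secondary
drbdadm secondary r0

# 3) On heuristics2: promote it and mount
drbdadm primary r0
mount /dev/drbd0 /home
```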
During node failure(of either primary or secondary), the surviving node detects the peer node’s failure, and switches to disconnected mode. DRBD does not promote the surviving node to the primary role; it is the cluster management application’s responsibility to do so. Linux Heartbeat package or Pacemaker would work fine as a cluster management suite.
To know the detailed working of DRBD during node failure, refer to the url pasted below,
1) The DRBD status can be monitored from the file “/proc/drbd”.
2) If DRBD needs to be used with clustered file systems like GFS or OCFS2, then the “allow-two-primaries” option in DRBD must be specified.
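A sketch of how that option would look in drbd.conf (resource name r0 assumed):

```
resource r0 {
  net {
    allow-two-primaries;
  }
}
```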
3) While performing the initial disk synchronization after an HDD failure, it’s important to perform the synchronization in the right direction; otherwise data loss will be the result 🙁 For more detailed information, check the url given below,
4) Split brain recovery.
In this article I will mention the steps to mount an iSCSI target on two Ubuntu machines and then cluster it using Oracle Cluster File System (OCFS2). The newly mounted partition can be used as a centralized storage location in a high-availability, failover or load-balancing setup.
The step by step howto is provided below,
1) Set up an iSCSI server using Openfiler, create a SAN LUN, and assign the IP 192.168.1.11 to it.
For setting up an Openfiler-based iSCSI target, you can refer to steps 1 to 8 mentioned in the url pasted below.
2) Setup two servers with Ubuntu 10.10 in it.
3) Install the open-iscsi tool on both servers
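A sketch, run on both servers:

```shell
# Install the iSCSI initiator tools
apt-get install open-iscsi -y
```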
4) List the iSCSI targets available to both servers.
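The discovery command itself isn't shown above; with open-iscsi it would be (using the Openfiler IP from step 1):

```shell
# Query the Openfiler box for available targets -- run on both servers
iscsiadm -m discovery -t sendtargets -p 192.168.1.11
```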
In my case the above command produced the following output,
5) Mount the iSCSI target “iqn.2006-01.com.openfiler:tsn.0d0c0c810c57” (let’s call it TG57) on the local machine
6) Step 5 attaches the iSCSI target TG57 to the system, where it can be viewed as a block device.
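Steps 5 and 6 can be sketched as (target name taken from the discovery output above; the device usually appearing as /dev/sdb is an assumption based on step 12):

```shell
# Log in to the target; it then appears as a new SCSI disk
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.0d0c0c810c57 \
         -p 192.168.1.11 --login
# Verify the new block device (typically /dev/sdb here)
fdisk -l
```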
7) Install OCFS2 – Oracle Cluster File System for Linux
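A sketch, run on both servers (ocfs2-tools is the usual Ubuntu package name):

```shell
# Install the OCFS2 userspace tools (includes the o2cb init script)
apt-get install ocfs2-tools -y
```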
8) Configure OCFS2
Create a configuration file with proper indentation and copy it to both servers. In my case “ocfs2” is the cluster name.
If proper indentation is not provided the following error will be shown,
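The configuration file itself isn't reproduced above; a sketch of /etc/ocfs2/cluster.conf for a two-node cluster named ocfs2 — the node names and IP addresses here are placeholders, and each key/value line must be indented with a tab under its stanza:

```
node:
	ip_port = 7777
	ip_address = 192.168.1.21
	number = 0
	name = node1
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.1.22
	number = 1
	name = node2
	cluster = ocfs2

cluster:
	node_count = 2
	name = ocfs2
```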
9) Start the cluster service on both machines
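A sketch, run on both machines (the o2cb init script ships with ocfs2-tools; the cluster name ocfs2 is from the configuration step above):

```shell
# Load the O2CB kernel modules and bring the cluster online
/etc/init.d/o2cb load
/etc/init.d/o2cb online ocfs2
/etc/init.d/o2cb status
```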
10) Create one partition named /dev/sdb1 on the iSCSI target
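Partitioning can be done interactively with fdisk, on one machine only (the keystrokes are noted in the comments):

```shell
# At the fdisk prompts enter:
#   n  (new partition)
#   p  (primary), 1 (partition number), accept the default start/end
#   w  (write the table and exit)
fdisk /dev/sdb
```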
11) Make ocfs cluster file system using the following command(need to execute only on one machine)
This creates a file system with 4096 block size and 32768 (32k) cluster size.
NOTE: N is the number of node slots. For a cluster with 2 machines, N=3; in general, for a cluster of ‘n’ machines, N=(n+1).
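Given the block size, cluster size and N described above, the mkfs command was presumably along these lines (the volume label is a placeholder):

```shell
# -b block size, -C cluster size, -N node slots, -L volume label
mkfs.ocfs2 -b 4K -C 32K -N 3 -L ocfs2_data /dev/sdb1
```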
12) Update partition table on all servers in the cluster. In this case all the servers have /dev/sdb as the iSCSI target.
We will run the following to re-read the partition:
Next, we will want to create a mount point on the servers for this cluster.
Mount the partition,
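A sketch of these commands, run on every node in the cluster (the mount point /storage is a placeholder):

```shell
# Re-read the partition table so the new /dev/sdb1 is visible
blockdev --rereadpt /dev/sdb
# Create the mount point and mount the clustered filesystem
mkdir -p /storage
mount -t ocfs2 /dev/sdb1 /storage
```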
13) Show results and test
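For example, a quick cross-node test (mount point /storage assumed from the previous step):

```shell
# On server 1: create a file on the shared filesystem
touch /storage/hello.txt
# On server 2: the file should be visible immediately
ls -l /storage/
```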