SETUP DRBD IN UBUNTU 10.10

Set up two Ubuntu 10.10 nodes. The details are given below:

++++++++++++++++++
Primary -------> heuristics
IPs: -------> 192.168.1.30, 192.168.1.31
Block device: ----> /dev/sda3 (10GB in my case)

Secondary -------> heuristics2
IPs: -------> 192.168.1.32, 192.168.1.33
Block device: ----> /dev/sda3 (10GB in my case)
++++++++++++++++++

1) Install drbd8-utils package on both servers.

apt-get install drbd8-utils

2) Create a configuration file named "/etc/drbd.conf" with exactly the same contents on both machines.

global { usage-count no; }
common { syncer { rate 1000M; } }
resource r0 {
        protocol C;
        startup {
                wfc-timeout  15;
                degr-wfc-timeout 60;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "secret";
        }
        on heuristics {
                device /dev/drbd0;
                disk /dev/sda3;
                address 192.168.1.31:7788;
                meta-disk internal;
        }
        on heuristics2 {
                device /dev/drbd0;
                disk /dev/sda3;
                address 192.168.1.33:7788;
                meta-disk internal;
        }
}
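Once the file is in place on both nodes, it is worth letting drbdadm parse it before going further. A small sketch (requires drbd8-utils to be installed):

```
# Ask drbdadm to parse /etc/drbd.conf and dump the resulting
# configuration; syntax errors, or a local hostname that matches
# neither "on" section, are reported here rather than at start-up.
drbdadm dump r0
```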

3) In my machines, the "/dev/sda3" partition was previously used by "/home", so I had to unmount "/home" and then destroy the filesystem. If any important data is already present on your machines, take a backup of it before proceeding 🙂
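Assuming "/home" is the mount point being reclaimed, the backup and unmount would look something like this (paths are examples; adjust them to your own setup, and remember to drop the old /etc/fstab entry as well):

```
# Optional safety net: archive anything you still need from /home.
tar czf /root/home-backup.tar.gz /home

# Release the partition so its filesystem can be destroyed.
umount /home
```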

root@heuristics2:~# dd if=/dev/zero bs=512 count=512 of=/dev/sda3
512+0 records in
512+0 records out
262144 bytes (262 kB) copied, 0.0142205 s, 18.4 MB/s
root@heuristics2:~#

4) After destroying the filesystem, initialize the metadata storage on both servers as follows,

root@heuristics2:~# drbdadm create-md r0
Writing meta data…
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
root@heuristics2:~#

5) Start the DRBD daemon,

root@heuristics2:~# /etc/init.d/drbd start
* Starting DRBD resources [
r0
Found valid meta data in the expected location, 5104463872 bytes into /dev/sda3.
d(r0) s(r0) n(r0) ] [ OK ]
root@heuristics2:~#

6) Now, on the primary server (i.e., heuristics), we need to enter the following command.

drbdadm -- --overwrite-data-of-peer primary all
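After promoting the primary, the initial full synchronization starts. Its progress can be watched from /proc/drbd (also mentioned in the notes at the end); a quick sketch:

```
# Watch the initial sync; the cs: field should show SyncSource on the
# primary and SyncTarget on the secondary until it reaches Connected.
watch -n1 cat /proc/drbd
```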

7) Create a filesystem on /dev/drbd0.

root@heuristics:~# mkfs.ext3 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
312000 inodes, 1246160 blocks
62308 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1279262720
39 block groups
32768 blocks per group, 32768 fragments per group
8000 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
root@heuristics:~#

Mount it on "/home" (or any mount point you choose):

root@heuristics:~# mount /dev/drbd0 /home
root@heuristics:~# df -Th /home
Filesystem Type Size Used Avail Use% Mounted on
/dev/drbd0 ext3 5.1G 145M 4.7G 4% /home
root@heuristics:~#

For switching roles between primary and secondary, do the following:

1) Unmount "/dev/drbd0" on the primary

root@heuristics:~# umount /dev/drbd0
root@heuristics:~#

2) Change current primary to secondary

root@heuristics:~# drbdadm secondary r0
root@heuristics:~#

3) Change the current secondary (heuristics2) to primary and mount it on "/home"

root@heuristics2:~# drbdadm primary r0
root@heuristics2:~# mount /dev/drbd0 /home
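The role switch can be confirmed on either node with drbdadm:

```
# Prints the local and peer roles for the resource,
# e.g. Primary/Secondary on the node now holding the data.
drbdadm role r0
```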

During a node failure (of either primary or secondary), the surviving node detects the peer's failure and switches to disconnected mode. DRBD does not promote the surviving node to the primary role; that is the cluster management application's responsibility. The Linux Heartbeat package or Pacemaker would work fine as a cluster management suite.
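As an illustration only (the resource names and timer values below are assumptions, not taken from this setup), a Pacemaker master/slave definition for the r0 resource in crm shell syntax might look roughly like this:

```
# Hypothetical Pacemaker (crm configure) sketch: Pacemaker manages the
# DRBD resource and promotes the surviving node on failure.
primitive drbd_r0 ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="15s"
ms ms_drbd_r0 drbd_r0 \
        meta master-max="1" master-node-max="1" \
        clone-max="2" clone-node-max="1" notify="true"
```

Verify the exact agent parameters against the Pacemaker and DRBD documentation before using this.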

To know the detailed working of DRBD during node failure, refer to the URLs in the references below.

NOTE:

1) The DRBD status can be monitored from the file "/proc/drbd".

2) If DRBD needs to be configured with clustered file systems like GFS or OCFS2, then the "allow-two-primaries" option in DRBD must be specified.
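For reference, a dual-primary variant of the resource configuration would look roughly like this (a sketch only; check it against the DRBD user's guide before using it with GFS/OCFS2):

```
resource r0 {
        net {
                # Permit both nodes to hold the Primary role at once,
                # which clustered filesystems require.
                allow-two-primaries;
        }
        startup {
                # Promote both nodes automatically at start-up.
                become-primary-on both;
        }
        # ... remaining sections as in the single-primary config above
}
```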

3) While performing the initial disk synchronization after an HDD failure, it is important to perform the synchronization in the right direction; otherwise data loss will be the result :( . For more detailed information, check the URLs given below.

4) Split-brain recovery.
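A minimal manual split-brain recovery sketch, assuming you have decided that heuristics2 holds the changes to throw away (the DRBD user's guide, linked below, covers this in detail):

```
# On the split-brain "victim" (the node whose changes are discarded):
drbdadm secondary r0
drbdadm -- --discard-my-data connect r0

# On the survivor (only needed if it is in StandAlone state):
drbdadm connect r0
```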

REFERENCES:

http://www.drbd.org/users-guide/re-drbdsetup.html
https://help.ubuntu.com/10.10/serverguide/C/drbd.html
