Building a Simple DRBD Cluster on Linux – CentOS 6.5 and CentOS 7
The Distributed Replicated Block Device (DRBD) is a Linux kernel module that implements a distributed storage system. In this blog we will look at setting up a very simple replication cluster between two /dev/sdb1 partitions located on two nodes, u1.local and u2.local. You can use DRBD to share block devices between Linux servers and thereby replicate file systems and data. It is similar to RAID 1 – mirroring over the network. For automatic failover support, you can combine DRBD with the Linux Heartbeat project.
DRBD version 9 on CentOS 7
Let's look at the basic steps for setting up DRBD on CentOS 7.
Assumptions: two CentOS 7 systems, d1.local (192.168.0.21) and d2.local (192.168.0.22), virtual or physical, each with an additional drive /dev/sdb attached.
Install prerequisites on both servers
yum -y update
yum -y install gcc make automake autoconf libxslt libxslt-devel flex rpm-build kernel-devel
Install DRBD on both servers
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum install drbd90-utils kmod-drbd90
Make sure the module is loaded; if it is not, load it and make sure it loads on boot.
lsmod | grep -i drbd
modprobe drbd
lsmod | grep -i drbd
To make sure the module loads on boot, run the following:
echo drbd > /etc/modules-load.d/drbd.conf
Configure DRBD by editing the two main configuration files located in /etc/drbd.d/.
vi /etc/drbd.d/global_common.conf

global {
    usage-count yes;
}
common {
    net {
        protocol C;
    }
}
Create the r0 resource configuration file:
vi /etc/drbd.d/r0.res

resource r0 {
    on d1.local {
        device /dev/drbd0;
        disk /dev/sdb;
        address 192.168.0.21:7789;
        meta-disk internal;
    }
    on d2.local {
        device /dev/drbd0;
        disk /dev/sdb;
        address 192.168.0.22:7789;
        meta-disk internal;
    }
}
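As a sanity check before initializing metadata, you can confirm that the resource file really defines two distinct node endpoints. The helper below is a hypothetical grep-based sketch (the function name and approach are my own, not a DRBD tool); `drbdadm dump r0` is the official way to have DRBD parse the configuration and echo it back.

```shell
# Hypothetical sketch: verify a .res file defines exactly two distinct
# "address IP:port" endpoints before running drbdadm create-md.
check_res_addresses() {
  local count
  count=$(grep -oE 'address [0-9.]+:[0-9]+' "$1" | sort -u | wc -l)
  [ "$count" -eq 2 ]   # exit status 0 only when two unique endpoints exist
}
```

With the r0.res above, `check_res_addresses /etc/drbd.d/r0.res` succeeds; a copy-paste error that leaves both nodes on the same address makes it fail.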
Initialize the resource metadata and bring it up on both nodes:
drbdadm create-md r0
drbdadm up r0
On the server that will be primary, run the following command:
drbdadm primary --force r0
Start the drbd service:
systemctl start drbd   # this command must be executed on both nodes at the same time
systemctl enable drbd
Now run the following command to check status
drbdadm status r0

[root@d1 drbd.d]# drbdadm status r0
r0 role:Primary
  disk:UpToDate
  d2.local role:Secondary
    peer-disk:UpToDate

[root@d2 drbd.d]# drbdadm status r0
r0 role:Secondary
  disk:UpToDate
  d1.local role:Primary
    peer-disk:UpToDate
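In this status output, the first role: field always belongs to the local node, which makes it easy to script checks against. A minimal sketch (the function name is my own, not a DRBD command):

```shell
# Hypothetical helper: read `drbdadm status <res>` output on stdin and
# print the local node's role (the first "role:" field is the local one).
drbd_local_role() {
  grep -o -m1 'role:[A-Za-z]*' | cut -d: -f2
}
```

For example, `drbdadm status r0 | drbd_local_role` would print Primary on d1.local and Secondary on d2.local.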
Let's now create a new file system and mount it on the /data directory.
[root@d1 drbd.d]# mkfs.xfs /dev/drbd0
[root@d1 drbd.d]# mkdir /data
[root@d1 drbd.d]# mount /dev/drbd0 /data/
Run the df command to see your newly mounted DRBD drive:
[root@d1 drbd.d]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  2.6G  1.4G  1.3G  52% /
devtmpfs                 486M     0  486M   0% /dev
tmpfs                    497M     0  497M   0% /dev/shm
tmpfs                    497M  6.6M  490M   2% /run
tmpfs                    497M     0  497M   0% /sys/fs/cgroup
/dev/sda1               1014M  152M  863M  15% /boot
tmpfs                    100M     0  100M   0% /run/user/0
/dev/drbd0               2.0G   33M  2.0G   2% /data
To fail over to the second node:
On d1.local
[root@d1 ~]# umount /data
[root@d1 ~]# drbdadm secondary r0
On d2.local
[root@d2 drbd.d]# mkdir /data
[root@d2 drbd.d]# drbdadm primary r0
[root@d2 drbd.d]# mount /dev/drbd0 /data
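The two halves of this manual failover can be wrapped in a pair of shell functions. This is only a sketch under my own naming (demote and promote are not DRBD commands, and the resource above defines /dev/drbd0); setting RUN=echo lets you dry-run the sequence on a machine without DRBD installed.

```shell
: "${RUN:=sudo}"      # command prefix; set RUN=echo for a dry run
: "${MNT:=/data}"     # mount point used in this post

demote() {   # run on the current primary: unmount first, then demote
  "$RUN" umount "$MNT" &&
  "$RUN" drbdadm secondary r0
}

promote() {  # run on the other node: promote, then mount the device
  "$RUN" drbdadm primary r0 &&
  "$RUN" mount /dev/drbd0 "$MNT"
}
```

The ordering matters: the file system must be unmounted before the old primary can be demoted, and the new primary must be promoted before the device can be mounted.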
DRBD 8.3 on CentOS 6
Prerequisites
1. Open firewall TCP ports 7788 through 7799 between the two systems.
2. Install and configure NTP on both systems so the time is correctly synchronized.
3. Make sure hostnames are set correctly. Edit the /etc/hosts file on both systems so the node names resolve to the correct IP addresses.
4. Install the ELRepo repository
rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
5. Update packages
yum update
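Prerequisite 1 above can be sketched on CentOS 6 with iptables. This assumes the stock iptables service and the peer addresses used later in this post (192.168.0.3 and 192.168.0.4); note that this single-resource setup actually only uses port 7789.

```shell
# Allow DRBD replication traffic from the peer node (run on u1; swap the
# source address to 192.168.0.3 when running on u2).
iptables -I INPUT -p tcp -s 192.168.0.4 --dport 7788:7799 -j ACCEPT
service iptables save
```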
Installation
yum -y install drbd83-utils kmod-drbd83
Configuration
After you have installed DRBD, you must set aside a roughly identically sized storage area on both cluster nodes. This can be a hard drive partition (or a full physical hard drive), a software RAID device, an LVM Logical Volume or any other block device configured by the Linux device-mapper infrastructure, or any other block device type found on your system.
Note: it is not necessary for this storage area to be empty. It is recommended, though not strictly required, that you run your DRBD replication over a dedicated connection.
All aspects of DRBD are controlled in its configuration file, /etc/drbd.conf, which includes
include "/etc/drbd.d/global_common.conf";
include "/etc/drbd.d/*.res";
/etc/drbd.d/global_common.conf contains the global and common sections of the DRBD configuration, and the .res files contain one resource section each.
1. We will start the configuration process by editing these files.
vi /etc/drbd.d/global_common.conf
global {
usage-count yes;
}
common {
net {
protocol C;
}
}
vi /etc/drbd.d/r0.res
resource r0 {
device /dev/drbd1;
disk /dev/sdb1;
meta-disk internal;
on u1 {
address 192.168.0.3:7789;
}
on u2 {
address 192.168.0.4:7789;
}
}
2. Load the DRBD kernel module
modprobe drbd
3. Each of the following steps must be completed on both nodes.
drbdadm create-md r0
drbdadm up r0
service drbd start
4. Check synchronization and wait for the initial DRBD disk synchronization to complete.
cat /proc/drbd
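While the initial sync runs, /proc/drbd shows ds:UpToDate/Inconsistent along with a progress bar; once both sides report UpToDate, the sync is done. A hypothetical helper to script that wait (the function name is my own):

```shell
# Hypothetical check: succeed only when /proc/drbd-style text (read on
# stdin) shows both the local and peer disks as UpToDate.
drbd_in_sync() {
  grep -q 'ds:UpToDate/UpToDate'
}
```

For example: `until drbd_in_sync < /proc/drbd; do sleep 5; done`.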
5. On the primary node, create a file system and mount it.
mkfs.ext4 /dev/drbd1
mkdir /data
mount /dev/drbd1 /data
To fail over manually in an active/passive configuration, the process is as follows.
On node – u1
umount /data
drbdadm secondary r0
On node – u2 (the mkdir is needed only if /data does not already exist)
mkdir /data
drbdadm primary r0
mount /dev/drbd1 /data
Troubleshooting
To check the connection state of the resource:
drbdadm cstate r0
To see detailed resource and synchronization status:
cat /proc/drbd