Building a Simple DRBD Cluster on Linux – CentOS 6.5

The Distributed Replicated Block Device (DRBD) is a Linux kernel module that implements a distributed storage system. In this blog we will set up a very simple replication cluster between two partitions, /dev/sdb1, located on two nodes, u1.local and u2.local. You can use DRBD to mirror block devices between Linux servers and so share file systems and data. It is similar to RAID 1, but mirroring over the network. For automatic failover support, you can combine DRBD with the Linux Heartbeat project.

1. Open TCP ports 7788 through 7799 on the firewall between the two systems.
2. Install and configure NTP on both systems so the time stays correctly synchronized.
3. Make sure the hostnames are set correctly. Edit the /etc/hosts file so the correct IP addresses are in place for both nodes.
4. Install the repository that provides the DRBD packages:
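As a sketch of the preparation steps above, assuming iptables on CentOS 6 and placeholder addresses on a 192.168.1.0/24 network (the subnet, interface, and IPs here are illustrative, not from the original setup):

```shell
# Step 1: allow the DRBD port range between the nodes (placeholder subnet).
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 7788:7799 -j ACCEPT
service iptables save

# Step 2: keep the clocks in sync.
yum -y install ntp
chkconfig ntpd on
service ntpd start

# Step 3: /etc/hosts entries on both nodes (placeholder addresses).
# 192.168.1.101   u1.local u1
# 192.168.1.102   u2.local u2
```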

rpm -ivh

5. Update packages:

yum update

6. Install the DRBD utilities and kernel module:

yum -y install drbd83-utils kmod-drbd83

After you have installed DRBD, you must set aside a roughly identically sized storage area on both cluster nodes. This can be a hard drive partition (or a full physical hard drive), a software RAID device, an LVM logical volume, any other block device configured by the Linux device-mapper infrastructure, or any other block device type found on your system.

It is not necessary for this storage area to be empty. It is recommended, though not strictly required, that you run your DRBD replication over a dedicated network connection.

All aspects of DRBD are controlled through its configuration file, /etc/drbd.conf, which includes:

include "/etc/drbd.d/global_common.conf";
include "/etc/drbd.d/*.res";

/etc/drbd.d/global_common.conf contains the global and common sections of the DRBD configuration, and the .res files contain one resource section each.

1. We will start the configuration process by editing these files.

vi /etc/drbd.d/global_common.conf

global {
    usage-count yes;
}

common {
    # With DRBD 8.3 (the drbd83 packages installed above), the replication
    # protocol is set directly in the common (or resource) section.
    protocol C;
}
vi /etc/drbd.d/r0.res

resource r0 {
    device    /dev/drbd1;
    disk      /dev/sdb1;
    meta-disk internal;
    # The hostname in each "on" section must match `uname -n` on that node.
    on u1 {
        address 192.168.1.101:7789;   # placeholder; use u1's real IP
    }
    on u2 {
        address 192.168.1.102:7789;   # placeholder; use u2's real IP
    }
}
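Before bringing the resource up, you can sanity-check the finished resource file; `drbdadm dump` parses the configuration and prints the resource as DRBD understands it, failing loudly on syntax errors:

```shell
# Verify the configuration parses cleanly (run on both nodes).
drbdadm dump r0
```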

2. Load the DRBD kernel module.

modprobe drbd

3. Each of the following steps must be completed on both nodes.

drbdadm create-md r0
drbdadm up r0
service drbd start

4. On one node only, promote the device to primary to start the initial synchronization (DRBD 8.3 syntax):

drbdadm -- --overwrite-data-of-peer primary r0

Then check synchronization and wait for the initial synchronization to complete.

cat /proc/drbd
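The interesting fields in the /proc/drbd status line are cs: (connection state), ro: (roles), and ds: (disk states). As an illustrative sketch, the sample line below is hypothetical, mimicking DRBD 8.3 output for a resource that is still syncing, and the fields are pulled out with grep:

```shell
# Hypothetical /proc/drbd status line for a resource mid-sync.
line=' 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----'

# Extract the connection state and the local/peer disk states.
cs=$(echo "$line" | grep -o 'cs:[A-Za-z]*'  | cut -d: -f2)
ds=$(echo "$line" | grep -o 'ds:[A-Za-z/]*' | cut -d: -f2)
echo "connection=$cs disks=$ds"
```

Synchronization is finished when the real status line shows cs:Connected and ds:UpToDate/UpToDate.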

5. On the primary node, create a file system and mount the device:

mkfs.ext4 /dev/drbd1
mkdir /data
mount /dev/drbd1 /data

6. To fail over manually in an active/passive configuration, the process is as follows.

On node u1 (the current primary):

umount /data
drbdadm secondary r0

On node u2 (the new primary):

mkdir /data
drbdadm primary r0
mount /dev/drbd1 /data
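The demotion half of the failover above can be collected into a small script. This is a sketch only; the resource name and mount point are taken from this post, but in production you would normally let Heartbeat (or a similar cluster manager) drive these steps rather than run them by hand:

```shell
#!/bin/bash
# Sketch: demote the local node, assuming it currently has /data mounted
# from /dev/drbd1 (resource r0, as configured earlier in this post).
set -e

RES=r0
MNT=/data

umount "$MNT"              # stop using the device locally
drbdadm secondary "$RES"   # hand over the primary role

# On the other node, the mirror image of this is:
#   drbdadm primary r0
#   mount /dev/drbd1 /data
```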



Finally, verify the connection state and roles on either node:

drbdadm cstate r0
cat /proc/drbd