Setting up the load balancer (using HAProxy), it seemed like a good idea to have MASTER -> SLAVE failover capability in case of a disaster. What we actually need here is just two identical servers with a VIP (Virtual IP) that migrates between them - quite simple and reliable. The choice was made in keepalived's favour. As stated on the project site, the main goal of keepalived is to provide simple and robust facilities for loadbalancing and high-availability to Linux systems and Linux-based infrastructures, with the loadbalancing part relying on the Linux Virtual Server (IPVS) kernel module providing Layer 4 loadbalancing. As already mentioned, the load balancing itself will be done by HAProxy - but more about that in the next articles. All we need now is to be able to move a set of VIPs to the SLAVE and back to the MASTER.

The configuration is quite straightforward:

global_defs {
        lvs_id LB1
}

vrrp_script check_haproxy {
        script "killall -0 haproxy"
        interval 2
        weight 2
}

vrrp_instance HAproxy {
        state MASTER
        interface eth0
        virtual_router_id 10
        priority 101
        advert_int 1
        virtual_ipaddress {
          198.51.100.100/24 dev eth0 label eth0:1
        }
        track_script {
          check_haproxy
        }
}
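
A quick note on the vrrp_script block above: killall -0 haproxy does not kill anything - signal 0 only checks that an haproxy process exists, and keepalived evaluates the script's exit code every interval seconds. While the check succeeds, weight 2 is added to the node's priority, so a MASTER whose haproxy has died (101) falls below a healthy BACKUP (100 + 2 = 102) and the VIP migrates. The check is easy to try by hand (the service command below is just one way of stopping haproxy, adjust for your init system):

$ killall -0 haproxy; echo $?
0
$ service haproxy stop          # or however haproxy is managed on this box
$ killall -0 haproxy; echo $?
haproxy: no process found
1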

This was the master configuration file. On the slave side one should change only the lvs_id in the global_defs section, to reflect that it is a different LVS director (Linux Virtual Server), plus two more changes in the vrrp_instance block: state -> BACKUP (keepalived only accepts MASTER and BACKUP as states) and priority -> 100; a full sketch of the slave side follows the output below. That's it - after starting/restarting the keepalived process we should see the defined IP on the eth0 interface:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 18:a9:04:22:19:11 brd ff:ff:ff:ff:ff:ff
    inet 198.51.100.10/24 brd 198.51.100.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 198.51.100.100/24 scope global secondary eth0:1
       valid_lft forever preferred_lft forever
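
For reference, a minimal slave-side configuration could look like the sketch below - it assumes the second director is named LB2 (the name is arbitrary) and uses the same eth0 interface:

global_defs {
        lvs_id LB2          # assumed name for the second director
}

vrrp_script check_haproxy {
        script "killall -0 haproxy"
        interval 2
        weight 2
}

vrrp_instance HAproxy {
        state BACKUP
        interface eth0
        virtual_router_id 10
        priority 100
        advert_int 1
        virtual_ipaddress {
          198.51.100.100/24 dev eth0 label eth0:1
        }
        track_script {
          check_haproxy
        }
}

Stopping haproxy (or keepalived itself) on the master should then make the VIP disappear from its eth0 and show up on the slave within a couple of advertisement intervals.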

Docs? Phew! Who cares?

Everything was nice and quiet until we needed about 30 VIPs on the interface (for different test purposes) - and that did not work. As we found out later, the virtual_ipaddress block is limited to 20 IP addresses. Yeah, reading the documentation actually helps!

The simplest solution was to add one more configuration sub-section, virtual_ipaddress_excluded, to the vrrp_instance section:

virtual_ipaddress_excluded {
  198.51.100.101/24 dev eth0 label eth0:2
  198.51.100.102/24 dev eth0 label eth0:3
  198.51.100.107/24 dev eth0 label eth0:4
  ....
}

Since the virtual_ipaddress_excluded block is not limited, we can now keep any number of VIPs there, and they will all be brought up when the migration occurs. The reason there is no limit is that these addresses are not carried inside the VRRP advertisement packets, unlike the ones in virtual_ipaddress.
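
With that many addresses it can be handy to generate the entries instead of typing them; a small shell loop like the sketch below (the addresses and labels are just the ones from the example above) prints lines ready to be pasted into the block:

$ n=2; for last in 101 102 107; do
>     echo "  198.51.100.$last/24 dev eth0 label eth0:$n"
>     n=$((n + 1))
> done
  198.51.100.101/24 dev eth0 label eth0:2
  198.51.100.102/24 dev eth0 label eth0:3
  198.51.100.107/24 dev eth0 label eth0:4

And after a failover, counting the IPv4 addresses on eth0 is a quick sanity check that the whole set has moved - the result should be the number of configured VIPs plus one for the interface's own address:

$ ip -4 addr show dev eth0 | grep -c 'inet '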