Deploying an active-active-VRRP cluster
This topic includes the following information:
Configuration overview
The Virtual Router Redundancy Protocol (VRRP) is designed to eliminate the single point of failure inherent in the static default routed environment. VRRP specifies an election protocol that dynamically assigns responsibility for a virtual router to one of the VRRP routers on a LAN. The VRRP router controlling the IP address(es) associated with a virtual router is called the primary, and forwards packets sent to these IP addresses. The election process provides dynamic fail-over in the forwarding responsibility should the primary become unavailable. Any of the virtual router's IP addresses on a LAN can then be used as the default first hop router by end-hosts. The advantage of VRRP is a higher availability default path without requiring configuration of dynamic routing or router discovery protocols on every end-host.
A virtual router is defined by its virtual router identifier (VRID) and a set of IP addresses. A VRRP router may associate a virtual router with its real address on an interface, and may also be configured with additional virtual router mappings and priority that the virtual router can back up. The mapping between VRID and addresses must be coordinated among all VRRP routers on a LAN.
FortiADC adopts only the VRRP concept, not the exact VRRP protocol itself. For this reason, its HA active-active-VRRP mode can only be called a VRRP-like HA mode.
VRRP configurations can be used as a high availability (HA) solution to ensure that your network maintains connectivity with the Internet (or with other networks) even if the default router for your network fails. Using VRRP, you can assign VRRP routers as primary or backup routers. The primary router processes traffic, while the backup routers monitor the primary router and start forwarding traffic the moment the primary router fails.
VRRP is described in RFC 3768.
FortiADC units can function as primary or backup Virtual Router Redundancy Protocol (VRRP) routers and can be quickly and easily integrated into a network that has already deployed VRRP. In a VRRP configuration, when a FortiADC unit operating as the primary unit fails, a backup unit automatically takes its place and continues processing network traffic. In such a situation, all traffic to the failed unit transparently fails over to the backup unit that takes over the role of the failed primary FortiADC unit. When the failed FortiADC unit is restored, it will once again take over processing traffic for the network. See An active-active-VRRP cluster configuration using two FortiADC units.
In an active-active-VRRP cluster, one of the nodes is selected as the primary node of a traffic group, and the rest of the nodes are member nodes of the traffic group. Traffic from the upstream can be load-balanced among up to eight member nodes. Active-active-VRRP clusters also support failover. If the primary node fails, the traffic group running on that node fails over to one of the backup nodes, which sends gratuitous ARP packets on all network interfaces within the traffic group so that adjacent devices redirect the group's traffic to its own MAC address.
The FortiADC VRRP configuration involves the following:
- Traffic groups and their features (see Creating a traffic group)
- Interface and virtual server (the pertinent floating IP and traffic group)
- HA
Note: FortiADC supports VRRP configuration only between two or more FortiADC units. It can NOT be integrated into a VRRP group formed with any third-party VRRP devices.
Basic steps
To deploy an active-active-VRRP cluster:
For deployment instructions, refer to Deploy HA-VRRP mode in the HA Deployment Guide. The following steps describe how to configure the VRRP cluster after deployment.
- Configure the HA active-active-VRRP cluster.
For example:
FAD 1:
config system ha
set mode active-active-vrrp
set hbdev port2
set group-id 14
set local-node-id 1
end
FAD 2:
config system ha
set mode active-active-vrrp
set hbdev port2
set group-id 14
set local-node-id 2
end
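The same HA settings extend to larger clusters: each additional appliance repeats the block with a unique local-node-id while keeping the shared group-id. As an illustrative sketch (a hypothetical third unit, assuming it uses the same heartbeat interface and group ID as above):
FAD 3:
config system ha
set mode active-active-vrrp
set hbdev port2
set group-id 14
set local-node-id 3
end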
- Configure the traffic group.
Configure the traffic group and set its parameters. The failover sequence is configured as an ordered list of node IDs: if a node fails, the next node in the list takes over its traffic. To have a node automatically reclaim its traffic when it comes back up after a failure, enable preempt.
If the cluster has only two nodes, connect the two appliances directly with a crossover cable.
If it has more than two nodes, link the appliances through a switch. When connected through a switch, the interfaces must be reachable by Layer 2 multicast.
config system traffic-group
edit "traffic-group-1"
set failover-order 1 2
next
edit "traffic-group-2"
set failover-order 2 1
set preempt enable
next
end
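To sketch how the failover order generalizes, a hypothetical three-node cluster could give each traffic group all three node IDs in a different order, so each group has its own primary node and an ordered chain of backups (illustrative values, not part of the two-node example above):
config system traffic-group
edit "traffic-group-1"
set failover-order 1 2 3
next
edit "traffic-group-2"
set failover-order 2 3 1
set preempt enable
next
end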
- Configure applications and associate them with a traffic group
Associate applications with a traffic group in the virtual server configuration and in the interface/IP configuration. If no traffic group is specified, the “default” traffic group is used.
For example (Relate a virtual server to a traffic group):
config load-balance virtual-server
edit "vs1"
set packet-forwarding-method FullNAT
set interface port1
set ip 10.128.3.4
set load-balance-profile LB_PROF_HTTP
set load-balance-method LB_METHOD_DEST_IP_HASH
set load-balance-pool rs1
set ippool-list vs1-pool vs1-pool-1
set traffic-group traffic-group-1
next
edit "vs2"
set packet-forwarding-method FullNAT
set interface port1
set ip 10.127.3.4
set load-balance-profile LB_PROF_HTTP
set load-balance-method LB_METHOD_DEST_IP_HASH
set load-balance-pool rs2
set ippool-list vs2-pool vs2-pool-1
set traffic-group traffic-group-2
next
end
For example (Relate an interface and IP address with a traffic group):
config system interface
edit "port1"
set vdom root
set ip 10.128.3.1/16
set allowaccess https ping ssh snmp http telnet
set traffic-group traffic-group-1
set floating enable
set floating-ip 10.128.3.3
next
edit "port2"
set vdom root
set ip 10.127.3.1/16
set allowaccess https ping ssh snmp http telnet
set traffic-group traffic-group-2
set floating enable
set floating-ip 10.127.3.3
next
end
- Confirm the working status of the HA nodes.
With the two traffic groups configured with opposite failover orders, and the two virtual servers each associated with one of the traffic groups, vs1 works on node 1 and vs2 works on node 2. If one HA node fails, the other node takes over all the traffic from the failed node.
Best practice tips
The following tips are best practices:
- After you save the HA configuration changes, cluster members join or rejoin the cluster.
- After you save configuration changes on the primary node, it automatically pushes its configuration to the member nodes.