Configuring HA settings

Note: Currently, FortiADC only supports HA configurations for IPv4 address mode; HA is not supported on IPv6.

Before you begin:

  • You must have Read-Write permission to items in the System category.
To configure HA settings:
  1. Go to System > High Availability.
  2. Complete the configuration as described in High availability configuration.
  3. Save the configuration.

After you have saved the configuration, cluster members begin to send heartbeat traffic to each other. Members with the same Group ID join the cluster. They send synchronization traffic through their data links.

High availability configuration

Settings and guidelines:

Cluster Mode Select one of the following:
  • Standalone
  • Active-Passive
  • Active-Active
  • Active-Active-VRRP
Basic Settings  
Active-Passive  
Group Name Name to identify the HA cluster if you have more than one. This setting is optional, and does not affect HA function. The maximum length is 63 characters.
Group ID Number that identifies the HA cluster. Nodes with the same group ID join the cluster. If you have more than one HA cluster on the same network, each cluster must have a different group ID. The group ID is used in the virtual MAC address that is sent in broadcast ARP messages. The valid range is 0 to 31. The default value is 0.
Config Priority

The default value is 100. The valid range is 0 to 255.

Note: FortiADC 4.7.x introduced a new HA parameter, config-priority, which determines which node's configuration the system uses when synchronizing the configuration between HA nodes. Upon upgrading to FortiADC 4.7.x, it is highly recommended that you use this option to manually set different configuration priority values on the nodes; otherwise, you have no control over the system's master-slave configuration sync behavior. When the configuration priority values are identical on both nodes (whether by default or by configuration), the system uses the configuration of the appliance with the larger serial number to override that of the appliance with the smaller serial number. When the configuration priority values differ, the configuration of the appliance with the lower configuration priority prevails.
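The precedence rules above can be sketched as a small comparison function. This is an illustrative model of the documented behavior, not FortiADC source code: the function name and the (serial, priority) tuple representation are invented for the example, and serial numbers are compared as strings for simplicity.

```python
def config_sync_winner(node_a, node_b):
    """Each node is a (serial_number, config_priority) tuple.
    Returns the node whose configuration prevails in HA config sync,
    per the documented rules: lower config-priority wins; on a tie,
    the larger serial number wins."""
    if node_a[1] != node_b[1]:
        # Different priorities: the lower config-priority prevails.
        return min(node_a, node_b, key=lambda n: n[1])
    # Identical priorities: the appliance with the larger serial
    # number overrides the other (string comparison is a simplification).
    return max(node_a, node_b, key=lambda n: n[0])
```

For example, a node with config-priority 50 overrides a peer with the default 100 regardless of serial numbers; only when both nodes keep the default does the serial-number tiebreak apply.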

Active-Active  
Group Name Same as Active-Passive. See above.
Group ID Same as Active-Passive. See above.
Config Priority Same as Active-Passive. See above.
Local Node ID A number that uniquely identifies the member within the cluster. The valid range is from 0 to 7. This number is used in the virtual MAC address that is sent in ARP responses.
Node List Select the node IDs for the nodes in the cluster. An active-active cluster can have up to eight members.
Active-Active-VRRP  
Group Name Same as Active-Passive. See above.
Group ID Same as Active-Passive. See above.
Config Priority Same as Active-Passive. See above.
Local Node ID Same as Active-Active. See above.
Synchronization  
Layer 7 Persistence Synchronization

Enable to synchronize Layer 7 session data used for persistence to backend servers.

When enabled, the Source Address Persistence table is synchronized between HA members.

When not enabled, a node that receives traffic due to failover does not know that a session has already been created, so the traffic is treated as a new session.

Synchronization of the persistence table is not required for cookie-based or hash-based persistence methods; these methods route client traffic to the same backend server without it.

Synchronization of the persistence table is not possible for SSL session ID. When the session via the first node is terminated, the client must re-establish an SSL connection via the second node. When a client requests a new SSL connection with an SSL server, the initial TCP connection has an SSL Session ID of 0. This zero value tells the server that it needs to set up a new SSL session and to generate an SSL Session ID. The server sends the new SSL Session ID in its response to the client as part of the SSL handshake.

Layer 4 Persistence Synchronization

Enable to synchronize Layer 4 session data used for persistence to backend servers.

When enabled, the Source Address Persistence table is synchronized between HA members. When not enabled, a node that receives traffic because of load balancing or failover does not know that a session has already been created, so the traffic is treated as a new session.

Synchronization of the persistence table is not required for hash-based persistence methods; these methods route client traffic to the same backend server without it.

Layer 4 Connection Synchronization

Enable to synchronize Layer 4 connection state data.

When enabled, the TCP session table is synchronized. If subsequent traffic for the connection is distributed through a different cluster node because of failover, the TCP sessions can resume without interruption.

When not enabled, a node that receives traffic because of failover does not know that a session has already been created, and the client is required to re-initialize the connection.

Advanced Settings  
Priority Number indicating priority of the member node when electing the cluster primary node. This setting is optional. The smaller the number, the higher the priority. The default is 5. The valid range is from 0 to 9.

Note: By default, up time is more important than this setting unless Override is enabled. See below.
Override Enabled by default. This makes device priority (see above) a more important factor than up time when selecting the primary node.
Heartbeat Interval

Number of 100-millisecond intervals at which heartbeat packets are sent. This is also the interval at which a node expects to receive heartbeat packets. This part of the configuration is pushed from the primary node to member nodes. The default is 2. The valid range is 1 to 20 (that is, between 100 and 2,000 milliseconds).

Note: Although this setting is pushed from the primary node to member nodes, you should initially configure all nodes with the same Heartbeat Interval to prevent inadvertent failover from occurring before the initial synchronization.

Lost Heartbeat Threshold

Number of times a node retries the heartbeat and waits to receive HA heartbeat packets from the other nodes before concluding the other node is down. This part of the configuration is pushed from the primary node to member nodes. Normally, you do not need to change this setting. Exceptions include:

  • Increase the failure detection threshold if a failure is detected when none has actually occurred. For example, in an active-passive deployment, if the primary node is very busy during peak traffic times, it might not respond to heartbeat packets in time, and a standby node might assume that the primary node has failed.
  • Decrease the failure detection threshold or detection interval if administrators and HTTP clients have to wait too long before being able to connect through the primary node, resulting in noticeable down time.


The valid range is from 1 to 60.

Note: Although this setting is pushed from the primary node to member nodes, you should initially configure all nodes with the same Lost Heartbeat Threshold to prevent inadvertent failover from occurring before the initial synchronization.
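The Heartbeat Interval and Lost Heartbeat Threshold settings combine to give the worst-case failure detection time: the interval (in 100-millisecond units) multiplied by the threshold. A minimal sketch of that arithmetic (the function name is invented for illustration):

```python
def detection_time_ms(hb_interval, lost_threshold):
    """Worst-case time before a peer is declared down, in milliseconds.

    hb_interval: Heartbeat Interval in 100 ms units (valid range 1-20).
    lost_threshold: Lost Heartbeat Threshold (valid range 1-60).
    """
    return hb_interval * 100 * lost_threshold

# With the default interval of 2 (200 ms) and a threshold of 3,
# a failed peer is detected after about 600 ms.
```

Raising either value makes detection more tolerant of transient load (fewer false failovers); lowering either value shortens the outage window clients see during a real failure.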

ARP Times Number of times that the cluster member broadcasts extra address resolution protocol (ARP) packets when it takes on the primary role. (Even though a new NIC has not actually been connected to the network, the member does this to notify the network that a new physical port has become associated with the IP address and virtual MAC of the HA cluster.) This is sometimes called “using gratuitous ARP packets to train the network,” and can occur when the primary node is starting up, or during a failover. Also configure ARP Interval.

Normally, you do not need to change this setting. Exceptions include:

  • Increase the number of times the primary node sends gratuitous ARP packets if an active-passive cluster takes a long time to fail over or to train the network. Sending more gratuitous ARP packets may help the failover to happen faster.
  • Decrease the number of times the primary node sends gratuitous ARP packets if the cluster has a large number of VLAN interfaces and virtual domains. Because gratuitous ARP packets are broadcast, sending them might generate a large amount of network traffic. As long as the active-passive cluster fails over successfully, you can reduce the number of times gratuitous ARP packets are sent to reduce the amount of traffic produced by a failover.

The valid range is 1 to 60. The default is 5.
ARP Interval Number of seconds to wait between each broadcast of ARP packets. Normally, you do not need to change this setting. Exceptions include:
  • Decrease the interval if an active-passive cluster takes a long time to fail over or to train the network. Sending ARP packets more frequently may help the failover to happen faster.
  • Increase the interval if the cluster has a large number of VLAN interfaces and virtual domains. Because gratuitous ARP packets are broadcast, sending them might generate a large amount of network traffic. As long as the active-passive cluster fails over successfully, you can increase the interval between when gratuitous ARP packets are sent to reduce the rate of traffic produced by a failover.

The valid range is from 1 to 20. The default is 6 seconds.
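Together, ARP Times and ARP Interval determine roughly how long the cluster spends "training" the network after a failover. A small sketch of that arithmetic (the function name is invented, and it assumes the first broadcast goes out immediately):

```python
def arp_training_window_s(arp_times, arp_interval_s):
    """Approximate span, in seconds, over which gratuitous ARP packets
    are broadcast after a failover: arp_times broadcasts spaced
    arp_interval_s seconds apart, with the first sent immediately."""
    return (arp_times - 1) * arp_interval_s

# With the defaults (5 broadcasts, 6 seconds apart), the network is
# trained over roughly 24 seconds.
```

This is why the guidelines above trade the two settings off against each other: more broadcasts or a shorter interval speeds convergence, while fewer broadcasts or a longer interval reduces broadcast traffic on networks with many VLANs and virtual domains.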
Data

Set the network interface to be used for data synchronization among cluster nodes. You can configure up to two data ports. If one data port fails, its traffic fails over to the next data port. If all data ports fail, data synchronization traffic fails over to the heartbeat port. If you do not configure a data port, the heartbeat port is used for synchronization. Use the same port numbers for all cluster members. For example, if you select port3 on the primary node, select port3 as the data port interface on the other member nodes.

Remote IP Monitor

Enable or disable active monitoring of remote beacon IP addresses to determine if the network path is available.

Note: This option is disabled by default. If enabled, you must specify the Failover Threshold and Failover Hold Time described below.

Failover Threshold Number of unreachable remote IP monitor list entries that indicates failure. The default is 5. The valid range is 1 to 64.
Failover Hold Time If failover occurs due to a remote IP monitor test, and this node's role changes (to master or slave), it cannot change again until the hold time elapses. The hold time can be used to prevent looping. The default hold time is 120 seconds. The valid range is from 60 to 86400.
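The hold-time behavior described above can be modeled as a simple guard: once a remote-IP-monitor failover changes the node's role, further role changes are suppressed until the hold time elapses. This is an illustrative sketch of the documented behavior (the class and method names are invented), not FortiADC source:

```python
class RoleChangeGuard:
    """Suppresses repeated role flapping after a remote IP monitor
    failover, per the Failover Hold Time setting (default 120 s)."""

    def __init__(self, hold_time_s=120):
        self.hold_time_s = hold_time_s
        self.last_change_s = None  # time of the most recent role change

    def try_change_role(self, now_s):
        """Return True if a role change is permitted at time now_s."""
        if (self.last_change_s is not None
                and now_s - self.last_change_s < self.hold_time_s):
            return False  # still inside the hold window: deny the change
        self.last_change_s = now_s  # allowed: start a new hold window
        return True
```

For example, with the default 120-second hold time, a node that fails over at t=0 cannot change roles again at t=60 even if the monitor result flips back, preventing the master/slave looping the hold time is designed to avoid.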
