
CLI Reference

system ha


Enable and configure FortiGate FGCP high availability (HA) and virtual clustering. Some of these options are also used for FGSP and content clustering.


In FGCP mode, most settings are automatically synchronized among cluster units. The following settings are not synchronized:

  • override
  • priority (including the secondary-vcluster priority)
  • ha-mgmt-interface-gateway
  • ha-mgmt-interface-gateway6
  • cpu-threshold, memory-threshold, http-proxy-threshold, ftp-proxy-threshold, imap-proxy-threshold, nntp-proxy-threshold, pop3-proxy-threshold, smtp-proxy-threshold
  • The ha-priority setting of the config system link-monitor command
  • The config system interface settings of the FortiGate interface that becomes an HA reserved management interface
  • The config system global hostname setting.
  • The GUI Dashboard configuration. After a failover you may have to re-configure dashboard widgets.
  • OSPF summary-addresses settings.

History

The following table shows all newly added, changed, or removed entries as of FortiOS 6.0.

  • set unicast-hb-netmask {disable | enable}: Set unicast heartbeat netmask.
  • set inter-cluster-session-sync {disable | enable}: Enable or disable FGSP session synchronization between FGCP clusters. This entry is only available when mode is set to either a-a or a-p.

CLI Syntax

config system ha
    set group-id {integer}   Cluster group ID  (0 - 255). Must be the same for all members. range[0-255]
    set group-name {string}   Cluster group name. Must be the same for all members. size[32]
    set mode {standalone | a-a | a-p}   HA mode. Must be the same for all members. FGSP requires standalone.
            standalone  Standalone mode.
            a-a         Active-active mode.
            a-p         Active-passive mode.
    set sync-packet-balance {enable | disable}   Enable/disable HA packet distribution to multiple CPUs.
    set password {password_string}   Cluster password. Must be the same for all members. size[128]
    set key {password_string}   key size[16]
    set hbdev {string}   Heartbeat interfaces. Must be the same for all members.
    set session-sync-dev {string}   Offload session sync to one or more interfaces to distribute traffic and prevent delays if needed.
    set route-ttl {integer}   TTL for primary unit routes (5 - 3600 sec). Increase to maintain active routes during failover. range[5-3600]
    set route-wait {integer}   Time to wait before sending new routes to the cluster (0 - 3600 sec). range[0-3600]
    set route-hold {integer}   Time to wait between routing table updates to the cluster (0 - 3600 sec). range[0-3600]
    set multicast-ttl {integer}   HA multicast TTL on master (5 - 3600 sec). range[5-3600]
    set load-balance-all {enable | disable}   Enable to load balance TCP sessions. Disable to load balance proxy sessions only.
    set sync-config {enable | disable}   Enable/disable configuration synchronization.
    set encryption {enable | disable}   Enable/disable heartbeat message encryption.
    set authentication {enable | disable}   Enable/disable heartbeat message authentication.
    set hb-interval {integer}   Time between sending heartbeat packets (1 - 20 (100*ms)). Increase to reduce false positives. range[1-20]
    set hb-lost-threshold {integer}   Number of lost heartbeats to signal a failure (1 - 60). Increase to reduce false positives. range[1-60]
    set hello-holddown {integer}   Time to wait before changing from hello to work state (5 - 300 sec). range[5-300]
    set gratuitous-arps {enable | disable}   Enable/disable gratuitous ARPs. Disable if link-failed-signal enabled.
    set arps {integer}   Number of gratuitous ARPs (1 - 60). Lower to reduce traffic. Higher to reduce failover time. range[1-60]
    set arps-interval {integer}   Time between gratuitous ARPs  (1 - 20 sec). Lower to reduce failover time. Higher to reduce traffic. range[1-20]
    set session-pickup {enable | disable}   Enable/disable session pickup. Enabling it can reduce session down time when fail over happens.
    set session-pickup-connectionless {enable | disable}   Enable/disable UDP and ICMP session sync for FGSP.
    set session-pickup-expectation {enable | disable}   Enable/disable session helper expectation session sync for FGSP.
    set session-pickup-nat {enable | disable}   Enable/disable NAT session sync for FGSP.
    set session-pickup-delay {enable | disable}   Enable to sync sessions longer than 30 sec. Only longer lived sessions need to be synced.
    set link-failed-signal {enable | disable}   Enable to shut down all interfaces for 1 sec after a failover. Use if gratuitous ARPs do not update network.
    set uninterruptible-upgrade {enable | disable}   Enable to upgrade a cluster without blocking network traffic.
    set standalone-mgmt-vdom {enable | disable}   Enable/disable standalone management VDOM.
    set ha-mgmt-status {enable | disable}   Enable to reserve interfaces to manage individual cluster units.
    config ha-mgmt-interfaces
        edit {id}
        # Reserve interfaces to manage individual cluster units.
            set id {integer}   Table ID. range[0-4294967295]
            set interface {string}   Interface to reserve for HA management. size[15] - datasource(s): system.interface.name
            set dst {ipv4 classnet}   Default route destination for reserved HA management interface.
            set gateway {ipv4 address}   Default route gateway for reserved HA management interface.
            set gateway6 {ipv6 address}   Default IPv6 gateway for reserved HA management interface.
        next
    set ha-eth-type {string}   HA heartbeat packet Ethertype (4-digit hex). size[4]
    set hc-eth-type {string}   Transparent mode HA heartbeat packet Ethertype (4-digit hex). size[4]
    set l2ep-eth-type {string}   Telnet session HA heartbeat packet Ethertype (4-digit hex). size[4]
    set ha-uptime-diff-margin {integer}   Normally you would only reduce this value for failover testing. range[1-65535]
    set standalone-config-sync {enable | disable}   Enable/disable FGSP configuration synchronization.
    set vcluster2 {enable | disable}   Enable/disable virtual cluster 2 for virtual clustering.
    set vcluster-id {integer}   Cluster ID. range[0-255]
    set override {enable | disable}   Enable and increase the priority of the unit that should always be primary (master).
    set priority {integer}   Increase the priority to select the primary unit (0 - 255). range[0-255]
    set override-wait-time {integer}   Delay negotiating if override is enabled (0 - 3600 sec). Reduces how often the cluster negotiates. range[0-3600]
    set schedule {option}   Type of A-A load balancing. Use none if you have external load balancers.
            none                None.
            hub                 Hub.
            leastconnection     Least connection.
            round-robin         Round robin.
            weight-round-robin  Weight round robin.
            random              Random.
            ip                  IP.
            ipport              IP port.
    set weight {string}   Weight-round-robin weight for each cluster unit. Syntax <priority> <weight>.
    set cpu-threshold {string}   Dynamic weighted load balancing CPU usage weight and high and low thresholds.
    set memory-threshold {string}   Dynamic weighted load balancing memory usage weight and high and low thresholds.
    set http-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of HTTP proxy sessions.
    set ftp-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of FTP proxy sessions.
    set imap-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of IMAP proxy sessions.
    set nntp-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of NNTP proxy sessions.
    set pop3-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of POP3 proxy sessions.
    set smtp-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of SMTP proxy sessions.
    set monitor {string}   Interfaces to check for port monitoring (or link failure).
    set pingserver-monitor-interface {string}   Interfaces to check for remote IP monitoring.
    set pingserver-failover-threshold {integer}   Remote IP monitoring failover threshold (0 - 50). range[0-50]
    set pingserver-slave-force-reset {enable | disable}   Enable to force the cluster to negotiate after a remote IP monitoring failover.
    set pingserver-flip-timeout {integer}   Time to wait in minutes before renegotiating after a remote IP monitoring failover. range[6-2147483647]
    set vdom {string}   VDOMs in virtual cluster 1.
    config secondary-vcluster
        set vcluster-id {integer}   Cluster ID. range[0-255]
        set override {enable | disable}   Enable and increase the priority of the unit that should always be primary (master).
        set priority {integer}   Increase the priority to select the primary unit (0 - 255). range[0-255]
        set override-wait-time {integer}   Delay negotiating if override is enabled (0 - 3600 sec). Reduces how often the cluster negotiates. range[0-3600]
        set monitor {string}   Interfaces to check for port monitoring (or link failure).
        set pingserver-monitor-interface {string}   Interfaces to check for remote IP monitoring.
        set pingserver-failover-threshold {integer}   Remote IP monitoring failover threshold (0 - 50). range[0-50]
        set pingserver-slave-force-reset {enable | disable}   Enable to force the cluster to negotiate after a remote IP monitoring failover.
        set vdom {string}   VDOMs in virtual cluster 2.
    set ha-direct {enable | disable}   Enable/disable using ha-mgmt interface for syslog, SNMP, remote authentication (RADIUS), FortiAnalyzer, and FortiSandbox.
    set memory-compatible-mode {enable | disable}   Enable/disable memory compatible mode.
    set inter-cluster-session-sync {enable | disable}   Enable/disable synchronization of sessions among HA clusters.
end

Additional information

The following section is for those options that require additional explanation.

group-id <id>

The HA group ID, same for all members, from 0 to 255. The group ID identifies individual clusters on the network because it affects the cluster virtual MAC address. All cluster members must have the same group ID. If more than one cluster operates on the same network, each cluster must have a different group ID.

group-name <name>

The HA group name, same for all members. Max 32 characters. The HA group name identifies the cluster. All cluster members must have the same group name. Can be blank if mode is standalone.

mode {standalone | a-a | a-p}

The HA mode.

  • standalone to disable HA. The mode required for FGSP.
  • a-a to create an Active-Active cluster.
  • a-p to create an Active-Passive cluster.

All members of an HA cluster must be set to the same HA mode.
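For example, a minimal active-passive cluster configuration might look like the following (the group ID, group name, password, and interface names are placeholders; substitute values appropriate for your network):

config system ha
    set group-id 10
    set group-name "example-cluster"
    set mode a-p
    set password <cluster_password>
    set hbdev "port3" 50 "port4" 50
end

Enter the same configuration on each FortiGate unit that will join the cluster.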

password <password>

The HA cluster password, must be the same for all cluster units. The maximum password length is 128 characters.

hbdev <interface-name> <priority> [<interface-name> <priority>]...

Select the FortiGate interfaces to be heartbeat interfaces and set the heartbeat priority for each interface. The heartbeat interface with the highest priority processes all heartbeat traffic. If two or more heartbeat interfaces have the same priority, the heartbeat interface with the lowest hash map order value processes all heartbeat traffic.

By default two interfaces are configured to be heartbeat interfaces and the priority for both these interfaces is set to 50. The heartbeat interface priority range is 0 to 512.

You can select up to 8 heartbeat interfaces. This limit only applies to FortiGate units with more than 8 physical interfaces.

You can use the append command to add more entries. The default depends on the FortiGate model.
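For example, to make one heartbeat interface preferred over another, assign it a higher priority (the interface names here are placeholders):

config system ha
    set hbdev "port3" 100 "port4" 50
end

With this configuration, port3 processes all heartbeat traffic, and port4 takes over if port3 fails or is disconnected.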

multicast-ttl <timeout>

Time to wait before re-synchronizing the multicast routes to the kernel after an HA failover. The default is 600 seconds, the range is 5 to 3600 seconds.

After an HA failover, the new primary FortiGate waits for the multicast-ttl to expire before synchronizing multicast routes to the kernel. If you notice that multicast sessions are not connecting after an HA failover, this may be because the 600 seconds has not elapsed, so the multicast routes in the kernel are out of date (for example, the kernel could have multicast routes that are no longer valid). To reduce this delay, you can set the multicast-ttl time to a low value, for example 10 seconds, resulting in quicker updates of the kernel multicast routing table.
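For example, to apply the lower value suggested above so that the kernel multicast routing table is updated more quickly after a failover:

config system ha
    set multicast-ttl 10
end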

session-sync-dev <interface>

Select one or more FortiGate interfaces to use for synchronizing sessions as required for session pickup. Normally session synchronization occurs over the HA heartbeat link. Using this HA option means only the selected interfaces are used for session synchronization and not the HA heartbeat link. If you select more than one interface, session synchronization traffic is load balanced among the selected interfaces.

Moving session synchronization from the HA heartbeat interface reduces the bandwidth required for HA heartbeat traffic and may improve the efficiency and performance of the deployment, especially if the deployment is synchronizing a large number of sessions. Load balancing session synchronization among multiple interfaces can further improve performance and efficiency if the deployment is synchronizing a large number of sessions.

Session synchronization packets use Ethertype 0x8892. The interfaces to use for session synchronization must be connected together either directly using the appropriate cable (possible if there are only two units in the deployment) or using switches. If one of the interfaces becomes disconnected the deployment uses the remaining interfaces for session synchronization. If all of the session synchronization interfaces become disconnected, session synchronization reverts back to using the HA heartbeat link. All session synchronization traffic is between the primary unit and each subordinate unit.

Since large amounts of session synchronization traffic can increase network congestion, it is recommended that you keep this traffic off of your network by using dedicated connections for it.
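For example, to offload session synchronization to two dedicated interfaces (the interface names are placeholders; session synchronization applies when session-pickup is enabled):

config system ha
    set session-pickup enable
    set session-sync-dev "port5" "port6"
end

Session synchronization traffic is then load balanced between port5 and port6 instead of using the HA heartbeat link.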

sync-packet-balance {disable | enable}

Enable this option for FortiOS Carrier FGCP clusters or FGSP peers to distribute the processing of HA synchronization packets to multiple CPUs. By default, this option is disabled and all HA synchronization packets are processed by one CPU.

Enabling this option may improve the performance of an entity that is processing large numbers of packets causing session synchronization using excessive amounts of CPU cycles. For example, GTP traffic can result in very high packet rates and you can improve the performance of a FortiOS Carrier FGCP cluster or FGSP deployment that is processing GTP traffic by enabling this option.

route-ttl <ttl>

Control how long routes remain in a cluster unit's routing table. The time to live range is 5 to 3600 seconds (3600 seconds is one hour). The default time to live is 10 seconds.

The time to live controls how long routes remain active in a cluster unit routing table after the cluster unit becomes a primary unit. To maintain communication sessions after a cluster unit becomes a primary unit, routes remain active in the routing table for the route time to live while the new primary unit acquires new routes.

By default, route-ttl is set to 10 which may mean that only a few routes will remain in the routing table after a failover. Normally keeping route-ttl to 10 or reducing the value to 5 is acceptable because acquiring new routes usually occurs very quickly, especially if graceful restart is enabled, so only a minor delay is caused by acquiring new routes.

If the primary unit needs to acquire a very large number of routes, or if for other reasons there is a delay in acquiring all routes, the primary unit may not be able to maintain all communication sessions.

You can increase the route time to live if you find that communication sessions are lost after a failover so that the primary unit can use synchronized routes that are already in the routing table, instead of waiting to acquire new routes.

route-wait <wait>

The amount of time in seconds that the primary unit waits after receiving routing updates before sending the updates to the subordinate units. For quick routing table updates to occur, set route-wait to a relatively short time so that the primary unit does not hold routing table changes for too long before updating the subordinate units.

The route-wait range is 0 to 3600 seconds. The default route-wait is 0 seconds.

Normally, because the route-wait time is 0 seconds, the primary unit sends routing table updates to the subordinate units every time its routing table changes.

Once a routing table update is sent, the primary unit waits the route-hold time before sending the next update.

Usually routing table updates are periodic and sporadic. Subordinate units should receive these changes as soon as possible so route-wait is set to 0 seconds. route-hold can be set to a relatively long time because normally the next route update would not occur for a while.

In some cases, routing table updates can occur in bursts. A large burst of routing table updates can occur if a router or a link on a network fails or changes. When a burst of routing table updates occurs, there is a potential that the primary unit could flood the subordinate units with routing table updates. Flooding routing table updates can affect cluster performance if a great deal of routing information is synchronized between cluster units. Setting route-wait to a longer time reduces the frequency of these additional updates and prevents flooding of routing table updates from occurring.

route-hold <hold>

The amount of time in seconds that the primary unit waits between sending routing table updates to subordinate units. The route hold range is 0 to 3600 seconds. The default route hold time is 10 seconds.

To avoid flooding routing table updates to subordinate units, set route-hold to a relatively long time to prevent subsequent updates from occurring too quickly. Flooding routing table updates can affect cluster performance if a great deal of routing information is synchronized between cluster units. Increasing the time between updates means that this data exchange will not have to happen so often.

The route-hold time should be coordinated with the route-wait time.
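For example, if routing table updates on your network tend to arrive in bursts, you could hold updates slightly longer before sending them and increase the interval between updates (the values shown are illustrative, not recommendations):

config system ha
    set route-wait 5
    set route-hold 30
end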

sync-config {disable | enable}

Enable or disable automatic synchronization of configuration changes to all cluster units.

encryption {disable | enable}

Enable or disable HA heartbeat message encryption using AES-128 for encryption and SHA1 for authentication. Disabled by default.

authentication {disable | enable}

Enable or disable HA heartbeat message authentication using SHA1. Disabled by default.

hb-interval <interval>

The time between sending heartbeat packets. The heartbeat interval range is 1 to 20 (100*milliseconds). The default is 2.

A heartbeat interval of 2 means the time between heartbeat packets is 200 ms. Changing the heartbeat interval to 5 changes the time between heartbeat packets to 500 ms (5 * 100ms = 500ms).

HA heartbeat packets consume more bandwidth if the heartbeat interval is short. But if the heartbeat interval is very long, the cluster is not as sensitive to topology and other network changes.

The heartbeat interval combines with the lost heartbeat threshold to set how long a cluster unit waits before assuming that another cluster unit has failed and is no longer sending heartbeat packets. By default, if a cluster unit does not receive a heartbeat packet from another cluster unit for 6 * 200 = 1200 milliseconds (1.2 seconds), it assumes that the other cluster unit has failed.

You can increase both the heartbeat interval and the lost heartbeat threshold to reduce false positives. For example, increasing the heartbeat interval to 20 and the lost heartbeat threshold to 30 means a failure will be assumed if no heartbeat packets are received after 30 * 2000 milliseconds = 60,000 milliseconds, or 60 seconds.
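The 60-second example above corresponds to the following configuration:

config system ha
    set hb-interval 20
    set hb-lost-threshold 30
end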

hb-lost-threshold <threshold>

The number of consecutive heartbeat packets that are not received from another cluster unit before assuming that the cluster unit has failed. The default value is 6, meaning that if 6 consecutive heartbeat packets are not received from a cluster unit, that cluster unit is considered to have failed. The range is 1 to 60 packets.

If the primary unit does not receive a heartbeat packet from a subordinate unit before the heartbeat threshold expires, the primary unit assumes that the subordinate unit has failed.

If a subordinate unit does not receive a heartbeat packet from the primary unit before the heartbeat threshold expires, the subordinate unit assumes that the primary unit has failed. The subordinate unit then begins negotiating to become the new primary unit.

The lower the hb-lost-threshold the faster a cluster responds when a unit fails. However, sometimes heartbeat packets may not be sent because a cluster unit is very busy. This can lead to a false positive failure detection. To reduce these false positives you can increase the hb-lost-threshold.

hello-holddown <timer>

The number of seconds that a cluster unit waits before changing from the hello state to the work state. The default is 20 seconds and the range is 5 to 300 seconds.

The hello state hold-down time is the number of seconds that a cluster unit waits before changing from hello state to work state. After a failure or when starting up, cluster units operate in the hello state to send and receive heartbeat packets so that all the cluster units can find each other and form a cluster. A cluster unit should change from the hello state to work state after it finds all of the other FortiGate units to form a cluster with. If for some reason all cluster units cannot find each other during the hello state then some cluster units may be joining the cluster after it has formed. This can cause disruptions to the cluster and affect how it operates.

One reason for a delay in all of the cluster units joining the cluster could be that the cluster units are located at different sites, or that for some other reason communication between the heartbeat interfaces is delayed.

If cluster units are joining your cluster after it has started up or if it takes a while for units to join the cluster you can increase the time that the cluster units wait in the hello state.

gratuitous-arps {disable | enable}

Enable or disable sending gratuitous ARP packets from a new primary unit. Enabled by default.

In most cases you would want to send gratuitous ARP packets because it's a reliable way for the cluster to notify the network to send traffic to the new primary unit. However, in some cases, sending gratuitous ARP packets may be less optimal. For example, if you have a cluster of FortiGate units in Transparent mode, after a failover the new primary unit will send gratuitous ARP packets to all of the addresses in its Forwarding Database (FDB). If the FDB has a large number of addresses it may take extra time to send all the packets and the sudden burst of traffic could disrupt the network.

If you choose to disable sending gratuitous ARP packets you must first enable the link-failed-signal setting. The cluster must have some way of informing attached network devices that a failover has occurred.

arps <number>

The number of times that the primary unit sends gratuitous ARP packets. Gratuitous ARP packets are sent when a cluster unit becomes a primary unit (this can occur when the cluster is starting up or after a failover). The default is 5 packets, the range is 1 to 60.

Usually you would not change the default setting of 5. In some cases, however, you might want to reduce the number of gratuitous ARP packets. For example, if your cluster has a large number of VLAN interfaces and virtual domains, then because gratuitous ARP packets are broadcast, sending a high number of gratuitous ARP packets may generate a lot of network traffic. As long as the cluster still fails over successfully, you could reduce the number of gratuitous ARP packets that are sent to reduce the amount of traffic produced after a failover.

If failover is taking longer than expected, you may be able to reduce the failover time by increasing the number of gratuitous ARP packets sent.

arps-interval <interval>

The number of seconds to wait between sending gratuitous ARP packets. When a cluster unit becomes a primary unit (this occurs when the cluster is starting up or after a failover) the primary unit sends gratuitous ARP packets immediately to inform connected network equipment of the IP address and MAC address of the primary unit. The default is 8 seconds, the range is 1 to 20 seconds.

Normally you would not need to change the time interval. However, you could decrease the time to be able to send more packets in less time if your cluster takes a long time to fail over.

There may also be a number of reasons to set the interval higher. For example, if your cluster has a large number of VLAN interfaces and virtual domains, then because gratuitous ARP packets are broadcast, sending gratuitous ARP packets may generate a lot of network traffic. As long as the cluster still fails over successfully, you could increase the interval to reduce the amount of traffic produced after a failover.
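For example, to reduce the burst of broadcast traffic after a failover, you could send fewer gratuitous ARP packets at a longer interval (the values shown are illustrative):

config system ha
    set arps 3
    set arps-interval 15
end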

session-pickup {disable | enable}

Enable or disable session pickup. Disabled by default.

Enable session-pickup so that if the primary unit fails, all sessions are picked up by the new primary unit. If you enable session pickup the subordinate units maintain session tables that match the primary unit session table. If the primary unit fails, the new primary unit can maintain most active communication sessions.

If you do not enable session pickup the subordinate units do not maintain session tables. If the primary unit fails all sessions are interrupted and must be restarted when the new primary unit is operating.

Many protocols can successfully restart sessions with little, if any, loss of data. For example, after a failover, users browsing the web can just refresh their browsers to resume browsing. Since most HTTP sessions are very short, in most cases they will not even notice an interruption unless they are downloading large files. Users downloading a large file may have to restart their download after a failover.

Other protocols may experience data loss and some protocols may require sessions to be manually restarted. For example, a user downloading files with FTP may have to either restart downloads or restart their FTP client.

session-pickup-connectionless {disable | enable}

This option applies to both FGCP and FGSP.

Enable or disable session synchronization for connectionless (UDP and ICMP) sessions.

When mode is set to a-a or a-p this option applies to FGCP.

When mode is standalone, this option applies to FGSP only.

This option is only available if session-pickup is enabled, and it is disabled by default.

session-pickup-expectation {disable | enable}

This option applies only to FGSP.

Enable or disable session synchronization for expectation sessions in an FGSP deployment. This option is only available when session-pickup is enabled and mode is standalone. It is disabled by default.

FortiOS session helpers keep track of the communication of Layer-7 protocols such as FTP and SIP that have control sessions and expectation sessions. Usually the control sessions establish the link between server and client and negotiate the ports and protocols that will be used for data communications. The session helpers then create expectation sessions through the FortiGate for the ports and protocols negotiated by the control session.

The expectation sessions are usually the sessions that actually communicate data. For FTP, the expectation sessions transmit files being uploaded or downloaded. For SIP, the expectation sessions transmit voice and video data. Expectation sessions usually have a timeout value of 30 seconds. If the communication from the server is not initiated within 30 seconds the expectation session times out and traffic will be denied.

session-pickup-nat {disable | enable}

This option applies only to FGSP.

Enable or disable session synchronization for NAT sessions in an FGSP deployment. This option is only available when session-pickup is enabled and mode is standalone. It is disabled by default.

session-pickup-delay {disable | enable}

This option applies to both FGCP and FGSP.

Enable or disable synchronizing sessions only if they remain active for more than 30 seconds. This option improves performance when session-pickup is enabled by reducing the number of sessions that are synchronized.
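For example, to enable session pickup while synchronizing only longer-lived sessions:

config system ha
    set session-pickup enable
    set session-pickup-delay enable
end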

session-sync-daemon-number <number>

The number of processes used by the HA session sync daemon. Increase the number of processes to handle session packets sent from the kernel efficiently when the session rate is high. Intended for ELBC clusters, this feature only works for clusters with two members. The default is 1, and the range is 1 to 15.

link-failed-signal {disable | enable}

Enable or disable shutting down all interfaces (except for heartbeat device interfaces) of a cluster unit with a failed monitored interface for one second after a failover occurs. Enable this option if the switch the cluster is connected to does not update its MAC forwarding tables after a failover caused by a link failure. Disabled by default.

If you choose to disable sending gratuitous ARP packets (by setting gratuitous-arps to disable) you must first enable link-failed-signal. The cluster must have some way of informing attached network devices that a failover has occurred.
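Because of this requirement, enable link-failed-signal before disabling gratuitous ARP packets. For example:

config system ha
    set link-failed-signal enable
    set gratuitous-arps disable
end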

uninterruptible-upgrade {disable | enable}

Enable or disable upgrading the cluster without interrupting cluster traffic processing. Enabled by default.

If uninterruptible-upgrade is enabled, traffic processing is not interrupted during a normal firmware upgrade. This process can take some time and may reduce the capacity of the cluster for a short time.

If uninterruptible-upgrade is disabled, traffic processing is interrupted during a normal firmware upgrade (similar to upgrading the firmware operating on a standalone FortiGate unit).

ha-mgmt-status {enable | disable}

Enable or disable the HA reserved management interface feature. Disabled by default.

ha-mgmt-interface <interface_name>

The FortiGate interface to be the reserved HA management interface. You can configure the IP address and other settings for this interface using the config system interface command. When you enable the reserved management interface feature the configuration of the reserved management interface is not synchronized by the FGCP.

ha-mgmt-interface-gateway <gateway_IP>

The default route for the reserved HA management interface (IPv4). This setting is not synchronized by the FGCP.

ha-mgmt-interface-gateway6 <gateway_IP>

The default route for the reserved HA management interface (IPv6). This setting is not synchronized by the FGCP.
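Using the ha-mgmt-interfaces table shown in the CLI syntax above, a reserved management interface could be configured as follows (the interface name and gateway address are placeholders):

config system ha
    set ha-mgmt-status enable
    config ha-mgmt-interfaces
        edit 1
            set interface "mgmt1"
            set gateway 192.168.1.254
        next
    end
end

Because these settings are not synchronized by the FGCP, configure the management interface IP address on each cluster unit individually using config system interface.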

ha-eth-type <type>

The Ethertype used by HA heartbeat packets for NAT mode clusters. <type> is a 4-digit number. Default is 8890.

hc-eth-type <type>

The Ethertype used by HA heartbeat packets for Transparent mode clusters. <type> is a 4-digit number. Default is 8891.

l2ep-eth-type <type>

The Ethertype used by HA telnet sessions between cluster units over the HA link. <type> is a 4-digit number. Default is 8893.

ha-uptime-diff-margin <margin>

The cluster age difference margin (grace period). This margin is the age difference ignored by the cluster when selecting a primary unit based on age. Normally the default value of 300 seconds (5 minutes) should not be changed. However, for demo purposes you can use this option to lower the difference margin. The range is 1 to 65535 seconds.

You may want to reduce the margin if during failover testing you don’t want to wait the default age difference margin of 5 minutes. You may also want to reduce the margin to allow uninterruptible upgrades to work.

You may want to increase the age margin if cluster unit startup time differences are larger than 5 minutes.

During a cluster firmware upgrade with uninterruptible-upgrade enabled (the default configuration) the cluster should not select a new primary unit after the firmware of all cluster units has been updated. But since the age difference of the cluster units is most likely less than 300 seconds, age is not used to affect primary unit selection and the cluster may select a new primary unit.

During failover testing where cluster units are failed over repeatedly the age difference between the cluster units will most likely be less than 5 minutes. During normal operation, if a failover occurs, when the failed unit rejoins the cluster its age will be very different from the age of the still operating cluster units so the cluster will not select a new primary unit. However, if a unit fails and is restored in a very short time the age difference may be less than 5 minutes. As a result the cluster may select a new primary unit during some failover testing scenarios.
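For example, to shorten failover testing you could temporarily lower the margin. The 60-second value is an arbitrary test value; restore the default of 300 when testing is complete:

config system ha
    set ha-uptime-diff-margin 60
end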

vcluster2 {disable | enable}

Enable or disable virtual cluster 2 (also called secondary-vcluster).

When multiple VDOMs are enabled, virtual cluster 2 is enabled by default. When virtual cluster 2 is enabled you can use config secondary-vcluster to configure virtual cluster 2.

Disable virtual cluster 2 to move all virtual domains from virtual cluster 2 back to virtual cluster 1.

Enabling virtual cluster 2 enables override for virtual cluster 1 and virtual cluster 2.

vcluster-id

Indicates the virtual cluster you are configuring. You can't change this setting. You can use the config secondary-vcluster command to edit vcluster 2.

standalone-config-sync {disable | enable}

Enable or disable synchronizing the configuration of this FortiGate unit to another FortiGate unit. This option is available if session-pickup is enabled and mode is standalone. Disabled by default.

override {disable | enable}

Enable or disable forcing the cluster to renegotiate and select a new primary unit every time a cluster unit leaves or joins a cluster, changes status within a cluster, or every time the HA configuration of a cluster unit changes.

Disabled by default. Automatically enabled when you enable virtual cluster 2. This setting is not synchronized to other cluster units.

In most cases you should keep override disabled to reduce how often the cluster negotiates. Frequent negotiations may cause frequent traffic interruptions. However, if you want to make sure that the same cluster unit always operates as the primary unit and if you are less concerned about frequent cluster negotiation you can set its device priority higher than other cluster units and enable override.

priority <priority>

The device priority of the cluster unit. Each cluster unit can have a different device priority. During HA negotiation, the cluster unit with the highest device priority becomes the primary unit. The device priority range is 0 to 255. The default is 128. This setting is not synchronized to other cluster units.

override-wait-time <seconds>

Delay renegotiating when override is enabled and HA is enabled or the cluster mode is changed or after a cluster unit reboots. You can add a time to prevent negotiation during transitions and configuration changes. Range 0 to 3600 seconds.
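For example, to make one cluster unit the preferred primary unit while damping renegotiation during configuration changes (the priority and wait time shown are illustrative):

config system ha
    set override enable
    set priority 200
    set override-wait-time 30
end

Enter these commands on the unit that should operate as the primary unit; override and priority are not synchronized to other cluster units.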

schedule {hub | ip | ipport | leastconnection | none | random | round-robin | weight-round-robin}

The cluster's active-active load balancing schedule.

  • hub load balancing if the cluster interfaces are connected to hubs. Traffic is distributed to cluster units based on the Source IP and Destination IP of the packet.
  • ip load balancing according to IP address.
  • ipport load balancing according to IP address and port.
  • leastconnection least connection load balancing.
  • none no load balancing. Use when the cluster interfaces are connected to load balancing switches.
  • random random load balancing.
  • round-robin round robin load balancing. If the cluster units are connected using switches, use round-robin to distribute traffic to the next available cluster unit.
  • weight-round-robin weighted round robin load balancing. Similar to round robin, but you can assign weighted values to each of the units in the cluster.
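For example, to distribute active-active sessions according to source and destination IP address and port:

config system ha
    set mode a-a
    set schedule ipport
end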

slave-switch-standby {disable | enable}

Enable to force a subordinate FortiSwitch-5203B or FortiController-5902D into standby mode even though its weight is non-zero. This is a content clustering option and is disabled by default.

minimum-worker-threshold <threshold>

Available on FortiSwitch-5203Bs or FortiController-5902Ds only in inter-chassis content-cluster mode. In inter-chassis mode the system considers the number of operating workers in a chassis when electing the primary chassis. A chassis that has fewer than minimum-worker-threshold workers operating is ranked lower than a chassis that meets or exceeds the threshold. The default value of 1 effectively disables the threshold. The range is 1 to 11.

monitor <interface-name> [<interface-name>...]

Enable or disable port monitoring for link failure. Port monitoring (also called interface monitoring) monitors FortiGate interfaces to verify that the monitored interfaces are functioning properly and connected to their networks.

Enter the names of the interfaces to monitor. Use a space to separate each interface name. Use append to add an interface to the list. If there are no monitored interfaces then port monitoring is disabled.

You can monitor physical interfaces, redundant interfaces, and 802.3ad aggregated interfaces but not VLAN interfaces, IPSec VPN interfaces, or switch interfaces.

You can monitor up to 64 interfaces. In a multiple VDOM configuration you can monitor up to 64 interfaces per virtual cluster.
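For example, to monitor two network-facing interfaces and later add a third (the interface names are placeholders):

config system ha
    set monitor port1 port2
    append monitor port3
end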

pingserver-monitor-interface <interface-name> [<interface-name>...]

Enable HA remote IP monitoring by specifying the FortiGate unit interfaces that will be used to monitor remote IP addresses. You can configure remote IP monitoring for all types of interfaces including physical interfaces, VLAN interfaces, redundant interfaces and aggregate interfaces.

Use a space to separate each interface name. Use append to add an interface to the list.

pingserver-failover-threshold <threshold>

The HA remote IP monitoring failover threshold. The failover threshold range is 0 to 50. Setting the failover threshold to 0 (the default) means that if any ping server added to the HA remote IP monitoring configuration fails an HA failover will occur.

Set the priority for each remote IP monitoring ping server using the ha-priority option of the config system link-monitor command. Increase the priority to require more remote links to fail before a failover occurs.
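A sketch of a remote IP monitoring configuration; the interface name, server address, and link-monitor entry shown are example values for illustration:

config system ha
    set pingserver-monitor-interface port1
    set pingserver-failover-threshold 2
end
config system link-monitor
    edit "remote-gw"
        set srcintf port1
        set server 203.0.113.1
        set ha-priority 1
    next
end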

pingserver-slave-force-reset {disable | enable}

In a remote IP monitoring configuration, if you also want the same cluster unit to always be the primary unit you can set its device priority higher and enable override. With this configuration, when a remote IP monitoring failover occurs, after the flip timeout expires another failover will occur (because override is enabled) and the unit with override enabled becomes the primary unit again. So the cluster automatically returns to normal operation.

The primary unit starts remote IP monitoring again. If the remote link is restored the cluster continues to operate normally. If, however, the remote link is still down, remote link failover causes the cluster to failover again. This will repeat each time the flip timeout expires until the failed remote link is restored.

You can use the pingserver-slave-force-reset option to control this behavior. By default this option is enabled and the behavior described above occurs. The overall behavior is that when the remote link is restored the cluster automatically returns to normal operation after the flip timeout.

If you disable pingserver-slave-force-reset, nothing happens after the initial remote IP monitoring failover when the flip timeout expires (as long as the new primary unit doesn't experience some kind of failover). The result is that repeated failovers no longer happen, but it also means that the original primary unit remains the subordinate unit and does not resume operating as the primary unit.

pingserver-flip-timeout <timeout>

The HA remote IP monitoring flip timeout in minutes. If HA remote IP monitoring fails on all cluster units because none of the cluster units can connect to the monitored IP addresses, the flip timeout stops a failover from occurring until the timer runs out. The range is 6 to 2147483647 minutes. The default is 60 minutes.

The flip timeout reduces the frequency of failovers if, after a failover, HA remote IP monitoring on the new primary unit also causes a failover. This can happen if the new primary unit cannot connect to one or more of the monitored remote IP addresses. The result could be that until you fix the network problem that blocks connections to the remote IP addresses, the cluster will experience repeated failovers. You can control how often the failovers occur by setting the flip timeout.

The flip timeout stops HA remote IP monitoring from causing a failover until the primary unit has been operating for the duration of the flip timeout. If you set the flip timeout to a relatively high number of minutes you can find and repair the network problem that prevented the cluster from connecting to the remote IP address without the cluster experiencing very many failovers. Even if it takes a while to detect the problem, repeated failovers at relatively long time intervals do not usually disrupt network traffic.

The flip timeout also causes the cluster to renegotiate when it expires unless you have disabled pingserver-slave-force-reset.

vdom <vdom-name> [<vdom-name>...]

Add virtual domains to a virtual cluster. By default all VDOMs are added to virtual cluster 1. Adding a virtual domain to a virtual cluster removes it from the other virtual cluster. You add VDOMs to virtual cluster 1 using the following syntax:

config system ha

set vdom root vdom1

end

You add VDOMs to virtual cluster 2 using the following syntax:

config system ha

set vcluster2 enable

config secondary-vcluster

set vdom root vdom1

end

end

ha-direct {disable | enable}

Enable to use the reserved HA management interface for the following management features:

  • Remote logging (including syslog, FortiAnalyzer, and FortiCloud).
  • SNMP queries and traps.
  • Remote authentication and certificate verification.
  • Communication with FortiSandbox.

This means that individual cluster units send log messages and communicate with FortiSandbox and so on using their HA reserved management interface instead of one of the cluster interfaces. This allows you to manage each cluster unit separately and to separate the management traffic from each cluster unit. This can also be useful if each cluster unit is in a different location.

Disabled by default. Only appears if ha-mgmt-status is enabled.

Reserved management interfaces and their IP addresses should not be used for managing a cluster using FortiManager. To correctly manage a FortiGate HA cluster with FortiManager use the IP address of one of the cluster unit interfaces.

load-balance-all {disable | enable}

By default, active-active HA load balancing distributes proxy-based security profile processing to all cluster units. Proxy-based security profile processing is CPU and memory-intensive, so FGCP load balancing may result in higher throughput because resource-intensive processing is distributed among all cluster units.

Proxy-based security profile processing that is load balanced includes proxy-based virus scanning, proxy-based web filtering, proxy-based email filtering, and proxy-based data leak prevention (DLP) of HTTP, FTP, IMAP, IMAPS, POP3, POP3S, SMTP, SMTPS, IM, and NNTP sessions accepted by security policies. Other features enabled in security policies, such as endpoint security, traffic shaping, and authentication, have no effect on active-active load balancing.

You can enable load-balance-all to have the primary unit load balance all TCP sessions. Load balancing TCP sessions increases overhead and may actually reduce performance so it is disabled by default.

load-balance-udp {disable | enable}

Enable or disable load balancing UDP proxy-based security profile sessions. Load balancing UDP sessions increases overhead so it is also disabled by default.

This content clustering option is available for the FortiSwitch-5203B and FortiController-5902D.

weight {0 | 1 | 2 | 3} <weight>

The weighted round robin load balancing weight to assign to each unit in an active-active cluster. The weight is set according to the priority of the unit in the cluster. An FGCP cluster can include up to four FortiGates (numbered 0 to 3) so you can set up to 4 weights. The default weights mean that the four possible units in the cluster all have the same weight of 40. The weight range is 0 to 255. Increase the weight to increase the number of connections processed by the FortiGate with that priority.

Weights are assigned to individual FortiGates according to their priority in the cluster. The priorities are assigned when the cluster negotiates and can change every time the cluster re-negotiates.

You enter the weight for each FortiGate separately. For example, if you have a cluster of three FortiGate units you can set the weights for the units as follows:

set weight 0 5

set weight 1 10

set weight 2 15

cpu-threshold <weight> <low> <high>

Dynamic weighted load balancing by CPU usage. When enabled fewer sessions will be load balanced to the cluster unit when its CPU usage reaches the high watermark.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.
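For example, to send fewer sessions to a cluster unit once its CPU usage passes 80%, returning to normal weighting when usage drops below 60% (the weight and watermarks are illustrative):

config system ha
    set schedule weight-round-robin
    set cpu-threshold 5 60 80
end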

memory-threshold <weight> <low> <high>

Dynamic weighted load balancing by memory usage. When enabled fewer sessions will be load balanced to the cluster unit when its memory usage reaches the high watermark.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

http-proxy-threshold <weight> <low> <high>

Dynamic weighted load balancing by the number of HTTP proxy sessions processed by a cluster unit. When enabled fewer sessions will be load balanced to the cluster unit when the high watermark is reached.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

imap-proxy-threshold <weight> <low> <high>

Dynamic weighted load balancing by the number of IMAP proxy sessions processed by a cluster unit. When enabled fewer sessions will be load balanced to the cluster unit when the high watermark is reached.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

nntp-proxy-threshold <weight> <low> <high>

Dynamic weighted load balancing by the number of NNTP proxy sessions processed by a cluster unit. When enabled fewer sessions will be load balanced to the cluster unit when the high watermark is reached.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

pop3-proxy-threshold <weight> <low> <high>

Dynamic weighted load balancing by the number of POP3 proxy sessions processed by a cluster unit. When enabled fewer sessions will be load balanced to the cluster unit when the high watermark is reached.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

smtp-proxy-threshold <weight> <low> <high>

Dynamic weighted load balancing by the number of SMTP proxy sessions processed by a cluster unit. When enabled fewer sessions will be load balanced to the cluster unit when the high watermark is reached.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

inter-cluster-session-sync {disable | enable}

Enable or disable session synchronization between FGCP clusters. When enabled this cluster can participate in an FGSP configuration using inter-cluster session synchronization. Once inter-cluster session synchronization is enabled, all FGSP configuration options are available from the FGCP cluster CLI and you can set up the FGSP configuration in the same way as a standalone FortiGate.

Inter-cluster session synchronization is compatible with all FGCP operating modes including active-active, active-passive, virtual clustering, full mesh HA, and so on. Inter-cluster session synchronization synchronizes all supported FGSP session types including TCP sessions, IPsec tunnels, IKE routes, connectionless (UDP and ICMP) sessions, NAT sessions, asymmetric sessions, and expectation sessions. Inter-cluster session synchronization does not support configuration synchronization.

Disabled by default.

set unicast-hb {disable | enable}

For a FortiGate VM, enable or disable (the default) unicast HA heartbeat. In virtual machine (VM) environments that do not support broadcast communication, you can set up unicast HA heartbeat when configuring HA. Setting up unicast HA heartbeat consists of enabling the feature and using unicast-hb-peerip to add a peer IP address.

Unicast HA is only supported between two FortiGate VMs. The heartbeat interfaces must be connected to the same network and you must add IP addresses to these interfaces.

set unicast-hb-peerip <ip>

Add a unicast HA heartbeat peer IP address. The peer IP address is the IP address of the HA heartbeat interface of the other FortiGate VM in the HA cluster.
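For example, on one FortiGate VM in a two-unit cluster (the peer address is a placeholder; on the other unit, set unicast-hb-peerip to this unit's heartbeat interface IP address):

config system ha
    set unicast-hb enable
    set unicast-hb-peerip 169.254.0.2
end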

config secondary-vcluster

Configure virtual cluster 2 using the following syntax. You must first enable vcluster2.

config secondary-vcluster

set vcluster-id 2

set override {disable | enable}

set priority <priority>

set override-wait-time <time>

{set | append} monitor <interface-name> [<interface-name>...]

{set | append} pingserver-monitor-interface <interface-name> [<interface-name>...]

set pingserver-failover-threshold <threshold>

set pingserver-slave-force-reset {disable | enable}

{set | append} vdom <vdom-name> [<vdom-name>...]

end


CLI Syntax

config system ha
    set group-id {integer}   Cluster group ID  (0 - 255). Must be the same for all members. range[0-255]
    set group-name {string}   Cluster group name. Must be the same for all members. size[32]
    set mode {standalone | a-a | a-p}   HA mode. Must be the same for all members. FGSP requires standalone.
            standalone  Standalone mode.
            a-a         Active-active mode.
            a-p         Active-passive mode.
    set sync-packet-balance {enable | disable}   Enable/disable HA packet distribution to multiple CPUs.
    set password {password_string}   Cluster password. Must be the same for all members. size[128]
    set key {password_string}   key size[16]
    set hbdev {string}   Heartbeat interfaces. Must be the same for all members.
    set session-sync-dev {string}   Offload session sync to one or more interfaces to distribute traffic and prevent delays if needed.
    set route-ttl {integer}   TTL for primary unit routes (5 - 3600 sec). Increase to maintain active routes during failover. range[5-3600]
    set route-wait {integer}   Time to wait before sending new routes to the cluster (0 - 3600 sec). range[0-3600]
    set route-hold {integer}   Time to wait between routing table updates to the cluster (0 - 3600 sec). range[0-3600]
    set multicast-ttl {integer}   HA multicast TTL on master (5 - 3600 sec). range[5-3600]
    set load-balance-all {enable | disable}   Enable to load balance TCP sessions. Disable to load balance proxy sessions only.
    set sync-config {enable | disable}   Enable/disable configuration synchronization.
    set encryption {enable | disable}   Enable/disable heartbeat message encryption.
    set authentication {enable | disable}   Enable/disable heartbeat message authentication.
    set hb-interval {integer}   Time between sending heartbeat packets (1 - 20 (100*ms)). Increase to reduce false positives. range[1-20]
    set hb-lost-threshold {integer}   Number of lost heartbeats to signal a failure (1 - 60). Increase to reduce false positives. range[1-60]
    set hello-holddown {integer}   Time to wait before changing from hello to work state (5 - 300 sec). range[5-300]
    set gratuitous-arps {enable | disable}   Enable/disable gratuitous ARPs. Disable if link-failed-signal enabled.
    set arps {integer}   Number of gratuitous ARPs (1 - 60). Lower to reduce traffic. Higher to reduce failover time. range[1-60]
    set arps-interval {integer}   Time between gratuitous ARPs  (1 - 20 sec). Lower to reduce failover time. Higher to reduce traffic. range[1-20]
    set session-pickup {enable | disable}   Enable/disable session pickup. Enabling it can reduce session down time when fail over happens.
    set session-pickup-connectionless {enable | disable}   Enable/disable UDP and ICMP session sync for FGSP.
    set session-pickup-expectation {enable | disable}   Enable/disable session helper expectation session sync for FGSP.
    set session-pickup-nat {enable | disable}   Enable/disable NAT session sync for FGSP.
    set session-pickup-delay {enable | disable}   Enable to sync sessions longer than 30 sec. Only longer lived sessions need to be synced.
    set link-failed-signal {enable | disable}   Enable to shut down all interfaces for 1 sec after a failover. Use if gratuitous ARPs do not update network.
    set uninterruptible-upgrade {enable | disable}   Enable to upgrade a cluster without blocking network traffic.
    set standalone-mgmt-vdom {enable | disable}   Enable/disable standalone management VDOM.
    set ha-mgmt-status {enable | disable}   Enable to reserve interfaces to manage individual cluster units.
    config ha-mgmt-interfaces
        edit {id}
        # Reserve interfaces to manage individual cluster units.
            set id {integer}   Table ID. range[0-4294967295]
            set interface {string}   Interface to reserve for HA management. size[15] - datasource(s): system.interface.name
            set dst {ipv4 classnet}   Default route destination for reserved HA management interface.
            set gateway {ipv4 address}   Default route gateway for reserved HA management interface.
            set gateway6 {ipv6 address}   Default IPv6 gateway for reserved HA management interface.
        next
    set ha-eth-type {string}   HA heartbeat packet Ethertype (4-digit hex). size[4]
    set hc-eth-type {string}   Transparent mode HA heartbeat packet Ethertype (4-digit hex). size[4]
    set l2ep-eth-type {string}   Telnet session HA heartbeat packet Ethertype (4-digit hex). size[4]
    set ha-uptime-diff-margin {integer}   Normally you would only reduce this value for failover testing. range[1-65535]
    set standalone-config-sync {enable | disable}   Enable/disable FGSP configuration synchronization.
    set vcluster2 {enable | disable}   Enable/disable virtual cluster 2 for virtual clustering.
    set vcluster-id {integer}   Cluster ID. range[0-255]
    set override {enable | disable}   Enable and increase the priority of the unit that should always be primary (master).
    set priority {integer}   Increase the priority to select the primary unit (0 - 255). range[0-255]
    set override-wait-time {integer}   Delay negotiating if override is enabled (0 - 3600 sec). Reduces how often the cluster negotiates. range[0-3600]
    set schedule {option}   Type of A-A load balancing. Use none if you have external load balancers.
            none                None.
            hub                 Hub.
            leastconnection     Least connection.
            round-robin         Round robin.
            weight-round-robin  Weight round robin.
            random              Random.
            ip                  IP.
            ipport              IP port.
    set weight {string}   Weight-round-robin weight for each cluster unit. Syntax <priority> <weight>.
    set cpu-threshold {string}   Dynamic weighted load balancing CPU usage weight and high and low thresholds.
    set memory-threshold {string}   Dynamic weighted load balancing memory usage weight and high and low thresholds.
    set http-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of HTTP proxy sessions.
    set ftp-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of FTP proxy sessions.
    set imap-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of IMAP proxy sessions.
    set nntp-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of NNTP proxy sessions.
    set pop3-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of POP3 proxy sessions.
    set smtp-proxy-threshold {string}   Dynamic weighted load balancing weight and high and low number of SMTP proxy sessions.
    set monitor {string}   Interfaces to check for port monitoring (or link failure).
    set pingserver-monitor-interface {string}   Interfaces to check for remote IP monitoring.
    set pingserver-failover-threshold {integer}   Remote IP monitoring failover threshold (0 - 50). range[0-50]
    set pingserver-slave-force-reset {enable | disable}   Enable to force the cluster to negotiate after a remote IP monitoring failover.
    set pingserver-flip-timeout {integer}   Time to wait in minutes before renegotiating after a remote IP monitoring failover. range[6-2147483647]
    set vdom {string}   VDOMs in virtual cluster 1.
    config secondary-vcluster
        set vcluster-id {integer}   Cluster ID. range[0-255]
        set override {enable | disable}   Enable and increase the priority of the unit that should always be primary (master).
        set priority {integer}   Increase the priority to select the primary unit (0 - 255). range[0-255]
        set override-wait-time {integer}   Delay negotiating if override is enabled (0 - 3600 sec). Reduces how often the cluster negotiates. range[0-3600]
        set monitor {string}   Interfaces to check for port monitoring (or link failure).
        set pingserver-monitor-interface {string}   Interfaces to check for remote IP monitoring.
        set pingserver-failover-threshold {integer}   Remote IP monitoring failover threshold (0 - 50). range[0-50]
        set pingserver-slave-force-reset {enable | disable}   Enable to force the cluster to negotiate after a remote IP monitoring failover.
        set vdom {string}   VDOMs in virtual cluster 2.
    set ha-direct {enable | disable}   Enable/disable using ha-mgmt interface for syslog, SNMP, remote authentication (RADIUS), FortiAnalyzer, and FortiSandbox.
    set memory-compatible-mode {enable | disable}   Enable/disable memory compatible mode.
    set inter-cluster-session-sync {enable | disable}   Enable/disable synchronization of sessions among HA clusters.
end

Additional information

The following section is for those options that require additional explanation.

group-id <id>

The HA group ID, from 0 to 255, must be the same for all members. The group ID identifies individual clusters on the network because it affects the cluster virtual MAC address. If you have more than one cluster on the same network, each cluster must have a different group ID.

group-name <name>

The HA group name, same for all members. Max 32 characters. The HA group name identifies the cluster. All cluster members must have the same group name. Can be blank if mode is standalone.

mode {standalone | a-a | a-p}

The HA mode.

  • standalone to disable HA. The mode required for FGSP.
  • a-a to create an Active-Active cluster.
  • a-p to create an Active-Passive cluster.

All members of an HA cluster must be set to the same HA mode.

password <password>

The HA cluster password, must be the same for all cluster units. The maximum password length is 128 characters.
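Putting the basic settings together, a minimal active-passive cluster member could be configured as follows (the group ID, group name, and password shown are placeholders and must match on all members):

config system ha
    set group-id 10
    set group-name "example-cluster"
    set mode a-p
    set password ExamplePassword
end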

hbdev <interface-name> <priority> [<interface-name> <priority>]...

Select the FortiGate interfaces to be heartbeat interfaces and set the heartbeat priority for each interface. The heartbeat interface with the highest priority processes all heartbeat traffic. If two or more heartbeat interfaces have the same priority, the heartbeat interface with the lowest hash map order value processes all heartbeat traffic.

By default two interfaces are configured to be heartbeat interfaces and the priority for both these interfaces is set to 50. The heartbeat interface priority range is 0 to 512.

You can select up to 8 heartbeat interfaces. This limit only applies to FortiGate units with more than 8 physical interfaces.

You can use the append command to add more entries. The default depends on the FortiGate model.
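For example, the following sketch sets two heartbeat interfaces with different priorities so that the first carries all heartbeat traffic unless it fails (the interface names port3 and port4 are placeholders for your own heartbeat ports):

config system ha
    set hbdev port3 100 port4 50
end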

multicast-ttl <timeout>

Time to wait before re-synchronizing the multicast routes to the kernel after an HA failover. The default is 600 seconds, the range is 5 to 3600 seconds.

After an HA failover, the new primary FortiGate waits for the multicast-ttl to expire before synchronizing multicast routes to the kernel. If you notice that multicast sessions are not connecting after an HA failover, this may be because the 600 seconds have not elapsed, so the multicast routes in the kernel are out of date (for example, the kernel could have multicast routes that are no longer valid). To reduce this delay, you can set the multicast-ttl time to a low value, for example 10 seconds, resulting in quicker updates of the kernel multicast routing table.
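For example, to reduce the multicast route synchronization delay to 10 seconds:

config system ha
    set multicast-ttl 10
end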

session-sync-dev <interface>

Select one or more FortiGate interfaces to use for synchronizing sessions as required for session pickup. Normally session synchronization occurs over the HA heartbeat link. Using this HA option means only the selected interfaces are used for session synchronization and not the HA heartbeat link. If you select more than one interface, session synchronization traffic is load balanced among the selected interfaces.

Moving session synchronization from the HA heartbeat interface reduces the bandwidth required for HA heartbeat traffic and may improve the efficiency and performance of the deployment, especially if the deployment is synchronizing a large number of sessions. Load balancing session synchronization among multiple interfaces can further improve performance and efficiency if the deployment is synchronizing a large number of sessions.

Session synchronization packets use Ethertype 0x8892. The interfaces to use for session synchronization must be connected together either directly using the appropriate cable (possible if there are only two units in the deployment) or using switches. If one of the interfaces becomes disconnected the deployment uses the remaining interfaces for session synchronization. If all of the session synchronization interfaces become disconnected, session synchronization reverts back to using the HA heartbeat link. All session synchronization traffic is between the primary unit and each subordinate unit.

Since large amounts of session synchronization traffic can increase network congestion, it is recommended that you keep this traffic off of your network by using dedicated connections for it.
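As a sketch, assuming port5 and port6 are dedicated, directly connected links reserved for this purpose (hypothetical interface names), session synchronization traffic could be moved off the heartbeat link and load balanced across both:

config system ha
    set session-sync-dev port5 port6
end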

sync-packet-balance {disable | enable}

Enable this option for FortiOS Carrier FGCP clusters or FGSP peers to distribute the processing of HA synchronization packets to multiple CPUs. By default, this option is disabled and all HA synchronization packets are processed by one CPU.

Enabling this option may improve the performance of an entity that is processing large numbers of packets causing session synchronization using excessive amounts of CPU cycles. For example, GTP traffic can result in very high packet rates and you can improve the performance of a FortiOS Carrier FGCP cluster or FGSP deployment that is processing GTP traffic by enabling this option.

route-ttl <ttl>

Control how long routes remain in a cluster unit's routing table. The time to live range is 5 to 3600 seconds (3600 seconds is one hour). The default time to live is 10 seconds.

The time to live controls how long routes remain active in a cluster unit routing table after the cluster unit becomes a primary unit. To maintain communication sessions after a cluster unit becomes a primary unit, routes remain active in the routing table for the route time to live while the new primary unit acquires new routes.

By default, route-ttl is set to 10, which may mean that only a few routes will remain in the routing table after a failover. Normally keeping route-ttl at 10 or reducing the value to 5 is acceptable because acquiring new routes usually occurs very quickly, especially if graceful restart is enabled, so only a minor delay is caused by acquiring new routes.

If the primary unit needs to acquire a very large number of routes, or if for other reasons there is a delay in acquiring all routes, the primary unit may not be able to maintain all communication sessions.

You can increase the route time to live if you find that communication sessions are lost after a failover so that the primary unit can use synchronized routes that are already in the routing table, instead of waiting to acquire new routes.

route-wait <wait>

The amount of time in seconds that the primary unit waits after receiving routing updates before sending the updates to the subordinate units. For quick routing table updates to occur, set route-wait to a relatively short time so that the primary unit does not hold routing table changes for too long before updating the subordinate units.

The route-wait range is 0 to 3600 seconds. The default route-wait is 0 seconds.

Normally, because the route-wait time is 0 seconds, the primary unit sends routing table updates to the subordinate units every time its routing table changes.

Once a routing table update is sent, the primary unit waits the route-hold time before sending the next update.

Usually routing table updates are periodic and sporadic. Subordinate units should receive these changes as soon as possible so route-wait is set to 0 seconds. route-hold can be set to a relatively long time because normally the next route update would not occur for a while.

In some cases, routing table updates can occur in bursts. A large burst of routing table updates can occur if a router or a link on a network fails or changes. When a burst of routing table updates occurs, there is a potential that the primary unit could flood the subordinate units with routing table updates. Flooding routing table updates can affect cluster performance if a great deal of routing information is synchronized between cluster units. Setting route-wait to a longer time reduces the frequency of additional updates and prevents flooding of routing table updates from occurring.

route-hold <hold>

The amount of time in seconds that the primary unit waits between sending routing table updates to subordinate units. The route hold range is 0 to 3600 seconds. The default route hold time is 10 seconds.

To avoid flooding routing table updates to subordinate units, set route-hold to a relatively long time to prevent subsequent updates from occurring too quickly. Flooding routing table updates can affect cluster performance if a great deal of routing information is synchronized between cluster units. Increasing the time between updates means that this data exchange will not have to happen so often.

The route-hold time should be coordinated with the route-wait time.
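As a sketch, a cluster that receives bursts of routing updates might hold updates for 2 seconds, space them at least 30 seconds apart, and keep synchronized routes alive for 60 seconds after a failover (the values here are illustrative only and should be tuned for your network):

config system ha
    set route-ttl 60
    set route-wait 2
    set route-hold 30
end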

sync-config {disable | enable}

Enable or disable automatic synchronization of configuration changes to all cluster units.

encryption {disable | enable}

Enable or disable HA heartbeat message encryption using AES-128 for encryption and SHA1 for authentication. Disabled by default.

authentication {disable | enable}

Enable or disable HA heartbeat message authentication using SHA1. Disabled by default.

hb-interval <interval>

The time between sending heartbeat packets. The heartbeat interval range is 1 to 20 (100*milliseconds). The default is 2.

A heartbeat interval of 2 means the time between heartbeat packets is 200 ms. Changing the heartbeat interval to 5 changes the time between heartbeat packets to 500 ms (5 * 100ms = 500ms).

HA heartbeat packets consume more bandwidth if the heartbeat interval is short. But if the heartbeat interval is very long, the cluster is not as sensitive to topology and other network changes.

The heartbeat interval combines with the lost heartbeat threshold to set how long a cluster unit waits before assuming that another cluster unit has failed and is no longer sending heartbeat packets. By default, if a cluster unit does not receive a heartbeat packet from another cluster unit for 6 * 200 = 1200 milliseconds, or 1.2 seconds, the cluster unit assumes that the other cluster unit has failed.

You can increase both the heartbeat interval and the lost heartbeat threshold to reduce false positives. For example, increasing the heartbeat interval to 20 and the lost heartbeat threshold to 30 means a failure will be assumed if no heartbeat packets are received after 30 * 2000 milliseconds = 60,000 milliseconds, or 60 seconds.

hb-lost-threshold <threshold>

The number of consecutive heartbeat packets that are not received from another cluster unit before assuming that the cluster unit has failed. The default value is 6, meaning that if 6 consecutive heartbeat packets are not received from a cluster unit then that cluster unit is considered to have failed. The range is 1 to 60 packets.

If the primary unit does not receive a heartbeat packet from a subordinate unit before the heartbeat threshold expires, the primary unit assumes that the subordinate unit has failed.

If a subordinate unit does not receive a heartbeat packet from the primary unit before the heartbeat threshold expires, the subordinate unit assumes that the primary unit has failed. The subordinate unit then begins negotiating to become the new primary unit.

The lower the hb-lost-threshold the faster a cluster responds when a unit fails. However, sometimes heartbeat packets may not be sent because a cluster unit is very busy. This can lead to a false positive failure detection. To reduce these false positives you can increase the hb-lost-threshold.
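For example, on a congested network you could combine a longer interval with a higher threshold, so that a failure is assumed only after 30 * 500 ms = 15 seconds without heartbeats (illustrative values; the trade-off is slower failure detection):

config system ha
    set hb-interval 5
    set hb-lost-threshold 30
end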

hello-holddown <timer>

The number of seconds that a cluster unit waits before changing from the hello state to the work state. The default is 20 seconds and the range is 5 to 300 seconds.

The hello state hold-down time is the number of seconds that a cluster unit waits before changing from hello state to work state. After a failure or when starting up, cluster units operate in the hello state to send and receive heartbeat packets so that all the cluster units can find each other and form a cluster. A cluster unit should change from the hello state to work state after it finds all of the other FortiGate units to form a cluster with. If for some reason all cluster units cannot find each other during the hello state then some cluster units may be joining the cluster after it has formed. This can cause disruptions to the cluster and affect how it operates.

One reason for a delay in all of the cluster units joining the cluster could be that the cluster units are located at different sites, or that for some other reason communication between the heartbeat interfaces is delayed.

If cluster units are joining your cluster after it has started up or if it takes a while for units to join the cluster you can increase the time that the cluster units wait in the hello state.

gratuitous-arps {disable | enable}

Enable or disable sending gratuitous ARP packets from a new primary unit. Enabled by default.

In most cases you would want to send gratuitous ARP packets because it's a reliable way for the cluster to notify the network to send traffic to the new primary unit. However, in some cases, sending gratuitous ARP packets may be less optimal. For example, if you have a cluster of FortiGate units in Transparent mode, after a failover the new primary unit will send gratuitous ARP packets to all of the addresses in its Forwarding Database (FDB). If the FDB has a large number of addresses it may take extra time to send all the packets and the sudden burst of traffic could disrupt the network.

If you choose to disable sending gratuitous ARP packets you must first enable the link-failed-signal setting. The cluster must have some way of informing attached network devices that a failover has occurred.
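For example, to disable gratuitous ARP packets, enable link-failed-signal first:

config system ha
    set link-failed-signal enable
    set gratuitous-arps disable
end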

arps <number>

The number of times that the primary unit sends gratuitous ARP packets. Gratuitous ARP packets are sent when a cluster unit becomes a primary unit (this can occur when the cluster is starting up or after a failover). The default is 5 packets, the range is 1 to 60.

Usually you would not change the default setting of 5. In some cases, however, you might want to reduce the number of gratuitous ARP packets. For example, if your cluster has a large number of VLAN interfaces and virtual domains and because gratuitous ARP packets are broadcast, sending a higher number of gratuitous ARP packets may generate a lot of network traffic. As long as the cluster still fails over successfully, you could reduce the number of gratuitous ARP packets that are sent to reduce the amount of traffic produced after a failover.

If failover is taking longer than expected, you may be able to reduce the failover time by increasing the number of gratuitous ARP packets sent.

arps-interval <interval>

The number of seconds to wait between sending gratuitous ARP packets. When a cluster unit becomes a primary unit (this occurs when the cluster is starting up or after a failover) the primary unit sends gratuitous ARP packets immediately to inform connected network equipment of the IP address and MAC address of the primary unit. The default is 8 seconds, the range is 1 to 20 seconds.

Normally you would not need to change the time interval. However, you could decrease the time to send more packets in less time if your cluster takes a long time to fail over.

There may also be a number of reasons to set the interval higher. For example, if your cluster has a large number of VLAN interfaces and virtual domains and because gratuitous ARP packets are broadcast, sending gratuitous ARP packets may generate a lot of network traffic. As long as the cluster still fails over successfully you could increase the interval to reduce the amount of traffic produced after a failover.

session-pickup {disable | enable}

Enable or disable session pickup. Disabled by default.

Enable session-pickup so that if the primary unit fails, all sessions are picked up by the new primary unit. If you enable session pickup the subordinate units maintain session tables that match the primary unit session table. If the primary unit fails, the new primary unit can maintain most active communication sessions.

If you do not enable session pickup the subordinate units do not maintain session tables. If the primary unit fails all sessions are interrupted and must be restarted when the new primary unit is operating.

Many protocols can successfully restart sessions with little, if any, loss of data. For example, after a failover, users browsing the web can just refresh their browsers to resume browsing. Since most HTTP sessions are very short, in most cases they will not even notice an interruption unless they are downloading large files. Users downloading a large file may have to restart their download after a failover.

Other protocols may experience data loss and some protocols may require sessions to be manually restarted. For example, a user downloading files with FTP may have to either restart downloads or restart their FTP client.
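For example, a configuration that enables session pickup and also synchronizes connectionless (UDP and ICMP) sessions while skipping sessions shorter than 30 seconds could look like the following (the two extra options are refinements described under their own headings and are optional):

config system ha
    set session-pickup enable
    set session-pickup-connectionless enable
    set session-pickup-delay enable
end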

session-pickup-connectionless {disable | enable}

This option applies to both FGCP and FGSP.

Enable or disable session synchronization for connectionless (UDP and ICMP) sessions.

When mode is set to a-a or a-p this option applies to FGCP.

When mode is standalone, this option applies to FGSP only.

This option is only available if session-pickup is enabled and is disabled by default.

session-pickup-expectation {disable | enable}

This option applies only to FGSP.

Enable or disable session synchronization for expectation sessions in an FGSP deployment. This option is only available if session-pickup is enabled and mode is standalone and is disabled by default.

FortiOS session helpers keep track of the communication of Layer-7 protocols such as FTP and SIP that have control sessions and expectation sessions. Usually the control sessions establish the link between server and client and negotiate the ports and protocols that will be used for data communications. The session helpers then create expectation sessions through the FortiGate for the ports and protocols negotiated by the control session.

The expectation sessions are usually the sessions that actually communicate data. For FTP, the expectation sessions transmit files being uploaded or downloaded. For SIP, the expectation sessions transmit voice and video data. Expectation sessions usually have a timeout value of 30 seconds. If the communication from the server is not initiated within 30 seconds the expectation session times out and traffic will be denied.

session-pickup-nat {disable | enable}

This option applies only to FGSP.

Enable or disable session synchronization for NAT sessions in an FGSP deployment. This option is only available if session-pickup is enabled and mode is standalone and is disabled by default.

session-pickup-delay {disable | enable}

This option applies to both FGCP and FGSP.

Enable or disable synchronizing sessions only if they remain active for more than 30 seconds. This option improves performance when session-pickup is enabled by reducing the number of sessions that are synchronized.

session-sync-daemon-number <number>

The number of processes used by the HA session sync daemon. Increase the number of processes to handle session packets sent from the kernel efficiently when the session rate is high. Intended for ELBC clusters, this feature only works for clusters with two members. The default is 1, the range is 1 to 15.

link-failed-signal {disable | enable}

Enable or disable shutting down all interfaces (except for heartbeat device interfaces) of a cluster unit with a failed monitored interface for one second after a failover occurs. Enable this option if the switch the cluster is connected to does not update its MAC forwarding tables after a failover caused by a link failure. Disabled by default.

If you choose to disable sending gratuitous ARP packets (by setting gratuitous-arps to disable) you must first enable link-failed-signal. The cluster must have some way of informing attached network devices that a failover has occurred.

uninterruptible-upgrade {disable | enable}

Enable or disable upgrading the cluster without interrupting cluster traffic processing. Enabled by default.

If uninterruptible-upgrade is enabled, traffic processing is not interrupted during a normal firmware upgrade. This process can take some time and may reduce the capacity of the cluster for a short time.

If uninterruptible-upgrade is disabled, traffic processing is interrupted during a normal firmware upgrade (similar to upgrading the firmware operating on a standalone FortiGate unit).

ha-mgmt-status {enable | disable}

Enable or disable the HA reserved management interface feature. Disabled by default.

ha-mgmt-interface <interface_name>

The FortiGate interface to be the reserved HA management interface. You can configure the IP address and other settings for this interface using the config system interface command. When you enable the reserved management interface feature the configuration of the reserved management interface is not synchronized by the FGCP.

ha-mgmt-interface-gateway <gateway_IP>

The default route for the reserved HA management interface (IPv4). This setting is not synchronized by the FGCP.

ha-mgmt-interface-gateway6 <gateway_IP>

The default route for the reserved HA management interface (IPv6). This setting is not synchronized by the FGCP.
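For example, to reserve a management interface with its own default gateway, and optionally send logging, SNMP, and related management traffic out of it with ha-direct (the interface name port9 and the gateway address are examples only):

config system ha
    set ha-mgmt-status enable
    set ha-mgmt-interface port9
    set ha-mgmt-interface-gateway 192.168.1.1
    set ha-direct enable
end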

ha-eth-type <type>

The Ethertype used by HA heartbeat packets for NAT mode clusters. <type> is a 4-digit number. Default is 8890.

hc-eth-type <type>

The Ethertype used by HA heartbeat packets for Transparent mode clusters. <type> is a 4-digit number. Default is 8891.

l2ep-eth-type <type>

The Ethertype used by HA telnet sessions between cluster units over the HA link. <type> is a 4-digit number. Default is 8893.

ha-uptime-diff-margin <margin>

The cluster age difference margin (grace period). This margin is the age difference ignored by the cluster when selecting a primary unit based on age. Normally the default value of 300 seconds (5 minutes) should not be changed. However, for demo purposes you can use this option to lower the difference margin. The range is 1 to 65535 seconds.

You may want to reduce the margin if during failover testing you don’t want to wait the default age difference margin of 5 minutes. You may also want to reduce the margin to allow uninterruptible upgrades to work.

You may want to increase the age margin if cluster unit startup time differences are larger than 5 minutes.

During a cluster firmware upgrade with uninterruptible-upgrade enabled (the default configuration) the cluster should not select a new primary unit after the firmware of all cluster units has been updated. But since the age difference of the cluster units is most likely less than 300 seconds, age is not used to affect primary unit selection and the cluster may select a new primary unit.

During failover testing where cluster units are failed over repeatedly the age difference between the cluster units will most likely be less than 5 minutes. During normal operation, if a failover occurs, when the failed unit rejoins the cluster its age will be very different from the age of the still operating cluster units so the cluster will not select a new primary unit. However, if a unit fails and is restored in a very short time the age difference may be less than 5 minutes. As a result the cluster may select a new primary unit during some failover testing scenarios.
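For example, during failover testing you could temporarily lower the margin so that age differences of more than 60 seconds affect primary unit selection (remember to restore the default of 300 afterward):

config system ha
    set ha-uptime-diff-margin 60
end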

vcluster2 {disable | enable}

Enable or disable virtual cluster 2 (also called secondary-vcluster).

When multiple VDOMs are enabled, virtual cluster 2 is enabled by default. When virtual cluster 2 is enabled you can use config secondary-vcluster to configure virtual cluster 2.

Disable virtual cluster 2 to move all virtual domains from virtual cluster 2 back to virtual cluster 1.

Enabling virtual cluster 2 enables override for virtual cluster 1 and virtual cluster 2.

vcluster-id

Indicates the virtual cluster you are configuring. You can't change this setting. You can use the config secondary-vcluster command to edit vcluster 2.

standalone-config-sync {disable | enable}

Synchronize the configuration of the FortiGate unit to another FortiGate unit. This is available if session-pickup is enabled and mode is standalone. Disabled by default.

override {disable | enable}

Enable or disable forcing the cluster to renegotiate and select a new primary unit every time a cluster unit leaves or joins a cluster, changes status within a cluster, or every time the HA configuration of a cluster unit changes.

Disabled by default. Automatically enabled when you enable virtual cluster 2. This setting is not synchronized to other cluster units.

In most cases you should keep override disabled to reduce how often the cluster negotiates. Frequent negotiations may cause frequent traffic interruptions. However, if you want to make sure that the same cluster unit always operates as the primary unit and if you are less concerned about frequent cluster negotiation you can set its device priority higher than other cluster units and enable override.

priority <priority>

The device priority of the cluster unit. Each cluster unit can have a different device priority. During HA negotiation, the cluster unit with the highest device priority becomes the primary unit. The device priority range is 0 to 255. The default is 128. This setting is not synchronized to other cluster units.

override-wait-time <seconds>

The number of seconds to delay renegotiation when override is enabled and HA is enabled, the cluster mode is changed, or a cluster unit reboots. Adding a wait time prevents negotiation during transitions and configuration changes. The range is 0 to 3600 seconds.

schedule {hub | ip | ipport | leastconnection | none | random | round-robin | weight-round-robin}

The cluster's active-active load balancing schedule.

  • hub load balancing if the cluster interfaces are connected to hubs. Traffic is distributed to cluster units based on the Source IP and Destination IP of the packet.
  • ip load balancing according to IP address.
  • ipport load balancing according to IP address and port.
  • leastconnection least connection load balancing.
  • none no load balancing. Use when the cluster interfaces are connected to load balancing switches.
  • random random load balancing.
  • round-robin round robin load balancing. If the cluster units are connected using switches, use round-robin to distribute traffic to the next available cluster unit.
  • weight-round-robin weighted round robin load balancing. Similar to round robin, but you can assign weighted values to each of the units in the cluster.

slave-switch-standby {disable | enable}

Enable to force a subordinate FortiSwitch-5203B or FortiController-5902D into standby mode even though its weight is non-zero. This is a content clustering option and is disabled by default.

minimum-worker-threshold <threshold>

Available on FortiSwitch-5203Bs or FortiController-5902Ds only in inter-chassis content-cluster mode. In inter-chassis mode the system considers the number of operating workers in a chassis when electing the primary chassis. A chassis that has fewer than the minimum-worker-threshold of workers operating is ranked lower than a chassis that meets or exceeds the minimum-worker-threshold. The default value of 1 effectively disables the threshold. The range is 1 to 11.

monitor <interface-name> [<interface-name>...]

Enable or disable port monitoring for link failure. Port monitoring (also called interface monitoring) monitors FortiGate interfaces to verify that the monitored interfaces are functioning properly and connected to their networks.

Enter the names of the interfaces to monitor. Use a space to separate each interface name. Use append to add an interface to the list. If there are no monitored interfaces then port monitoring is disabled.

You can monitor physical interfaces, redundant interfaces, and 802.3ad aggregated interfaces but not VLAN interfaces, IPSec VPN interfaces, or switch interfaces.

You can monitor up to 64 interfaces. In a multiple VDOM configuration you can monitor up to 64 interfaces per virtual cluster.
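For example, assuming port1 and port2 are the cluster's network-facing interfaces (placeholder names), the following monitors both, and append adds another interface without re-entering the list:

config system ha
    set monitor port1 port2
    append monitor port3
end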

pingserver-monitor-interface <interface-name> [<interface-name>...]

Enable HA remote IP monitoring by specifying the FortiGate unit interfaces that will be used to monitor remote IP addresses. You can configure remote IP monitoring for all types of interfaces including physical interfaces, VLAN interfaces, redundant interfaces and aggregate interfaces.

Use a space to separate each interface name. Use append to add an interface to the list.

pingserver-failover-threshold <threshold>

The HA remote IP monitoring failover threshold. The failover threshold range is 0 to 50. Setting the failover threshold to 0 (the default) means that if any ping server added to the HA remote IP monitoring configuration fails an HA failover will occur.

Set the priority for each remote IP monitoring ping server using the ha-priority option of the config system link-monitor command. Increase the priority to require more remote links to fail before a failover occurs.
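A sketch of a remote IP monitoring configuration (the interface name wan1 and the threshold and timeout values are examples only); the monitored IP address itself and its ha-priority are set separately with config system link-monitor:

config system ha
    set pingserver-monitor-interface wan1
    set pingserver-failover-threshold 5
    set pingserver-flip-timeout 60
end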

pingserver-slave-force-reset {disable | enable}

In a remote IP monitoring configuration, if you also want the same cluster unit to always be the primary unit you can set its device priority higher and enable override. With this configuration, when a remote IP monitoring failover occurs, after the flip timeout expires another failover will occur (because override is enabled) and the unit with override enabled becomes the primary unit again. So the cluster automatically returns to normal operation.

The primary unit starts remote IP monitoring again. If the remote link is restored the cluster continues to operate normally. If, however, the remote link is still down, remote link failover causes the cluster to failover again. This will repeat each time the flip timeout expires until the failed remote link is restored.

You can use the pingserver-slave-force-reset option to control this behavior. By default this option is enabled and the behavior described above occurs. The overall behavior is that when the remote link is restored the cluster automatically returns to normal operation after the flip timeout.

If you disable pingserver-slave-force-reset, after the initial remote IP monitoring failover nothing will happen after the flip timeout (as long as the new primary unit doesn't experience some kind of failover). The result is that repeated failovers no longer happen. But it also means that the original primary unit will remain the subordinate unit and will not resume operating as the primary unit.

pingserver-flip-timeout <timeout>

The HA remote IP monitoring flip timeout in minutes. If HA remote IP monitoring fails on all cluster units because none of the cluster units can connect to the monitored IP addresses, the flip timeout stops a failover from occurring until the timer runs out. The range is 6 to 2147483647 minutes. The default is 60 minutes.

The flip timeout reduces the frequency of failovers if, after a failover, HA remote IP monitoring on the new primary unit also causes a failover. This can happen if the new primary unit cannot connect to one or more of the monitored remote IP addresses. The result could be that until you fix the network problem that blocks connections to the remote IP addresses, the cluster will experience repeated failovers. You can control how often the failovers occur by setting the flip timeout.

The flip timeout stops HA remote IP monitoring from causing a failover until the primary unit has been operating for the duration of the flip timeout. If you set the flip timeout to a relatively high number of minutes you can find and repair the network problem that prevented the cluster from connecting to the remote IP address without the cluster experiencing very many failovers. Even if it takes a while to detect the problem, repeated failovers at relatively long time intervals do not usually disrupt network traffic.

The flip timeout also causes the cluster to renegotiate when it expires unless you have disabled pingserver-slave-force-reset.

vdom <vdom-name> [<vdom-name>...]

Add virtual domains to a virtual cluster. By default all VDOMs are added to virtual cluster 1. Adding a virtual domain to a virtual cluster removes it from the other virtual cluster. You add VDOMs to virtual cluster 1 using the following syntax:

config system ha
    set vdom root vdom1
end

You add VDOMs to virtual cluster 2 using the following syntax:

config system ha
    set vcluster2 enable
    config secondary-vcluster
        set vdom root vdom1
    end
end

ha-direct {disable | enable}

Enable to use the reserved HA management interface for the following management features:

  • Remote logging (including syslog, FortiAnalyzer, and FortiCloud).
  • SNMP queries and traps.
  • Remote authentication and certificate verification.
  • Communication with FortiSandbox.

This means that individual cluster units send log messages and communicate with FortiSandbox and so on using their HA reserved management interface instead of one of the cluster interfaces. This allows you to manage each cluster unit separately and to separate the management traffic from each cluster unit. This can also be useful if each cluster unit is in a different location.

Disabled by default. Only appears if ha-mgmt-status is enabled.

Reserved management interfaces and their IP addresses should not be used for managing a cluster using FortiManager. To correctly manage a FortiGate HA cluster with FortiManager use the IP address of one of the cluster unit interfaces.
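A minimal sketch of enabling ha-direct, assuming a reserved HA management interface has already been set up with ha-mgmt-status (the interface name and gateway address shown are illustrative):

config system ha
    set ha-mgmt-status enable
    config ha-mgmt-interfaces
        edit 1
            set interface mgmt
            set gateway 172.20.120.2
        next
    end
    set ha-direct enable
end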

load-balance-all {disable | enable}

By default, active-active HA load balancing distributes proxy-based security profile processing to all cluster units. Proxy-based security profile processing is CPU and memory-intensive, so FGCP load balancing may result in higher throughput because resource-intensive processing is distributed among all cluster units.

Proxy-based security profile processing that is load balanced includes proxy-based virus scanning, proxy-based web filtering, proxy-based email filtering, and proxy-based data leak prevention (DLP) of HTTP, FTP, IMAP, IMAPS, POP3, POP3S, SMTP, SMTPS, IM, and NNTP sessions accepted by security policies. Other features enabled in security policies, such as endpoint security, traffic shaping, and authentication, have no effect on active-active load balancing.

You can enable load-balance-all to have the primary unit load balance all TCP sessions. Load balancing TCP sessions increases overhead and may actually reduce performance so it is disabled by default.
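For example, to have the primary unit of an active-active cluster load balance all TCP sessions in addition to proxy-based security profile sessions:

config system ha
    set mode a-a
    set load-balance-all enable
end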

load-balance-udp {disable | enable}

Enable or disable load balancing UDP proxy-based security profile sessions. Load balancing UDP sessions increases overhead so it is also disabled by default.

This content clustering option is available for the FortiSwitch-5203B and FortiController-5902D.

weight {0 | 1 | 2 | 3} <weight>

The weighted round robin load balancing weight to assign to each unit in an active-active cluster. The weight is set according to the priority of the unit in the cluster. An FGCP cluster can include up to four FortiGates (numbered 0 to 3), so you can set up to four weights. By default, each of the four possible units in the cluster has the same weight of 40. The weight range is 0 to 255. Increase a weight to increase the number of connections processed by the FortiGate with that priority.

Weights are assigned to individual FortiGates according to their priority in the cluster. The priorities are assigned when the cluster negotiates and can change every time the cluster re-negotiates.

You enter the weight for each FortiGate separately. For example, if you have a cluster of three FortiGate units you can set the weights for the units as follows:

set weight 0 5
set weight 1 10
set weight 2 15

cpu-threshold <weight> <low> <high>

Dynamic weighted load balancing by CPU usage. When enabled, fewer sessions are load balanced to a cluster unit when its CPU usage reaches the high watermark.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.
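For example, assuming weighted round robin scheduling is already selected, the following sketch reduces the sessions sent to a cluster unit whose CPU usage reaches 80%, with normal weighting resuming when usage falls back below 60% (the weight and watermark values are illustrative):

config system ha
    set mode a-a
    set schedule weight-round-robin
    set cpu-threshold 5 60 80
end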

memory-threshold <weight> <low> <high>

Dynamic weighted load balancing by memory usage. When enabled, fewer sessions are load balanced to a cluster unit when its memory usage reaches the high watermark.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

http-proxy-threshold <weight> <low> <high>

Dynamic weighted load balancing by the number of HTTP proxy sessions processed by a cluster unit. When enabled, fewer sessions are load balanced to a cluster unit once its HTTP proxy session count reaches the high watermark.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

imap-proxy-threshold <weight> <low> <high>

Dynamic weighted load balancing by the number of IMAP proxy sessions processed by a cluster unit. When enabled, fewer sessions are load balanced to a cluster unit once its IMAP proxy session count reaches the high watermark.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

nntp-proxy-threshold <weight> <low> <high>

Dynamic weighted load balancing by the number of NNTP proxy sessions processed by a cluster unit. When enabled, fewer sessions are load balanced to a cluster unit once its NNTP proxy session count reaches the high watermark.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

pop3-proxy-threshold <weight> <low> <high>

Dynamic weighted load balancing by the number of POP3 proxy sessions processed by a cluster unit. When enabled, fewer sessions are load balanced to a cluster unit once its POP3 proxy session count reaches the high watermark.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

smtp-proxy-threshold <weight> <low> <high>

Dynamic weighted load balancing by the number of SMTP proxy sessions processed by a cluster unit. When enabled, fewer sessions are load balanced to a cluster unit once its SMTP proxy session count reaches the high watermark.

This option is available when mode is a-a and schedule is weight-round-robin. Default low and high watermarks of 0 disable the feature. The default weight is 5.

This setting is not synchronized by the FGCP so you can set separate weights for each cluster unit.

inter-cluster-session-sync {disable | enable}

Enable or disable session synchronization between FGCP clusters. When enabled this cluster can participate in an FGSP configuration using inter-cluster session synchronization. Once inter-cluster session synchronization is enabled, all FGSP configuration options are available from the FGCP cluster CLI and you can set up the FGSP configuration in the same way as a standalone FortiGate.

Inter-cluster session synchronization is compatible with all FGCP operating modes including active-active, active-passive, virtual clustering, full mesh HA, and so on. Inter-cluster session synchronization synchronizes all supported FGSP session types including TCP sessions, IPsec tunnels, IKE routes, connectionless (UDP and ICMP) sessions, NAT sessions, asymmetric sessions, and expectation sessions. Inter-cluster session synchronization does not support configuration synchronization.

Disabled by default.
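A sketch of enabling the option on an FGCP cluster; once enabled, you configure the FGSP peers in the same way as on a standalone FortiGate (for example with config system cluster-sync, described elsewhere in this reference):

config system ha
    set inter-cluster-session-sync enable
end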

set unicast-hb {disable | enable}

For a FortiGate VM, enable or disable (the default) the unicast HA heartbeat. In virtual machine (VM) environments that do not support broadcast communication, you can set up a unicast HA heartbeat when configuring HA. Setting up the unicast HA heartbeat consists of enabling the feature and using unicast-hb-peerip to add a peer IP address.

Unicast HA is only supported between two FortiGate VMs. The heartbeat interfaces must be connected to the same network and you must add IP addresses to these interfaces.

set unicast-hb-peerip <ip>

Add a unicast HA heartbeat peer IP address. The peer IP address is the IP address of the HA heartbeat interface of the other FortiGate VM in the HA cluster.
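For example, on a FortiGate VM whose cluster peer's heartbeat interface has the (illustrative) address 10.0.0.2:

config system ha
    set unicast-hb enable
    set unicast-hb-peerip 10.0.0.2
end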

config secondary-vcluster

Configure virtual cluster 2 using the following syntax. You must first enable vcluster2.

config secondary-vcluster
    set vcluster-id 2
    set override {disable | enable}
    set priority <priority>
    set override-wait-time <time>
    {set | append} monitor <interface-name> [<interface-name>...]
    {set | append} pingserver-monitor-interface <interface-name> [<interface-name>...]
    set pingserver-failover-threshold <threshold>
    set pingserver-slave-force-reset {disable | enable}
    {set | append} vdom <vdom-name> [<vdom-name>...]
end