HA configuration synchronization
Normally in an HA configuration, the primary node pushes most of its configuration to the other member nodes. This is known as HA configuration synchronization. If automatic synchronization is enabled, synchronization occurs when an appliance joins the cluster and repeats every 30 seconds thereafter. If automatic synchronization is disabled, you must initiate synchronization manually.
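To make the cadence concrete, the following is a minimal conceptual sketch in Python, not the appliance's implementation or CLI; the function and parameter names are hypothetical. It models a primary that pushes its configuration to every member when automatic synchronization is enabled, repeating every 30 seconds, and does nothing until an administrator triggers a push when it is disabled.

```python
import time

SYNC_INTERVAL_SECONDS = 30  # documented interval for automatic synchronization


def push_config(member: str, payload: bytes) -> None:
    """Hypothetical placeholder for transferring the synchronized artifacts to one member."""
    print(f"pushing {len(payload)} bytes of configuration to {member}")


def sync_all(members: list[str], payload: bytes) -> None:
    """Push the current configuration to every member node (a manual sync does this once)."""
    for member in members:
        push_config(member, payload)


def run_primary(members: list[str], payload: bytes, automatic: bool) -> None:
    """Automatic mode: sync immediately, then every 30 seconds. Manual mode: wait to be asked."""
    if not automatic:
        return  # the administrator must call sync_all() explicitly
    while True:
        sync_all(members, payload)          # push to all current members
        time.sleep(SYNC_INTERVAL_SECONDS)   # repeat every 30 seconds
```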
HA configuration synchronization includes the following (see the sketch after this list):
- Core CLI-style configuration file (fadc_system.conf)
- X.509 certificates, certificate signing request (CSR) files, and private keys
- Layer-7 virtual server error message files
- Layer-4 TCP connection state, Layer-4 persistence table, and Layer-7 persistence table (Source Address Persistence table only)
- Health check status (active-passive deployments only)
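As a rough illustration only, the sketch below groups the items in the preceding list into a single hypothetical payload structure; the class and field names are illustrative and do not reflect the appliance's internal data model.

```python
from dataclasses import dataclass, field


@dataclass
class HASyncPayload:
    """Hypothetical grouping of the artifacts included in HA configuration synchronization."""
    # Core CLI-style configuration file
    system_conf: bytes = b""                                       # contents of fadc_system.conf
    # X.509 certificates, CSR files, and private keys
    certificates: dict[str, bytes] = field(default_factory=dict)
    # Layer-7 virtual server error message files
    error_pages: dict[str, bytes] = field(default_factory=dict)
    # Layer-4 TCP connection state and persistence table
    l4_state: list[dict] = field(default_factory=list)
    # Layer-7 persistence table (source address persistence only)
    l7_source_persistence: dict[str, str] = field(default_factory=dict)
    # Health check status (active-passive deployments only)
    health_check_status: dict[str, str] = field(default_factory=dict)
```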
For most settings, you configure only the primary node, and its settings are pushed to other members.
The following table summarizes the configuration settings that are not synchronized (a conceptual sketch of how such settings could be excluded follows the table). All other settings are synchronized.
Setting | Explanation |
---|---|
Hostname | Hostnames are not synchronized so that each member node can keep a unique name. |
SNMP system information | Each member node has its own SNMP system information so that you can maintain accurate, separate data in SNMP collections. However, the network interfaces of a standby node are not active, so they cannot be actively monitored with SNMP. |
RAID level | RAID settings are hardware-dependent and determined at boot time by examining the drives (software RAID) or the controller (hardware RAID). They are not stored in the system configuration, so they are not synchronized. |
HA settings | Most of the HA configuration is not synchronized, because each member node requires its own HA settings to support HA system operations. |
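The sketch below, in the same conceptual Python style as the earlier examples, illustrates the exclusion rule summarized in the table above: settings in a small deny-list stay local while everything else is pushed. The setting names and flat key layout are illustrative and do not reflect the actual structure of fadc_system.conf.

```python
# Illustrative prefixes for settings that stay local to each node.
NOT_SYNCHRONIZED_PREFIXES = (
    "system.hostname",   # hostnames stay unique per node
    "system.snmp",       # per-node SNMP system information
    "system.raid",       # hardware-dependent, determined at boot time
    "system.ha",         # most HA settings are kept per node
)


def build_sync_view(config: dict[str, str]) -> dict[str, str]:
    """Return only the settings that would be pushed to member nodes."""
    return {
        key: value
        for key, value in config.items()
        if not key.startswith(NOT_SYNCHRONIZED_PREFIXES)
    }


# Example: the hostname stays local, while the (hypothetical) virtual server
# setting is included in the synchronized view.
local_config = {"system.hostname": "adc-primary", "load-balance.vs1.port": "443"}
print(build_sync_view(local_config))  # {'load-balance.vs1.port': '443'}
```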
In addition to HA settings, the following data is not synchronized either:
- Log messages: These describe events that happened on a specific appliance. After a failover, you might notice a gap in the original active appliance’s log files that corresponds to the period when it was down. Log messages created while the standby was acting as the active appliance (if you have configured local log storage) are stored there, on the original standby appliance.
- Generated reports: Like the log messages that they are based upon, reports describe events that happened on a specific appliance. As such, report settings are synchronized, but report output is not.
You can view the status of cluster members from the dashboard of the primary node. You might need to log in to a non-primary member node directly in the following situations:
- To configure settings that are not synchronized.
- To view log messages recorded about the member node itself on its own hard disk.
- To view traffic reports for traffic processed by the member node.