FGSP basic peer setup
The FortiGate Session Life Support Protocol (FGSP) is a proprietary HA solution that only shares sessions between entities, based on peer-to-peer communications. The entities can be standalone FortiGates or FGCP clusters. Sessions are load balanced by an upstream load balancer. Each peer synchronizes its sessions with the other peers so that, if a failure occurs, sessions continue to flow as the load balancer redirects the traffic to the remaining peers.
Basic requirements and limitations
In most production environments, the following requirements should be met:
- The peers are FortiGates of the same model.
- The peers are running the same firmware version.
- There are 2 to 16 standalone FortiGates, or 2 to 16 FortiGate FGCP clusters of two members each.
- The configurations related to session tables should match. For example, the logical names used in firewall policies, IPsec interface names, VDOM names, firewall policy tables, and so on.
The FortiGates must have similar capabilities so that the data structures used in session synchronization match and so that the peers deliver similar performance. Therefore, using the same model and firmware version is highly recommended for most production deployments. The limit on the number of FortiGates in an FGSP deployment also ensures that session synchronization occurs smoothly.
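A quick way to confirm the first two requirements on each peer is to compare the output of the get system status command, which reports the device model along with the firmware version and build:
# get system status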
Example
This example uses two peer FortiGates. The load balancer is configured to send all sessions to Peer_1, and if Peer_1 fails, all traffic is sent to Peer_2.
To configure a basic FGSP peer setup:
These instructions assume that all FortiGates have been factory reset.
- Make all the necessary connections as shown in the topology diagram.
- On Peer_1, configure the peer IP of the device that this unit will peer with:
config system cluster-sync
    edit 1
        set peerip 10.10.10.2
    next
end
config system standalone-cluster
    set standalone-group-id 1
    set group-member-id 1
end
If there are multiple peer IPs from the same peer, enter them as separate entries. If there are multiple peers, enter the IP of each peer in separate entries. See Optimizing FGSP session synchronization and redundancy for an example.
By default, sessions are synchronized over layer 3 on the interface through which the current unit reaches the peer's IP.
- On Peer_2, configure session synchronization:
config system cluster-sync
    edit 1
        set peerip 10.10.10.1
    next
end
config system standalone-cluster
    set standalone-group-id 1
    set group-member-id 2
end
- Configure identical firewall policies on each peer, such as for traffic going from the same incoming interface (port1) to the outgoing interface (port2).
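For the last step, the following is a minimal policy sketch, assuming port1 as the incoming interface and port2 as the outgoing interface, with permit-all addresses and services for illustration only; configure the same policy on each peer and tighten it to your own security requirements:
config firewall policy
    edit 1
        set name "FGSP-example"
        set srcintf "port1"
        set dstintf "port2"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
    next
end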
To test the FGSP peer setup:
- Initiate TCP traffic (like HTTP access) to go through Peer_1.
- Check the session information:
# diagnose sys session filter src <IP_address>
# diagnose sys session list
- Enter the same commands on Peer_2 to verify that the same session information appears.
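As a sketch of what the check might look like with a concrete address (10.1.100.11 is an assumed client IP; substitute the real source address), clear any previous filter before setting a new one. On many FortiOS builds, the diagnose sys session sync command can also be used to review FGSP synchronization statistics, although its output varies by version:
# diagnose sys session filter clear
# diagnose sys session filter src 10.1.100.11
# diagnose sys session list
# diagnose sys session sync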
Optional filters
Filters can be added so that only sessions that meet the filter criteria are synchronized.
To add filters for session synchronization:
config system cluster-sync
    edit <id>
        config session-sync-filter
            set srcintf <interface>
            set dstintf <interface>
            set srcaddr <IPv4_address>
            set dstaddr <IPv4_address>
            set srcaddr6 <IPv6_address>
            set dstaddr6 <IPv6_address>
        end
    next
end
Filter examples
To synchronize only sessions with a particular source subnet:
config system cluster-sync
    edit 1
        config session-sync-filter
            set srcaddr 192.168.20.0/24
        end
    next
end
To synchronize only sessions with a particular source address range:
config system cluster-sync
    edit 1
        config session-sync-filter
            set srcaddr 192.168.20.10 192.168.20.20
        end
    next
end
To synchronize only sessions with a particular destination IPv6 subnet:
config system cluster-sync
    edit 1
        config session-sync-filter
            set dstaddr6 2001:db8:0:2::/64
        end
    next
end
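A filter entry can also include multiple criteria. As a minimal sketch (port1, port2, and the subnet are assumed values), the following would synchronize only sessions that enter port1, exit port2, and are sourced from 192.168.20.0/24, on the assumption that all configured criteria must match:
config system cluster-sync
    edit 1
        config session-sync-filter
            set srcintf port1
            set dstintf port2
            set srcaddr 192.168.20.0/24
        end
    next
end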
Session pickup
You can enable session pickup to synchronize connectionless (UDP and ICMP) sessions, expectation sessions, and NAT sessions. If session pickup is not enabled, FGSP does not share session tables for the particular session type, and sessions do not resume after a failover.
To enable UDP and ICMP session synchronization:
config system ha
    set session-pickup enable
    set session-pickup-connectionless enable
end
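The other session types mentioned above are controlled in the same config system ha table. As a sketch (option availability can vary by FortiOS version), expectation and NAT session synchronization can be enabled in the same way:
config system ha
    set session-pickup enable
    set session-pickup-expectation enable
    set session-pickup-nat enable
end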
Session synchronization
You can specify interfaces used to synchronize sessions over L2 instead of L3 using the session-sync-dev setting. For more information about using session synchronization, see Session synchronization interfaces in FGSP.
To configure session synchronization over redundant L2 connections:
config system standalone-cluster
    set session-sync-dev <interface 1> [<interface 2>] ... [<interface n>]
end
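For example, a minimal sketch in which two directly connected links (port3 and port4 are assumed interface names) carry session synchronization traffic at L2, providing redundancy if one link goes down:
config system standalone-cluster
    set session-sync-dev port3 port4
end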
VDOM synchronization
When multi-VDOM mode is enabled, you can specify the peer VDOM and the VDOMs to synchronize. The peer VDOM contains the session synchronization link interface on the peer unit. The sessions of the specified VDOMs are synchronized using this session synchronization configuration.
To synchronize between VDOMs:
config system cluster-sync
    edit 1
        set peerip <IP address>
        set peervd <vdom>
        set syncvd <vdom 1> [<vdom 2>] ... [<vdom n>]
    next
end
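As a concrete sketch (the peer IP, the root peer VDOM, and the VDOM names vdom1 and vdom2 are assumptions for illustration), the following synchronizes the sessions of two VDOMs over a synchronization link that sits in the peer's root VDOM:
config system cluster-sync
    edit 1
        set peerip 10.10.10.2
        set peervd root
        set syncvd vdom1 vdom2
    next
end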
Configuring unique group and member ID
FGSP can function between standalone FortiGates or between FGCP clusters. In either case, the group ID and member ID should be configured so that each member is uniquely identified. This allows each member to actively process traffic without conflict.
To configure FGSP peering between standalone FortiGates, follow the steps under To configure a basic FGSP peer setup.
To configure FGSP peering between different FGCP clusters:
These instructions assume Peer_1 and Peer_2 are in cluster 1, and Peer_3 and Peer_4 are in cluster 2.
- On Peer_1, configure the first group ID:
config system standalone-cluster
    set standalone-group-id 1
    set group-member-id 1
end
- On Peer_2, configure the same group ID but a different member ID:
config system standalone-cluster
    set standalone-group-id 1
    set group-member-id 2
end
- On Peer_3, configure the second group ID:
config system standalone-cluster
    set standalone-group-id 2
    set group-member-id 1
end
- On Peer_4, configure the same group ID but a different member ID:
config system standalone-cluster
    set standalone-group-id 2
    set group-member-id 2
end