FortiGate-6000 and FortiGate-7000 v6.0.4 release notes

This document provides the following information for FortiGate-6000 and FortiGate-7000 v6.0.4 build 8405:

Supported models

FortiGate-6000 v6.0.4 build 8405 supports the following models:

  • FortiGate-6300F
  • FortiGate-6301F
  • FortiGate-6500F
  • FortiGate-6501F

FortiGate-7000 v6.0.4 build 8405 supports all FortiGate-7030E, 7040E, and 7060E models and configurations.

What’s new in FortiGate-6000 and FortiGate-7000 v6.0.4 build 8405

FortiGate-6000 and FortiGate-7000 v6.0.4 build 8405 includes the bug fixes described in Resolved issues for build 8405 and Resolved issues for build 8385. The first released build of FortiGate-6000 and FortiGate-7000 v6.0.4 was build 6145.

The following new features have been added to FortiGate-6000 and FortiGate-7000 v6.0.4 build 8405:

  • Diagnose debug flow trace output improvements.
  • New diagnose command to show how the DP processor will load balance a session.
  • The ability to enable or disable synchronization of connectionless sessions.
  • ICMP traffic can now be load balanced.
  • FortiGate Session Life Support Protocol (FGSP) support.
  • The report produced by the execute tac report command now includes more information, including new information about SLBC operations.

Diagnose debug flow trace output improvements

The diagnose debug flow trace output from the FortiGate-6000 management board CLI now displays debug data for the management board and for all of the FPCs. Each line of output begins with the name of the component that produced the output. For example:

diagnose debug enable
[FPC06]  id=20085 trace_id=2 func=resolve_ip6_tuple_fast line=4190 msg="vd-vlan:0 received a packet(proto=6, 3ff5::100:10001->4ff5::13:80) from vlan-port1."
[FPC07]  id=20085 trace_id=2 func=resolve_ip6_tuple_fast line=4190 msg="vd-vlan:0 received a packet(proto=6, 3ff5::100:10000->4ff5::11:80) from vlan-port1."
[FPC06]  id=20085 trace_id=2 func=resolve_ip6_tuple line=4307 msg="allocate a new session-000eb730"
[FPC07]  id=20085 trace_id=2 func=resolve_ip6_tuple line=4307 msg="allocate a new session-000eb722"
[FPC06]  id=20085 trace_id=2 func=vf_ip6_route_input line=1125 msg="find a route: gw-4ff5::13 via vlan-port2 err 0 flags 01000001"

Running FortiGate-6000 diagnose debug flow trace commands from an individual FPC CLI shows traffic processed by that FPC only. For example:

diagnose debug enable 
[FPC02] id=20085 trace_id=2 func=resolve_ip6_tuple_fast line=4190 msg="vd-vlan:0 received a packet(proto=6, 3ff5::100:10001->4ff5::28:80) from vlan-port1."
[FPC02] id=20085 trace_id=2 func=resolve_ip6_tuple line=4307 msg="allocate a new session-000f00fb"
[FPC02] id=20085 trace_id=2 func=vf_ip6_route_input line=1125 msg="find a route: gw-4ff5::28 via vlan-port2 err 0 flags 01000001"
[FPC02] id=20085 trace_id=2 func=fw6_forward_handler line=345 msg="Check policy between vlan-port1 -> vlan-port2"

The diagnose debug flow trace output from the FortiGate-7000 primary FIM CLI now shows traffic from all FIMs and FPMs. Each line of output begins with the name of the component that produced the output. For example:

diagnose debug enable
[FPM04]  id=20085 trace_id=6 func=print_pkt_detail line=5777 msg="vd-root:0 received a packet(proto=6, 10.0.2.3:10001->20.0.0.100:80) from HA-LAG0. flag [S], seq 2670272303, ack 0, win 32768"
[FPM03]  id=20085 trace_id=7 func=print_pkt_detail line=5777 msg="vd-root:0 received a packet(proto=6, 10.0.2.3:10002->20.0.0.100:80) from HA-LAG0. flag [S], seq 3193740413, ack 0, win 32768"
[FPM04]  id=20085 trace_id=6 func=init_ip_session_common line=5937 msg="allocate a new session-0000074c"
[FPM04]  id=20085 trace_id=6 func=vf_ip_route_input_common line=2591 msg="find a route: flag=04000000 gw-20.0.0.100 via HA-LAG1"
[FPM04]  id=20085 trace_id=6 func=fw_forward_handler line=755 msg="Allowed by Policy-10000:"

Running FortiGate-7000 diagnose debug flow trace commands from an individual FPM CLI shows traffic processed by that FPM only. For example:

diagnose debug enable 
[FPM03]  id=20085 trace_id=7 func=print_pkt_detail line=5777 msg="vd-root:0 received a packet(proto=6, 10.0.2.3:10002->20.0.0.100:80) from HA-LAG0. flag [S], seq 3193740413, ack 0, win 32768"
[FPM03]  id=20085 trace_id=7 func=init_ip_session_common line=5937 msg="allocate a new session-000007b2"
[FPM03]  id=20085 trace_id=7 func=vf_ip_route_input_common line=2591 msg="find a route: flag=04000000 gw-20.0.0.100 via HA-LAG1"
[FPM03]  id=20085 trace_id=7 func=fw_forward_handler line=755 msg="Allowed by Policy-10000:"

Show how the DP processor will load balance a session

You can use the following command to display the FPC or FPM slot that the DP processor will load balance a session to:

diagnose load-balance dp find session {normal | reverse | fragment | pinhole}

Normal and reverse sessions

For a normal or corresponding reverse session, you can define the following:

{normal | reverse} <ip-protocol> <src-ip> {<src-port> | <icmp-type> | <icmp-typecode>} <dst-ip> {<dst-port> | <icmp-id>} [<x-vid>] [<x-cfi>] [<x-pri>]
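
For example, to see which slot the DP processor would choose for the reverse (reply) direction of the hypothetical TCP session used in the normal session example below, you could run a command of the following form. The addresses and ports are illustrative only:

diagnose load-balance dp find session reverse 6 11.1.1.11 53386 12.1.1.11 22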

Fragment packet sessions

For a fragmented packet session, you can define the following:

fragment <ip-protocol> {<src-port> | <icmp-type> | <icmp-typecode>} <dst-ip> <ip-id> [<x-vid>] [<x-cfi>] [<x-pri>]
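
For example, a hypothetical lookup for fragments of a TCP session, following the syntax above (protocol 6, source port 10001, destination IP address 12.1.1.11, IP ID 5000; all values are illustrative):

diagnose load-balance dp find session fragment 6 10001 12.1.1.11 5000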

Pinhole sessions

For a pinhole session, you can define the following:

pinhole <ip-protocol> <dst-ip> <dst-port> [<x-vid>] [<x-cfi>] [<x-pri>]
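
For example, a hypothetical lookup for a UDP pinhole session (protocol 17) expected on destination IP address 12.1.1.11 and destination port 5060 (values are illustrative):

diagnose load-balance dp find session pinhole 17 12.1.1.11 5060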

Normal session example output

For example, the following command shows that a new TCP session (protocol number 6) with source IP address 11.1.1.11, source port 53386, destination IP address 12.1.1.11, and destination port 22 would be sent to slot 8 by the DP processor.

diagnose load-balance dp find session normal 6 11.1.1.11 53386 12.1.1.11 22
==========================================================================
MBD SN: F6KF503E17900068
Primary Bin 9708928
New session to slot 8 (src-dst-ip-sport-dport)

Additional information about the session also appears in the command output in some cases.

FGCP session synchronization options

FortiGate-6000 and FortiGate-7000 platforms now support the following FGCP session synchronization options.

config system ha
    set session-pickup {disable | enable}
    set session-pickup-connectionless {disable | enable}
    set session-pickup-delay {disable | enable}
    set inter-cluster-session-sync {disable | enable}
end

The session-pickup-connectionless option is new in FortiOS 6.0.4. In FortiOS 5.6, enabling session-pickup synchronized TCP, SCTP and connectionless (UDP, ICMP, and so on) sessions. In FortiOS 6.0.4, session-pickup only synchronizes TCP and SCTP sessions.

You can now choose to reduce processing overhead by not synchronizing connectionless sessions if you don't need to. If you want to synchronize connectionless sessions you can enable session-pickup-connectionless.

If you enabled session-pickup in FortiOS 5.6 and want to continue synchronizing connectionless sessions after upgrading to 6.0.4, you must manually enable session-pickup-connectionless.
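
For example, to continue synchronizing connectionless sessions after the upgrade, a minimal configuration based on the options above would be:

config system ha
    set session-pickup enable
    set session-pickup-connectionless enable
end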

The session-pickup-delay option applies to TCP sessions only and does not apply to connectionless and SCTP sessions.

The session-pickup-delay option does not currently work for IPv6 TCP traffic. This known issue (553996) will be fixed in a future firmware version.

The inter-cluster-session-sync option is supported only for inter-cluster session synchronization between FGCP clusters.

ICMP load balancing

You can use the following option to configure load balancing for ICMP sessions:

config load-balance setting
    set dp-icmp-distribution-method {to-master | src-ip | dst-ip | src-dst-ip | derived}
end

The default setting is to-master and all ICMP sessions are sent to the primary (master) FPC or FPM. As a result, ICMP sessions are handled in the same way as in previous releases.

If you want to load balance ICMP sessions to multiple FPCs or FPMs, you can select one of the other options. You can load balance ICMP sessions by source IP address, by destination IP address, or by source and destination IP address.

You can also select derived to load balance ICMP sessions using the dp-load-distribution-method setting. Since port-based ICMP load balancing is not possible, if dp-load-distribution-method is set to a load balancing method that includes ports, ICMP load balancing uses the equivalent load balancing method that does not include ports. For example, if dp-load-distribution-method is set to src-dst-ip-sport-dport (the default), ICMP load balancing uses src-dst-ip load balancing.
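
For example, to load balance ICMP sessions by source and destination IP address:

config load-balance setting
    set dp-icmp-distribution-method src-dst-ip
end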

Note

Two additional load-balance setting options are also visible in this release: dp-keep-assist-sessions, which cannot be changed, and dp-session-table-type, which will be supported in a future version. For FortiOS 6.0.4, dp-session-table-type must be set to intf-vlan-based (the default value).

FGSP support

FortiGate-6000 and FortiGate-7000 for FortiOS 6.0.4 support FortiGate Session Life Support Protocol (FGSP) HA (also called standalone session sync). FGSP is supported for up to four FortiGate-6000s or FortiGate-7000s, and all of the FortiGates in the FGSP cluster must be the same model. For details about FGSP for FortiOS 6.0, see FGSP.

FortiGate-6000 and FortiGate-7000 FGSP support has the following limitations:

  • Configuration synchronization is currently not supported. You must configure all of the devices in the FGSP cluster separately, or use FortiManager to keep key parts of the configuration, such as security policies, synchronized across the devices in the FGSP cluster.
  • FortiGate-6000 FGSP can use the HA1 and HA2 interfaces for session synchronization. FortiGate-7000 FGSP can use the 1-M1, 1-M2, 2-M1, and 2-M2 interfaces for session synchronization. For both platforms, using multiple interfaces is recommended for redundancy. To use these interfaces, you must give them IP addresses and optionally set up routing for them. Ideally, the session synchronization interfaces are on the same network and that network is used only for session synchronization traffic. However, you can configure routing to send session synchronization traffic between networks. NAT between session synchronization interfaces is not supported.
  • Multiple VDOMs can be synchronized over the same session synchronization interface. You can also distribute synchronization traffic to multiple interfaces.
  • FGSP doesn't support setting up IPv6 session filters using the config session-sync-filter option.
  • FGSP doesn't synchronize ICMP sessions in the DP processor to peer FortiGates when the default ICMP load balancing setting, to-master, is used. To synchronize these sessions, set ICMP load balancing to src-ip, dst-ip, or src-dst-ip. See ICMP load balancing for more information.
  • Asymmetric IPv6 SCTP traffic sessions are not supported. These sessions are dropped.
  • Inter-cluster session synchronization, or FGSP between FGCP clusters, is not supported.
  • FGSP IPsec tunnel synchronization is not supported.
  • Fragmented packet synchronization is not supported.

FGSP session synchronization

The following session synchronization options apply to FGSP HA:

config system ha
    set session-pickup {disable | enable}
    set session-pickup-connectionless {disable | enable}
    set session-pickup-expectation {disable | enable}
    set session-pickup-nat {disable | enable}
end

  • Enabling session-pickup turns on session synchronization for TCP sessions. Enabling session-pickup-connectionless also turns on session synchronization for connectionless protocol sessions, such as ICMP and UDP. You can choose to reduce processing overhead by not synchronizing connectionless sessions if you don't need to.
  • The session-pickup-expectation and session-pickup-nat options only apply to FGSP HA. FGCP HA synchronizes NAT sessions when you enable session-pickup.
  • The session-pickup-delay option applies to TCP sessions only and does not apply to connectionless and SCTP sessions.
  • The session-pickup-delay option does not currently work for IPv6 TCP traffic. This known issue (553996) will be fixed in a future firmware version.
  • The session-pickup-delay option should not be used in FGSP topologies where the traffic can take an asymmetric path (forward and reverse traffic going through different FortiGates).
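
For example, a sketch of an FGSP HA configuration that synchronizes TCP, connectionless, expectation, and NAT sessions, based on the options listed above (enable only the options you need):

config system ha
    set session-pickup enable
    set session-pickup-connectionless enable
    set session-pickup-expectation enable
    set session-pickup-nat enable
end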

Example FortiGate-6000 FGSP configuration

This example shows how to configure an FGSP cluster to synchronize sessions between two FortiGate-6301Fs for the root VDOM. The example uses the HA1 interfaces of each FortiGate-6301F for session synchronization. The HA1 interfaces are connected to the 172.25.177.0/24 network.

  1. Configure the HA1 interface of the first FortiGate-6301F with an IP address on the 172.25.177.0/24 network:

    config system interface
        edit ha1
            set ip 172.25.177.10 255.255.255.0
        end

  2. Configure the HA1 interface of the second FortiGate-6301F with an IP address on the 172.25.177.0/24 network:

    config system interface
        edit ha1
            set ip 172.25.177.20 255.255.255.0
        end

  3. On the first FortiGate-6301F, configure session synchronization for the root VDOM.

    config system cluster-sync
        edit 0
            set peervd mgmt-vdom
            set peerip 172.25.177.20
            set syncvd root
        next
    end

    In this configuration, peervd is always mgmt-vdom, peerip is the IP address of the HA1 interface of the second FortiGate-6301F, and syncvd is the VDOM for which to synchronize sessions, in this case the root VDOM.

  4. On the second FortiGate-6301F, configure session synchronization for the root VDOM.

    config system cluster-sync
        edit 0
            set peervd mgmt-vdom
            set peerip 172.25.177.10
            set syncvd root
        next
    end

    In this configuration, peervd is always mgmt-vdom, peerip is the IP address of the HA1 interface of the first FortiGate-6301F, and syncvd is the VDOM for which to synchronize sessions, in this case the root VDOM.
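
Optionally, once both FortiGate-6301Fs are configured and passing traffic, you can check whether sessions are being synchronized. The following standard FortiOS diagnostic command is shown here as an assumption for this platform; on a FortiGate-6000 you would typically run it from an FPC CLI, and the output may differ from other FortiOS platforms:

diagnose sys session sync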

Example FortiGate-7000 FGSP configuration

This example shows how to configure an FGSP cluster to synchronize sessions between two FortiGate-7040Es for two VDOMs: VDOM-1 and VDOM-2. The example uses the 1-M1 interface for VDOM-1 session synchronization and the 1-M2 interface for VDOM-2 session synchronization. The 1-M1 interfaces are connected to the 172.25.177.0/24 network and the 1-M2 interfaces are connected to the 172.25.178.0/24 network.

  1. Configure the 1-M1 and 1-M2 interfaces of the first FortiGate-7040E with IP addresses on the 172.25.177.0/24 and 172.25.178.0/24 networks:

    config system interface
        edit 1-M1
            set ip 172.25.177.30 255.255.255.0
        next
        edit 1-M2
            set ip 172.25.178.35 255.255.255.0
        end

  2. Configure the 1-M1 and 1-M2 interfaces of the second FortiGate-7040E with IP addresses on the 172.25.177.0/24 and 172.25.178.0/24 networks:

    config system interface
        edit 1-M1
            set ip 172.25.177.40 255.255.255.0
        next
        edit 1-M2
            set ip 172.25.178.45 255.255.255.0
        end

  3. On the first FortiGate-7040E, configure session synchronization for VDOM-1 and VDOM-2.

    config system cluster-sync
        edit 1
            set peervd mgmt-vdom
            set peerip 172.25.177.40
            set syncvd VDOM-1
        next
        edit 2
            set peervd mgmt-vdom
            set peerip 172.25.178.45
            set syncvd VDOM-2
        next
    end

    For VDOM-1, peervd will always be mgmt-vdom, the peerip is the IP address of the 1-M1 interface of the second FortiGate-7040E, and syncvd is VDOM-1.

    For VDOM-2, peervd will always be mgmt-vdom, the peerip is the IP address of the 1-M2 interface of the second FortiGate-7040E, and syncvd is VDOM-2.

  4. On the second FortiGate-7040E, configure session synchronization for VDOM-1 and VDOM-2.

    config system cluster-sync
        edit 1
            set peervd mgmt-vdom
            set peerip 172.25.177.30
            set syncvd VDOM-1
        next
        edit 2
            set peervd mgmt-vdom
            set peerip 172.25.178.35
            set syncvd VDOM-2
        next
    end

    For VDOM-1, peervd will always be mgmt-vdom, the peerip is the IP address of the 1-M1 interface of the first FortiGate-7040E, and syncvd is VDOM-1.

    For VDOM-2, peervd will always be mgmt-vdom, the peerip is the IP address of the 1-M2 interface of the first FortiGate-7040E, and syncvd is VDOM-2.
