New Features
Applying the session synchronization filter only between FGSP peers in an FGCP over FGSP topology - 7.2.1

This enhancement ensures that session synchronization happens correctly in an FGCP over FGSP topology:

  • When the session synchronization filter is applied on FGSP, the filter will only affect sessions synchronized between the FGSP peers.
  • When virtual clustering is used, sessions synchronized within each virtual cluster can also be synchronized to FGSP peers. All peers' syncvd settings must be in the same HA virtual cluster.
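The two rules above can be modeled in a short Python sketch (a hypothetical illustration of the behavior, not FortiOS code): members of an FGCP cluster always synchronize sessions, while the session synchronization filter is consulted only when synchronizing to FGSP peers.

```python
# Hypothetical model of the synchronization rules described above; not FortiOS code.
from dataclasses import dataclass

@dataclass
class Session:
    proto: str
    dst_port: int

def sync_within_fgcp(session: Session) -> bool:
    """FGCP cluster members synchronize every session, regardless of any filter."""
    return True

def sync_to_fgsp_peer(session: Session, filter_ports: set[int]) -> bool:
    """FGSP peers receive only sessions that match the session-sync-filter."""
    return session.dst_port in filter_ports

http = Session("tcp", 80)
ssh = Session("tcp", 22)
fgsp_filter = {80}  # corresponds to custom-service dst-port-range 80-80

assert sync_within_fgcp(ssh)                    # synced FGT_A -> FGT_B
assert sync_to_fgsp_peer(http, fgsp_filter)     # synced Cluster 1 -> Cluster 2
assert not sync_to_fgsp_peer(ssh, fgsp_filter)  # not synced across FGSP
```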

Example

In this example, a simplified configuration is used with no router or load balancer distributing traffic between the FGSP peers, but it demonstrates the following:

  • When sessions pass through FGCP A-P Cluster 1, all sessions are synchronized between FGT_A and FGT_B regardless of the session synchronization filter.
  • Session synchronization between the FGSP peers (FGCP A-P Cluster 1 and 2) only occurs for the service specified in the filter, which is HTTP/80.
  • The preceding behavior is applicable when virtual clustering is configured. This example focuses on vdom2, which belongs to vcluster2. FGT_A is the primary for vcluster2.

Each FGCP A-P cluster uses the ha interface as its cluster heartbeat device. The FGSP peers are connected over their mgmt interfaces on the 10.1.1.0/24 network (10.1.1.1 and 10.1.1.2).

Interfaces and IP addresses on each FortiGate:

  Interface   FGT_A                FGT_B                FGT_C                FGT_D
  ---------   -----------------    -----------------    -----------------    -----------------
  wan1        172.16.200.1/24      172.16.200.1/24      172.16.200.3/24      172.16.200.3/24
  port1       10.1.100.1/24        10.1.100.1/24        10.1.100.2/24        10.1.100.2/24
  mgmt        10.1.1.1/24          10.1.1.1/24          10.1.1.2/24          10.1.1.2/24
  ha          FGCP cluster heartbeat device             FGCP cluster heartbeat device

To configure the HA clusters:
  1. Configure FGCP A-P Cluster 1 (use the same configuration for FGT_A and FGT_B):

    config system ha
        set group-id 146
        set group-name "FGT_HA1"
        set mode a-p
        set hbdev "wan2" 100 "ha" 50
        set session-pickup enable
        set vcluster-status enable
        config vcluster
            edit 1
                set override enable
                set priority 25
                set monitor "wan1" "port1"
                set vdom "root"
            next
            edit 2
                set override disable
                set priority 150
                set monitor "wan1"
                set vdom "vdom2" "vdom1"
            next
        end
    end
  2. Configure FGCP A-P Cluster 2 (use the same configuration for FGT_C and FGT_D):

    config system ha
        set group-id 200
        set group-name "FGT_HA2"
        set mode a-p
        set hbdev "wan2" 100 "ha" 50
        set session-pickup enable
        set vcluster-status enable
        config vcluster
            edit 1
                set override enable
                set priority 120
                set monitor "wan1" "port1"
                set vdom "root"
            next
            edit 2
                set override disable
                set priority 150
                set monitor "wan1"
                set vdom "vdom2" "vdom1"
            next
        end
    end
To configure the FGSP peers:
  1. Configure FGT_A:

    config system standalone-cluster
        set standalone-group-id 1
        set group-member-id 1
        config cluster-peer
            edit 1
                set peervd "vdom2"
                set peerip 10.1.1.2
                set syncvd "vdom2"
                config session-sync-filter
                    config custom-service
                        edit 1
                            set dst-port-range 80-80
                        next
                    end
                end
            next
        end
    end

    The configuration is automatically synchronized to FGT_B.

  2. Configure FGT_C:

    config system standalone-cluster
        set standalone-group-id 1
        set group-member-id 2
        config cluster-peer
            edit 1
                set peervd "vdom2"
                set peerip 10.1.1.1
                set syncvd "vdom2"
                config session-sync-filter
                    config custom-service
                        edit 1
                            set dst-port-range 80-80
                        next
                    end
                end
            next
        end
    end

    The configuration is automatically synchronized to FGT_D.
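In both peer configurations above, the session-sync-filter uses a custom-service entry with dst-port-range 80-80. A minimal Python sketch of how such a range value selects a session's destination port (the helper names are ours for illustration, not a FortiOS API):

```python
# Hypothetical illustration of dst-port-range matching; helper names are ours.
def parse_port_range(spec: str) -> range:
    """Parse a 'low-high' port range such as '80-80' into an inclusive range."""
    lo, hi = (int(p) for p in spec.split("-"))
    return range(lo, hi + 1)  # inclusive on both ends

def matches(dst_port: int, spec: str) -> bool:
    """Return True if the session's destination port falls in the range."""
    return dst_port in parse_port_range(spec)

assert matches(80, "80-80")      # HTTP session is synchronized
assert not matches(22, "80-80")  # SSH session is filtered out
```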

To verify the configuration:
  1. Verify the FGSP peer information on Cluster 1:

    FGT_A (global) # diagnose sys ha fgsp-zone
    Local standalone-member-id: 1
    FGSP peer_num = 1
            peer[1]: standalone-member-id=2, IP=10.1.1.2, vd=vdom2, prio=1
  2. Verify the FGSP peer information on Cluster 2:

    FGT_C (global) # diagnose sys ha fgsp-zone
    Local standalone-member-id: 1
    FGSP peer_num = 1
            peer[1]: standalone-member-id=1, IP=10.1.1.1, vd=vdom2, prio=1
  3. Initiate two sessions, HTTP and SSH.

  4. Verify that the HTTP session is synchronized from Cluster 1 to Cluster 2.

    1. Verify the session list of vdom2 on FGT_A:

      FGT_A (vdom2) # diagnose sys session list
      
      session info: proto=6 proto_state=01 duration=693 expire=3593 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=4
      origin-shaper=
      reply-shaper=
      per_ip_shaper=
      class_id=0 ha_id=1:0 policy_dir=0 tunnel=/ vlan_cos=0/255
      state=log may_dirty npu synced f00
      statistic(bytes/packets/allow_err): org=87531/1678/1 reply=7413876/6043/1 tuples=2
      tx speed(Bps/kbps): 134/1 rx speed(Bps/kbps): 11357/90
      orgin->sink: org pre->post, reply pre->post dev=11->7/7->11 gwy=172.16.200.55/10.1.100.22
      hook=post dir=org act=snat 10.1.100.22:44260->172.16.200.55:80(172.16.200.1:44260)
      hook=pre dir=reply act=dnat 172.16.200.55:80->172.16.200.1:44260(10.1.100.22:44260)
      pos/(before,after) 0/(0,0), 0/(0,0)
      misc=0 policy_id=7 pol_uuid_idx=579 auth_info=0 chk_client_info=0 vd=2
      serial=000a79df tos=ff/ff app_list=0 app=0 url_cat=0
      rpdb_link_id=00000000 ngfwid=n/a
      npu_state=0x4000c00 ofld-O ofld-R
      npu info: flag=0x81/0x81, offload=8/8, ips_offload=0/0, epid=66/70, ipid=70/66, vlan=0x0000/0x0000
      vlifid=70/66, vtag_in=0x0000/0x0000 in_npu=1/1, out_npu=1/1, fwd_en=0/0, qid=1/0
      
      session info: proto=6 proto_state=01 duration=326 expire=3589 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=4
      origin-shaper=
      reply-shaper=
      per_ip_shaper=
      class_id=0 ha_id=1:0 policy_dir=0 tunnel=/ vlan_cos=0/255
      state=log may_dirty npu synced f00
      statistic(bytes/packets/allow_err): org=4721/41/1 reply=5681/36/1 tuples=2
      tx speed(Bps/kbps): 14/0 rx speed(Bps/kbps): 17/0
      orgin->sink: org pre->post, reply pre->post dev=11->7/7->11 gwy=172.16.200.55/10.1.100.22
      hook=post dir=org act=snat 10.1.100.22:50234->172.16.200.55:22(172.16.200.1:50234)
      hook=pre dir=reply act=dnat 172.16.200.55:22->172.16.200.1:50234(10.1.100.22:50234)
      pos/(before,after) 0/(0,0), 0/(0,0)
      misc=0 policy_id=7 pol_uuid_idx=579 auth_info=0 chk_client_info=0 vd=2
      serial=000a7d90 tos=ff/ff app_list=0 app=0 url_cat=0
      rpdb_link_id=00000000 ngfwid=n/a
      npu_state=0x4000c00 ofld-O ofld-R
      npu info: flag=0x81/0x81, offload=8/8, ips_offload=0/0, epid=66/70, ipid=70/66, vlan=0x0000/0x0000
      vlifid=70/66, vtag_in=0x0000/0x0000 in_npu=1/1, out_npu=1/1, fwd_en=0/0, qid=6/6
      total session 2
    2. Verify the session list of vdom2 on FGT_B:

      FGT_B (vdom2) # diagnose sys session list
      
      session info: proto=6 proto_state=01 duration=736 expire=3100 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=3
      origin-shaper=
      reply-shaper=
      per_ip_shaper=
      class_id=0 ha_id=1:0 policy_dir=0 tunnel=/ vlan_cos=0/255
      state=log dirty may_dirty npu f00 syn_ses
      statistic(bytes/packets/allow_err): org=0/0/0 reply=0/0/0 tuples=2
      tx speed(Bps/kbps): 0/0 rx speed(Bps/kbps): 0/0
      orgin->sink: org pre->post, reply pre->post dev=11->7/7->11 gwy=0.0.0.0/0.0.0.0
      hook=post dir=org act=snat 10.1.100.22:44260->172.16.200.55:80(172.16.200.1:44260)
      hook=pre dir=reply act=dnat 172.16.200.55:80->172.16.200.1:44260(10.1.100.22:44260)
      pos/(before,after) 0/(0,0), 0/(0,0)
      misc=0 policy_id=7 pol_uuid_idx=0 auth_info=0 chk_client_info=0 vd=2
      serial=000a79df tos=ff/ff app_list=0 app=0 url_cat=0
      rpdb_link_id=00000000 ngfwid=n/a
      npu_state=0x4000000
      npu info: flag=0x00/0x00, offload=0/0, ips_offload=0/0, epid=0/0, ipid=0/0, vlan=0x0000/0x0000
      vlifid=0/0, vtag_in=0x0000/0x0000 in_npu=0/0, out_npu=0/0, fwd_en=0/0, qid=0/0
      no_ofld_reason:
      
      session info: proto=6 proto_state=01 duration=369 expire=3230 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=3
      origin-shaper=
      reply-shaper=
      per_ip_shaper=
      class_id=0 ha_id=1:0 policy_dir=0 tunnel=/ vlan_cos=0/255
      state=log dirty may_dirty npu f00 syn_ses
      statistic(bytes/packets/allow_err): org=0/0/0 reply=0/0/0 tuples=2
      tx speed(Bps/kbps): 0/0 rx speed(Bps/kbps): 0/0
      orgin->sink: org pre->post, reply pre->post dev=11->7/7->11 gwy=0.0.0.0/0.0.0.0
      hook=post dir=org act=snat 10.1.100.22:50234->172.16.200.55:22(172.16.200.1:50234)
      hook=pre dir=reply act=dnat 172.16.200.55:22->172.16.200.1:50234(10.1.100.22:50234)
      pos/(before,after) 0/(0,0), 0/(0,0)
      misc=0 policy_id=7 pol_uuid_idx=0 auth_info=0 chk_client_info=0 vd=2
      serial=000a7d90 tos=ff/ff app_list=0 app=0 url_cat=0
      rpdb_link_id=00000000 ngfwid=n/a
      npu_state=0x4000000
      npu info: flag=0x00/0x00, offload=0/0, ips_offload=0/0, epid=0/0, ipid=0/0, vlan=0x0000/0x0000
      vlifid=0/0, vtag_in=0x0000/0x0000 in_npu=0/0, out_npu=0/0, fwd_en=0/0, qid=0/0
      no_ofld_reason:
      total session 2
    3. Verify the session list of vdom2 on FGT_C:

      FGT_C (vdom2) # diagnose sys session filter dst 172.16.200.55
      FGT_C (vdom2) # diagnose sys session filter src 10.1.100.22
      FGT_C (vdom2) # diagnose sys session list
      
      session info: proto=6 proto_state=01 duration=837 expire=2762 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=3
      origin-shaper=
      reply-shaper=
      per_ip_shaper=
      class_id=0 ha_id=1:0 policy_dir=0 tunnel=/ vlan_cos=0/255
      state=log dirty may_dirty npu f00 syn_ses
      statistic(bytes/packets/allow_err): org=0/0/0 reply=0/0/0 tuples=2
      tx speed(Bps/kbps): 0/0 rx speed(Bps/kbps): 0/0
      orgin->sink: org pre->post, reply pre->post dev=11->7/7->11 gwy=0.0.0.0/0.0.0.0
      hook=post dir=org act=snat 10.1.100.22:44260->172.16.200.55:80(172.16.200.1:44260)
      hook=pre dir=reply act=dnat 172.16.200.55:80->172.16.200.1:44260(10.1.100.22:44260)
      pos/(before,after) 0/(0,0), 0/(0,0)
      misc=0 policy_id=7 pol_uuid_idx=0 auth_info=0 chk_client_info=0 vd=2
      serial=000a79df tos=ff/ff app_list=0 app=0 url_cat=0
      rpdb_link_id=00000000 ngfwid=n/a
      npu_state=0x4000000
      npu info: flag=0x00/0x00, offload=0/0, ips_offload=0/0, epid=0/0, ipid=0/0, vlan=0x0000/0x0000
      vlifid=0/0, vtag_in=0x0000/0x0000 in_npu=0/0, out_npu=0/0, fwd_en=0/0, qid=0/0
      no_ofld_reason:
      total session 1
    4. Verify the session list of vdom2 on FGT_D:

      FGT-D (vdom2) # diagnose sys session filter dst 172.16.200.55
      FGT-D (vdom2) # diagnose sys session filter src 10.1.100.22
      FGT-D (vdom2) # diagnose sys session list
      
      session info: proto=6 proto_state=01 duration=902 expire=2697 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=3
      origin-shaper=
      reply-shaper=
      per_ip_shaper=
      class_id=0 ha_id=1:0 policy_dir=0 tunnel=/ vlan_cos=0/255
      state=log dirty may_dirty npu f00 syn_ses
      statistic(bytes/packets/allow_err): org=0/0/0 reply=0/0/0 tuples=2
      tx speed(Bps/kbps): 0/0 rx speed(Bps/kbps): 0/0
      orgin->sink: org pre->post, reply pre->post dev=11->7/7->11 gwy=0.0.0.0/0.0.0.0
      hook=post dir=org act=snat 10.1.100.22:44260->172.16.200.55:80(172.16.200.1:44260)
      hook=pre dir=reply act=dnat 172.16.200.55:80->172.16.200.1:44260(10.1.100.22:44260)
      pos/(before,after) 0/(0,0), 0/(0,0)
      misc=0 policy_id=7 pol_uuid_idx=0 auth_info=0 chk_client_info=0 vd=2
      serial=000a79df tos=ff/ff app_list=0 app=0 url_cat=0
      rpdb_link_id=00000000 ngfwid=n/a
      npu_state=0x4000000
      npu info: flag=0x00/0x00, offload=0/0, ips_offload=0/0, epid=0/0, ipid=0/0, vlan=0x0000/0x0000
      vlifid=0/0, vtag_in=0x0000/0x0000 in_npu=0/0, out_npu=0/0, fwd_en=0/0, qid=0/0
      no_ofld_reason:
      total session 1
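The manual checks above can also be scripted by scraping the `diagnose sys session list` output. A minimal sketch that extracts destination ports from the snat hook lines (the field layout is assumed from the sample output above):

```python
import re

def synced_dst_ports(output: str) -> list[int]:
    """Extract destination ports from 'act=snat' hook lines of
    'diagnose sys session list' output (format taken from the samples above)."""
    ports = []
    for line in output.splitlines():
        m = re.search(r"act=snat \S+->\d+\.\d+\.\d+\.\d+:(\d+)\(", line)
        if m:
            ports.append(int(m.group(1)))
    return ports

# Abbreviated sample lines from the outputs shown above:
cluster1 = (
    "hook=post dir=org act=snat 10.1.100.22:44260->172.16.200.55:80(172.16.200.1:44260)\n"
    "hook=post dir=org act=snat 10.1.100.22:50234->172.16.200.55:22(172.16.200.1:50234)\n"
)
cluster2 = "hook=post dir=org act=snat 10.1.100.22:44260->172.16.200.55:80(172.16.200.1:44260)\n"

assert synced_dst_ports(cluster1) == [80, 22]  # FGCP members hold both sessions
assert synced_dst_ports(cluster2) == [80]      # FGSP peers hold only the HTTP session
```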
Note

Session synchronization filters are designed to be configured symmetrically on all FGSP peers. If the filters are configured asymmetrically, note the following differences:

  • In an FGCP over FGSP topology, session filtering will be applied on the FGSP peer that has the filtering configured and is receiving the session synchronization.
  • In an FGSP topology between standalone peers, the filtering will be applied on the FGSP peer that has the filtering configured and is sending out the session synchronization.
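The asymmetric cases above differ only in where the filter is evaluated. A hypothetical sketch of the two directions (our own model of the stated behavior, not FortiOS code):

```python
# Hypothetical model of asymmetric filter placement; not FortiOS code.
def delivered(topology: str, sender_has_filter: bool, receiver_has_filter: bool,
              session_matches: bool) -> bool:
    """Whether a session reaches the peer's session table despite a filter."""
    if topology == "fgcp-over-fgsp":
        # The filter is enforced by the receiving FGSP peer.
        return session_matches or not receiver_has_filter
    if topology == "standalone-fgsp":
        # The filter is enforced by the sending FGSP peer.
        return session_matches or not sender_has_filter
    raise ValueError(topology)

# FGCP over FGSP: a sender-side filter alone does not block a non-matching session.
assert delivered("fgcp-over-fgsp", sender_has_filter=True,
                 receiver_has_filter=False, session_matches=False)
# Standalone peers: a sender-side filter does block a non-matching session.
assert not delivered("standalone-fgsp", sender_has_filter=True,
                     receiver_has_filter=False, session_matches=False)
```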
