Verifying the traffic
To verify that pings are sent across the IPsec VPN tunnels:
- On the HQ FortiGate, run the following CLI command:
# diagnose sniffer packet any 'host 10.0.2.10' 4 0 1
interfaces=[any]
Using Original Sniffing Mode
interfaces=[any]
filters=[host 10.0.2.10]
pcap_snapshot: snaplen raised from 0 to 262144
2021-06-05 11:35:14.822600 AWS_VPG out 169.254.55.154 -> 10.0.2.10: icmp: echo request
2021-06-05 11:35:14.822789 FGT_AWS_Tun out 172.16.200.2 -> 10.0.2.10: icmp: echo request
2021-06-05 11:35:14.877862 FGT_AWS_Tun in 10.0.2.10 -> 172.16.200.2: icmp: echo reply
2021-06-05 11:35:14.878887 AWS_VPG in 10.0.2.10 -> 169.254.55.154: icmp: echo reply
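In this sniffer syntax, 'host 10.0.2.10' is a tcpdump-style filter, 4 is the verbosity level (packet headers plus the interface name, which is what lets you confirm which tunnel carries the traffic), and 0 means capture until interrupted with Ctrl-C. As an illustrative variation (not part of this procedure), the same capture can be narrowed to ICMP only, here using the optional a argument for absolute timestamps:

```
# diagnose sniffer packet any 'host 10.0.2.10 and icmp' 4 0 a
```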
- On the cloud FortiGate-VM, run the following CLI command:
# diagnose sniffer packet any 'host 10.0.2.10' 4 0 1
interfaces=[any]
Using Original Sniffing Mode
interfaces=[any]
filters=[host 10.0.2.10]
pcap_snapshot: snaplen raised from 0 to 262144
2021-06-05 11:37:57.176329 port2 in 169.254.55.154 -> 10.0.2.10: icmp: echo request
2021-06-05 11:37:57.176363 port2 out 10.0.2.10 -> 169.254.55.154: icmp: echo reply
2021-06-05 11:37:57.176505 Core_Dialup in 172.16.200.2 -> 10.0.2.10: icmp: echo request
2021-06-05 11:37:57.176514 Core_Dialup out 10.0.2.10 -> 172.16.200.2: icmp: echo reply
To verify the SLA health checks on the HQ FortiGate:
- Go to Network > SD-WAN, select the Performance SLAs tab, select Packet Loss, and click the ping_AWS_Gateway SLA.
- Run the following CLI command:
# diagnose sys sdwan health-check
…
Seq(1 AWS_VPG): state(alive), packet-loss(0.000%) latency(56.221), jitter(0.290) sla_map=0x0
Seq(2 FGT_AWS_Tun): state(alive), packet-loss(0.000%) latency(55.039), jitter(0.223) sla_map=0x0
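The output above comes from a Performance SLA that probes 10.0.2.10 over both members (AWS_VPG and FGT_AWS_Tun). A minimal sketch of such a health check, assuming the FortiOS config system sdwan CLI syntax; the SLA thresholds shown are illustrative values, not taken from this deployment:

```
config system sdwan
    config health-check
        edit "ping_AWS_Gateway"
            set server "10.0.2.10"
            set members 1 2
            config sla
                edit 1
                    set link-cost-factor latency packet-loss
                next
            end
        next
    end
end
```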
To verify service rules:
- Go to Network > SD-WAN and select the SD-WAN Rules tab.
- Run the following CLI command:
# diagnose sys sdwan service4
Service(1): Address Mode(IPV4) flags=0x0
  Gen(1), TOS(0x0/0x0), Protocol(6: 80->80), Mode(manual)
  Members:
    1: Seq_num(2 FGT_AWS_Tun), alive, selected
  Src address: 0.0.0.0-255.255.255.255
  Dst address: 10.0.2.0-10.0.2.255

Service(2): Address Mode(IPV4) flags=0x0
  Gen(1), TOS(0x0/0x0), Protocol(6: 22->22), Mode(manual)
  Members:
    1: Seq_num(1 AWS_VPG), alive, selected
  Src address: 0.0.0.0-255.255.255.255
  Dst address: 10.0.2.0-10.0.2.255

Service(3): Address Mode(IPV4) flags=0x0
  Gen(1), TOS(0x0/0x0), Protocol(6: 443->443), Mode(manual)
  Members:
    1: Seq_num(2 FGT_AWS_Tun), alive, selected
  Src address: 0.0.0.0-255.255.255.255
  Dst address: 10.0.2.0-10.0.2.255

Service(4): Address Mode(IPV4) flags=0x0
  Gen(1), TOS(0x0/0x0), Protocol(0: 1->65535), Mode(manual)
  Members:
    1: Seq_num(1 AWS_VPG), alive, selected
  Src address: 0.0.0.0-255.255.255.255
  Dst address: 10.0.2.21-10.0.2.21
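Service(1) through Service(4) steer HTTP, SSH, HTTPS, and all remaining FTP-server traffic to their preferred members. As a sketch of how one such manual-mode rule is defined, assuming standard config system sdwan service syntax; the address object name "AWS_subnet" is hypothetical, and Seq_num 2 corresponds to FGT_AWS_Tun:

```
config system sdwan
    config service
        edit 1
            set mode manual
            set dst "AWS_subnet"
            set protocol 6
            set start-port 80
            set end-port 80
            set priority-members 2
        next
    end
end
```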
To verify that sessions are going to the correct tunnel:
- Run the following CLI command to verify that HTTPS and HTTP traffic destined for the web server at 10.0.2.20 uses FGT_AWS_Tun:
# diagnose sys session filter dst 10.0.2.20
# diagnose sys session list

session info: proto=6 proto_state=11 duration=2 expire=3597 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=4
origin-shaper=
reply-shaper=
per_ip_shaper=
class_id=0 ha_id=0 policy_dir=0 tunnel=FGT_AWS_Tun/ vlan_cos=0/255
state=log may_dirty npu f00 csf_syncd_log app_valid
statistic(bytes/packets/allow_err): org=593/4/1 reply=3689/5/1 tuples=3
tx speed(Bps/kbps): 264/2 rx speed(Bps/kbps): 1646/13
orgin->sink: org pre->post, reply pre->post dev=0->18/18->0 gwy=172.16.200.1/0.0.0.0
hook=post dir=org act=snat 10.100.88.101:55589->10.0.2.20:80(172.16.200.2:55589)
hook=pre dir=reply act=dnat 10.0.2.20:80->172.16.200.2:55589(10.100.88.101:55589)
hook=post dir=reply act=noop 10.0.2.20:80->10.100.88.101:55589(0.0.0.0:0)
pos/(before,after) 0/(0,0), 0/(0,0)
src_mac=00:09:0f:00:03:01
misc=0 policy_id=32 auth_info=0 chk_client_info=0 vd=0
serial=00b7442c tos=ff/ff app_list=2000 app=34050 url_cat=0
sdwan_mbr_seq=0 sdwan_service_id=0
rpdb_link_id= ff000001 rpdb_svc_id=2154552596 ngfwid=n/a
npu_state=0x3041008

session info: proto=6 proto_state=66 duration=1 expire=3 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=4
origin-shaper=
reply-shaper=
per_ip_shaper=
class_id=0 ha_id=0 policy_dir=0 tunnel=FGT_AWS_Tun/ vlan_cos=0/255
state=log may_dirty ndr f00 csf_syncd_log
statistic(bytes/packets/allow_err): org=48/1/0 reply=40/1/1 tuples=3
tx speed(Bps/kbps): 26/0 rx speed(Bps/kbps): 22/0
orgin->sink: org pre->post, reply pre->post dev=5->18/18->5 gwy=172.16.200.1/10.100.88.101
hook=post dir=org act=snat 10.100.88.101:55621->10.0.2.20:443(172.16.200.2:55621)
hook=pre dir=reply act=dnat 10.0.2.20:443->172.16.200.2:55621(10.100.88.101:55621)
hook=post dir=reply act=noop 10.0.2.20:443->10.100.88.101:55621(0.0.0.0:0)
pos/(before,after) 0/(0,0), 0/(0,0)
src_mac=00:09:0f:00:03:01
misc=0 policy_id=32 auth_info=0 chk_client_info=0 vd=0
serial=00b74b50 tos=ff/ff app_list=2000 app=0 url_cat=0
sdwan_mbr_seq=0 sdwan_service_id=0
rpdb_link_id= ff000003 rpdb_svc_id=2154552596 ngfwid=n/a
npu_state=0x3041008
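Note that the session filter persists until it is cleared, so a stale filter from an earlier check can hide the sessions you expect to see. A typical sequence, using the standard diagnose sys session subcommands, is to clear any previous filter before setting a new one:

```
# diagnose sys session filter clear
# diagnose sys session filter dst 10.0.2.20
# diagnose sys session filter dport 443
# diagnose sys session list
```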
- Run the following CLI command to verify that SSH and FTP traffic destined for the FTP server at 10.0.2.21 uses AWS_VPG:
# diagnose sys session filter dst 10.0.2.21
# diagnose sys session list

session info: proto=6 proto_state=11 duration=197 expire=3403 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=4
origin-shaper=
reply-shaper=
per_ip_shaper=
class_id=0 ha_id=0 policy_dir=0 tunnel=AWS_VPG/ helper=ftp vlan_cos=0/255
state=log may_dirty ndr npu f00 csf_syncd_log app_valid
statistic(bytes/packets/allow_err): org=580/12/1 reply=863/13/1 tuples=3
tx speed(Bps/kbps): 2/0 rx speed(Bps/kbps): 4/0
orgin->sink: org pre->post, reply pre->post dev=5->17/17->5 gwy=169.254.55.153/10.100.88.101
hook=post dir=org act=snat 10.100.88.101:55528->10.0.2.21:21(169.254.55.154:55528)
hook=pre dir=reply act=dnat 10.0.2.21:21->169.254.55.154:55528(10.100.88.101:55528)
hook=post dir=reply act=noop 10.0.2.21:21->10.100.88.101:55528(0.0.0.0:0)
pos/(before,after) 0/(0,0), 0/(0,0)
src_mac=00:09:0f:00:03:01
misc=0 policy_id=32 auth_info=0 chk_client_info=0 vd=0
serial=00b72a5f tos=ff/ff app_list=2000 app=15896 url_cat=0
sdwan_mbr_seq=0 sdwan_service_id=0
rpdb_link_id= ff000004 rpdb_svc_id=2149689849 ngfwid=n/a
npu_state=0x3041008

session info: proto=6 proto_state=11 duration=3 expire=3596 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=4
origin-shaper=
reply-shaper=
per_ip_shaper=
class_id=0 ha_id=0 policy_dir=0 tunnel=AWS_VPG/ vlan_cos=0/255
state=log may_dirty ndr npu f00 csf_syncd_log app_valid
statistic(bytes/packets/allow_err): org=1496/6/1 reply=1541/5/1 tuples=3
tx speed(Bps/kbps): 416/3 rx speed(Bps/kbps): 429/3
orgin->sink: org pre->post, reply pre->post dev=5->17/17->5 gwy=169.254.55.153/10.100.88.101
hook=post dir=org act=snat 10.100.88.101:55644->10.0.2.21:22(169.254.55.154:55644)
hook=pre dir=reply act=dnat 10.0.2.21:22->169.254.55.154:55644(10.100.88.101:55644)
hook=post dir=reply act=noop 10.0.2.21:22->10.100.88.101:55644(0.0.0.0:0)
pos/(before,after) 0/(0,0), 0/(0,0)
src_mac=00:09:0f:00:03:01
misc=0 policy_id=32 auth_info=0 chk_client_info=0 vd=0
serial=00b75287 tos=ff/ff app_list=2000 app=16060 url_cat=0
sdwan_mbr_seq=0 sdwan_service_id=0
rpdb_link_id= ff000002 rpdb_svc_id=2149689849 ngfwid=n/a
npu_state=0x3041008
To simulate an issue on an overlay VPN tunnel:
- On the cloud FortiGate-VM, disable the firewall policy allowing Core_Dialup to port2.
- Health checks through the FGT_AWS_Tun tunnel fail:
- Go to Network > SD-WAN, select the Performance SLAs tab, select Packet Loss, and click the ping_AWS_Gateway SLA.
- Run the following CLI command:
# diagnose sys sdwan health-check
…
Seq(1 AWS_VPG): state(alive), packet-loss(0.000%) latency(52.746), jitter(0.713) sla_map=0x0
Seq(2 FGT_AWS_Tun): state(dead), packet-loss(19.000%) sla_map=0x0
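The same member state can be cross-checked from the SD-WAN side: diagnose sys sdwan member lists each member with its interface and gateway, and diagnose sys sdwan service4 (used again below) shows how the dead member affects each rule:

```
# diagnose sys sdwan member
# diagnose sys sdwan service4
```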
- Service rules show that the member is down:
- Go to Network > SD-WAN and select the SD-WAN Rules tab.
- Run the following CLI command:
# diagnose sys sdwan service4
Service(1): Address Mode(IPV4) flags=0x0
  Gen(2), TOS(0x0/0x0), Protocol(6: 80->80), Mode(manual)
  Members:
    1: Seq_num(2 FGT_AWS_Tun), dead
  Src address: 0.0.0.0-255.255.255.255
  Dst address: 10.0.2.0-10.0.2.255

Service(2): Address Mode(IPV4) flags=0x0
  Gen(1), TOS(0x0/0x0), Protocol(6: 22->22), Mode(manual)
  Members:
    1: Seq_num(1 AWS_VPG), alive, selected
  Src address: 0.0.0.0-255.255.255.255
  Dst address: 10.0.2.0-10.0.2.255

Service(3): Address Mode(IPV4) flags=0x0
  Gen(2), TOS(0x0/0x0), Protocol(6: 443->443), Mode(manual)
  Members:
    1: Seq_num(2 FGT_AWS_Tun), dead
  Src address: 0.0.0.0-255.255.255.255
  Dst address: 10.0.2.0-10.0.2.255

Service(4): Address Mode(IPV4) flags=0x0
  Gen(1), TOS(0x0/0x0), Protocol(0: 1->65535), Mode(manual)
  Members:
    1: Seq_num(1 AWS_VPG), alive, selected
  Src address: 0.0.0.0-255.255.255.255
  Dst address: 10.0.2.21-10.0.2.21
- Sessions are redirected to the working tunnel:
- Run the following CLI command:
# diagnose sys session list

session info: proto=6 proto_state=11 duration=3 expire=3596 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=4
origin-shaper=
reply-shaper=
per_ip_shaper=
class_id=0 ha_id=0 policy_dir=0 tunnel=AWS_VPG/ vlan_cos=0/255
state=log may_dirty ndr npu f00 csf_syncd_log app_valid
statistic(bytes/packets/allow_err): org=504/4/1 reply=620/3/1 tuples=3
tx speed(Bps/kbps): 150/1 rx speed(Bps/kbps): 184/1
orgin->sink: org pre->post, reply pre->post dev=0->17/17->0 gwy=169.254.55.153/0.0.0.0
hook=post dir=org act=snat 10.100.88.101:56373->10.0.2.20:80(169.254.55.154:56373)
hook=pre dir=reply act=dnat 10.0.2.20:80->169.254.55.154:56373(10.100.88.101:56373)
hook=post dir=reply act=noop 10.0.2.20:80->10.100.88.101:56373(0.0.0.0:0)
pos/(before,after) 0/(0,0), 0/(0,0)
src_mac=00:09:0f:00:03:01
misc=0 policy_id=32 auth_info=0 chk_client_info=0 vd=0
serial=00b87199 tos=ff/ff app_list=2000 app=34050 url_cat=0
rpdb_link_id= 80000000 rpdb_svc_id=0 ngfwid=n/a
npu_state=0x3041008

session info: proto=6 proto_state=66 duration=3 expire=1 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=4
origin-shaper=
reply-shaper=
per_ip_shaper=
class_id=0 ha_id=0 policy_dir=0 tunnel=AWS_VPG/ vlan_cos=0/255
state=log may_dirty ndr f00 csf_syncd_log
statistic(bytes/packets/allow_err): org=48/1/0 reply=40/1/1 tuples=3
tx speed(Bps/kbps): 15/0 rx speed(Bps/kbps): 12/0
orgin->sink: org pre->post, reply pre->post dev=5->17/17->5 gwy=169.254.55.153/10.100.88.101
hook=post dir=org act=snat 10.100.88.101:56383->10.0.2.20:443(169.254.55.154:56383)
hook=pre dir=reply act=dnat 10.0.2.20:443->169.254.55.154:56383(10.100.88.101:56383)
hook=post dir=reply act=noop 10.0.2.20:443->10.100.88.101:56383(0.0.0.0:0)
pos/(before,after) 0/(0,0), 0/(0,0)
src_mac=00:09:0f:00:03:01
misc=0 policy_id=32 auth_info=0 chk_client_info=0 vd=0
serial=00b876bb tos=ff/ff app_list=2000 app=0 url_cat=0
rpdb_link_id= 80000000 rpdb_svc_id=0 ngfwid=n/a
npu_state=0x3041008
total session 2
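Sessions flagged may_dirty, as above, are re-evaluated against routing and SD-WAN rules when the route changes, which is why these sessions moved to AWS_VPG without intervention. If a long-lived session appears stuck on the dead member, the matching sessions can be cleared so clients rebuild them over the surviving tunnel; note that diagnose sys session clear removes every session matching the current filter, so always set a filter first:

```
# diagnose sys session filter dst 10.0.2.20
# diagnose sys session clear
```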
- Routes to the FGT_AWS_Tun tunnel are removed:
- If Optimal dashboards is selected, go to Dashboard > Network and expand the Routing widget to view the routing table. If Comprehensive dashboards is selected, go to Dashboard > Routing Monitor and select Static & Dynamic in the widget toolbar to view the routing table.
- Run the following CLI command:
# get router info routing-table all
Codes: K - kernel, C - connected, S - static, R - RIP, B - BGP
       O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       * - candidate default

Routing table for VRF=0
S*      0.0.0.0/0 [1/0] via 10.100.64.254, port1
                  [1/0] via 10.100.65.254, port5
S       10.0.2.0/24 [1/0] via 169.254.55.153, AWS_VPG
C       10.0.10.0/24 is directly connected, Branch-HQ-A
C       10.0.10.1/32 is directly connected, Branch-HQ-A
…
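The forwarding table above now lists only the 10.0.2.0/24 route via AWS_VPG; the route via FGT_AWS_Tun is gone. To check whether the withdrawn route is still held inactive in the routing table database (and so will be reinstalled when the health check recovers), the RIB can be inspected as well:

```
# get router info routing-table database
```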