
New Features

Support AWS transit gateway connect attachment and connect peer 6.4.3


The AWS transit gateway (TGW) connect attachment and connect peer are the components of a new way to connect a TGW to a transit VPC using a GRE tunnel overlay. When FortiGates secure the transit VPC, the TGW connect attachment forms a GRE tunnel between the FortiGate-VM and the TGW. The deployment uses Border Gateway Protocol (BGP) routing, and the FortiGate-VM advertises a default route to the TGW peer. Virtual private clouds (VPC) whose traffic must be routed through the FortiGate form VPC attachments with the TGW and route traffic through it.

TGW Connect provides a tighter, more native integration between partner gateway appliances and the TGW via tunnel attachments. It supports GRE-based tunnel attachments, which provide higher performance than the IPsec connections currently used for the same purpose: native GRE-based tunnel attachments support three times the bandwidth of IPsec.

The following shows the example topology:

To configure the components in AWS:
  1. Create a TGW:
    1. In the AWS management console, go to VPC > Transit Gateways > Transit Gateways, then click Create new.
    2. In the Amazon side ASN field, enter the AS number that the TGW will use. The TGW uses BGP to exchange routes with the FortiGate.
    3. In the CIDR field, specify a CIDR block. The Connect feature terminates GRE endpoints on the TGW side, which requires a CIDR block. You can specify the CIDR when creating the TGW, as shown here, or by editing an existing TGW.
    4. Leave other fields as-is.
    5. Click Create Transit Gateway. This example uses the default AS number of 64512 and 1.0.0.0/24 as the CIDR block.
  2. The Security and App VPCs need transport VPC attachments. For each App VPC, you can create the attachment in one of the App subnets or in a dedicated subnet. For the Security VPC, since there are two FortiGate-VMs in two availability zones (AZ), the attachment must include a subnet from each AZ. The architecture includes a dedicated subnet in each AZ for this purpose. Create the VPC attachments:
    1. Go to VPC > Transit Gateways > Transit Gateway Attachments, then click Create new.
    2. From the Attachment type dropdown list, select VPC.
    3. In Subnet IDs, select the two dedicated subnets for TGW landing.
    4. Click Create attachment.
  3. Create the connect attachment. A connect attachment supports the GRE tunnel protocol for high performance and BGP for dynamic routing. After you create a connect attachment, you can create one or more GRE tunnels (also referred to as TGW connect peers) on the connect attachment to connect the TGW and the FortiGate:
    1. Go to VPC > Transit Gateways > Transit Gateway Attachments, then click Create new.
    2. From the Attachment type dropdown list, select Connect.
    3. From the Transport Attachment ID dropdown list, select the attachment that you created in step 2. Since you selected the subnets when creating the attachments in step 2, you cannot select subnets here.
    4. Click Create attachment.
  4. A connect peer is the combination of the GRE and BGP configuration between the FortiGates in the Security VPC and the TGW. Create a connect peer:
    1. Go to VPC > Transit Gateways > Transit Gateway Attachments, then click the connect attachment that you created in step 3.
    2. Configure the connect peer between the TGW and the FortiGate-VM in AZ1:
      1. On the Connect peers tab, click Create Connect peer.
      2. The TGW GRE address is autogenerated. In this example, this value is 1.0.0.68.
      3. In the Peer GRE address field, enter the interface IP address of the FortiGate-VM where the GRE tunnel will terminate. This example uses 10.0.0.21.
      4. In the Peer ASN field, enter the AS number that the FortiGate-VM will use; the BGP session between the FortiGate and the TGW is eBGP. Click Create.
    3. Repeat the previous step to create the connect peer for the FortiGate-VM in AZ2, using that FortiGate-VM's interface IP address and AS number. After you configure both connect peers, the tunnel status shows as down because GRE and BGP are not yet configured on the FortiGate-VMs.
  5. In this example, all three VPCs associate with, and propagate their routes to, the same TGW route table. Inspecting traffic between the App VPCs would require multiple TGW route tables. Create a TGW route table:
    1. Go to VPC > Transit Gateways > Transit Gateway Route Table, then click Create new.
    2. On the Associations tab, click Create association. Add the attachments that you created in step 2. AWS automatically creates an association with the Security VPC connect attachment that you created in step 3.
    3. On the Propagations tab, click Create propagation. Create propagations for the three attachments that you created in steps 2 and 3.
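The console steps above can also be scripted. The following is a minimal sketch of the request parameters for creating a connect peer (step 4); the attachment ID is a placeholder, the peer address and ASN come from this example, and the inside CIDR block is inferred from the FortiGate tunnel addressing shown in the FortiOS configuration later in this article. The resulting dict would be passed to boto3's `ec2.create_transit_gateway_connect_peer`:

```python
def connect_peer_request(attachment_id, peer_gre_ip, peer_asn, inside_cidr):
    """Build keyword arguments for ec2.create_transit_gateway_connect_peer.

    The TGW-side GRE address is omitted so that AWS autogenerates it from
    the TGW CIDR block (1.0.0.68 in this example).
    """
    return {
        "TransitGatewayAttachmentId": attachment_id,   # placeholder ID
        "PeerAddress": peer_gre_ip,                    # FortiGate GRE endpoint
        "BgpOptions": {"PeerAsn": peer_asn},           # FortiGate AS number
        "InsideCidrBlocks": [inside_cidr],             # /29 for the BGP session
    }

# Values for the AZ1 FortiGate-VM in this example
params = connect_peer_request(
    "tgw-attach-EXAMPLE", "10.0.0.21", 7115, "169.254.120.0/29"
)
```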
To configure this feature in FortiOS:

These instructions configure the following firewall policies:

  • A policy for Internet traffic from application VPCs
  • A policy for east-west traffic between application VPCs
  • A policy for virtual IP address traffic to application PCs
config system gre-tunnel
    edit "tgwc"
        set interface "port2"
        set remote-gw 1.0.0.68
        set local-gw 10.0.0.21
    next
end
config system interface
    edit "port1"
        set vdom "root"
        set mode dhcp
        set allowaccess ping https ssh fgfm
    next
    edit "port2"
        set vdom "root"
        set mode dhcp
        set allowaccess ping https ssh snmp http telnet
    next
    edit "tgwc"
        set vdom "root"
        set ip 169.254.120.1 255.255.255.255
        set allowaccess ping https ssh snmp http
        set type tunnel
        set remote-ip 169.254.120.2 255.255.255.248
        set snmp-index 5
        set interface "port2"
    next
end
config router static
    edit 10
        set dst 1.0.0.0 255.255.255.0
        set gateway 10.0.0.17
        set device "port2"
    next
end
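The static route exists so that the GRE tunnel's outer destination, which lies in the TGW CIDR block, is reachable through port2. A quick sanity check of the addressing in this example (values taken from the configuration above):

```python
import ipaddress

# The GRE remote gateway (the autogenerated TGW GRE endpoint) must be
# covered by the static route toward the TGW CIDR block on port2.
tgw_cidr = ipaddress.ip_network("1.0.0.0/24")       # CIDR assigned to the TGW
tgw_gre_address = ipaddress.ip_address("1.0.0.68")  # remote-gw of the GRE tunnel

covered = tgw_gre_address in tgw_cidr  # True: the route reaches the endpoint
```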
config router bgp
    set as 7115
    set router-id 169.254.101.1
    config neighbor
        edit "169.254.120.2"
            set capability-default-originate enable
            set ebgp-enforce-multihop enable
            set soft-reconfiguration enable
            set remote-as 64512
        next
        edit "169.254.120.3"
            set capability-default-originate enable
            set ebgp-enforce-multihop enable
            set soft-reconfiguration enable
            set remote-as 64512
        next
    end
……
end
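The BGP addressing above follows the connect peer's inside CIDR block, a /29 from the 169.254.0.0/16 range: the appliance takes the first usable host address and the TGW takes the next two, which is why the tunnel interface uses 169.254.120.1 and the two BGP neighbors are 169.254.120.2 and 169.254.120.3. A small sketch of that derivation:

```python
import ipaddress

# Inside CIDR block of the connect peer, implied by the tunnel
# addressing in the configuration above.
inside = ipaddress.ip_network("169.254.120.0/29")
hosts = list(inside.hosts())  # .1 through .6

fortigate_bgp_ip = str(hosts[0])                   # FortiGate tunnel local IP
tgw_bgp_ips = [str(h) for h in hosts[1:3]]         # the two TGW BGP neighbors
```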
config firewall policy
    edit 1
        set srcintf "tgwc"
        set dstintf "port1"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
        set nat enable
    next
    edit 2
        set srcintf "tgwc"
        set dstintf "tgwc"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
    next
    edit 100
        set srcintf "port1"
        set dstintf "tgwc"
        set srcaddr "all"
        set dstaddr "ec2ssh"
        set action accept
        set schedule "always"
        set service "ALL"
        set nat enable
    next
end
To verify the configuration:
  1. Verify Internet traffic from the Application PC in VPC A:

    ping 8.8.8.8 -c 1
    PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
    64 bytes from 8.8.8.8: icmp_seq=1 ttl=89 time=10.5 ms

    diagnose sniffer packet any icmp 4
    5.948458 tgwc in 192.168.50.93 -> 8.8.8.8: icmp: echo request
    5.948491 port1 out 10.0.0.5 -> 8.8.8.8: icmp: echo request
    5.957798 port1 in 8.8.8.8 -> 10.0.0.5: icmp: echo reply
    5.957814 tgwc out 8.8.8.8 -> 192.168.50.93: icmp: echo reply

  2. Verify Internet traffic from the Application PC in VPC B:

    ping 8.8.8.8 -c 1
    PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
    64 bytes from 8.8.8.8: icmp_seq=1 ttl=89 time=10.7 ms

    diagnose sniffer packet any icmp 4
    1.929761 tgwc in 192.168.100.62 -> 8.8.8.8: icmp: echo request
    1.929806 port1 out 10.0.0.5 -> 8.8.8.8: icmp: echo request
    1.939127 port1 in 8.8.8.8 -> 10.0.0.5: icmp: echo reply
    1.939143 tgwc out 8.8.8.8 -> 192.168.100.62: icmp: echo reply

  3. Verify east-west traffic between the Application PCs in VPCs A and B:

    ping 192.168.100.62 -c 1
    PING 192.168.100.62 (192.168.100.62) 56(84) bytes of data.
    64 bytes from 192.168.100.62: icmp_seq=1 ttl=252 time=3.34 ms

    diagnose sniffer packet any icmp 4
    2.218833 tgwc in 192.168.50.93 -> 192.168.100.62: icmp: echo request
    2.218874 tgwc out 192.168.50.93 -> 192.168.100.62: icmp: echo request
    2.220736 tgwc in 192.168.100.62 -> 192.168.50.93: icmp: echo reply
    2.220746 tgwc out 192.168.100.62 -> 192.168.50.93: icmp: echo reply

  4. Verify SSH VIP traffic to the Application PC in VPC B. Note that 44.242.126.40 is the FortiGate elastic IP address on port1:

    ssh -i xxxx ec2-user@44.242.126.40 -p 2222
    Last login: Fri Apr 9 00:12:18 2021 from 169.254.120.1

           __|  __|_  )
           _|  (     /   Amazon Linux 2 AMI
          ___|\___|___|

    diagnose sniffer packet any 'host 192.168.100.62 and port 22' 4
    Using Original Sniffing Mode
    interfaces=[any]
    filters=[host 192.168.100.62 and port 22]
    4.904145 tgwc out 169.254.120.1.47876 -> 192.168.100.62.22: syn 1737745616
    4.905891 tgwc in 192.168.100.62.22 -> 169.254.120.1.47876: syn 1376689826 ack 1737745617
    4.919022 tgwc out 169.254.120.1.47876 -> 192.168.100.62.22: ack 1376689827
    4.919033 tgwc out 169.254.120.1.47876 -> 192.168.100.62.22: psh 1737745617 ack 1376689827
    4.920199 tgwc in 192.168.100.62.22 -> 169.254.120.1.47876: ack 1737745658
