Hardware Acceleration

FortiGate 3500F and 3501F fast path architecture

The FortiGate 3500F and 3501F each include three NP7 processors. All front panel data interfaces and the NP7 processors connect to the integrated switch fabric (ISF). All data traffic passes from the data interfaces through the ISF to the NP7 processors. Because of the ISF, all supported traffic passing between any two data interfaces can be offloaded by the NP7 processors. Data traffic processed by the CPU takes a dedicated data path through the ISF and an NP7 processor to the CPU.

The FortiGate 3500F and 3501F models feature the following front panel interfaces:

  • Two 10G/5G/2.5G/1G/100M BASE-T RJ45 (MGMT1 and MGMT2, not connected to the NP7 processors).
  • Thirty-two 25/10/1 GigE SFP28/SFP+/SFP (HA1, HA2, and 1 to 30); the HA interfaces are not connected to the NP7 processors.
  • Six 100/40 GigE QSFP28 (31 to 36). Each of these interfaces can be split into four 25/10/1 GigE SFP28 interfaces.

The MGMT interfaces are not connected to the NP7 processors. Management traffic passes to the CPU over a dedicated management path that is separate from the data path. You can also dedicate separate CPU resources for management traffic to further isolate management processing from data processing (see Improving GUI and CLI responsiveness (dedicated management CPU)).
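
For reference, the following is a minimal sketch of enabling a dedicated management CPU. It assumes that your FortiOS version supports the dedicated-management-cpu option under config system npu; see the referenced section for the exact syntax for your firmware version.

config system npu
    set dedicated-management-cpu enable
end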

The HA interfaces are also not connected to the NP7 processors. To help provide better HA stability and resiliency, HA traffic uses a dedicated physical control path that provides HA control traffic separation from data traffic processing.

The separation of management and HA traffic from data traffic keeps management and HA traffic from affecting the stability and performance of data traffic processing.

You can use the following command to display the FortiGate 3500F and 3501F NP7 configuration. The command output shows that all three NP7 processors are connected to all of the data interfaces.

diagnose npu np7 port-list
Front Panel Port:
Name     Max_speed(Mbps) Dflt_speed(Mbps) NP_group        Switch_id SW_port_id SW_port_name 
-------- --------------- ---------------  --------------- --------- ---------- ------------ 
port1    25000           10000            NP#0-2          0         23         xe2          
port2    25000           10000            NP#0-2          0         24         xe3          
port3    25000           10000            NP#0-2          0         29         xe4          
port4    25000           10000            NP#0-2          0         30         xe5          
port5    25000           10000            NP#0-2          0         31         xe6          
port6    25000           10000            NP#0-2          0         32         xe7          
port7    25000           10000            NP#0-2          0         33         xe8          
port8    25000           10000            NP#0-2          0         34         xe9          
port9    25000           10000            NP#0-2          0         35         xe10         
port10   25000           10000            NP#0-2          0         36         xe11         
port11   25000           10000            NP#0-2          0         41         xe12         
port12   25000           10000            NP#0-2          0         42         xe13         
port13   25000           10000            NP#0-2          0         43         xe14         
port14   25000           10000            NP#0-2          0         44         xe15         
port15   25000           10000            NP#0-2          0         49         xe16         
port16   25000           10000            NP#0-2          0         50         xe17         
port17   25000           10000            NP#0-2          0         51         xe18         
port18   25000           10000            NP#0-2          0         52         xe19         
port19   25000           10000            NP#0-2          0         61         xe24         
port20   25000           10000            NP#0-2          0         62         xe25         
port21   25000           10000            NP#0-2          0         63         xe26         
port22   25000           10000            NP#0-2          0         64         xe27         
port23   25000           10000            NP#0-2          0         57         xe20         
port24   25000           10000            NP#0-2          0         58         xe21         
port25   25000           10000            NP#0-2          0         59         xe22         
port26   25000           10000            NP#0-2          0         60         xe23         
port27   25000           10000            NP#0-2          0         71         xe29         
port28   25000           10000            NP#0-2          0         72         xe30         
port29   25000           10000            NP#0-2          0         73         xe31         
port30   25000           10000            NP#0-2          0         74         xe32     
port31   100000          100000           NP#0-2          0         79         ce4          
port32   100000          100000           NP#0-2          0         67         ce3          
port33   100000          100000           NP#0-2          0         95         ce6          
port34   100000          100000           NP#0-2          0         87         ce5          
port35   100000          100000           NP#0-2          0         123        ce10         
port36   100000          100000           NP#0-2          0         99         ce7          
-------- --------------- ---------------  --------------- --------- ---------- ------------ 

NP Port:
Name   Switch_id SW_port_id SW_port_name 
------ --------- ---------- ------------ 
np0_0  0         5          ce1          
np0_1  0         13         ce2          
np1_0  0         127        ce11         
np1_1  0         1          ce0          
np2_0  0         107        ce8          
np2_1  0         115        ce9          
------ --------- ---------- ------------ 
* Max_speed: Maximum speed, Dflt_speed: Default speed
* SW_port_id: Switch port ID, SW_port_name: Switch port name

The command output also shows the maximum and default speeds of each interface.

The integrated switch fabric distributes sessions from the data interfaces to the NP7 processors. The three NP7 processors have a combined bandwidth capacity of 200 Gbps x 3 = 600 Gbps. If all interfaces were operating at their maximum bandwidth, the NP7 processors would not be able to offload all of the traffic. You can use NPU port mapping to control how sessions are distributed to the NP7 processors.
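
For example, using the maximum interface speeds shown in the command output above, the 30 SFP28 data interfaces at 25 Gbps plus the six QSFP28 interfaces at 100 Gbps add up to (30 x 25) + (6 x 100) = 1350 Gbps of front panel data capacity, more than twice the 600 Gbps combined NP7 capacity.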

You can add LAGs to improve performance. For details, see Increasing NP7 offloading capacity using link aggregation groups (LAGs).
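
For reference, a LAG is configured as an aggregate interface. The following is a minimal sketch only (the interface name and member ports are placeholders); see the LAG section referenced above for NP7-specific guidance:

config system interface
    edit "np7-lag1"
        set type aggregate
        set member port1 port2
    next
end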

The FortiGate-3500F and 3501F can be licensed for hyperscale firewall support. For details, see the Hyperscale Firewall Guide.

Since the FortiGate-3500F and 3501F have three NP7 processors, the following options are available to configure how the integrated switch fabric (ISF) distributes sessions to the NP7 processors:

config system npu
    set hash-config {src-dst-ip | src-ip}
end

For more information, see hash-config {src-dst-ip | 5-tuple | src-ip}.

Splitting the port31 to port36 interfaces

You can use the following command to split each of the FortiGate 3500F and 3501F 100/40 GigE QSFP28 interfaces (31 to 36, named port31 to port36) into four 25/10/1 GigE SFP28 interfaces. For example, to split interfaces 33 and 36 (port33 and port36), enter the following command:

config system global
    set split-port port33 port36
end

After you enter the command, the FortiGate 3500F or 3501F restarts. When it starts up:

  • The port33 interface has been replaced by four SFP28 interfaces named port33/1 to port33/4.

  • The port36 interface has been replaced by four SFP28 interfaces named port36/1 to port36/4.

Note

A configuration change that causes a FortiGate to restart can disrupt the operation of an FGCP cluster. If possible, you should make this configuration change to the individual FortiGates before setting up the cluster. If the cluster is already operating, you should temporarily remove the secondary FortiGate(s) from the cluster, change the configuration of the individual FortiGates and then re-form the cluster. You can remove FortiGate(s) from a cluster using the Remove Device from HA cluster button on the System > HA GUI page. For more information, see Disconnecting a FortiGate.

By default, the speed of each split interface is set to 10000full (10GigE). These interfaces can operate as 25GigE, 10GigE, or 1GigE interfaces depending on the transceivers and breakout cables. You can use the config system interface command to change the speeds of the split interfaces.

If you set the speed of one of the split interfaces to 25000full (25GigE), all of the split interfaces change to operate at this speed (no restart required). If the split interfaces are set to 25000full and you change the speed of one of them to 10000full (10GigE), they all change to 10000full (no restart required). When the interfaces are operating at 10000full, you can change the speed of individual interfaces to 1000full (1GigE).
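
For example, to change the speed of the port33 split interfaces created above to 25GigE (as described above, changing the speed of one split interface changes all of them):

config system interface
    edit port33/1
        set speed 25000full
    next
end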

Configuring FortiGate-3500F and 3501F NPU port mapping

The default FortiGate-3500F and 3501F port mapping configuration results in sessions passing from the front panel data interfaces to the integrated switch fabric (ISF). The ISF distributes these sessions among the NP7 processors. Each NP7 processor connects to the ISF with a LAG that consists of two 100-Gigabit interfaces. The ISF distributes sessions to the LAGs, and each LAG distributes sessions between the two interfaces connected to its NP7 processor.

You can use NPU port mapping to override how data network interface sessions are distributed to NP7 processors. For example, you can set up NPU port mapping to send all traffic from a front panel data interface or LAG to a specific NP7 processor or group of NP7 processors, or a single NP7 link.

Note

On the FortiGate 3500F and 3501F, you can configure ISF load balancing to change the algorithm that the ISF uses to distribute data interface sessions to NP7 processors. ISF load balancing is configured for an interface and distributes sessions from that interface to all NP7 processor LAGs. If you have configured NPU port mapping, ISF load balancing distributes sessions from the interface to the NP7 processors and links in the NPU port mapping configuration for that interface. See Configuring ISF load balancing.

Use the following command to configure FortiGate-3500F and 3501F NPU port mapping:

config system npu-post
    config port-npu-map
        edit <interface-name>
            set npu-group {All-NP | NP0 | NP1 | NP2 | NP0-to-NP1 | NP1-to-NP2 | NP0-link0 | NP0-link1 | NP1-link0 | NP1-link1 | NP2-link0 | NP2-link1} ...
        next
    end
end

<interface-name> can be a physical interface or a LAG. The npu-group options are:

  • All-NP (the default): distribute sessions among all three NP7 LAGs.
  • NP0: distribute sessions to the LAG connected to NP0.
  • NP1: distribute sessions to the LAG connected to NP1.
  • NP2: distribute sessions to the LAG connected to NP2.
  • NP0-to-NP1: distribute sessions between the LAGs connected to NP0 and NP1.
  • NP1-to-NP2: distribute sessions between the LAGs connected to NP1 and NP2.
  • NP0-link0: send sessions to NP0 link 0.
  • NP0-link1: send sessions to NP0 link 1.
  • NP1-link0: send sessions to NP1 link 0.
  • NP1-link1: send sessions to NP1 link 1.
  • NP2-link0: send sessions to NP2 link 0.
  • NP2-link1: send sessions to NP2 link 1.

You can add multiple group names to map traffic to multiple groups of NP7 processors and NP7 processor links. For example, use the following command to distribute sessions from port23 to NP0, NP1, and NP2-link1:

config system npu-post
    config port-npu-map
        edit port23
            set npu-group NP0 NP1 NP2-link1
        next
    end
end

Group names can't overlap. For example, you can't map an interface to both NP0 and NP0-link1.

For example, use the following syntax to assign the FortiGate-3500F port21 and port22 interfaces to NP0 and NP1, and port23 and port24 to NP2:

config system npu-post
    config port-npu-map
        edit port21
            set npu-group NP0-to-NP1
        next
        edit port22
            set npu-group NP0-to-NP1
        next
        edit port23
            set npu-group NP2
        next
        edit port24
            set npu-group NP2
        next
    end
end

While the FortiGate-3500F or 3501F is processing traffic, you can use the diagnose npu np7 cgmac-stats <npu-id> command to show how traffic is distributed to the NP7 links.
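
For example, because the NP7 processors are numbered 0 to 2 (np0 to np2 in the port-list output above), entering diagnose npu np7 cgmac-stats 0 displays the statistics for the two links connected to NP0.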
