
Hardware Acceleration

FortiGate 1000F and 1001F fast path architecture


The FortiGate 1000F and 1001F each include one NP7 processor. All front panel data interfaces and the NP7 processor connect to the integrated switch fabric (ISF). All data traffic passes from the data interfaces through the ISF to the NP7 processor. All supported traffic passing between any two data interfaces can be offloaded by the NP7 processor. Data traffic processed by the CPU takes a dedicated data path through the ISF and the NP7 processor to the CPU.

The FortiGate 1000F and 1001F models feature the following front panel interfaces:

  • One 10/100/1000/2.5GBASE-T RJ45 (HA) not connected to the NP7 processor.
  • One 10/100/1000BASE-T RJ45 (MGMT) not connected to the NP7 processor.
  • Eight 10G/5G/2.5G/1G/100M BASE-T RJ45 (1 to 8).
  • Sixteen 10/1 GigE SFP+/SFP (9 to 24).
  • Eight 25/10/1 GigE SFP28/SFP+/SFP (25 to 32), interface groups: 25 - 28, 29 - 32.
  • Two 100/40 GigE QSFP28/QSFP+ (33 and 34). Both of these interfaces can be split into four 25/10/1 GigE SFP28 interfaces.

The MGMT interface is not connected to the NP7 processor. Management traffic passes to the CPU over a dedicated management path that is separate from the data path. You can also dedicate separate CPU resources for management traffic to further isolate management processing from data processing (see Improving GUI and CLI responsiveness (dedicated management CPU)).
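As a sketch only: on NP7 platforms, the dedicated management CPU feature is typically enabled from the CLI. The exact option name and its availability vary by FortiOS version, so verify it against the CLI reference for your firmware before relying on it:

config system npu
    set dedicated-management-cpu enable
end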

The HA interface is also not connected to the NP7 processor. To help provide better HA stability and resiliency, HA traffic uses a dedicated physical control path that provides HA control traffic separation from data traffic processing.

The separation of management and HA traffic from data traffic keeps management and HA traffic from affecting the stability and performance of data traffic processing.

You can use the following command to display the FortiGate 1000F or 1001F NP7 configuration. The command output shows that a single NP7, named NP#0, is connected to all interfaces. This interface-to-NP7 mapping is also shown in the diagram above.

diagnose npu np7 port-list 
Front Panel Port:
Name     Max_speed(Mbps) Dflt_speed(Mbps) NP_group        Switch_id SW_port_id SW_port_name 
-------- --------------- ---------------  --------------- --------- ---------- ------------ 
port1    10000           10000            NP#0            0         58                      
port2    10000           10000            NP#0            0         59                      
port3    10000           10000            NP#0            0         56                      
port4    10000           10000            NP#0            0         57                      
port5    10000           10000            NP#0            0         54                      
port6    10000           10000            NP#0            0         55                      
port7    10000           10000            NP#0            0         52                      
port8    10000           10000            NP#0            0         53                      
port9    10000           10000            NP#0            0         51                      
port10   10000           10000            NP#0            0         50                      
port11   10000           10000            NP#0            0         49                      
port12   10000           10000            NP#0            0         48                      
port13   10000           10000            NP#0            0         35                      
port14   10000           10000            NP#0            0         34                      
port15   10000           10000            NP#0            0         33                      
port16   10000           10000            NP#0            0         32                      
port17   10000           10000            NP#0            0         31                      
port18   10000           10000            NP#0            0         30                      
port19   10000           10000            NP#0            0         29                      
port20   10000           10000            NP#0            0         28                      
port21   10000           10000            NP#0            0         27                      
port22   10000           10000            NP#0            0         26                      
port23   10000           10000            NP#0            0         25                      
port24   10000           10000            NP#0            0         24                      
port25   25000           10000            NP#0            0         23                      
port26   25000           10000            NP#0            0         22                      
port27   25000           10000            NP#0            0         20                      
port28   25000           10000            NP#0            0         21                      
port29   25000           10000            NP#0            0         19                      
port30   25000           10000            NP#0            0         17                      
port31   25000           10000            NP#0            0         18                      
port32   25000           10000            NP#0            0         16                      
port33   100000          100000           NP#0            0         12                      
port34   100000          100000           NP#0            0         8                       
-------- --------------- ---------------  --------------- --------- ---------- ------------ 

NP Port:
Name   Switch_id SW_port_id SW_port_name 
------ --------- ---------- ------------ 
np0_0  0         4                       
np0_1  0         0                       
------ --------- ---------- ------------ 
* Max_speed: Maximum speed, Dflt_speed: Default speed
* SW_port_id: Switch port ID, SW_port_name: Switch port name

The command output also shows the maximum speed of each interface. In addition, interfaces 1 to 24 are connected to one switch and interfaces 25 to 34 are connected to another switch. Together, these two switches make up the integrated switch fabric, which connects the interfaces to the NP7 processor, the CPU, and the four CP9 processors.

The NP7 processor has a bandwidth capacity of 200 Gbps. As the command output shows, if all interfaces were operating at their maximum speeds, the combined bandwidth would exceed this capacity and the NP7 processor would not be able to offload all the traffic.
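The oversubscription claim above is easy to verify with some rough arithmetic. The following sketch sums the maximum speeds of the data interfaces listed earlier and compares the total against the NP7 processor's 200 Gbps capacity:

```python
# Rough oversubscription check: aggregate maximum front panel bandwidth
# versus the NP7 processor's 200 Gbps capacity. Port counts and maximum
# speeds are taken from the port-list output above.
NP7_CAPACITY_GBPS = 200

# (number of ports, max speed in Gbps) for each front panel interface group
port_groups = [
    (8, 10),    # ports 1-8: 10GBASE-T RJ45
    (16, 10),   # ports 9-24: SFP+/SFP
    (8, 25),    # ports 25-32: SFP28
    (2, 100),   # ports 33-34: QSFP28
]

total_gbps = sum(count * speed for count, speed in port_groups)
print(f"Aggregate maximum: {total_gbps} Gbps")                     # 640 Gbps
print(f"Oversubscription: {total_gbps / NP7_CAPACITY_GBPS:.1f}x")  # 3.2x
```

At their maximum speeds, the data interfaces can present 3.2 times more traffic than the NP7 processor can offload.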

The FortiGate-1000F and 1001F can be licensed for hyperscale firewall support; see the Hyperscale Firewall Guide.

Interface groups and changing data interface speeds

FortiGate-1000F and 1001F front panel data interfaces 25 to 32 are divided into the following groups:

  • port25 - port28
  • port29 - port32

All of the interfaces in a group operate at the same speed. Changing the speed of one interface changes the speeds of all of the interfaces in the same group. For example, if you change the speed of port26 from 10Gbps to 25Gbps, the speeds of port25 to port28 are also changed to 25Gbps.

As another example, the default speed of the port25 to port32 interfaces is 10Gbps. If you want to install 25GigE transceivers in port25 to port32 to connect all of these data interfaces to 25Gbps networks, you can enter the following from the CLI (setting one interface from each group is enough, because the change applies to the whole group):

config system interface
    edit port25
        set speed 25000full
    next
    edit port29
        set speed 25000full
    end

Every time you change a data interface speed, when you enter the end command the CLI confirms the range of interfaces affected by the change. For example, if you change the speed of port29, the following message appears:

config system interface
    edit port29
        set speed 25000full
    end

port29-port32 speed will be changed to 25000full due to hardware limit.
Do you want to continue? (y/n)

Splitting the port33 and port34 interfaces

You can use the following command to split each FortiGate 1000F and 1001F 100/40 GigE QSFP28/QSFP+ interface (33 and 34, named port33 and port34) into four 25/10/1 GigE SFP28 interfaces. For example, to split interface 33 (port33), enter the following command:

config system global
    set split-port port33
end

The FortiGate restarts, and when it starts up the port33 interface has been replaced by four SFP28 interfaces named port33/1 to port33/4.

Note

A configuration change that causes a FortiGate to restart can disrupt the operation of an FGCP cluster. If possible, you should make this configuration change to the individual FortiGates before setting up the cluster. If the cluster is already operating, you should temporarily remove the secondary FortiGate(s) from the cluster, change the configuration of the individual FortiGates and then re-form the cluster. You can remove FortiGate(s) from a cluster using the Remove Device from HA cluster button on the System > HA GUI page. For more information, see Disconnecting a FortiGate.

By default, the speed of each split interface is set to 10000full (10GigE). These interfaces can operate as 25GigE, 10GigE, or 1GigE interfaces depending on the transceivers and breakout cables. You can use the config system interface command to change the speeds of the split interfaces.

If you set the speed of one of the split interfaces to 25000full (25GigE), all of the interfaces are changed to operate at this speed (no restart required). If the split interfaces are set to 25000full and you change the speed of one of them to 10000full (10GigE) they are all changed to 10000full (no restart required). When the interfaces are operating at 10000full, you can change the speeds of individual interfaces to operate at 1000full (1GigE).
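The behavior described above can be sketched as follows. To switch all four split interfaces from the default 10000full to 25GigE, setting the speed on one of them is enough (the port33/1 name assumes port33 was split as shown earlier):

config system interface
    edit port33/1
        set speed 25000full
    end

After this change, port33/1 to port33/4 all operate at 25000full, with no restart required.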

Configuring NPU port mapping

You can use the following command to configure FortiGate-1000F and 1001F NPU port mapping:

config system npu
    config port-npu-map
        edit <interface-name>
            set npu-group-index <index>
        end
end

You can use the port map to assign data interfaces to NP7 links.

Each NP7 has two 100-Gigabit KR links, numbered 0 and 1. Traffic passes to the NP7 over these links. By default, the two links operate as a LAG that distributes sessions to the NP7 processor. You can configure the NPU port map to assign interfaces to one or the other of the NP7 links instead of sending their sessions over the LAG.

<index> varies depending on the NP7 processors available in your FortiGate.

For the FortiGate-1000F, <index> can be 0, 1, or 2:

  • 0 (the default): assign the interface to NP#0. The interface is connected to the LAG, and its traffic is distributed across both links.
  • 1: assign the interface to NP#0-link0. Traffic from the interface is sent to NP7 link 0.
  • 2: assign the interface to NP#0-link1. Traffic from the interface is sent to NP7 link 1.

For example, use the following syntax to assign FortiGate-1000F front panel 40-Gigabit interface 33 to NP#0-link0 and interface 34 to NP#0-link1. The resulting configuration splits traffic from the 40-Gigabit interfaces between the two NP7 links:

config system npu
    config port-npu-map
        edit port33
            set npu-group-index 1
        next
        edit port34
            set npu-group-index 2
        end
end

You can use the diagnose npu np7 port-list command to see the current NPU port map configuration. While the FortiGate-1000F or 1001F is processing traffic, you can use the diagnose npu np7 cgmac-stats <npu-id> command to show how traffic is distributed to the NP7 links.

For example, after making the changes described in the example, the NP_group column of the diagnose npu np7 port-list command output for port33 and port34 shows the new mapping:

diagnose npu np7 port-list 
Front Panel Port:
Name     Max_speed(Mbps) Dflt_speed(Mbps) NP_group        Switch_id SW_port_id SW_port_name 
-------- --------------- ---------------  --------------- --------- ---------- ------------ 
.
.
.       
port33   100000          100000           NP#0-link0      0         12                      
port34   100000          100000           NP#0-link1      0         8                       
-------- --------------- ---------------  --------------- --------- ---------- ------------ 
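If you need to check the mapping programmatically, the row layout above is simple enough to parse with whitespace splitting. This is a minimal sketch, not an official tool; the column layout is inferred from the sample output in this document and may differ between FortiOS builds:

```python
# Minimal sketch: extract each port's NP_group assignment from the
# "Front Panel Port" rows of `diagnose npu np7 port-list` output.
# The sample rows below are taken from the output shown above.
sample = """\
port33   100000          100000           NP#0-link0      0         12
port34   100000          100000           NP#0-link1      0         8
"""

def parse_port_rows(text):
    """Return a dict mapping port name to NP_group (the 4th column)."""
    mapping = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[0].startswith("port"):
            mapping[fields[0]] = fields[3]
    return mapping

mapping = parse_port_rows(sample)
print(mapping)  # {'port33': 'NP#0-link0', 'port34': 'NP#0-link1'}
```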
