FortiGate 2600F and 2601F fast path architecture
The FortiGate 2600F and 2601F models feature the following front panel interfaces:
- Two 1 GigE RJ45 (MGMT1 and MGMT2, not connected to the NP7 processor)
- Two 10 GigE SFP+ (HA1 and HA2, not connected to the NP7 processor)
- Sixteen 10 GigE RJ45 (1 to 16)
- Sixteen 10/25 GigE SFP+/SFP28 (17 to 32), interface groups: 17 - 20, 21 - 24, 25 - 28, and 29 - 32
- Four 100/40 GigE QSFP28/QSFP+ (33 to 36)
The FortiGate 2600F and 2601F each include one NP7 processor. All front panel data interfaces and the NP7 processor connect to the integrated switch fabric (ISF). All data traffic passes from the data interfaces through the ISF to the NP7 processor. All supported traffic passing between any two data interfaces can be offloaded by the NP7 processor. Data traffic processed by the CPU takes a dedicated data path through the ISF and the NP7 processor to the CPU.
The MGMT interfaces are not connected to the NP7 processor. Management traffic passes to the CPU over a dedicated management path that is separate from the data path. You can also dedicate separate CPU resources for management traffic to further isolate management processing from data processing (see Improving GUI and CLI responsiveness (dedicated management CPU)).
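As a sketch of how this isolation is enabled, the dedicated management CPU setting lives under config system npu. The option name below follows the referenced section, but verify it against the CLI reference for your FortiOS version:

config system npu
    set dedicated-management-cpu enable
end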
The HA interfaces are also not connected to the NP7 processor. To help provide better HA stability and resiliency, HA traffic uses a dedicated physical control path that provides HA control traffic separation from data traffic processing.
The separation of management and HA traffic from data traffic keeps management and HA traffic from affecting the stability and performance of data traffic processing.
You can use the following command to display the FortiGate 2600F or 2601F NP7 configuration. The command output shows that a single NP7, named NP#0, is connected to all interfaces. This interface-to-NP7 mapping is also shown in the diagram above.
diagnose npu np7 port-list
Front Panel Port:
Name     Max_speed(Mbps) Dflt_speed(Mbps) NP_group        Switch_id SW_port_id SW_port_name
-------- --------------- ---------------- --------------- --------- ---------- ------------
port1    10000           10000            NP#0            0         54         ge4
port2    10000           10000            NP#0            0         53         ge3
port3    10000           10000            NP#0            0         56         ge6
port4    10000           10000            NP#0            0         55         ge5
port5    10000           10000            NP#0            0         58         ge7
port6    10000           10000            NP#0            0         57         xe25
port7    10000           10000            NP#0            0         60         ge9
port8    10000           10000            NP#0            0         59         ge8
port9    10000           10000            NP#0            0         7          xe6
port10   10000           10000            NP#0            0         8          xe7
port11   10000           10000            NP#0            0         5          xe4
port12   10000           10000            NP#0            0         6          xe5
port13   10000           10000            NP#0            0         11         ge1
port14   10000           10000            NP#0            0         12         ge2
port15   10000           10000            NP#0            0         9          ge0
port16   10000           10000            NP#0            0         10         xe8
port17   25000           10000            NP#0            0         15         xe11
port18   25000           10000            NP#0            0         16         xe12
port19   25000           10000            NP#0            0         13         xe9
port20   25000           10000            NP#0            0         14         xe10
port21   25000           10000            NP#0            0         19         xe15
port22   25000           10000            NP#0            0         20         xe16
port23   25000           10000            NP#0            0         17         xe13
port24   25000           10000            NP#0            0         18         xe14
port25   25000           10000            NP#0            0         23         xe19
port26   25000           10000            NP#0            0         24         xe20
port27   25000           10000            NP#0            0         21         xe17
port28   25000           10000            NP#0            0         22         xe18
port29   25000           10000            NP#0            0         27         xe23
port30   25000           10000            NP#0            0         28         xe24
port31   25000           10000            NP#0            0         25         xe21
port32   25000           10000            NP#0            0         26         xe22
port33   100000          100000           NP#0            0         33         ce1
port34   100000          100000           NP#0            0         29         ce0
port35   100000          100000           NP#0            0         37         ce2
port36   100000          100000           NP#0            0         41         ce3
-------- --------------- ---------------- --------------- --------- ---------- ------------
NP Port:
Name   Switch_id SW_port_id SW_port_name
------ --------- ---------- ------------
np0_0  0         45         ce4
np0_1  0         49         ce5
------ --------- ---------- ------------
* Max_speed: Maximum speed, Dflt_speed: Default speed
* SW_port_id: Switch port ID, SW_port_name: Switch port name
The command output also shows the maximum and default speeds of each interface.
The NP7 processor has a bandwidth capacity of 200 Gbps. You can see from the command output that if all interfaces were operating at their maximum speeds, the NP7 processor would not be able to offload all of the traffic.
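As a back-of-the-envelope check, the maximum speeds from the port list above (sixteen 10 GigE, sixteen 25 GigE, and four 100 GigE data interfaces) add up to well over the NP7 capacity:

```python
# Total front-panel data bandwidth at maximum interface speeds,
# taken from the port list output (MGMT and HA ports excluded).
sixteen_10g = 16 * 10    # ports 1-16, 10 Gbps each
sixteen_25g = 16 * 25    # ports 17-32 at their 25 Gbps maximum
four_100g = 4 * 100      # ports 33-36, 100 Gbps each

total_gbps = sixteen_10g + sixteen_25g + four_100g
np7_capacity_gbps = 200  # single NP7 processor

print(total_gbps)                          # -> 960
print(total_gbps / np7_capacity_gbps)      # -> 4.8 (oversubscription ratio)
```

At nearly 5:1 oversubscription, offload capacity is shared, which is why the per-link NPU port mapping described below can matter for high-bandwidth interfaces.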
The FortiGate-2600F and 2601F can be licensed for hyperscale firewall support; see the Hyperscale Firewall Guide.
Interface groups and changing data interface speeds
FortiGate-2600F and 2601F front panel data interfaces 17 to 32 are divided into the following groups:
- port17 - port20
- port21 - port24
- port25 - port28
- port29 - port32
All of the interfaces in a group operate at the same speed. Changing the speed of an interface changes the speeds of all of the interfaces in the same group. For example, if you change the speed of port26 from 10Gbps to 25Gbps, the speeds of port25 to port28 are also changed to 25Gbps.
As another example, the default speed of the port25 to port32 interfaces is 10Gbps. If you want to install 25GigE transceivers in port25 to port32 to connect all of these data interfaces to 25Gbps networks, you can enter the following from the CLI:
config system interface
edit port25
set speed 25000full
next
edit port29
set speed 25000full
end
Every time you change a data interface speed, the CLI confirms the range of interfaces affected by the change when you enter the end command. For example, if you change the speed of port29, the following message appears:
config system interface
edit port29
set speed 25000full
end
port29-port32 speed will be changed to 25000full due to hardware limit.
Do you want to continue? (y/n)
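The grouping rule follows a simple pattern: ports 17 to 32 are grouped in consecutive blocks of four. This illustrative sketch (not a FortiOS command, just the arithmetic) returns the full group affected by a speed change on any one port:

```python
def interface_group(port: int) -> tuple[int, int]:
    """Return the (first, last) ports of the speed group containing
    a FortiGate 2600F/2601F SFP28 data interface (ports 17-32).
    All interfaces in a group must operate at the same speed."""
    if not 17 <= port <= 32:
        raise ValueError("only ports 17-32 belong to interface groups")
    first = 17 + ((port - 17) // 4) * 4   # start of this block of four
    return first, first + 3

# Changing the speed of port26 also changes port25 to port28:
print(interface_group(26))  # -> (25, 28)
```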
Configuring NPU port mapping
You can use the following command to configure FortiGate-2600F and 2601F NPU port mapping:
config system npu
config port-npu-map
edit <interface-name>
set npu-group-index <index>
end
You can use the port map to assign data interfaces to NP7 links.
Each NP7 has two 100-Gigabit KR links, numbered 0 and 1. Traffic passes to the NP7 over these links. By default the two links operate as a LAG that distributes sessions to the NP7 processor. You can configure the NPU port map to assign interfaces to use one or the other of the NP7 links instead of sending sessions over the LAG.
<index> varies depending on the NP7 processors available in your FortiGate. For the FortiGate-2600F, <index> can be 0, 1, or 2:
- 0, assign the interface to NP#0. This is the default: the interface is connected to the LAG and traffic from the interface is distributed to both links.
- 1, assign the interface to NP#0-link0, connecting the interface to NP7 link 0. Traffic from the interface is sent to link 0.
- 2, assign the interface to NP#0-link1, connecting the interface to NP7 link 1. Traffic from the interface is sent to link 1.
For example, use the following syntax to assign the FortiGate-2600F front panel 100Gigabit interfaces 33 and 34 to NP#0-link0 and interfaces 35 and 36 to NP#0-link1. The resulting configuration splits traffic from these 100Gigabit interfaces between the two NP7 links:
config system npu
config port-npu-map
edit port33
set npu-group-index 1
next
edit port34
set npu-group-index 1
next
edit port35
set npu-group-index 2
next
edit port36
set npu-group-index 2
end
end
You can use the diagnose npu np7 port-list command to see the current NPU port map configuration. While the FortiGate-2600F or 2601F is processing traffic, you can use the diagnose npu np7 cgmac-stats <npu-id> command to show how traffic is distributed to the NP7 links.

For example, after making the changes described in the example, the NP_group column of the diagnose npu np7 port-list command output for port33 to port36 shows the new mapping:
diagnose npu np7 port-list
Front Panel Port:
Name     Max_speed(Mbps) Dflt_speed(Mbps) NP_group        Switch_id SW_port_id SW_port_name
-------- --------------- ---------------- --------------- --------- ---------- ------------
. . .
port33   100000          100000           NP#0-link0      0         33         ce1
port34   100000          100000           NP#0-link0      0         29         ce0
port35   100000          100000           NP#0-link1      0         37         ce2
port36   100000          100000           NP#0-link1      0         41         ce3