FortiGate 1800F and 1801F fast path architecture
The FortiGate 1800F and 1801F models feature the following front panel interfaces:
- Two 10/100/1000BASE-T Copper (MGMT1 and MGMT2, not connected to the NP7 processor)
- Two 10 GigE SFP+ (HA1 and HA2, not connected to the NP7 processor)
- Sixteen 10/100/1000BASE-T Copper (1 to 16)
- Eight 1 GigE SFP (17 to 24)
- Twelve 10/25 GigE SFP+/SFP28 (25 to 36), interface groups: 25 - 28, 29 - 32, and 33 - 36
- Four 40 GigE QSFP+ (37 to 40)
The FortiGate 1800F and 1801F each include one NP7 processor. All front panel data interfaces and the NP7 processor connect to the integrated switch fabric (ISF). All data traffic passes from the data interfaces through the ISF to the NP7 processor. All supported traffic passing between any two data interfaces can be offloaded by the NP7 processor. Data traffic processed by the CPU takes a dedicated data path through the ISF and the NP7 processor to the CPU.
The MGMT interfaces are not connected to the NP7 processor. Management traffic passes to the CPU over a dedicated management path that is separate from the data path. You can also dedicate separate CPU resources for management traffic to further isolate management processing from data processing (see Dedicated management CPU).
The HA interfaces are also not connected to the NP7 processor. To help provide better HA stability and resiliency, HA traffic uses a dedicated physical control path that provides HA control traffic separation from data traffic processing.
The separation of management and HA traffic from data traffic keeps management and HA traffic from affecting the stability and performance of data traffic processing.
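To dedicate CPU resources to management traffic, you can enable the dedicated management CPU option. The following is a hedged sketch; the exact option names and availability depend on your FortiOS version, so verify against the CLI reference for your firmware:

```
config system npu
set dedicated-management-cpu enable
end
```

With this option enabled, management processing is handled by a reserved CPU, further isolating it from data traffic processing.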
You can use the following command to display the FortiGate 1800F or 1801F NP7 configuration. The command output shows a single NP7 processor, named NP#0, connected to all of the data interfaces. This interface-to-NP7 mapping is also shown in the diagram above.
diagnose npu np7 port-list
name    max_speed(Mbps) np_group        switch_id sw_port_id sw_port_name
------  --------------- --------------- --------- ---------- ------------
port1   1000            NP#0            0         3          ge1
port2   1000            NP#0            0         2          ge0
port3   1000            NP#0            0         5          ge3
port4   1000            NP#0            0         4          ge2
port5   1000            NP#0            0         7          ge5
port6   1000            NP#0            0         6          ge4
port7   1000            NP#0            0         9          ge7
port8   1000            NP#0            0         8          ge6
port9   1000            NP#0            0         11         ge9
port10  1000            NP#0            0         10         ge8
port11  1000            NP#0            0         13         ge11
port12  1000            NP#0            0         12         ge10
port13  1000            NP#0            0         15         ge13
port14  1000            NP#0            0         14         ge12
port15  1000            NP#0            0         17         ge15
port16  1000            NP#0            0         16         ge14
port17  1000            NP#0            0         18         ge16
port18  1000            NP#0            0         19         ge17
port19  1000            NP#0            0         20         ge18
port20  1000            NP#0            0         21         ge19
port21  1000            NP#0            0         22         ge20
port22  1000            NP#0            0         23         ge21
port23  1000            NP#0            0         24         ge22
port24  1000            NP#0            0         25         ge23
port25  25000           NP#0            1         15         xe14
port26  25000           NP#0            1         16         xe15
port27  25000           NP#0            1         13         xe12
port28  25000           NP#0            1         14         xe13
port29  25000           NP#0            1         19         xe18
port30  25000           NP#0            1         20         xe19
port31  25000           NP#0            1         17         xe16
port32  25000           NP#0            1         18         xe17
port33  25000           NP#0            1         23         xe22
port34  25000           NP#0            1         24         xe23
port35  25000           NP#0            1         21         xe20
port36  25000           NP#0            1         22         xe21
port37  40000           NP#0            1         29         xe25
port38  40000           NP#0            1         25         xe24
port39  40000           NP#0            1         33         xe26
port40  40000           NP#0            1         37         xe27
NP PORTS:
name    switch_id sw_port_id sw_port_name
------  --------- ---------- ------------
np0_0   1         41         ce0
np0_1   1         45         ce1
The command output also shows the maximum speed of each interface, and that interfaces 1 to 24 are connected to one switch (switch_id 0) while interfaces 25 to 40 are connected to another (switch_id 1). Together, these two switches make up the integrated switch fabric, which connects the interfaces to the NP7 processor, the CPU, and the four CP9 processors.
The NP7 processor has a bandwidth capacity of 200 Gbps. As the command output shows, if all interfaces were operating at their maximum speeds, the combined bandwidth ((24 x 1 Gbps) + (12 x 25 Gbps) + (4 x 40 Gbps) = 484 Gbps) would exceed this capacity, so the NP7 processor would not be able to offload all of the traffic.
Interface groups and changing data interface speeds
FortiGate-1800F and 1801F front panel data interfaces 25 to 36 are divided into the following groups:
- port25 - port28
- port29 - port32
- port33 - port36
All of the interfaces in a group operate at the same speed. Changing the speed of one interface changes the speeds of all of the interfaces in the same group. For example, if you change the speed of port26 from 10Gbps to 25Gbps, the speeds of port25 to port28 also change to 25Gbps.
As another example, the default speed of the port25 to port36 interfaces is 10Gbps. If you want to install 25GigE transceivers in port29 to port36 to connect these interfaces to 25Gbps networks, you only need to change the speed of one interface in each group. Enter the following from the CLI:
config system interface
edit port29
set speed 25000full
next
edit port33
set speed 25000full
end
Every time you change a data interface speed, when you enter the end command, the CLI confirms the range of interfaces affected by the change. For example, if you change the speed of port29, the following message appears:
config system interface
edit port29
set speed 25000full
end
port29-port32 speed will be changed to 25000full due to hardware limit.
Do you want to continue? (y/n)
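After confirming the change, you can verify the speed that each interface in the group is now operating at. As a hedged illustration (the exact output fields vary by model and firmware), the following command displays details for one of the changed interfaces, including its current speed:

```
get hardware nic port29
```

Checking one interface from each affected group confirms that the speed change was applied group-wide.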
Configuring NPU port mapping
You can use the following command to configure FortiGate-1800F and 1801F NPU port mapping:
config system npu
config port-npu-map
edit <interface-name>
set npu-group-index <index>
end
end
You can use the port map to assign data interfaces to NP7 links.
Each NP7 has two 100-Gigabit KR links, numbered 0 and 1. Traffic passes to the NP7 over these links. By default the two links operate as a LAG that distributes sessions to the NP7 processor. You can configure the NPU port map to assign interfaces to use one or the other of the NP7 links instead of sending sessions over the LAG.
<index> varies depending on the NP7 processors available in your FortiGate. For the FortiGate-1800F, <index> can be 0, 1, or 2:
- 0, assign the interface to NP#0 (the default). The interface is connected to the LAG, and traffic from the interface is distributed to both links.
- 1, assign the interface to NP#0-link0, connecting the interface to NP7 link 0. Traffic from the interface is sent to link 0.
- 2, assign the interface to NP#0-link1, connecting the interface to NP7 link 1. Traffic from the interface is sent to link 1.
For example, use the following syntax to assign the FortiGate-1800F front panel 40GigE interfaces 37 and 38 to NPU link 0 and interfaces 39 and 40 to NPU link 1. The resulting configuration splits traffic from the 40GigE interfaces between the two NP7 links:
config system npu
config port-npu-map
edit port37
set npu-group-index 1
next
edit port38
set npu-group-index 1
next
edit port39
set npu-group-index 2
next
edit port40
set npu-group-index 2
end
end
You can use the diagnose npu np7 port-list command to see the current NPU port map configuration. While the FortiGate-1800F or 1801F is processing traffic, you can use the diagnose npu np7 cgmac-stats <npu-id> command to show how traffic is distributed to the NP7 links.
For example, after making the changes described in the example, the np_group column of the diagnose npu np7 port-list command output for port37 to port40 shows the new mapping:
diagnose npu np7 port-list
name    max_speed(Mbps) np_group        switch_id sw_port_id sw_port_name
------  --------------- --------------- --------- ---------- ------------
.
.
.
port37  40000           NP#0-link0      1         29         xe25
port38  40000           NP#0-link0      1         25         xe24
port39  40000           NP#0-link1      1         33         xe26
port40  40000           NP#0-link1      1         37         xe27