FortiGate 4400F and 4401F fast path architecture
The FortiGate 4400F and 4401F each include six NP7 processors. All front panel data interfaces and the NP7 processors connect to the integrated switch fabric (ISF), so all data traffic passes from the data interfaces through the ISF to the NP7 processors, and all supported traffic passing between any two data interfaces can be offloaded by the NP7 processors. Data traffic that must be processed by the CPU takes a dedicated data path through the ISF and an NP7 processor to the CPU.
The FortiGate 4400F and 4401F models feature the following front panel interfaces:
- Two 1GigE RJ45 (MGMT1 and MGMT2, not connected to the NP7 processors).
- Twenty 25/10/1 GigE SFP28 (HA1, HA2, AUX1, AUX2, and 1 to 16). The HA1, HA2, AUX1, and AUX2 interfaces are not connected to the NP7 processors. Interface groups: HA1, HA2, AUX1, and AUX2; 1 - 4; 5 - 8; 9 - 12; and 13 - 16.
- Twelve 100/40 GigE QSFP28 (17 to 28). Each of these interfaces can be split into four 25/10/1 GigE SFP28 interfaces.
The MGMT interfaces are not connected to the NP7 processors. Management traffic passes to the CPU over a dedicated management path that is separate from the data path. You can also dedicate separate CPU resources for management traffic to further isolate management processing from data processing (see Improving GUI and CLI responsiveness (dedicated management CPU)).
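For example, the following is a minimal sketch of dedicating CPU resources to management traffic; the exact option names can vary by FortiOS version, so confirm them against Improving GUI and CLI responsiveness (dedicated management CPU):
config system npu
    set dedicated-management-cpu enable
end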
The HA interfaces are also not connected to the NP7 processors. To help provide better HA stability and resiliency, HA traffic uses a dedicated physical control path that separates HA control traffic from data traffic processing.
The AUX interfaces are also not connected to the NP7 processors. Fortinet recommends using these interfaces for HA session synchronization.
The separation of management and HA traffic from data traffic keeps management and HA traffic from affecting the stability and performance of data traffic processing.
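For example, assuming the AUX interfaces appear in the configuration as aux1 and aux2 (confirm the interface names on your unit), the following sketch assigns them as HA session synchronization interfaces:
config system ha
    set session-sync-dev aux1 aux2
end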
You can use the following command to display the FortiGate 4400F and 4401F NP7 configuration. The command output shows that all six NP7 processors are connected to all data interfaces.
diagnose npu np7 port-list
Front Panel Port:
Name     Max_speed(Mbps) Dflt_speed(Mbps) NP_group        Switch_id SW_port_id SW_port_name
-------- --------------- --------------- --------------- --------- ---------- ------------
port1    25000           10000           NP#0-5          0         37         xe12
port2    25000           10000           NP#0-5          0         38         xe13
port3    25000           10000           NP#0-5          0         39         xe14
port4    25000           10000           NP#0-5          0         40         xe15
port5    25000           10000           NP#0-5          0         41         xe16
port6    25000           10000           NP#0-5          0         42         xe17
port7    25000           10000           NP#0-5          0         43         xe18
port8    25000           10000           NP#0-5          0         44         xe19
port9    25000           10000           NP#0-5          0         45         xe20
port10   25000           10000           NP#0-5          0         46         xe21
port11   25000           10000           NP#0-5          0         47         xe22
port12   25000           10000           NP#0-5          0         48         xe23
port13   25000           10000           NP#0-5          0         49         xe24
port14   25000           10000           NP#0-5          0         50         xe25
port15   25000           10000           NP#0-5          0         51         xe26
port16   25000           10000           NP#0-5          0         52         xe27
port17   100000          100000          NP#0-5          0         57         ce7
port18   100000          100000          NP#0-5          0         53         ce6
port19   100000          100000          NP#0-5          0         67         ce9
port20   100000          100000          NP#0-5          0         61         ce8
port21   100000          100000          NP#0-5          0         75         ce11
port22   100000          100000          NP#0-5          0         71         ce10
port23   100000          100000          NP#0-5          0         83         ce13
port24   100000          100000          NP#0-5          0         79         ce12
port25   100000          100000          NP#0-5          0         91         ce15
port26   100000          100000          NP#0-5          0         87         ce14
port27   100000          100000          NP#0-5          0         99         ce17
port28   100000          100000          NP#0-5          0         95         ce16
-------- --------------- --------------- --------------- --------- ---------- ------------
NP Port:
Name   Switch_id SW_port_id SW_port_name
------ --------- ---------- ------------
np0_0  0         5          ce0
np0_1  0         9          ce1
np1_0  0         13         ce2
np1_1  0         17         ce3
np2_0  0         21         ce4
np2_1  0         25         ce5
np3_0  0         115        ce21
np3_1  0         111        ce20
np4_0  0         107        ce19
np4_1  0         103        ce18
np5_0  0         123        ce23
np5_1  0         119        ce22
------ --------- ---------- ------------
* Max_speed: Maximum speed, Dflt_speed: Default speed
* SW_port_id: Switch port ID, SW_port_name: Switch port name
The command output also shows the maximum and default speeds of each interface.
The integrated switch fabric distributes sessions from the data interfaces to the NP7 processors. The six NP7 processors have a total bandwidth capacity of 200 Gbps x 6 = 1,200 Gbps. If all interfaces were operating at their maximum bandwidth, the NP7 processors would not be able to offload all of the traffic. You can use NPU port mapping to control how sessions are distributed to the NP7 processors.
You can add LAGs to improve performance. For details, see Increasing NP7 offloading capacity using link aggregation groups (LAGs).
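As a sketch, a LAG is an aggregate interface made up of physical data interfaces; the interface name and member ports below are placeholders chosen for illustration:
config system interface
    edit "np7-lag"
        set vdom root
        set type aggregate
        set member port17 port18
    next
end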
The FortiGate-4400F and 4401F can be licensed for hyperscale firewall support. For details, see the Hyperscale Firewall Guide.
Interface groups and changing data interface speeds
FortiGate-4400F and 4401F front panel data interfaces are divided into the following groups:
- ha1, ha2, aux1, and aux2
- port1 - port4
- port5 - port8
- port9 - port12
- port13 - port16
All of the interfaces in a group operate at the same speed. Changing the speed of an interface changes the speeds of all of the interfaces in the same group. For example, if you want to install 25GigE transceivers in port1 to port8 to convert all of these data interfaces to connect to 25Gbps networks, you can enter the following from the CLI:
config system interface
    edit port1
        set speed 25000full
    next
    edit port5
        set speed 25000full
    end
Every time you change a data interface speed, when you enter the end command, the CLI confirms the range of interfaces affected by the change. For example, if you change the speed of port5, the following message appears:
config system interface
    edit port5
        set speed 25000full
    end
port5-port8 speed will be changed to 25000full due to hardware limit.
Do you want to continue? (y/n)
Splitting the port17 to port28 interfaces
You can use the following command to split each of the FortiGate 4400F or 4401F 100/40 GigE QSFP28 interfaces (17 to 28, named port17 to port28) into four 25/10/1 GigE SFP28 interfaces. For example, to split interfaces 19 and 26 (port19 and port26), enter the following command:
config system global
    set split-port port19 port26
end
The FortiGate 4400F or 4401F restarts. When it starts up:
- The port19 interface has been replaced by four SFP28 interfaces named port19/1 to port19/4.
- The port26 interface has been replaced by four SFP28 interfaces named port26/1 to port26/4.
A configuration change that causes a FortiGate to restart can disrupt the operation of an FGCP cluster. If possible, you should make this configuration change to the individual FortiGates before setting up the cluster. If the cluster is already operating, you should temporarily remove the secondary FortiGate(s) from the cluster, change the configuration of the individual FortiGates, and then re-form the cluster. You can remove FortiGate(s) from a cluster using the Remove Device from HA cluster button on the System > HA GUI page. For more information, see Disconnecting a FortiGate.
By default, the speed of each split interface is set to 10000full (10GigE). These interfaces can operate as 25GigE, 10GigE, or 1GigE interfaces depending on the transceivers and breakout cables. You can use the config system interface command to change the speeds of the split interfaces.
If you set the speed of one of the split interfaces to 25000full (25GigE), all of the split interfaces are changed to operate at this speed (no restart required). If the split interfaces are set to 25000full and you change the speed of one of them to 10000full (10GigE), they are all changed to 10000full (no restart required). When the interfaces are operating at 10000full, you can change the speeds of individual interfaces to 1000full (1GigE).
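For example, assuming port19 has been split as described above, the following sketch changes its split interfaces to 25GigE (changing the speed of one split interface changes all of them):
config system interface
    edit "port19/1"
        set speed 25000full
    end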
Configuring FortiGate 4400F and 4401F NPU port mapping
The default FortiGate-4400F and 4401F port mapping configuration results in sessions passing from front panel data interfaces to the integrated switch fabric. The integrated switch fabric distributes these sessions among the NP7 processors. Each NP7 processor is connected to the switch fabric with a LAG that consists of two 100-Gigabit interfaces. The integrated switch fabric distributes sessions to the LAGs and each LAG distributes sessions between the two interfaces connected to the NP7 processor.
You can use NPU port mapping to override how data network interface sessions are distributed to NP7 processors. For example, you can set up NPU port mapping to send all traffic from a front panel data interface or LAG to a specific NP7 processor or group of NP7 processors, or a single NP7 link.
On the FortiGate 4400F and 4401F you can configure ISF load balancing to change the algorithm that the ISF uses to distribute data interface sessions to NP7 processors. ISF load balancing is configured for an interface, and distributes sessions from that interface to all NP7 processor LAGs. If you have configured NPU port mapping, ISF load balancing distributes sessions from the interface to the NP7 processors and links in the NPU port mapping configuration for that interface. See Configuring ISF load balancing.
Use the following command to configure FortiGate 4400F and 4401F NPU port mapping:
config system npu-post
    config port-npu-map
        edit <interface-name>
            set npu-group {All-NP | NP0 | NP1 | NP2 | NP3 | NP4 | NP5 | NP0-to-NP1 | NP2-to-NP3 | NP4-to-NP5 | NP0-to-NP3 | NP0-to-NP2 | NP3-to-NP5 | NP0-link0 | NP0-link1 | NP1-link0 | NP1-link1 | NP2-link0 | NP2-link1 | NP3-link0 | NP3-link1 | NP4-link0 | NP4-link1 | NP5-link0 | NP5-link1}
        next
    end
end
<interface-name> can be a physical interface or a LAG.
- All-NP (the default): distribute sessions among all six NP7 LAGs.
- NP0: distribute sessions to the LAG connected to NP0.
- NP1: distribute sessions to the LAG connected to NP1.
- NP2: distribute sessions to the LAG connected to NP2.
- NP3: distribute sessions to the LAG connected to NP3.
- NP4: distribute sessions to the LAG connected to NP4.
- NP5: distribute sessions to the LAG connected to NP5.
- NP0-to-NP1: distribute sessions between the LAGs connected to NP0 and NP1.
- NP2-to-NP3: distribute sessions between the LAGs connected to NP2 and NP3.
- NP4-to-NP5: distribute sessions between the LAGs connected to NP4 and NP5.
- NP0-to-NP3: distribute sessions among the LAGs connected to NP0, NP1, NP2, and NP3.
- NP0-to-NP2: distribute sessions among the LAGs connected to NP0, NP1, and NP2.
- NP3-to-NP5: distribute sessions among the LAGs connected to NP3, NP4, and NP5.
- NP0-link0: send sessions from the front panel data interface to NP0 link 0.
- NP0-link1: send sessions from the front panel data interface to NP0 link 1.
- NP1-link0: send sessions from the front panel data interface to NP1 link 0.
- NP1-link1: send sessions from the front panel data interface to NP1 link 1.
- NP2-link0: send sessions from the front panel data interface to NP2 link 0.
- NP2-link1: send sessions from the front panel data interface to NP2 link 1.
- NP3-link0: send sessions from the front panel data interface to NP3 link 0.
- NP3-link1: send sessions from the front panel data interface to NP3 link 1.
- NP4-link0: send sessions from the front panel data interface to NP4 link 0.
- NP4-link1: send sessions from the front panel data interface to NP4 link 1.
- NP5-link0: send sessions from the front panel data interface to NP5 link 0.
- NP5-link1: send sessions from the front panel data interface to NP5 link 1.
For example, use the following syntax to assign the FortiGate-4400F interfaces 25 and 26 to NP3, NP4, and NP5, and interfaces 27 and 28 to NP2 and NP5:
config system npu
    config port-npu-map
        edit port25
            set npu-group NP3-to-NP5
        next
        edit port26
            set npu-group NP3-to-NP5
        next
        edit port27
            set npu-group NP2 NP5
        next
        edit port28
            set npu-group NP2 NP5
        next
    end
end
While the FortiGate-4400F or 4401F is processing traffic, you can use the diagnose npu np7 cgmac-stats <npu-id> command to show how traffic is distributed to the NP7 links.
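For example, to display link statistics for the first NP7 processor (NPU ID 0 is used here only as an illustration):
diagnose npu np7 cgmac-stats 0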