FortiGate 2600F and 2601F fast path architecture
The FortiGate 2600F and 2601F each include one NP7 processor. All front panel data interfaces and the NP7 processor connect to the integrated switch fabric (ISF). All data traffic passes from the data interfaces through the ISF to the NP7 processor. All supported traffic passing between any two data interfaces can be offloaded by the NP7 processor. Data traffic processed by the CPU takes a dedicated data path through the ISF and the NP7 processor to the CPU.
The FortiGate 2600F and 2601F feature the following front panel interfaces:
- Two 1 GigE RJ45 (MGMT1 and MGMT2, not connected to the NP7 processor).
- Two 10 GigE SFP+/SFP (HA1 and HA2, not connected to the NP7 processor).
- Sixteen 10/1 GigE RJ45 (1 to 16).
- Sixteen 25/10/1 GigE SFP28/SFP+ (17 to 32), interface groups: 17 - 20, 21 - 24, 25 - 28, and 29 - 32.
- Four 100/40 GigE QSFP28/QSFP+ (33 to 36). Each of these interfaces can be split into four 25/10/1 GigE SFP28 interfaces.
The MGMT interfaces are not connected to the NP7 processor. Management traffic passes to the CPU over a dedicated management path that is separate from the data path. You can also dedicate separate CPU resources for management traffic to further isolate management processing from data processing (see Improving GUI and CLI responsiveness (dedicated management CPU)).
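For example, on NP7 platforms the dedicated management CPU option is enabled from the config system npu CLI shell with a command similar to the following (treat this syntax as illustrative; see Improving GUI and CLI responsiveness (dedicated management CPU) for the authoritative procedure):

```
config system npu
    set dedicated-management-cpu enable
end
```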
The HA interfaces are also not connected to the NP7 processor. To help provide better HA stability and resiliency, HA traffic uses a dedicated physical control path that provides HA control traffic separation from data traffic processing.
The separation of management and HA traffic from data traffic keeps management and HA traffic from affecting the stability and performance of data traffic processing.
You can use the following command to display the FortiGate 2600F or 2601F NP7 configuration. The command output shows that a single NP7 processor, named NP#0, is connected to all data interfaces. This interface-to-NP7 mapping is also shown in the diagram above.
diagnose npu np7 port-list
Front Panel Port:
Name     Max_speed(Mbps) Dflt_speed(Mbps) NP_group        Switch_id SW_port_id SW_port_name
-------- --------------- ---------------- --------------- --------- ---------- ------------
port1    10000           10000            NP#0            0         54         ge4
port2    10000           10000            NP#0            0         53         ge3
port3    10000           10000            NP#0            0         56         ge6
port4    10000           10000            NP#0            0         55         ge5
port5    10000           10000            NP#0            0         58         ge7
port6    10000           10000            NP#0            0         57         xe25
port7    10000           10000            NP#0            0         60         ge9
port8    10000           10000            NP#0            0         59         ge8
port9    10000           10000            NP#0            0         7          xe6
port10   10000           10000            NP#0            0         8          xe7
port11   10000           10000            NP#0            0         5          xe4
port12   10000           10000            NP#0            0         6          xe5
port13   10000           10000            NP#0            0         11         ge1
port14   10000           10000            NP#0            0         12         ge2
port15   10000           10000            NP#0            0         9          ge0
port16   10000           10000            NP#0            0         10         xe8
port17   25000           10000            NP#0            0         15         xe11
port18   25000           10000            NP#0            0         16         xe12
port19   25000           10000            NP#0            0         13         xe9
port20   25000           10000            NP#0            0         14         xe10
port21   25000           10000            NP#0            0         19         xe15
port22   25000           10000            NP#0            0         20         xe16
port23   25000           10000            NP#0            0         17         xe13
port24   25000           10000            NP#0            0         18         xe14
port25   25000           10000            NP#0            0         23         xe19
port26   25000           10000            NP#0            0         24         xe20
port27   25000           10000            NP#0            0         21         xe17
port28   25000           10000            NP#0            0         22         xe18
port29   25000           10000            NP#0            0         27         xe23
port30   25000           10000            NP#0            0         28         xe24
port31   25000           10000            NP#0            0         25         xe21
port32   25000           10000            NP#0            0         26         xe22
port33   100000          100000           NP#0            0         33         ce1
port34   100000          100000           NP#0            0         29         ce0
port35   100000          100000           NP#0            0         37         ce2
port36   100000          100000           NP#0            0         41         ce3
-------- --------------- ---------------- --------------- --------- ---------- ------------
NP Port:
Name   Switch_id SW_port_id SW_port_name
------ --------- ---------- ------------
np0_0  0         45         ce4
np0_1  0         49         ce5
------ --------- ---------- ------------
* Max_speed: Maximum speed, Dflt_speed: Default speed
* SW_port_id: Switch port ID, SW_port_name: Switch port name
The command output also shows the maximum and default speeds of each interface.
The NP7 processor has a bandwidth capacity of 200 Gbps. The command output shows that the combined maximum bandwidth of the data interfaces (16 × 10 Gbps + 16 × 25 Gbps + 4 × 100 Gbps = 960 Gbps) is well above this capacity, so if all interfaces were operating at their maximum speeds the NP7 processor would not be able to offload all of the traffic.
The FortiGate-2600F and 2601F can be licensed for hyperscale firewall support; see the Hyperscale Firewall Guide.
Interface groups and changing data interface speeds
FortiGate-2600F and 2601F front panel data interfaces 17 to 32 are divided into the following groups:
- port17 - port20
- port21 - port24
- port25 - port28
- port29 - port32
All of the interfaces in a group operate at the same speed. Changing the speed of an interface changes the speeds of all of the interfaces in the same group. For example, if you change the speed of port26 from 10Gbps to 25Gbps, the speeds of port25 to port28 are also changed to 25Gbps.
As another example, the default speed of the port25 to port32 interfaces is 10Gbps. If you want to install 25GigE transceivers in port25 to port32 to convert all of these data interfaces to connect to 25Gbps networks, you can enter the following from the CLI:
config system interface
    edit port25
        set speed 25000full
    next
    edit port29
        set speed 25000full
    end
Every time you change a data interface speed, when you enter the end command the CLI confirms the range of interfaces affected by the change. For example, if you change the speed of port29, the following message appears:
config system interface
    edit port29
        set speed 25000full
    end
port29-port32 speed will be changed to 25000full due to hardware limit.
Do you want to continue? (y/n)
Splitting the port33 to port36 interfaces
You can use the following command to split each FortiGate 2600F and 2601F 33 to 36 (port33 to port36) 100/40 GigE QSFP28 interface into four 25/10/1 GigE SFP28 interfaces. For example, to split interfaces 34 and 35 (port34 and port35), enter the following command:
config system global
    set split-port port34 port35
end
The FortiGate 2600F or 2601F restarts, and when it starts up:
- The port34 interface has been replaced by four SFP28 interfaces named port34/1 to port34/4.
- The port35 interface has been replaced by four SFP28 interfaces named port35/1 to port35/4.
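After the FortiGate restarts, one way to confirm the new interfaces is to query one of them directly; a command like the following should show the split interface (port34/1 assumes port34 was split as shown above, and the exact output varies by FortiOS version):

```
get system interface physical port34/1
```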
A configuration change that causes a FortiGate to restart can disrupt the operation of an FGCP cluster. If possible, you should make this configuration change to the individual FortiGates before setting up the cluster. If the cluster is already operating, you should temporarily remove the secondary FortiGate(s) from the cluster, change the configuration of the individual FortiGates, and then re-form the cluster. You can remove FortiGate(s) from a cluster using the Remove Device from HA cluster button on the System > HA GUI page. For more information, see Disconnecting a FortiGate.
By default, the speed of each split interface is set to 10000full (10GigE). These interfaces can operate as 25GigE, 10GigE, or 1GigE interfaces depending on the transceivers and breakout cables. You can use the config system interface command to change the speeds of the split interfaces.
If you set the speed of one of the split interfaces to 25000full (25GigE), all of the interfaces are changed to operate at this speed (no restart required). If the split interfaces are set to 25000full and you change the speed of one of them to 10000full (10GigE), they are all changed to 10000full (no restart required). When the interfaces are operating at 10000full, you can change the speeds of individual interfaces to operate at 1000full (1GigE).
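For example, assuming port34 has been split as shown above, a command like the following changes the split interfaces to 25GigE (port34/1 is used here for illustration; because changing one split interface changes them all, port34/1 to port34/4 would all operate at 25GigE):

```
config system interface
    edit port34/1
        set speed 25000full
    end
```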
Configuring FortiGate-2600F and 2601F NPU port mapping
You can use the following command to configure FortiGate-2600F and 2601F NPU port mapping:
config system npu-post
    config port-npu-map
        edit <interface-name>
            set npu-group {All-NP | NP0-link0 | NP0-link1}
        next
    end
end
You can use port mapping to assign data interfaces or LAGs to send traffic to selected NP7 processor links. <interface-name> can be a physical interface or a LAG. The npu-group options are:
- All-NP (the default): distribute sessions to the LAG connected to NP0.
- NP0-link0: send sessions to NP0 link 0.
- NP0-link1: send sessions to NP0 link 1.
- NP0-link0 NP0-link1: send sessions to both NP0 link 0 and NP0 link 1.
For example, use the following syntax to assign the FortiGate-2600F front panel 100GigE interfaces 33 and 34 (port33 and port34) to NP0-link0 and interfaces 35 and 36 (port35 and port36) to NP0-link1. The resulting configuration splits traffic from the 100GigE interfaces between the two NP7 links:
config system npu-post
    config port-npu-map
        edit port33
            set npu-group NP0-link0
        next
        edit port34
            set npu-group NP0-link0
        next
        edit port35
            set npu-group NP0-link1
        next
        edit port36
            set npu-group NP0-link1
        end
end
While the FortiGate-2600F or 2601F is processing traffic, you can use the diagnose npu np7 cgmac-stats <npu-id> command to show how traffic is distributed to the NP7 links.
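For example, because the FortiGate 2600F and 2601F have a single NP7 processor (NP#0), you would use NPU ID 0:

```
diagnose npu np7 cgmac-stats 0
```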
On the FortiGate 2600F and 2601F you can configure ISF load balancing to change the algorithm that the ISF uses to distribute data interface sessions to NP7 processor links. See Configuring ISF load balancing.