The FortiGate 2200E and 2201E models feature the following front panel interfaces:
- Two 10/100/1000BASE-T Copper (MGMT1 and MGMT2)
- Twelve 10/100/1000BASE-T Copper (1 to 12)
- Sixteen 10/25 GigE SFP+/SFP28 (13 to 28), interface groups: 13 - 16, 17 - 20, 21 - 24, and 25 - 28
- Four 10/25 GigE SFP+/SFP28 (29, 30, HA1 and HA2), interface groups: 29 and ha1, and 30 and ha2 (the HA interfaces are not connected to the NP6 processors)
- Four 40 GigE QSFP+ (31 to 34)
The FortiGate 2200E and 2201E each include four NP6 processors. All front panel data interfaces and all of the NP6 processors connect to the integrated switch fabric (ISF). All data traffic passes from the data interfaces through the ISF to the NP6 processors. Because of the ISF, all supported traffic passing between any two data interfaces can be offloaded by the NP6 processors. Data traffic processed by the CPU takes a dedicated data path through the ISF and an NP6 processor to the CPU.
The MGMT interfaces are not connected to the NP6 processors. Management traffic passes to the CPU over a dedicated management path that is separate from the data path. You can also dedicate separate CPU resources for management traffic to further isolate management processing from data processing (see Dedicated management CPU).
The HA interfaces are also not connected to the NP6 processors. To help provide better HA stability and resiliency, the HA traffic uses a dedicated physical control path that provides HA control traffic separation from data traffic processing.
The separation of management and HA traffic from data traffic keeps management and HA traffic from affecting the stability and performance of data traffic processing.
You can use the following command to display the FortiGate 2200E or 2201E NP6 configuration. The command output shows four NP6s, named NP6_0 to NP6_3, and the interfaces (ports) connected to each NP6. This interface-to-NP6 mapping is also shown in the diagram above.
The command output also shows the XAUI configuration for each NP6 processor. Each NP6 processor has a 40-Gigabit bandwidth capacity. Traffic passes to each NP6 processor over four 10-Gigabit XAUI links. The XAUI links are numbered 0 to 3.
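The bandwidth figures above reduce to simple arithmetic; the sketch below restates them, with the aggregate number an illustration derived from the stated per-NP6 capacity, not a published throughput specification:

```python
# Per the text: four 10-Gigabit XAUI links feed each NP6 processor,
# and the 2200E/2201E contains four NP6 processors.
XAUI_LINKS_PER_NP6 = 4
GBPS_PER_XAUI_LINK = 10
NP6_COUNT = 4

per_np6_gbps = XAUI_LINKS_PER_NP6 * GBPS_PER_XAUI_LINK  # 40 Gbps per NP6
total_np6_gbps = per_np6_gbps * NP6_COUNT               # 160 Gbps combined (illustrative)
```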
You can also use the diagnose npu np6 port-list command to display this information.
get hardware npu np6 port-list
Chip   XAUI Ports   Max   Cross-chip
                    Speed offloading
------ ---- ------- ----- ----------
np6_0  0    port1   1G    Yes
       1    port2   1G    Yes
       2    port3   1G    Yes
       3
       0-3  port13  25G   Yes
       0-3  port14  25G   Yes
       0-3  port15  25G   Yes
       0-3  port16  25G   Yes
       0-3  port17  25G   Yes
       0-3  port31  40G   Yes
------ ---- ------- ----- ----------
np6_1  0    port4   1G    Yes
       1    port5   1G    Yes
       2    port6   1G    Yes
       3
       0-3  port18  25G   Yes
       0-3  port19  25G   Yes
       0-3  port20  25G   Yes
       0-3  port24  25G   Yes
       0-3  port23  25G   Yes
       0-3  port32  40G   Yes
------ ---- ------- ----- ----------
np6_2  0    port7   1G    Yes
       1    port8   1G    Yes
       2    port9   1G    Yes
       3
       0-3  port22  25G   Yes
       0-3  port21  25G   Yes
       0-3  port26  25G   Yes
       0-3  port25  25G   Yes
       0-3  port28  25G   Yes
       0-3  port33  40G   Yes
------ ---- ------- ----- ----------
np6_3  0    port10  1G    Yes
       1    port11  1G    Yes
       2    port12  1G    Yes
       2    port29  10G   Yes
       3    port30  10G   Yes
       0-3  port27  25G   Yes
       0-3  port34  40G   Yes
------ ---- ------- ----- ----------
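If you need the port-to-NP6 mapping in a script, the output layout above can be parsed programmatically. The following is a minimal sketch, assuming the column layout shown (Chip, XAUI, Ports, Max Speed, Cross-chip offloading) and using a hypothetical helper name, parse_port_list, not a FortiOS or Fortinet API:

```python
import re

def parse_port_list(output: str) -> dict:
    """Map each portN name to its NP6 chip, given text in the
    'get hardware npu np6 port-list' layout shown above.
    Assumes chip names begin with 'np6_' (true of that output)."""
    mapping = {}
    chip = None
    for line in output.splitlines():
        fields = line.split()
        if fields and fields[0].startswith("np6_"):
            chip = fields[0]       # a new chip section begins on this row
            fields = fields[1:]
        for field in fields:
            if chip and re.fullmatch(r"port\d+", field):
                mapping[field] = chip
    return mapping

# Abbreviated sample in the same layout as the command output:
sample = """\
np6_0  0    port1   1G    Yes
       0-3  port13  25G   Yes
np6_1  0    port4   1G    Yes
"""
ports = parse_port_list(sample)
```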
Distributing traffic evenly among the NP6 processors can optimize performance. For details, see Optimizing NP6 performance by distributing traffic to XAUI links.
You can also add LAGs to improve performance. For details, see Increasing NP6 offloading capacity using link aggregation groups (LAGs).
FortiGate 2200E and 2201E front panel data interfaces 13 to 30, HA1, and HA2 are divided into the following groups:
- port13 - port16
- port17 - port20
- port21 - port24
- port25 - port28
- port29 and ha1
- port30 and ha2
All of the interfaces in a group operate at the same speed. Changing the speed of one interface changes the speeds of all of the interfaces in the same group. For example, if you change the speed of port26 from 25Gbps to 10Gbps, the speeds of port25 to port28 also change to 10Gbps.
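This grouped-speed rule can be modeled as a short sketch. The group table below is taken from the list above; the set_speed function is an illustrative model only, not FortiOS code:

```python
# Interface speed groups from the documentation above.
SPEED_GROUPS = [
    ["port13", "port14", "port15", "port16"],
    ["port17", "port18", "port19", "port20"],
    ["port21", "port22", "port23", "port24"],
    ["port25", "port26", "port27", "port28"],
    ["port29", "ha1"],
    ["port30", "ha2"],
]

def set_speed(speeds: dict, port: str, speed: str) -> None:
    """Apply `speed` to `port` and every other interface in its group,
    mirroring the grouped-speed behavior described in the text."""
    for group in SPEED_GROUPS:
        if port in group:
            for member in group:
                speeds[member] = speed
            return
    raise ValueError(f"{port} is not in a speed group")

# Start every grouped interface at 25Gbps, then change one member:
speeds = {p: "25000full" for g in SPEED_GROUPS for p in g}
set_speed(speeds, "port26", "10000full")  # drags port25-port28 along
```

Note that the copper interfaces (1 to 12) are not in any speed group, so the model raises an error for them.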
As another example, suppose the port17 to port24 interfaces are operating at 25Gbps. If you want to install 10GigE transceivers in port17 to port24 to connect all of these data interfaces to 10Gbps networks, you can enter the following from the CLI (setting the speed of one interface in each of the two affected groups, 17 - 20 and 21 - 24):
config system interface
  edit port17
    set speed 10000full
  next
  edit port21
    set speed 10000full
end
Every time you change a data interface speed, when you enter the end command, the CLI confirms the range of interfaces affected by the change. For example, if you change the speed of port29, the following message appears:
config system interface
  edit port29
    set speed 25000full
end
port29 ha1 speed will be changed to 25000full due to hardware limit.
Do you want to continue? (y/n)