Hardware Acceleration

FortiGate 1100E and 1101E fast path architecture

The FortiGate 1100E and 1101E models feature the following front panel interfaces:

  • Two 10/100/1000BASE-T Copper (HA and MGMT, not connected to the NP6 processors)
  • Sixteen 10/100/1000BASE-T Copper (1 to 16)
  • Eight 1 GigE SFP (17 - 24)
  • Four 10 GigE SFP+ (25 - 28)
  • Four 25 GigE SFP28 (29 - 32), which form an interface group (see Interface groups and changing data interface speeds, below)
  • Two 40 GigE QSFP+ (33 and 34)

The FortiGate 1100E and 1101E each include two NP6 processors. All front panel data interfaces and both NP6 processors connect to the integrated switch fabric (ISF). All data traffic passes from the data interfaces through the ISF to the NP6 processors. Because of the ISF, all supported traffic passing between any two data interfaces can be offloaded by the NP6 processors. Data traffic processed by the CPU takes a dedicated data path through the ISF and an NP6 processor to the CPU.
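NP6 offloading is enabled by default for eligible sessions. As a minimal sketch, assuming your firmware release includes the auto-asic-offload firewall policy option (the policy ID 1 is a placeholder), you can confirm or control offloading for an individual policy from the CLI:

config firewall policy
    edit 1
        set auto-asic-offload enable
    next
end

You can then use the diagnose sys session list command and look for the npu info lines in a session's output to verify that traffic matching the policy is being offloaded.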

The MGMT interface is not connected to the NP6 processors. Management traffic passes to the CPU over a dedicated management path that is separate from the data path. You can also dedicate separate CPU resources for management traffic to further isolate management processing from data processing (see Dedicated management CPU).
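As a minimal sketch, assuming your firmware supports the dedicated-management-cpu option under config system npu on this model, you can reserve a CPU for management traffic as follows:

config system npu
    set dedicated-management-cpu enable
end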

The HA interface is also not connected to the NP6 processors. To help provide better HA stability and resiliency, HA traffic uses a dedicated physical control path that provides HA control traffic separation from data traffic processing.
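For example, a minimal sketch of an HA configuration that uses the dedicated HA interface as the heartbeat device (the group name example-cluster and the heartbeat priority 50 are placeholders for your own values):

config system ha
    set group-name "example-cluster"
    set mode a-p
    set hbdev "ha" 50
end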

The separation of management and HA traffic from data traffic keeps management and HA traffic from affecting the stability and performance of data traffic processing.

You can use the following command to display the FortiGate 1100E or 1101E NP6 configuration. The command output shows two NP6 processors, named np6_0 and np6_1, and the interfaces (ports) connected to each one. This interface-to-NP6 mapping is also shown in the diagram above.

The command output also shows the XAUI configuration for each NP6 processor. Each NP6 processor has a 40-Gigabit bandwidth capacity. Traffic passes to each NP6 processor over four 10-Gigabit XAUI links. The XAUI links are numbered 0 to 3.

You can also use the diagnose npu np6 port-list command to display this information.

get hardware npu np6 port-list
Chip   XAUI Ports            QSGMII Max   Cross-chip
                                    Speed offloading
------ ---- -------          ------ ----- ----------
np6_0  0    port20           NA     1G    Yes
       0    port1            NA     1G    Yes
       0    port2            NA     1G    Yes
       1    port19           NA     1G    Yes
       1    port3            NA     1G    Yes
       1    port4            NA     1G    Yes
       2    port18           NA     1G    Yes
       2    port5            NA     1G    Yes
       2    port6            NA     1G    Yes
       3    port17           NA     1G    Yes
       3    port7            NA     1G    Yes
       3    port8            NA     1G    Yes
       0-3  port25           NA     10G   Yes
       0-3  port26           NA     10G   Yes
       0-3  port29           NA     25G   Yes
       0-3  port30           NA     25G   Yes
       0-3  port33           NA     40G   Yes
------ ---- -------          ------ ----- ----------
np6_1  0    port24           NA     1G    Yes
       0    port9            NA     1G    Yes
       0    port10           NA     1G    Yes
       1    port23           NA     1G    Yes
       1    port11           NA     1G    Yes
       1    port12           NA     1G    Yes
       2    port22           NA     1G    Yes
       2    port13           NA     1G    Yes
       2    port14           NA     1G    Yes
       3    port21           NA     1G    Yes
       3    port15           NA     1G    Yes
       3    port16           NA     1G    Yes
       0-3  port27           NA     10G   Yes
       0-3  port28           NA     10G   Yes
       0-3  port31           NA     25G   Yes
       0-3  port32           NA     25G   Yes
       0-3  port34           NA     40G   Yes
------ ---- -------          ------ ----- ----------

Distributing traffic evenly among the NP6 processors can optimize performance. For details, see Optimizing NP6 performance by distributing traffic to XAUI links.

You can also add LAGs to improve performance. For details, see Increasing NP6 offloading capacity using link aggregation groups (LAGs).
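For example, a minimal sketch of a LAG that aggregates two 10 GigE interfaces connected to the same NP6 processor (port25 and port26 on np6_0 in the port list above); the interface name Agg-np6-0 and the root VDOM are placeholders for your own configuration:

config system interface
    edit "Agg-np6-0"
        set vdom "root"
        set type aggregate
        set member "port25" "port26"
    next
end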

Interface groups and changing data interface speeds

FortiGate 1100E and 1101E front panel data interfaces 29 to 32 are in an interface group and all operate at the same speed. Changing the speed of one interface in this group changes the speed of all of the interfaces in the group.

For example, the default speed of the port29 to port32 interfaces is 25Gbps. If you install 10GigE transceivers in port29 to port32 to connect these data interfaces to 10Gbps networks, you can enter the following CLI commands to change their speed:

config system interface
    edit port29
        set speed 10000full
    end

Whenever you change a data interface speed, the CLI confirms the range of interfaces affected by the change when you enter the end command. For example, if you change the speed of port29, the following message appears:

config system interface
    edit port29
        set speed 10000full
    end

port29-port32 speed will be changed to 10000full due to hardware limit.
Do you want to continue? (y/n)
