
FortiGate-7000F Administration Guide

FPM-7620F processing module

The FPM-7620F processor module is a high-performance worker module that processes sessions load balanced to it by FIMs over the chassis fabric backplane. The FPM-7620F includes two 400Gbps data connections to the FIMs over the chassis fabric backplane and two 50Gbps management connections to the FIMs over the base backplane. FPM-7620Fs are installed in chassis slots 3 and up.

The FPM-7620F also includes two front panel 400GigE QSFP-DD fabric channel data interfaces (1 and 2) and eight 10/25GigE SFP28 fabric channel data interfaces (3 to 10). Interfaces 1 and 2 can be connected to 400Gbps data networks. Interfaces 3 to 10 can be connected to 25Gbps data networks. You can also change the speeds of the front panel data interfaces.

FPM fabric channel data interfaces increase the number of data interfaces supported by the FortiGate 7000F. Data traffic received by these interfaces is sent over the fabric backplane to the FIM NP7 processors to be load balanced back to the FPMs.

The FPM-7620F processes sessions using a dual CPU configuration, accelerates network traffic processing with two NP7 processors, and accelerates content processing with eight CP9 processors. The NP7 network processors are connected by the FIM switch fabric, so all supported traffic types can be fast-path accelerated by the NP7 processors.

FPM-7620F front panel

FPM-7620F front panel interfaces

You can connect the FPM-7620F to your networks using the front panel fabric channel data interfaces described in the following table. You can create link aggregation groups (LAGs) that can include data interfaces from multiple FIMs and FPMs in the same chassis.

Connector: 1 and 2
Type: QSFP-DD
Speed: 400Gbps, 100Gbps, 40Gbps, 4 x 100Gbps (split), 4 x 25Gbps (split), 4 x 10Gbps (split)
Protocol: Ethernet
Description: Two front panel 400GigE QSFP-DD fabric channel data interfaces that can be connected to 400Gbps data networks to distribute sessions to the FPMs in chassis slots 3 and up. These interfaces can also operate as 100GigE QSFP28 or 40GigE QSFP+ interfaces. If the FortiGate 7000F includes two FIM-7941Fs, these interfaces can be split into four interfaces that can operate at 100Gbps, 25Gbps, or 10Gbps.

Connector: 3 to 10
Type: SFP28
Speed: 25Gbps, 10Gbps
Protocol: Ethernet
Description: Eight front panel 25GigE SFP28 fabric channel data interfaces that can be connected to 25Gbps data networks to distribute sessions to the FPMs in chassis slots 3 and up. These interfaces can also operate as 10GigE SFP+ interfaces.

Changing the FPM-7620F 1 and 2 (P1 and P2) interfaces

You can change the speed of the 1 and 2 (P1 and P2) interfaces to 400G, 100G, or 40G using the config system interface command.

When the FPM-7620F is installed in a FortiGate 7000F with two FIM-7941Fs, you can also make the following changes:

  • Split the interface into four 100GigE CR2 interfaces.

  • Split the interface into four 25GigE CR or 10GigE SR interfaces.

All of these operations, except changing the interface speed using the config system interface command, require a system restart. Fortinet recommends that you perform these operations during a maintenance window and plan the changes to avoid traffic disruption.

Note

You should change interface types or split interfaces on both FortiGate 7000Fs before forming an FGCP HA cluster. If you decide to change interface type or split interfaces after forming a cluster, you need to remove the secondary FortiGate 7000F from the cluster and change interfaces as required on both FortiGate 7000Fs separately. After the FortiGate 7000Fs restart, you can re-form the cluster. This process will cause traffic interruptions.

Splitting the P1 or P2 interfaces into four 100GigE CR2 interfaces

When the FPM-7620F is installed in a FortiGate 7000F with two FIM-7941Fs, you can use the following command to split the P1 or P2 interfaces into four 100GigE CR2 interfaces. To split P1 of the FPM-7620F in slot 6 (6-P1) and P2 of the FPM-7620F in slot 7 (7-P2), enter the following command:

config system global
    set split-port 6-P1 7-P2
end

The FortiGate 7000F reboots, and when it starts up:

  • Interface 6-P1 has been replaced by four 100GigE CR2 interfaces named 6-P1/1 to 6-P1/4.

  • Interface 7-P2 has been replaced by four 100GigE CR2 interfaces named 7-P2/1 to 7-P2/4.
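
The renaming pattern above is mechanical, so it can be sketched as a small helper. This is a hypothetical illustration, not part of FortiOS; it simply predicts the names the system assigns to split interfaces:

```python
def split_interface_names(interface, lanes=4):
    """Predict the interface names created when a QSFP-DD port is split.

    For example, splitting 6-P1 into four lanes yields 6-P1/1 to 6-P1/4.
    """
    return [f"{interface}/{lane}" for lane in range(1, lanes + 1)]

# Splitting 6-P1 and 7-P2 as in the example above:
print(split_interface_names("6-P1"))  # ['6-P1/1', '6-P1/2', '6-P1/3', '6-P1/4']
print(split_interface_names("7-P2"))  # ['7-P2/1', '7-P2/2', '7-P2/3', '7-P2/4']
```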

Splitting the P1 or P2 interfaces into four 25GigE CR or 10GigE SR interfaces

When the FPM-7620F is installed in a FortiGate 7000F with two FIM-7941Fs, you can use the following commands to split the P1 or P2 interfaces into four 25GigE CR interfaces. The first command converts the interface into a 100GigE QSFP28 interface, and the second splits this interface into four 25GigE CR interfaces. To split P1 of the FPM-7620F in slot 8 (8-P1) and P2 of the FPM-7620F in slot 9 (9-P2), enter the following command:

config system global
    set qsfpdd-100g-port 8-P1 9-P2
    set split-port 8-P1 9-P2
end

The FortiGate 7000F reboots, and when it starts up:

  • Interface 8-P1 has been replaced by four 25GigE CR interfaces named 8-P1/1 to 8-P1/4.

  • Interface 9-P2 has been replaced by four 25GigE CR interfaces named 9-P2/1 to 9-P2/4.

If you want some or all of these interfaces to operate as 10GigE SR interfaces, you can use the config system interface command to change the interface speed. Whether you can change the speed of an individual split interface depends on whether the transceiver installed in the interface slot supports different speeds for the split interfaces.

FPM-7620F hardware schematic

The two FPM-7620F NP7 network processors provide hardware acceleration by offloading data traffic from the FPM-7620F CPUs. The result is enhanced network performance from the NP7 processors, while the network processing load is removed from the CPUs. The NP7 processors can also handle some CPU-intensive tasks, such as IPsec VPN encryption/decryption. Because of the integrated switch fabric, all sessions are fast-pathed and accelerated.

Traffic from FPM-7620F front panel data interfaces is sent over the fabric channel backplane to the FIMs where NP7 processors use SLBC to distribute sessions to individual FPMs. The FPM-7620F can process traffic received from FIM data interfaces and from FPM data interfaces.

FPM-7620F hardware architecture

FPM-7620F and HPE

FortiGate 7000F HPE protection is applied by the FPM NP7 processors. For information about HPE, see NP7 Host Protection Engine (HPE).

You can use the following formula to calculate the total number of packets per second allowed by a FortiGate 7081F or FortiGate 7121F for a configured HPE threshold.

packets per second = number of NP7 processors x host queues x HPE threshold

Note

The FortiGate 7081F and the FortiGate 7121F both have 64 host queues.

For example, consider a FortiGate 7081F or FortiGate 7121F with four FPMs (eight NP7 processors) and the following HPE configuration:

config hpe
    set all-protocol 0
    set udp-max 1000
    set enable-shaper enable
end

packets per second = 8 x 64 x 1000 = 512,000 pps
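
As a quick check of the formula, here is a minimal calculator (a hypothetical helper, using the values from this example):

```python
def hpe_allowed_pps(np7_count, host_queues, hpe_threshold):
    """Total packets per second allowed for a configured HPE threshold.

    packets per second = number of NP7 processors x host queues x HPE threshold
    """
    return np7_count * host_queues * hpe_threshold

# Four FPM-7620Fs (eight NP7 processors), 64 host queues, udp-max 1000:
print(hpe_allowed_pps(8, 64, 1000))  # 512000
```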

Configuring NPU port mapping

When you configure FortiGate 7081F or FortiGate 7121F port mapping, each FPM-7620F will have the same port mapping configuration, based on two NP7 processors.

The default FPM-7620F port mapping configuration results in sessions passing from the chassis fabric backplane to the integrated switch fabric. The integrated switch fabric distributes these sessions among the NP7 processors. Each NP7 processor is connected to the switch fabric with a LAG that consists of two 100-Gigabit interfaces. The integrated switch fabric distributes sessions to the LAGs, and each LAG distributes sessions between the two interfaces connected to the NP7 processor.
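
The two-level distribution described above (the switch fabric picks an NP7 LAG, then the LAG picks one of its two links) can be illustrated with a hash-based sketch. The hashing details here are an assumption for illustration only; FortiOS does not expose the actual distribution algorithm, and the link names follow the np<npu>_<link> convention used later in this section:

```python
import hashlib

# Each NP7 is reached through a LAG of two 100-Gigabit links (assumed names).
NP7_LAGS = {
    "NP#0": ["np0_0", "np0_1"],
    "NP#1": ["np1_0", "np1_1"],
}

def pick_link(session_tuple):
    """Hash a session tuple to an NP7, then to one link in that NP7's LAG.

    Illustrative only: the real switch fabric hash is not documented.
    """
    h = int.from_bytes(hashlib.sha256(repr(session_tuple).encode()).digest()[:4], "big")
    np7 = list(NP7_LAGS)[h % len(NP7_LAGS)]        # level 1: pick the NP7 LAG
    link = NP7_LAGS[np7][(h >> 8) % 2]             # level 2: pick a link in the LAG
    return np7, link
```

Because the choice is a pure function of the session tuple, all packets of one session land on the same NP7 link, which is the property the real distribution must also preserve.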

You can use NPU port mapping to override how data network interface sessions are distributed to each NP7 processor. For example, you can set up NPU port mapping to send all traffic from a front panel data interface to a specific NP7 processor LAG, or even to just one of the interfaces in that LAG.

Use the following command to configure NPU port mapping:

config system npu
    config port-npu-map
        edit <interface-name>
            set npu-group-index <index>
        next
    end
end

<interface-name> is the name of a front panel data interface.

<index> selects how sessions from the selected front panel data interface are handled by the integrated switch fabric. The list of available <index> options depends on the NP7 configuration of your FortiGate. For the FPM-7620F, <index> can be:

  • 0: NP#0-1, distribute sessions from the front panel data interface among both NP7 LAGs.

  • 1: NP#0, send sessions from the front panel data interface to the LAG connected to NP#0.

  • 2: NP#1, send sessions from the front panel data interface to the LAG connected to NP#1.

  • 3: NP#0-link0, send sessions from the front panel data interface to np0_0, which is one of the interfaces connected to NP#0.

  • 4: NP#0-link1, send sessions from the front panel data interface to np0_1, which is one of the interfaces connected to NP#0.

  • 5: NP#1-link0, send sessions from the front panel data interface to np1_0, which is one of the interfaces connected to NP#1.

  • 6: NP#1-link1, send sessions from the front panel data interface to np1_1, which is one of the interfaces connected to NP#1.
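
The index-to-target mapping above can be captured in a lookup table. This is a hypothetical sketch; the link names follow the np<npu>_<link> convention used in the list:

```python
# npu-group-index -> NP7 links eligible to receive the port's sessions
NPU_GROUP_TARGETS = {
    0: ["np0_0", "np0_1", "np1_0", "np1_1"],  # NP#0-1: both NP7 LAGs
    1: ["np0_0", "np0_1"],                    # NP#0: LAG connected to NP#0
    2: ["np1_0", "np1_1"],                    # NP#1: LAG connected to NP#1
    3: ["np0_0"],                             # NP#0-link0 only
    4: ["np0_1"],                             # NP#0-link1 only
    5: ["np1_0"],                             # NP#1-link0 only
    6: ["np1_1"],                             # NP#1-link1 only
}

def eligible_links(npu_group_index):
    """Return the NP7 links that sessions from a front panel port can land on."""
    return NPU_GROUP_TARGETS[npu_group_index]
```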

For example, use the following syntax to assign the FIM 1-P1 and 1-P2 interfaces to NP#0 and 2-P1 and 2-P2 interfaces to NP#1:

config system npu
    config port-npu-map
        edit 1-P1
            set npu-group-index 1
        next
        edit 1-P2
            set npu-group-index 1
        next
        edit 2-P1
            set npu-group-index 2
        next
        edit 2-P2
            set npu-group-index 2
        next
    end
end

You can use the diagnose npu np7 port-list command to see the current NPU port map configuration. While the FPM-7620F is processing traffic, you can use the diagnose npu np7 cgmac-stats <npu-id> command to show how traffic is distributed to the NP7 links.

For example, after making the changes described in the example, the NP_group column of the diagnose npu np7 port-list command output shows the new mapping:

diagnose npu np7 port-list 
Front Panel Port:
. 
.
.
1-P1     100000          100000           NP#0            0         35                      
1-P2     100000          100000           NP#0            0         34                      
2-P1     100000          100000           NP#1            0         33                      
2-P2     100000          100000           NP#1            0         32      
. 
.
.                
