
Hardware Acceleration

FortiGate 4800F and 4801F fast path architecture

The FortiGate 4800F and 4801F each include sixteen NP7 processors. All front panel data interfaces and the NP7 processors connect to the integrated switch fabric (ISF). All supported traffic passing between any two data interfaces can be offloaded by the NP7 processors. Because of the ISF, all data traffic passes from the data interfaces through the ISF to the NP7 processors. Data traffic processed by the CPU takes a dedicated data path through the ISF and an NP7 processor to the CPU.

The FortiGate 4800F and 4801F models feature the following front panel interfaces:

  • Two 10/100/1000BASE-T RJ45 (MGMT1 and MGMT2, not connected to the NP7 processors).
  • Twelve 50/25/10/1 GigE SFP56 (HA1, HA2, AUX1, AUX2, and 1 to 8). The HA1, HA2, AUX1, and AUX2 interfaces are not connected to the NP7 processors.
  • Twelve 200/100/40 GigE QSFP56 (9 to 20). Each of these interfaces can be split into four 50 GigE SFP28/SFP56 interfaces.
  • Eight 400/200/100/40 GigE QSFP-DD (21 to 28). Each of these interfaces can be split into eight 50 GigE interfaces, four 100 GigE interfaces, or two 200 GigE interfaces.

The MGMT interfaces are not connected to the NP7 processors. Management traffic passes to the CPU over a dedicated management path that is separate from the data path. You can also dedicate separate CPU resources for management traffic to further isolate management processing from data processing (see Improving GUI and CLI responsiveness (dedicated management CPU)).

The HA interfaces are also not connected to the NP7 processors. To help provide better HA stability and resiliency, HA traffic uses a dedicated physical control path that provides HA control traffic separation from data traffic processing.

The separation of management and HA traffic from data traffic keeps management and HA traffic from affecting the stability and performance of data traffic processing.

The AUX interfaces are connected to the NP7 processors and function similarly to data interfaces. You can use the AUX interfaces for FGSP or FGCP HA session synchronization. If you license the FortiGate 4800F or 4801F for hyperscale firewall, you can use the AUX interfaces for hardware logging or FGCP or FGSP HA hardware session synchronization.

Note

You can use the port-path-option option of the config system npu command to connect or disconnect the HA and AUX interfaces to or from the NP7 processors. See config port-path-option.

You can use the following command to display the FortiGate 4800F and 4801F NP7 configuration. The command output shows that all sixteen NP7s are connected to all front panel data interfaces and to the AUX1 and AUX2 interfaces.

FortiGate-4801F # diagnose npu np7 port-list
Front Panel Port:
Name     Max_speed(Mbps) Dflt_speed(Mbps) NP_group      group_from_vdom Switch_id SW_port_id
-------- --------------- ---------------  ------------- --------------- --------- ----------
port1    50000           50000            NP#0-15       1               0         167
port2    50000           50000            NP#0-15       1               0         168
port3    50000           50000            NP#0-15       1               0         169
port4    50000           50000            NP#0-15       1               0         170
port5    50000           50000            NP#0-15       1               0         172
port6    50000           50000            NP#0-15       1               0         173
port7    50000           50000            NP#0-15       1               0         174
port8    50000           50000            NP#0-15       1               0         175
port9    200000          200000           NP#0-15       1               0         179
port10   200000          200000           NP#0-15       1               0         183
port11   200000          200000           NP#0-15       1               0         191
port12   200000          200000           NP#0-15       1               0         187
port13   200000          200000           NP#0-15       1               0         195
port14   200000          200000           NP#0-15       1               0         199
port15   200000          200000           NP#0-15       1               0         207
port16   200000          200000           NP#0-15       1               0         203
port17   200000          200000           NP#0-15       1               0         211
port18   200000          200000           NP#0-15       1               0         215
port19   200000          200000           NP#0-15       1               0         223
port20   200000          200000           NP#0-15       1               0         219
port21   400000          400000           NP#0-15       1               0         227
port22   400000          400000           NP#0-15       1               0         235
port23   400000          400000           NP#0-15       1               0         243
port24   400000          400000           NP#0-15       1               0         251
port25   400000          400000           NP#0-15       1               0         0
port26   400000          400000           NP#0-15       1               0         8
port27   400000          400000           NP#0-15       1               0         16
port28   400000          400000           NP#0-15       1               0         24
aux1     50000           50000            NP#0-15       1               0         165
aux2     50000           50000            NP#0-15       1               0         166
-------- --------------- ---------------  ------------- --------------- --------- ----------

NP Port:
Name   Switch_id SW_port_id
------ --------- ----------
np0_0  0         95
np0_1  0         91
np1_0  0         83
np1_1  0         87
np2_0  0         56
np2_1  0         60
np3_0  0         52
np3_1  0         48
np4_0  0         79
np4_1  0         75
np5_0  0         67
np5_1  0         71
np6_0  0         40
np6_1  0         44
np7_0  0         36
np7_1  0         32
np8_0  0         159
np8_1  0         155
np9_0  0         147
np9_1  0         151
np10_0 0         127
np10_1 0         123
np11_0 0         115
np11_1 0         119
np12_0 0         143
np12_1 0         139
np13_0 0         131
np13_1 0         135
np14_0 0         111
np14_1 0         107
np15_0 0         99
np15_1 0         103
------ --------- ----------
* Max_speed: Maximum speed, Dflt_speed: Default speed
* SW_port_id: Switch port ID

The command output also shows the maximum and default speeds of each interface.

The integrated switch fabric distributes sessions from the data interfaces to the NP7 processors. The sixteen NP7 processors have a combined bandwidth capacity of 200 Gbps x 16 = 3200 Gbps. If all data interfaces were operating at their maximum bandwidth, the NP7 processors would not be able to offload all of the traffic. You can use NPU port mapping to control how sessions are distributed to the NP7 processors.
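The oversubscription described above can be checked with some quick arithmetic based on the interface list and maximum speeds shown earlier. This is a hedged illustrative sketch, not Fortinet tooling; it only sums data-port maximum speeds.

```python
# Illustrative arithmetic: aggregate front panel data bandwidth vs. NP7
# offloading capacity, using the maximum interface speeds listed above.
sfp56_data = 8 * 50    # port1-port8 at up to 50 Gbps each
qsfp56 = 12 * 200      # port9-port20 at up to 200 Gbps each
qsfp_dd = 8 * 400      # port21-port28 at up to 400 Gbps each
front_panel_gbps = sfp56_data + qsfp56 + qsfp_dd

np7_gbps = 16 * 200    # sixteen NP7 processors at 200 Gbps each

print(front_panel_gbps)  # 6000
print(np7_gbps)          # 3200
```

At maximum speeds the data interfaces can carry 6000 Gbps, well beyond the 3200 Gbps the NP7 processors can offload, which is why NPU port mapping can be useful for controlling session distribution.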

You can add LAGs to improve performance. For details, see Increasing NP7 offloading capacity using link aggregation groups (LAGs).

For information about hyperscale firewall support, see the Hyperscale Firewall Guide.

Assigning an NP7 processor group to a hyperscale firewall VDOM

If a FortiGate has more than one NP7 processor, to support a hyperscale firewall VDOM, the NP7 processors need to be connected together using RLT channels. Due to the number of channels in an NP7 processor, a maximum of six NP7 processors can be connected together to support a hyperscale firewall VDOM. Any FortiGate with more than six NP7 processors has to be configured to limit the number of NP7 processors that a hyperscale firewall VDOM can send sessions to. FortiGate models with six or fewer NP7 processors can support hyperscale firewall VDOMs without dividing the NP7 processors into different groups.

Note

Because of the NP7 processor groups feature, if you have applied a hyperscale firewall license to a FortiGate 4800F or 4801F you should not configure NPU port mapping. Instead you should use the npu-group-id command to assign NP7 processor groups to hyperscale firewall VDOMs.

Since the FortiGate 4800F and 4801F each have sixteen NP7 processors, these models include a new configuration option that assigns a group of NP7 processors to each hyperscale firewall VDOM. In the current implementation, the FortiGate 4800F or 4801F supports assigning a group of four NP7 processors to a hyperscale firewall VDOM. Fortinet could have created different group sizes, but chose four groups of four to make it easier to evenly distribute the processing load among all sixteen NP7 processors.

After creating a hyperscale firewall VDOM on the FortiGate 4800F or 4801F, the first thing you should do is use the following options to enable hyperscale firewall features by setting policy-offload-level to full-offload and to assign an NP7 processor group (also called an NP group or an NPU group) to the newly created hyperscale firewall VDOM. You must set the NP7 processor group before adding any interfaces to the VDOM; the CLI will not accept the command to assign an NP7 processor group if interfaces have already been added to the hyperscale firewall VDOM.

config system settings

set policy-offload-level full-offload

set npu-group-id {0 | 1 | 2 | 3}

end

  • 0 assigns this hyperscale firewall VDOM to NP7 processor group NP#0-3, which includes NP#0, NP#1, NP#2, and NP#3.
  • 1 assigns this hyperscale firewall VDOM to NP7 processor group NP#4-7, which includes NP#4, NP#5, NP#6, and NP#7.
  • 2 assigns this hyperscale firewall VDOM to NP7 processor group NP#8-11, which includes NP#8, NP#9, NP#10, and NP#11.
  • 3 assigns this hyperscale firewall VDOM to NP7 processor group NP#12-15, which includes NP#12, NP#13, NP#14, and NP#15.
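The npu-group-id values follow a simple pattern: group g contains NP7 processors 4g through 4g+3. A hedged sketch of that mapping (illustrative only, not a FortiOS API):

```python
# Map an npu-group-id (0-3) to the NP7 processors in that group,
# per the value descriptions above: group g = NP#(4g) through NP#(4g+3).
def np7_group_members(npu_group_id: int) -> list[int]:
    if npu_group_id not in range(4):
        raise ValueError("npu-group-id must be 0, 1, 2, or 3 on these models")
    start = npu_group_id * 4
    return list(range(start, start + 4))

print(np7_group_members(2))  # [8, 9, 10, 11], i.e. NP7 group NP#8-11
```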

You can only assign one NP7 processor group to a hyperscale firewall VDOM.

You can assign the same NP7 processor group to multiple hyperscale firewall VDOMs.

You can now add interfaces to the hyperscale firewall VDOM and continue with creating hyperscale firewall policies and so on.

When you add an interface to a hyperscale firewall VDOM, that interface is assigned to the same NP7 processor group as the VDOM. You can use the diagnose npu np7 port-list command to see the NP7 processor group (check the NP_group column) assigned to each interface.

A VLAN is assigned to the same NP7 processor group as the physical interface that the VLAN is added to. You can move a VLAN to a different hyperscale firewall VDOM than the physical interface, as long as the VDOM is assigned to the same NP7 group as the hyperscale firewall VDOM containing the physical interface.

All of the members of a LAG must be assigned to the same NP7 group.

NP7 processor groups do not affect NP7 offloading of sessions in non-hyperscale firewall VDOMs (such as the root VDOM).

NP7 processor groups and hyperscale hardware logging

On a FortiGate 4800F or 4801F, hyperscale hardware logging can only send logs to interfaces in the same NP7 processor group as the NP7 processors that are handling the hyperscale sessions.

This means that, on a FortiGate 4800F or 4801F, hyperscale hardware logging servers must include a hyperscale firewall VDOM. This VDOM must be assigned the same NP7 processor group as the hyperscale firewall VDOM that is processing the hyperscale traffic being logged. This can be the same hyperscale VDOM or another hyperscale firewall VDOM that is assigned the same NP7 processor group.

For more information about hyperscale firewall hardware logging, see Configuring hardware logging.

The following example hyperscale hardware logging configuration could be created for the hyperscale VDOM named Test-hw12. The configuration is a syslog configuration that includes three logging servers each assigned to the Test-hw12 hyperscale firewall VDOM.

config log npu-server

set log-processor host

set syslog-facility 0

set syslog-severity 7

config server-info

edit 1

set vdom Test-hw12

set ipv4-server 10.10.10.72

set source-port 2002

set dest-port 514

next

edit 2

set vdom Test-hw12

set ipv4-server 10.10.10.73

set source-port 2003

set dest-port 514

next

edit 3

set vdom Test-hw12

set ipv4-server 10.10.10.74

set source-port 2004

set dest-port 514

end

config server-group

edit HyperScale_Syslog

set log-format syslog

set server-number 3

set server-start-id 1

end

end
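In the configuration above, syslog-facility 0 and syslog-severity 7 determine the PRI field of the generated syslog messages. The PRI calculation itself comes from the standard syslog protocol (RFC 3164/5424), not Fortinet-specific behavior; this sketch just shows how the two configured values combine:

```python
# Standard syslog PRI calculation: PRI = facility * 8 + severity.
# The example config above uses facility 0 and severity 7 (debug).
def syslog_pri(facility: int, severity: int) -> int:
    return facility * 8 + severity

print(f"<{syslog_pri(0, 7)}>")  # <7>, the PRI prefix for facility 0, severity 7
```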

Splitting the port9 to port20 interfaces

You can use the following command to split each FortiGate 4800F or 4801F 9 to 20 (port9 to port20) QSFP56 interface into four 50 GigE SFP28/SFP56 interfaces. For example, to split interfaces 9 and 16 (port9 and port16), enter the following command:

config system global

config split-port-mode

edit port9

set split-mode 4x50G

next

edit port16

set split-mode 4x50G

end

The FortiGate 4800F or 4801F restarts. When it starts up:

  • The port9 interface has been replaced by four 50 GigE interfaces named port9/1 to port9/4.

  • The port16 interface has been replaced by four 50 GigE interfaces named port16/1 to port16/4.
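The split interface naming convention described above (parent name plus /1, /2, and so on) can be sketched as follows. This is an illustrative helper, not FortiOS code:

```python
# Generate the names of the split interfaces that replace a parent port,
# following the port<N>/<i> convention described above.
def split_names(port: str, count: int) -> list[str]:
    return [f"{port}/{i}" for i in range(1, count + 1)]

print(split_names("port9", 4))  # ['port9/1', 'port9/2', 'port9/3', 'port9/4']
```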

Note

A configuration change that causes a FortiGate to restart can disrupt the operation of an FGCP cluster. If possible, you should make this configuration change to the individual FortiGates before setting up the cluster. If the cluster is already operating, you should temporarily remove the secondary FortiGate(s) from the cluster, change the configuration of the individual FortiGates and then re-form the cluster. You can remove FortiGate(s) from a cluster using the Remove Device from HA cluster button on the System > HA GUI page. For more information, see Disconnecting a FortiGate.

By default, the speed of each split interface is set to 50000full (50GigE). These interfaces can operate as 25GigE, 10GigE, or 1GigE interfaces depending on the transceivers and breakout cables. You can use the config system interface command to change the speeds of the split interfaces.

You can use the following command to restore a split interface to the default (not split) configuration:

config system global

config split-port-mode

edit port9

set split-mode disable

end

Splitting the port21 to port28 interfaces

You can use the following command to split each FortiGate 4800F or 4801F 21 to 28 (port21 to port28) QSFP-DD interface.

config system global

config split-port-mode

edit port21

set split-mode {disable | 8x50G | 4x100G | 2x200G}

end

  • disable restores a split interface to the default (not split) configuration.
  • 8x50G splits the interface into eight 50 GigE interfaces.
  • 4x100G splits the interface into four 100 GigE interfaces.
  • 2x200G splits the interface into two 200 GigE interfaces.
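The QSFP-DD split-mode options above map cleanly to a count and per-interface speed; note that each mode's total (count x speed) equals the port's 400 Gbps capacity. A hedged illustrative sketch:

```python
# The QSFP-DD split-mode options described above, mapped to the number of
# resulting interfaces and their speed in Gbps. Each mode totals 400 Gbps.
SPLIT_MODES = {
    "8x50G": (8, 50),
    "4x100G": (4, 100),
    "2x200G": (2, 200),
}

def split_result(port: str, mode: str) -> list[tuple[str, int]]:
    count, speed = SPLIT_MODES[mode]
    return [(f"{port}/{i}", speed) for i in range(1, count + 1)]

print(split_result("port24", "8x50G")[0])  # ('port24/1', 50)
```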

After splitting one or more interfaces, the FortiGate 4800F or 4801F restarts. When it starts up, the split interfaces are available.

Note

A configuration change that causes a FortiGate to restart can disrupt the operation of an FGCP cluster. If possible, you should make this configuration change to the individual FortiGates before setting up the cluster. If the cluster is already operating, you should temporarily remove the secondary FortiGate(s) from the cluster, change the configuration of the individual FortiGates and then re-form the cluster. You can remove FortiGate(s) from a cluster using the Remove Device from HA cluster button on the System > HA GUI page. For more information, see Disconnecting a FortiGate.

For example, use the following command to split the port24 interface into eight 50GigE interfaces:

config system global

config split-port-mode

edit port24

set split-mode 8x50G

end

The FortiGate 4800F or 4801F restarts. When it starts up, the port24 interface has been replaced by eight 50 GigE interfaces named port24/1 to port24/8.

By default, the speed of each split interface is set to 50000full (50GigE). These interfaces can operate as 25GigE, 10GigE, or 1GigE interfaces depending on the transceivers and breakout cables. You can use the config system interface command to change the speeds of the split interfaces.

Configuring NPU port mapping

The default FortiGate-4800F and 4801F port mapping configuration results in sessions passing from front panel data interfaces to the integrated switch fabric. The integrated switch fabric distributes these sessions among the NP7 processors. Each NP7 processor is connected to the switch fabric with a LAG that consists of two 100-Gigabit CAUI-4 interfaces. The integrated switch fabric distributes sessions to the LAGs and each LAG distributes sessions between the two interfaces connected to the NP7 processor.
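Switch fabrics typically spread sessions across LAG members by hashing session identifiers so that all packets of a session take the same path. The sketch below illustrates that general idea for sixteen two-link LAGs; the hash and field selection are assumptions for illustration, not Fortinet's actual algorithm:

```python
# Illustrative only: a generic 5-tuple hash spreading sessions across
# sixteen NP7 LAGs of two 100 Gbps links each. NOT Fortinet's real hash.
import hashlib

def pick_np7_link(five_tuple: tuple) -> tuple[int, int]:
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    h = int.from_bytes(digest[:4], "big")
    lag = h % 16         # which NP7 LAG (NP#0 to NP#15)
    link = (h >> 4) % 2  # which of the LAG's two links
    return lag, link

session = ("10.0.0.1", 40000, "10.0.0.2", 443, "tcp")
lag, link = pick_np7_link(session)
```

Because the hash is deterministic, every packet of a session lands on the same NP7 link, which is what keeps per-session state on a single processor.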

You can use NPU port mapping to override how data network interface sessions are distributed to each NP7 processor. For example, you can set up NPU port mapping to send all traffic from a front panel data interface to a specific NP7 processor LAG, or even to just one of the interfaces in that LAG.

Note

If you have applied a hyperscale firewall license to a FortiGate 4800F or 4801F you should not configure NPU port mapping. Instead you should use the npu-group-id command to assign NP7 processor groups to hyperscale firewall VDOMs.

Use the following command to configure NPU port mapping:

config system npu

config port-npu-map

edit <interface-name>

set npu-group-index <index>

end

<interface-name> the name of a front panel data interface.

<index> determines how sessions from the selected front panel data interface are handled by the integrated switch fabric. The available <index> options depend on the NP7 configuration of your FortiGate. For the FortiGate-4800F or 4801F, <index> can be 0 to 64. Enter ? to see the effect of each <index> value.

Here are some examples of <index> values for the FortiGate-4800F and 4801F:

  • 0, assign the front panel data interface to NP#0-15, the default. Sessions from the front panel data interface are distributed among all sixteen NP7 LAGs.
  • 1, assign the front panel data interface to the LAG connected to NP#0. Sessions from the front panel data interface are sent to the LAG connected to NP#0.
  • 18, assign the front panel data interface to NP#2-3. Sessions from the front panel data interface are distributed between the LAGs connected to NP#2 and NP#3.
  • 28, assign the front panel data interface to NP#12-15. Sessions from the front panel data interface are distributed between the LAGs connected to NP#12, NP#13, NP#14, and NP#15.
  • 39, assign the front panel data interface to NP#3-link0. Sessions from the front panel data interface are sent to np3_link0, which is one of the interfaces connected to NP#3.
  • 62, assign the front panel data interface to NP#14-link1. Sessions from the front panel data interface are sent to np14_link1, which is one of the interfaces connected to NP#14.
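The example <index> values above can be collected into a small lookup table. This sketch only covers the indexes listed in this document; enter ? in the CLI for the complete list on your FortiGate:

```python
# The example npu-group-index values described above (illustrative subset;
# the full 0-64 table comes from the FortiGate CLI, not this sketch).
NPU_GROUP_INDEX = {
    0: "NP#0-15",      # default: distribute among all sixteen NP7 LAGs
    1: "NP#0",         # the LAG connected to NP#0
    18: "NP#2-3",
    28: "NP#12-15",
    39: "NP#3-link0",  # a single link of NP#3's LAG
    62: "NP#14-link1",
}

print(NPU_GROUP_INDEX[28])  # NP#12-15
```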

For example, use the following syntax to assign the FortiGate-4800F interfaces 25 and 26 to NP7#10 and NP7#11 and interfaces 27 and 28 to NP7#14 and NP7#15:

config system npu

config port-npu-map

edit port25

set npu-group-index 22

next

edit port26

set npu-group-index 22

next

edit port27

set npu-group-index 24

next

edit port28

set npu-group-index 24

end

end

You can use the diagnose npu np7 port-list command to see the current NPU port map configuration. While the FortiGate-4800F or 4801F is processing traffic, you can use the diagnose npu np7 cgmac-stats <npu-id> command to show how traffic is distributed to the NP7 links.

For example, after making the changes described in the example, the NP_group column of the diagnose npu np7 port-list command output for port25 to port28 shows the new mapping:

diagnose npu np7 port-list 
Front Panel Port:
Name     Max_speed(Mbps) Dflt_speed(Mbps) NP_group      group_from_vdom Switch_id SW_port_id
-------- --------------- ---------------  ------------- --------------- --------- ----------
.
.
.
port25   400000          400000           NP#10-11       1               0         0
port26   400000          400000           NP#10-11       1               0         8
port27   400000          400000           NP#14-15       1               0         16
port28   400000          400000           NP#14-15       1               0         24

FortiGate 4800F and 4801F fast path architecture

FortiGate 4800F and 4801F fast path architecture

The FortiGate 4800F and 4801F each include sixteen NP7 processors. All front panel data interfaces and the NP7 processors connect to the integrated switch fabric (ISF). All supported traffic passing between any two data interfaces can be offloaded by the NP7 processors. Because of the ISF, all data traffic passes from the data interfaces through the ISF to the NP7 processors. Data traffic processed by the CPU takes a dedicated data path through the ISF and an NP7 processor to the CPU.

The FortiGate 4800F and 4801F models feature the following front panel interfaces:

  • Two 10/100/1000BASE-T RJ45 (MGMT1 and MGMT2, not connected to the NP7 processor).
  • Twelve 50/25/10/1 GigE SFP56 (HA1, HA2 , AUX1, AUX2, 1 to 8) HA1, HA2 , AUX1, AUX2 not connected to the NP7 processors.
  • Twelve 200/100/40 GigE QSFP56 (9 to 20). Each of these interfaces can be split into four 50 GigE SFP28/SFP56 interfaces.
  • Eight 400/200/100/40 GigE QSFP-DD (21 to 28). Each of these interfaces can be split into eight 50GigE interfaces, four 100GigE interfaces, or two 200 GigE interfaces.

The MGMT interfaces are not connected to the NP7 processors. Management traffic passes to the CPU over a dedicated management path that is separate from the data path. You can also dedicate separate CPU resources for management traffic to further isolate management processing from data processing (see Improving GUI and CLI responsiveness (dedicated management CPU)).

The HA interfaces are also not connected to the NP7 processors. To help provide better HA stability and resiliency, HA traffic uses a dedicated physical control path that provides HA control traffic separation from data traffic processing.

The separation of management and HA traffic from data traffic keeps management and HA traffic from affecting the stability and performance of data traffic processing.

The AUX interfaces are connected to the NP7 processors and function similar to data interfaces. You can use the AUX interfaces for FGSP or FGCP HA session synchronization. If you license the FortiGate 4800F or 4801F for hyperscale firewall, you can use the AUX interfaces for hardware logging or FGGP or FGSP HA hardware session synchronization.

Note

You can use the port-path-option option of the config system npu command to connect or disconnect the HA and AUX interfaces to the NP7 processors. See config port-path-option.

You can use the following command to display the FortiGate 4800F and 4801F NP7 configuration. The command output shows that all sixteen NP7s are connected to all front panel data interfaces and to the AUX 1 and AUX2 interfaces.

FortiGate-4801F # diagnose npu np7 port-list
Front Panel Port:
Name     Max_speed(Mbps) Dflt_speed(Mbps) NP_group      group_from_vdom Switch_id SW_port_id
-------- --------------- ---------------  ------------- --------------- --------- ----------
port1    50000           50000            NP#0-15       1               0         167
port2    50000           50000            NP#0-15       1               0         168
port3    50000           50000            NP#0-15       1               0         169
port4    50000           50000            NP#0-15       1               0         170
port5    50000           50000            NP#0-15       1               0         172
port6    50000           50000            NP#0-15       1               0         173
port7    50000           50000            NP#0-15       1               0         174
port8    50000           50000            NP#0-15       1               0         175
port9    200000          200000           NP#0-15       1               0         179
port10   200000          200000           NP#0-15       1               0         183
port11   200000          200000           NP#0-15       1               0         191
port12   200000          200000           NP#0-15       1               0         187
port13   200000          200000           NP#0-15       1               0         195
port14   200000          200000           NP#0-15       1               0         199
port15   200000          200000           NP#0-15       1               0         207
port16   200000          200000           NP#0-15       1               0         203
port17   200000          200000           NP#0-15       1               0         211
port18   200000          200000           NP#0-15       1               0         215
port19   200000          200000           NP#0-15       1               0         223
port20   200000          200000           NP#0-15       1               0         219
port21   400000          400000           NP#0-15       1               0         227
port22   400000          400000           NP#0-15       1               0         235
port23   400000          400000           NP#0-15       1               0         243
port24   400000          400000           NP#0-15       1               0         251
port25   400000          400000           NP#0-15       1               0         0
port26   400000          400000           NP#0-15       1               0         8
port27   400000          400000           NP#0-15       1               0         16
port28   400000          400000           NP#0-15       1               0         24
aux1     50000           50000            NP#0-15       1               0         165
aux2     50000           50000            NP#0-15       1               0         166
-------- --------------- ---------------  ------------- --------- ----------

NP Port:
Name   Switch_id SW_port_id
------ --------- ----------
np0_0  0         95
np0_1  0         91
np1_0  0         83
np1_1  0         87
np2_0  0         56
np2_1  0         60
np3_0  0         52
np3_1  0         48
np4_0  0         79
np4_1  0         75
np5_0  0         67
np5_1  0         71
np6_0  0         40
np6_1  0         44
np7_0  0         36
np7_1  0         32
np8_0  0         159
np8_1  0         155
np9_0  0         147
np9_1  0         151
np10_0 0         127
np10_1 0         123
np11_0 0         115
np11_1 0         119
np12_0 0         143
np12_1 0         139
np13_0 0         131
np13_1 0         135
np14_0 0         111
np14_1 0         107
np15_0 0         99
np15_1 0         103
------ --------- ----------
* Max_speed: Maximum speed, Dflt_speed: Default speed
* SW_port_id: Switch port ID

The command output also shows the maximum and default speeds of each interface.

The integrated switch fabric distributes sessions from the data interfaces to the NP7 processors. The sixteen NP7 processors have a bandwidth capacity of 200Gigabit x 16 = 3200Gigabit. If all data interfaces were operating at their maximum bandwidth, the NP7 processors would not be able to offload all the traffic. You can use NPU port mapping to control how sessions are distributed to NP7 processors.

You can add LAGs to improve performance. For details, see Increasing NP7 offloading capacity using link aggregation groups (LAGs).

For information about hyperscale firewall support, see the Hyperscale Firewall Guide.

Assigning an NP7 processor group to a hyperscale firewall VDOM

If a FortiGate has more than one NP7 processor, to support a hyperscale firewall VDOM, the NP7 processors need to be connected together using RLT channels. Due to the number of channels in an NP7 processor, a maximum of six NP7 processors can be connected together to support a hyperscale firewall VDOM. Any FortiGate with more than six NP7 processors has to be configured to limit the number of NP7 processors that a hyperscale firewall VDOM can send sessions to. FortiGate models with six or fewer NP7 processors can support hyperscale firewall VDOMs without dividing the NP7 processors into different groups.

Note

Because of the NP7 processor groups feature, if you have applied a hyperscale firewall license to a FortiGate 4800F or 4801F you should not configure NPU port mapping. Instead you should use the npu-group-id command to assign NP7 processor groups to hyperscale firewall VDOMs.

Since the FortiGate 4800F and 4801F each have sixteen NP7 processors, these models include a new configuration option to assign a group of NP7 processors to to each hyperscale firewall VDOM. In the current implementation, the FortiGate 4800F or 4801F supports assigning a group of four NP7 processors to a hyperscale firewall VDOM. Fortinet could have created different group sizes, but choose four groups of four to make it easier to evenly distribute the processing load among all sixteen of the NP7 processors.

After creating a hyperscale firewall VDOM on the FortiGate 4800F or 4801F, the first thing you should do is use the following options to enable hyperscale firewall features by setting the policy-offload-level to full-offload and to assign an NPU processor group (also called an NP group or an NPU group) to the newly created hyperscale firewall VDOM. You must set the NP7 processor group for the hyperscale firewall VDOM before adding any interfaces to the VDOM. The CLI will not accept the command to assign an NP7 processor group if interfaces have already been added to the hyperscale firewall VDOM.

config system settings

set policy-offload-level full-offload

set npu-group-id {0 | 1 | 2 | 3}

end

0 assign this hyperscale firewall VDOM to NP7 processor group NP#0-3, which includes NP#0, NP#1, NP#2, and NP#3.

1 assign this hyperscale firewall VDOM to NP7 processor group NP#4-7, which includes NP#4, NP#5, NP#6, and NP#7.

2 assign this hyperscale firewall VDOM to NP7 processor group NP#8-11, which includes NP#8, NP#9, NP#10, and NP#11.

3 assign this hyperscale firewall VDOM to NP7 processor group NP#12-15, which includes NP#12, NP#13, NP#14, and NP#15.

You can only assign one NP7 processor group to a hyperscale firewall VDOM.

You can assign the same NP7 processor group to multiple hyperscale firewall VDOMs.

You can now add interfaces to the hyperscale firewall VDOM and continue with creating hyperscale firewall policies and so on.

When you add an interface to a hyperscale firewall VDOM, that interface is assigned to the same NP7 processor group as the VDOM. You can use the diagnose npu np7 port-list command to see the NP7 processor group (check the NP_group column) assigned to each interface.

A VLAN is assigned to the same NP7 processor group as the physical interface that the VLAN is added to. You can move a VLAN to a different hyperscale firewall VDOM than the physical interface, as long as the VDOM is assigned to the same NP7 group as the hyperscale firewall VDOM containing the physical interface.

All of the members of a LAG must be assigned to the same NP7 group.

NP7 processor groups do not affect NP7 offloading of sessions in non-hyperscale firewall VDOMs (such as the root VDOM).

NP7 processor groups and hyperscale hardware logging

On a FortiGate 4800F or 4801F, hyperscale hardware logging can only send logs to interfaces in the same NP7 processor group as the NP7 processors that are handling the hyperscale sessions.

This means that, on a FortiGate 4800F or 4801F, hyperscale hardware logging servers must include a hyperscale firewall VDOM. This VDOM must be assigned the same NP7 processor group as the hyperscale firewall VDOM that is processing the hyperscale traffic being logged. This can be the same hyperscale VDOM or another hyperscale firewall VDOM that is assigned the same NP7 processor group.

For more information about hyperscale firewall hardware logging, see Configuring hardware logging.

The following example hyperscale hardware logging configuration could be created for the hyperscale VDOM named Test-hw12. The configuration is a syslog configuration that includes three logging servers each assigned to the Test-hw12 hyperscale firewall VDOM.

config log npu-server
    set log-processor host
    set syslog-facility 0
    set syslog-severity 7
    config server-info
        edit 1
            set vdom Test-hw12
            set ipv4-server 10.10.10.72
            set source-port 2002
            set dest-port 514
        next
        edit 2
            set vdom Test-hw12
            set ipv4-server 10.10.10.73
            set source-port 2003
            set dest-port 514
        next
        edit 3
            set vdom Test-hw12
            set ipv4-server 10.10.10.74
            set source-port 2004
            set dest-port 514
        next
    end
    config server-group
        edit HyperScale_Syslog
            set log-format syslog
            set server-number 3
            set server-start-id 1
        next
    end
end

Splitting the port9 to port20 interfaces

You can use the following command to split each FortiGate 4800F or 4801F 9 to 20 (port9 to port20) QSFP56 interface into four 50 GigE SFP28/SFP56 interfaces. For example, to split interfaces 9 and 16 (port9 and port16), enter the following command:

config system global
    config split-port-mode
        edit port9
            set split-mode 4x50G
        next
        edit port16
            set split-mode 4x50G
        next
    end
end

The FortiGate 4800F or 4801F restarts, and when it starts up:

  • The port9 interface has been replaced by four 50 GigE interfaces named port9/1 to port9/4.

  • The port16 interface has been replaced by four 50 GigE interfaces named port16/1 to port16/4.

Note

A configuration change that causes a FortiGate to restart can disrupt the operation of an FGCP cluster. If possible, you should make this configuration change to the individual FortiGates before setting up the cluster. If the cluster is already operating, you should temporarily remove the secondary FortiGate(s) from the cluster, change the configuration of the individual FortiGates and then re-form the cluster. You can remove FortiGate(s) from a cluster using the Remove Device from HA cluster button on the System > HA GUI page. For more information, see Disconnecting a FortiGate.

By default, the speed of each split interface is set to 50000full (50GigE). These interfaces can operate as 25GigE, 10GigE, or 1GigE interfaces depending on the transceivers and breakout cables. You can use the config system interface command to change the speeds of the split interfaces.
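For example, assuming port9 has been split and a 25 GigE breakout cable is in use (the interface and speed shown are illustrative; valid speed values depend on the installed transceiver), a sketch of changing a split interface's speed:

```
config system interface
    edit port9/1
        set speed 25000full
    next
end
```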

You can use the following command to restore a split interface to the default (not split) configuration:

config system global
    config split-port-mode
        edit port9
            set split-mode disable
        next
    end
end

Splitting the port21 to port28 interfaces

You can use the following command to split each FortiGate 4800F or 4801F 21 to 28 (port21 to port28) QSFP-DD interface.

config system global
    config split-port-mode
        edit port21
            set split-mode {disable | 8x50G | 4x100G | 2x200G}
        next
    end
end

  • disable: restore a split interface to the default (not split) configuration.

  • 8x50G: split the interface into eight 50 GigE interfaces.

  • 4x100G: split the interface into four 100 GigE interfaces.

  • 2x200G: split the interface into two 200 GigE interfaces.

After splitting one or more interfaces, the FortiGate 4800F or 4801F restarts and when it starts up the split interfaces are available.

Note

A configuration change that causes a FortiGate to restart can disrupt the operation of an FGCP cluster. If possible, you should make this configuration change to the individual FortiGates before setting up the cluster. If the cluster is already operating, you should temporarily remove the secondary FortiGate(s) from the cluster, change the configuration of the individual FortiGates and then re-form the cluster. You can remove FortiGate(s) from a cluster using the Remove Device from HA cluster button on the System > HA GUI page. For more information, see Disconnecting a FortiGate.

For example, use the following command to split the port24 interface into eight 50GigE interfaces:

config system global
    config split-port-mode
        edit port24
            set split-mode 8x50G
        next
    end
end

The FortiGate 4800F or 4801F restarts, and when it starts up the port24 interface has been replaced by eight 50 GigE interfaces named port24/1 to port24/8.

By default, the speed of each split interface is set to 50000full (50GigE). These interfaces can operate as 25GigE, 10GigE, or 1GigE interfaces depending on the transceivers and breakout cables. You can use the config system interface command to change the speeds of the split interfaces.

Configuring NPU port mapping

The default FortiGate-4800F and 4801F port mapping configuration results in sessions passing from front panel data interfaces to the integrated switch fabric. The integrated switch fabric distributes these sessions among the NP7 processors. Each NP7 processor is connected to the switch fabric with a LAG that consists of two 100-Gigabit CAUI-4 interfaces. The integrated switch fabric distributes sessions to the LAGs and each LAG distributes sessions between the two interfaces connected to the NP7 processor.

You can use NPU port mapping to override how data network interface sessions are distributed to each NP7 processor. For example, you can set up NPU port mapping to send all traffic from a front panel data interface to a specific NP7 processor LAG, or even to just one of the interfaces in that LAG.

Note

If you have applied a hyperscale firewall license to a FortiGate 4800F or 4801F, you should not configure NPU port mapping. Instead, use the npu-group-id command to assign NP7 processor groups to hyperscale firewall VDOMs.

Use the following command to configure NPU port mapping:

config system npu
    config port-npu-map
        edit <interface-name>
            set npu-group-index <index>
        next
    end
end

  • <interface-name>: the name of a front panel data interface.

  • <index>: select different values of <index> to change how sessions from the selected front panel data interface are handled by the integrated switch fabric. The list of available <index> options depends on the NP7 configuration of your FortiGate. For the FortiGate-4800F or 4801F, <index> can be 0 to 64. Use ? to see the effect of each <index> value.

Here are some examples of <index> values for the FortiGate-4800F and 4801F:

  • 0, assign the front panel data interface to NP#0-15, the default. Sessions from the front panel data interface are distributed among all sixteen NP7 LAGs.
  • 1, assign the front panel data interface to the LAG connected to NP#0. Sessions from the front panel data interface are sent to the LAG connected to NP#0.
  • 18, assign the front panel data interface to NP#2-3. Sessions from the front panel data interface are distributed between the LAGs connected to NP#2 and NP#3.
  • 28, assign the front panel data interface to NP#12-15. Sessions from the front panel data interface are distributed among the LAGs connected to NP#12, NP#13, NP#14, and NP#15.
  • 39, assign the front panel data interface to NP#3-link0. Sessions from the front panel data interface are sent to np3_link0, which is one of the interfaces connected to NP#3.
  • 62, assign the front panel data interface to NP#14-link1. Sessions from the front panel data interface are sent to np14_link1, which is one of the interfaces connected to NP#14.

For example, use the following syntax to assign the FortiGate-4800F interfaces 25 and 26 to NP7#10 and NP7#11 and interfaces 27 and 28 to NP7#14 and NP7#15:

config system npu
    config port-npu-map
        edit port25
            set npu-group-index 22
        next
        edit port26
            set npu-group-index 22
        next
        edit port27
            set npu-group-index 24
        next
        edit port28
            set npu-group-index 24
        next
    end
end

You can use the diagnose npu np7 port-list command to see the current NPU port map configuration. While the FortiGate-4800F or 4801F is processing traffic, you can use the diagnose npu np7 cgmac-stats <npu-id> command to show how traffic is distributed to the NP7 links.

For example, after making the changes described in the example, the NP_group column of the diagnose npu np7 port-list command output for port25 to port28 shows the new mapping:

diagnose npu np7 port-list 
Front Panel Port:
Name     Max_speed(Mbps) Dflt_speed(Mbps) NP_group      group_from_vdom Switch_id SW_port_id
-------- --------------- ---------------  ------------- --------------- --------- ----------
.
.
.
port25   400000          400000           NP#10-11       1               0         0
port26   400000          400000           NP#10-11       1               0         8
port27   400000          400000           NP#14-15       1               0         16
port28   400000          400000           NP#14-15       1               0         24