
VMware ESXi Administration Guide


Packet-distribution affinity

With SR-IOV enabled on the VM host and interrupt affinity configured on your FortiGate-VM, there is one additional configuration you can add that may improve performance. Most common network interface hardware has restrictions on the number of RX/TX queues that it can process. This can result in some CPUs being much busier than others, and the busy CPUs may develop extensive queues.

You can get around this potential bottleneck by configuring affinity packet redistribution to allow overloaded CPUs to redistribute packets they receive to other, less busy CPUs. This may result in a more even distribution of packet processing across all available CPUs.

You configure packet redistribution for interfaces by associating an interface with an affinity CPU mask. This configuration distributes packets sent and received by that interface to the CPUs defined by the CPU affinity mask associated with the interface.

You can use the following CLI command to configure affinity packet redistribution for your FortiGate-VM:

config system affinity-packet-redistribution
    edit <index>
        set interface <interface-name>
        set affinity-cpumask <cpu-affinity-mask>
    next
end

Where:

  • <interface-name> is the name of the interface to associate with a CPU affinity mask.
  • <cpu-affinity-mask> is the CPU affinity mask for the CPUs that process packets to and from the associated interface (see the note after this list).
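
The CPU affinity mask is a hexadecimal bitmask in which bit 0 represents CPU 0, bit 1 represents CPU 1, and so on. For example, the mask 0xE is binary 1110, which selects CPUs 1, 2, and 3 and excludes CPU 0.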

For example, you can improve the performance of the interrupt affinity example shown above by adding the following configuration, which allows packets sent and received by the port3 interface to be redistributed to CPUs according to the 0xE CPU affinity mask.

config system affinity-packet-redistribution
    edit 1
        set interface port3
        set affinity-cpumask "0xE"
    next
end
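
For reference, the interrupt affinity configuration that this example builds on is set under config system affinity-interrupt. The following is a minimal sketch only; the interrupt names shown (for example, i40evf-port3-TxRx-0) are hypothetical and depend on the NIC driver and the number of queues exposed to the VM, so check the interrupt names reported by your own FortiGate-VM before applying a similar configuration.

config system affinity-interrupt
    edit 1
        set interrupt "i40evf-port3-TxRx-0"
        set affinity-cpumask "0x2"
    next
    edit 2
        set interrupt "i40evf-port3-TxRx-1"
        set affinity-cpumask "0x4"
    next
    edit 3
        set interrupt "i40evf-port3-TxRx-2"
        set affinity-cpumask "0x8"
    next
end

With the port3 interrupts pinned to CPUs 1, 2, and 3 (masks 0x2, 0x4, and 0x8), the 0xE redistribution mask keeps packet processing for port3 on that same group of CPUs.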
