Interrupt affinity
In addition to enabling SR-IOV on the VM host, you must configure interrupt affinity for your FortiGate-VM to take full advantage of the SR-IOV performance improvements. Interrupt affinity (also called CPU affinity) maps FortiGate-VM interrupts to the CPUs that are assigned to your FortiGate-VM. You use a CPU affinity mask to define the CPUs that the interrupts are assigned to.
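The mask is a hexadecimal bitmask in which each bit corresponds to one CPU (bit 0 is CPU 0, bit 1 is CPU 1, and so on); setting more than one bit allows any of those CPUs to service the interrupt. For example:
0x0000000000000001 selects CPU 0
0x0000000000000002 selects CPU 1
0x0000000000000004 selects CPU 2
0x0000000000000008 selects CPU 3
0x000000000000000f selects CPUs 0, 1, 2, and 3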
A common use of this feature is to improve your FortiGate-VM's networking performance by:
- Adding multiple host CPUs to your FortiGate-VM on the VM host.
- Configuring CPU affinity on the VM host to specify the CPUs that the FortiGate-VM can use.
- Configuring other VM clients on the VM host to use other CPUs.
- Assigning the FortiGate-VM's network interface interrupts to a CPU affinity mask that includes the CPUs that the FortiGate-VM can use.
In this way, interrupts for traffic on your FortiGate-VM interfaces are handled by the configured host CPUs. This configuration can improve FortiGate-VM network performance because you have dedicated VM host CPU cycles to processing your FortiGate-VM's network traffic.
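How you pin the FortiGate-VM's virtual CPUs to host CPUs depends on your hypervisor. As a minimal sketch, assuming a KVM/libvirt host and a FortiGate-VM with four vCPUs, the domain XML could pin vCPUs 0 through 3 to host CPUs 0 through 3 (the CPU numbers here are examples only; consult your hypervisor's documentation for the equivalent setting):
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>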
You can use the following CLI command to configure interrupt affinity for your FortiGate-VM:
config system affinity-interrupt
edit <index>
set interrupt <interrupt-name>
set affinity-cpumask <cpu-affinity-mask>
next
end
Where:
- <interrupt-name> is the name of the interrupt to associate with a CPU affinity mask. You can view your FortiGate-VM interrupts using the diagnose hardware sysinfo interrupts command. Usually you associate all of the interrupts for a given interface with the same CPU affinity mask.
- <cpu-affinity-mask> is the CPU affinity mask for the CPUs that will process the associated interrupt.
For example, consider the following configuration:
- The port2 and port3 interfaces of a FortiGate-VM send and receive most of the traffic.
- On the VM host, you have set up CPU affinity between your FortiGate-VM and four CPUs (CPUs 0, 1, 2, and 3).
- SR-IOV is enabled and SR-IOV interfaces use the i40evf interface driver.
The output from the diagnose hardware sysinfo interrupts command shows that port2 has the following transmit and receive interrupts:
i40evf-port2-TxRx-0
i40evf-port2-TxRx-1
i40evf-port2-TxRx-2
i40evf-port2-TxRx-3
The output from the diagnose hardware sysinfo interrupts command shows that port3 has the following transmit and receive interrupts:
i40evf-port3-TxRx-0
i40evf-port3-TxRx-1
i40evf-port3-TxRx-2
i40evf-port3-TxRx-3
Use the following command to associate the port2 and port3 interrupts with CPUs 0, 1, 2, and 3.
config system affinity-interrupt
edit 1
set interrupt "i40evf-port2-TxRx-0"
set affinity-cpumask "0x0000000000000001"
next
edit 2
set interrupt "i40evf-port2-TxRx-1"
set affinity-cpumask "0x0000000000000002"
next
edit 3
set interrupt "i40evf-port2-TxRx-2"
set affinity-cpumask "0x0000000000000004"
next
edit 4
set interrupt "i40evf-port2-TxRx-3"
set affinity-cpumask "0x0000000000000008"
next
edit 5
set interrupt "i40evf-port3-TxRx-0"
set affinity-cpumask "0x0000000000000001"
next
edit 6
set interrupt "i40evf-port3-TxRx-1"
set affinity-cpumask "0x0000000000000002"
next
edit 7
set interrupt "i40evf-port3-TxRx-2"
set affinity-cpumask "0x0000000000000004"
next
edit 8
set interrupt "i40evf-port3-TxRx-3"
set affinity-cpumask "0x0000000000000008"
next
end
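In this example, each CPU affinity mask selects a single CPU, so the TxRx-0 interrupts of port2 and port3 are both serviced by CPU 0, the TxRx-1 interrupts by CPU 1, and so on. As a variation (a sketch, not part of the original example), you could instead allow any of the four CPUs to service a given interrupt by using a mask that covers all of them:
config system affinity-interrupt
edit 1
set interrupt "i40evf-port2-TxRx-0"
set affinity-cpumask "0x000000000000000f"
next
end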