
OpenStack Administration Guide

FortiGate-VM interrupt affinity

To take full advantage of SR-IOV performance improvements, you must configure interrupt affinity for your FortiGate-VM in addition to enabling SR-IOV on the VM host. Interrupt affinity (also called CPU affinity) maps FortiGate-VM interrupts to the CPUs that are assigned to your FortiGate-VM. You use a CPU affinity mask to define the CPUs that the interrupts are assigned to.

A common use of this feature would be to improve your FortiGate-VM's networking performance by doing the following:

  • On the VM host:
    • Add multiple host CPUs to your FortiGate-VM.
    • Configure CPU affinity to specify the CPUs that the FortiGate-VM can use.
    • Configure other VM clients on the VM host to use other CPUs.
  • On the FortiGate-VM, assign network interface interrupts to a CPU affinity mask that includes the CPUs that the FortiGate-VM can use.

In this way, the configuration uses all available CPU interrupts for the configured host CPUs to process traffic on your FortiGate interfaces. This configuration could lead to improved FortiGate-VM network performance because you have dedicated VM host CPU cycles processing your FortiGate-VM's network traffic.
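As a concrete illustration of how a CPU affinity mask encodes a set of CPUs, bit n of the mask corresponds to CPU n. The following Python sketch builds such a mask; the helper name is ours for illustration and is not part of any Fortinet tooling:

```python
def cpu_affinity_mask(cpus):
    """Return a 64-bit hex mask string with one bit set per CPU number."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu  # bit n selects CPU n
    return f"0x{mask:016x}"

print(cpu_affinity_mask([0]))           # 0x0000000000000001 (CPU 0 only)
print(cpu_affinity_mask([0, 1, 2, 3]))  # 0x000000000000000f (CPUs 0-3)
```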

You can use the following CLI command to configure interrupt affinity for your FortiGate-VM:

config system affinity-interrupt
    edit <index>
        set interrupt <interrupt-name>
        set affinity-cpumask <cpu-affinity-mask>
    next
end

The following defines the values for the command:

<interrupt-name>
    Name of the interrupt to associate with a CPU affinity mask. You can view your FortiGate-VM interrupts using the diagnose hardware sysinfo interrupts command. Usually you would associate all of the interrupts for a given interface with the same CPU affinity mask.

<cpu-affinity-mask>
    CPU affinity mask (in hexadecimal) for the CPUs that process the associated interrupt.
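To check which CPUs a given mask selects, you can decode it bit by bit. This is an illustrative Python sketch, not a FortiOS command:

```python
def cpus_in_mask(mask_hex):
    """Return the list of CPU numbers whose bits are set in a hex mask string."""
    mask = int(mask_hex, 16)
    return [bit for bit in range(64) if (mask >> bit) & 1]

print(cpus_in_mask("0x0000000000000008"))  # [3] - this mask selects CPU 3 only
print(cpus_in_mask("0x0000000000000003"))  # [0, 1] - CPUs 0 and 1
```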

For example, consider the following configuration:

  • The FortiGate-VM port2 and port3 interfaces send and receive most of the traffic.
  • On the VM host, you have set up CPU affinity between your FortiGate-VM and four CPUs (CPU 0, 1, 2, and 3).
  • SR-IOV is enabled and SR-IOV interfaces use the i40evf interface driver.

Output from diagnose hardware sysinfo interrupts shows that port2 has the following transmit and receive interrupts:

    i40evf-port2-TxRx-0
    i40evf-port2-TxRx-1
    i40evf-port2-TxRx-2
    i40evf-port2-TxRx-3

Output from diagnose hardware sysinfo interrupts shows that port3 has the following transmit and receive interrupts:

    i40evf-port3-TxRx-0
    i40evf-port3-TxRx-1
    i40evf-port3-TxRx-2
    i40evf-port3-TxRx-3

Use the following command to associate the port2 and port3 interrupts with CPU 0, 1, 2, and 3:

config system affinity-interrupt
    edit 1
        set interrupt "i40evf-port2-TxRx-0"
        set affinity-cpumask "0x0000000000000001"
    next
    edit 2
        set interrupt "i40evf-port2-TxRx-1"
        set affinity-cpumask "0x0000000000000002"
    next
    edit 3
        set interrupt "i40evf-port2-TxRx-2"
        set affinity-cpumask "0x0000000000000004"
    next
    edit 4
        set interrupt "i40evf-port2-TxRx-3"
        set affinity-cpumask "0x0000000000000008"
    next
    edit 5
        set interrupt "i40evf-port3-TxRx-0"
        set affinity-cpumask "0x0000000000000001"
    next
    edit 6
        set interrupt "i40evf-port3-TxRx-1"
        set affinity-cpumask "0x0000000000000002"
    next
    edit 7
        set interrupt "i40evf-port3-TxRx-2"
        set affinity-cpumask "0x0000000000000004"
    next
    edit 8
        set interrupt "i40evf-port3-TxRx-3"
        set affinity-cpumask "0x0000000000000008"
    next
end
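The example above follows a simple pattern: queue n of each interface is pinned to CPU n. When you have many interfaces or queues, the whole block can be generated programmatically. This hypothetical Python helper just emits CLI text in the shape shown above (the function name and parameters are ours, not a Fortinet API):

```python
def affinity_interrupt_config(interfaces, cpus, driver="i40evf"):
    """Emit a 'config system affinity-interrupt' block that pins queue n
    of each interface to CPU n, one entry per interrupt."""
    lines = ["config system affinity-interrupt"]
    index = 1
    for iface in interfaces:
        for queue, cpu in enumerate(cpus):
            lines.append(f"    edit {index}")
            lines.append(f'        set interrupt "{driver}-{iface}-TxRx-{queue}"')
            lines.append(f'        set affinity-cpumask "0x{1 << cpu:016x}"')
            lines.append("    next")
            index += 1
    lines.append("end")
    return "\n".join(lines)

print(affinity_interrupt_config(["port2", "port3"], [0, 1, 2, 3]))
```

Generating the block this way also guarantees that every entry gets a unique edit index, which matters because re-editing an existing index would overwrite that entry rather than add a new one.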
