FortiGate-VM on VMware ESXi


SR-IOV

FortiGate VMs installed on VMware platforms support Single Root I/O Virtualization (SR-IOV) to provide FortiGate VMs with direct access to hardware devices. Enabling SR-IOV means that one PCIe device (CPU or network card) can function for a FortiGate VM as multiple separate physical devices (CPUs or network devices). SR-IOV reduces latency and improves CPU efficiency by allowing network traffic to pass directly between a FortiGate VM and a network card without passing through the VMware kernel and without using virtual switching.

FortiGate VMs benefit from SR-IOV because SR-IOV optimizes network performance and reduces latency. FortiGate VMs do not use VMware features that are incompatible with SR-IOV, so you can enable SR-IOV without negatively affecting your FortiGate VM.

SR-IOV hardware compatibility

SR-IOV requires that the hardware on which your VMware host is running has BIOS, physical NIC, and network driver support for SR-IOV.

To enable SR-IOV, your VMware platform must be running on hardware that is compatible with SR-IOV and with FortiGate VMs. FortiGate-VMs require network cards that are compatible with ixgbevf or i40evf drivers.
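For example, from the ESXi host CLI you can list the physical network adapters in the host and check which ones currently have SR-IOV enabled. This is a minimal sketch using standard esxcli namespaces; the adapter name vmnic2 is a placeholder.

$ esxcli network nic list

$ esxcli network sriovnic list

$ esxcli network nic get -n vmnic2

The first command lists all physical NICs and the drivers they use, the second lists NICs that have SR-IOV enabled, and the third shows driver and firmware details for a single adapter.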

Install optimal network card drivers

To support SR-IOV and other performance optimization techniques, install the most up-to-date network drivers.

You can find information about the network cards installed in your host hardware from the VMware host client or from the ESXi CLI.

Then research the most up-to-date drivers for your hardware and install them on the ESXi host.
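As a sketch, and assuming the adapter is vmnic2 and the updated driver is packaged as an offline bundle (the path shown is hypothetical), you can check the driver and firmware currently in use and then install a newer driver from the ESXi CLI:

$ esxcli network nic get -n vmnic2

$ esxcli software vib install -d /vmfs/volumes/datastore1/net-i40e-offline-bundle.zip

Reboot the host after installing the driver so that it takes effect.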

Create SR-IOV virtual interfaces

Complete the following procedure to enable SR-IOV. This procedure requires restarting the VMware host and powering down the FortiGate-VM, so it should only be done during a maintenance window or when the network is not busy.

For example, if you are using the VMware host client:

  1. Navigate to Manage > Hardware > PCI Devices to view all of the PCI devices on the host.
  2. Select the SR-IOV capable filter to view the PCI devices (network adapters) that are compatible with SR-IOV.
  3. Select a network adapter and select Configure SR-IOV.
  4. Enable SR-IOV and specify the Number of virtual functions.
  5. Save your changes and restart the VMware host.

For example, if you are using the vSphere web client:

  1. Navigate to the host with the SR-IOV physical network adapter that you want to add virtual interfaces to.
  2. In the Networking part of the Manage tab, select Physical Adapters.
  3. Select the physical adapter for which to enable SR-IOV settings.
  4. Enable SR-IOV and specify the Number of virtual functions.
  5. Save your changes and restart the VMware host.

You can also use the following command from the ESXi host CLI to add virtual interfaces to one or more compatible network adapters:

$ esxcli system module parameters set -m <driver-name> -p "max_vfs=<virtual-interfaces>"

Where <driver-name> is the name of the network adapter driver (for example, ixgbevf or i40evf) and <virtual-interfaces> is a comma-separated list of the number of virtual interfaces to allow for each physical interface.

For example, if your VMware host includes three i40evf network adapters and you want to enable 6 virtual interfaces on each network adapter, enter the following:

$ esxcli system module parameters set -m i40evf -p "max_vfs=6,6,6"
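After the host restarts, you can confirm the module parameter and list the virtual functions that were created. This is a sketch; vmnic2 is a placeholder for the SR-IOV capable adapter.

$ esxcli system module parameters list -m i40evf

$ esxcli network sriovnic vf list -n vmnic2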

Assign SR-IOV virtual interfaces to a FortiGate VM

  1. Power off the FortiGate-VM and open its virtual hardware settings.
  2. Create or edit a network adapter and set its type to SR-IOV passthrough.
  3. Select the physical network adapter for which you have enabled SR-IOV.
  4. Optionally associate the FortiGate VM network adapter with the port group on a standard or distributed switch.
  5. To guarantee that the pass-through device can access all virtual machine memory, in the Memory section select Reserve all guest memory.
  6. Save your changes and power on the FortiGate VM.
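After the FortiGate VM powers on, you can confirm that an interface is bound to an SR-IOV virtual function by checking its driver from the FortiGate CLI. This is a sketch; port2 is a placeholder interface name, and the output should report the ixgbevf or i40evf driver.

diagnose hardware deviceinfo nic port2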

Set up VMware CPU affinity


Configuring CPU affinity on your FortiGate VM further builds on the benefits of SR-IOV by enabling the FortiGate-VM to align interrupts from interfaces to specific CPUs.

By specifying a CPU affinity setting for each virtual machine, you can restrict the assignment of virtual machines to a subset of the available processors in multiprocessor systems. Each virtual machine then runs only on the processors in its affinity set.

For example, if you are using the vSphere web client, use the following steps:

  1. Power off the FortiGate VM.
  2. Edit the FortiGate VM hardware settings and select Virtual Hardware.
  3. Select CPU options.
  4. In Scheduling Affinity, specify the CPUs to have affinity with the FortiGate VM. For best results, the affinity list should include one entry for each of the FortiGate VM's virtual CPUs.
  5. Save your changes.
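The same setting is stored in the virtual machine's configuration (.vmx) file. As a minimal sketch, assuming a FortiGate VM with four vCPUs pinned to host CPUs 0 to 3, the entry typically looks like the following:

sched.cpu.affinity = "0,1,2,3"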

Setting up FortiGate-VM interrupt affinity

In addition to enabling SR-IOV in the VM host, to fully take advantage of SR-IOV performance improvements you need to configure interrupt affinity for your FortiGate-VM. Interrupt affinity (also called CPU affinity) maps FortiGate-VM interrupts to the CPUs that are assigned to your FortiGate-VM. You use a CPU affinity mask to define the CPUs that the interrupts are assigned to.

A common use of this feature is to improve your FortiGate-VM's networking performance by:

  • On the VM host, adding multiple host CPUs to your FortiGate-VM.
  • On the VM host, configuring CPU affinity to specify the CPUs that the FortiGate-VM can use.
  • On the VM host, configuring other VM clients to use other CPUs.
  • On the FortiGate-VM, assigning network interface interrupts to a CPU affinity mask that includes the CPUs that the FortiGate-VM can use.

In this way, all of the available CPU interrupts for the configured host CPUs are used to process traffic on your FortiGate interfaces. This configuration could lead to improved FortiGate-VM network performance because you have dedicated VM host CPU cycles to processing your FortiGate-VM's network traffic.

You can use the following CLI command to configure interrupt affinity for your FortiGate-VM:

config system affinity-interrupt
    edit <index>
        set interrupt <interrupt-name>
        set affinity-cpumask <cpu-affinity-mask>
    next
end

Where:

<interrupt-name> is the name of the interrupt to associate with a CPU affinity mask. You can view your FortiGate-VM interrupts using the diagnose hardware sysinfo interrupts command. Usually you would associate all of the interrupts for a given interface with the same CPU affinity mask.

<cpu-affinity-mask> is the CPU affinity mask for the CPUs that will process the associated interrupt.
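The CPU affinity mask is a hexadecimal bit mask in which each bit selects one CPU (bit 0 selects CPU 0, bit 1 selects CPU 1, and so on), so the values used in the following examples map as follows:

0x0000000000000001 selects CPU 0
0x0000000000000002 selects CPU 1
0x0000000000000004 selects CPU 2
0x0000000000000008 selects CPU 3
0x000000000000000F selects CPUs 0, 1, 2, and 3
0x000000000000000E selects CPUs 1, 2, and 3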

For example, consider the following configuration:

  • The port2 and port3 interfaces of a FortiGate-VM send and receive most of the traffic.
  • On the VM host, you have set up CPU affinity between your FortiGate-VM and four CPUs (CPU 0, 1, 2, and 3).
  • SR-IOV is enabled and SR-IOV interfaces use the i40evf interface driver.

The output from the diagnose hardware sysinfo interrupts command shows that port2 has the following transmit and receive interrupts:

i40evf-port2-TxRx-0
i40evf-port2-TxRx-1
i40evf-port2-TxRx-2
i40evf-port2-TxRx-3

The output from the diagnose hardware sysinfo interrupts command shows that port3 has the following transmit and receive interrupts:

i40evf-port3-TxRx-0
i40evf-port3-TxRx-1
i40evf-port3-TxRx-2
i40evf-port3-TxRx-3

Use the following command to associate the port2 and port3 interrupts with CPUs 0, 1, 2, and 3.

config system affinity-interrupt
    edit 1
        set interrupt "i40evf-port2-TxRx-0"
        set affinity-cpumask "0x0000000000000001"
    next
    edit 2
        set interrupt "i40evf-port2-TxRx-1"
        set affinity-cpumask "0x0000000000000002"
    next
    edit 3
        set interrupt "i40evf-port2-TxRx-2"
        set affinity-cpumask "0x0000000000000004"
    next
    edit 4
        set interrupt "i40evf-port2-TxRx-3"
        set affinity-cpumask "0x0000000000000008"
    next
    edit 5
        set interrupt "i40evf-port3-TxRx-0"
        set affinity-cpumask "0x0000000000000001"
    next
    edit 6
        set interrupt "i40evf-port3-TxRx-1"
        set affinity-cpumask "0x0000000000000002"
    next
    edit 7
        set interrupt "i40evf-port3-TxRx-2"
        set affinity-cpumask "0x0000000000000004"
    next
    edit 8
        set interrupt "i40evf-port3-TxRx-3"
        set affinity-cpumask "0x0000000000000008"
    next
end
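To confirm the configuration and see how interrupts are being distributed, you can display the table and re-run the interrupt diagnostic. These are standard FortiOS commands; the output varies by platform and traffic load.

show system affinity-interrupt

diagnose hardware sysinfo interrupts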

Configuring FortiGate-VM affinity packet re-distribution

With SR-IOV enabled on the VM host and interrupt affinity configured on your FortiGate-VM, there is one additional configuration you can add that may improve performance. Most common network interface hardware has restrictions on the number of RX/TX queues that it can process. This can result in some CPUs being much busier than others, and the busy CPUs may develop extensive queues.

You can get around this potential bottleneck by configuring affinity packet re-distribution to allow overloaded CPUs to redistribute packets they receive to other, less busy CPUs. This may result in a more even distribution of packet processing across all of the available CPUs.

You configure packet redistribution for interfaces by associating an interface with an affinity CPU mask. This configuration distributes packets sent and received by that interface to the CPUs defined by the CPU affinity mask associated with the interface.

You can use the following CLI command to configure affinity packet redistribution for your FortiGate-VM:

config system affinity-packet-redistribution
    edit <index>
        set interface <interface-name>
        set affinity-cpumask <cpu-affinity-mask>
    next
end

Where:

<interface-name> is the name of the interface to associate with a CPU affinity mask.

<cpu-affinity-mask> is the CPU affinity mask for the CPUs that will process packets to and from the associated interface.

For example, you can build on the interrupt affinity example above by using the following command to allow packets sent and received by the port3 interface to be redistributed to CPUs 1, 2, and 3 according to the 0xE CPU affinity mask.

config system affinity-packet-redistribution
    edit 1
        set interface port3
        set affinity-cpumask "0xE"
    next
end
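As with interrupt affinity, you can display the resulting configuration to confirm it:

show system affinity-packet-redistribution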

 
