SR-IOV


FortiGate-VMs installed on VMware ESXi platforms support single root I/O virtualization (SR-IOV), which gives FortiGate-VMs direct access to physical network cards. With SR-IOV enabled, one PCIe network card can present itself to a FortiGate-VM as multiple separate physical devices. SR-IOV reduces latency and improves CPU efficiency by allowing network traffic to pass directly between a FortiGate-VM and a network card, bypassing the VMware ESXi host software and virtual switching.

FortiGate-VMs benefit from SR-IOV because it optimizes network performance and reduces latency and CPU usage. FortiGate-VMs do not use VMware ESXi features that are incompatible with SR-IOV, so you can enable SR-IOV without negatively affecting your FortiGate-VM. SR-IOV uses an I/O memory management unit (IOMMU) to differentiate between traffic streams and to apply memory and interrupt translations between physical functions (PFs) and virtual functions (VFs).

Setting up SR-IOV on VMware ESXi involves creating a PF for each physical network card in the hardware platform. Then, you create VFs that allow FortiGate-VMs to communicate through the PF to the physical network card. VFs are actual PCIe hardware resources and only a limited number of VFs are available for each PF.

SR-IOV hardware compatibility

SR-IOV requires that the hardware and operating system on which your VMware ESXi host runs have BIOS, physical NIC, and network driver support for SR-IOV.

To enable SR-IOV, your VMware ESXi platform must run on hardware that is compatible with both SR-IOV and FortiGate-VMs. FortiGate-VMs require network cards that are compatible with the ixgbevf or i40evf drivers. In addition, the host hardware CPUs must support second level address translation (SLAT).

For optimal SR-IOV support, install the most up-to-date ixgbevf or i40e/i40evf network drivers. Fortinet recommends the i40e/i40evf drivers because they provide four TxRx queues for each VF, while ixgbevf provides only two.
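You can check which driver each physical NIC currently uses before enabling SR-IOV. The following commands are run from the ESXi host shell; the adapter name vmnic0 is an illustrative assumption, so substitute your own adapters:

```shell
# On the ESXi host shell: list all physical NICs and the driver each one uses.
esxcli network nic list

# Show detailed driver and firmware information for a single NIC
# (vmnic0 is an assumed name; substitute your own adapter).
esxcli network nic get -n vmnic0
```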

Create SR-IOV virtual interfaces

Complete the following procedure to enable SR-IOV. This procedure requires restarting the VMware host and powering down the FortiGate-VM, so perform it only during a maintenance window or when the network is not busy.

For example, if you are using the VMware host client:

  1. Go to Manage > Hardware > PCI Devices to view all of the PCI devices on the host.
  2. Select the SR-IOV capable filter to view the PCI devices (network adapters) that are compatible with SR-IOV.
  3. Select a network adapter and select Configure SR-IOV.
  4. Enable SR-IOV and specify the Number of virtual functions.
  5. Save your changes and restart the VMware host.

For example, if you are using the vSphere web client:

  1. Go to the host with the SR-IOV physical network adapter that you want to add virtual interfaces to.
  2. In the Networking part of the Manage tab, select Physical Adapters.
  3. Select the physical adapter for which to enable SR-IOV settings.
  4. Enable SR-IOV and specify the Number of virtual functions.
  5. Save your changes and restart the VMware host.

You can also use the following command from the ESXi host CLI to add virtual interfaces to one or more compatible network adapters:

$ esxcli system module parameters set -m <driver-name> -p "max_vfs=<virtual-interfaces>"

Where <driver-name> is the name of the network adapter driver (for example, ixgbevf or i40evf) and <virtual-interfaces> is a comma-separated list specifying the number of virtual interfaces to allow for each physical interface.

For example, if your VMware host includes three i40evf network adapters and you want to enable 6 virtual interfaces on each network adapter, enter the following:

$ esxcli system module parameters set -m i40evf -p "max_vfs=6,6,6"
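As a minimal sketch of how the comma-separated max_vfs list maps to adapters, the following POSIX shell fragment builds the command line from per-adapter VF counts. The driver name and counts are just the example values above; the script prints the command rather than running it, since esxcli exists only on the ESXi host:

```shell
#!/bin/sh
# Build the esxcli command that enables SR-IOV virtual functions.
# One count per physical adapter that uses this driver, in order.
driver="i40evf"
vf_counts="6 6 6"

# Join the per-adapter counts with commas to form the max_vfs value.
max_vfs=$(printf '%s' "$vf_counts" | tr ' ' ',')

# Print the command instead of executing it, so the mapping from
# per-adapter counts to the max_vfs list is visible.
printf 'esxcli system module parameters set -m %s -p "max_vfs=%s"\n' \
    "$driver" "$max_vfs"
```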

Assign SR-IOV virtual interfaces to a FortiGate-VM

  1. Power off the FortiGate-VM and open its virtual hardware settings.
  2. Create or edit a network adapter and set its type to SR-IOV passthrough.
  3. Select the physical network adapter for which you have enabled SR-IOV.
  4. Optionally associate the FortiGate-VM network adapter with the port group on a standard or distributed switch.
  5. To guarantee that the pass-through device can access all VM memory, in the Memory section select Reserve all guest memory.
  6. Save your changes and power on the FortiGate-VM.
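Step 5 corresponds to pinning all guest memory so the pass-through device can DMA into any guest page. If you manage VMs through .vmx files rather than the GUI, the equivalent setting is believed to be the following; treat this as an assumption and verify it against the documentation for your vSphere version:

```
# .vmx fragment (assumed equivalent of "Reserve all guest memory"):
# pin all guest memory so the SR-IOV pass-through device can access it.
sched.mem.pin = "TRUE"
```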

Set up VMware CPU affinity

Configuring CPU affinity on your FortiGate-VM further builds on the benefits of SR-IOV by enabling the FortiGate-VM to align interrupts from interfaces to specific CPUs.

By specifying a CPU affinity setting for each VM, you can restrict the assignment of that VM to a subset of the available processors in a multiprocessor system, ensuring that each VM runs only on the processors in its affinity set.

For example, if you are using the vSphere web client, use the following steps:

  1. Power off the FortiGate-VM.
  2. Edit the FortiGate-VM hardware settings and select Virtual Hardware.
  3. Select CPU options.
  4. In Scheduling Affinity, specify the CPUs to have affinity with the FortiGate-VM. For best results, the affinity list should include one entry for each of the FortiGate-VM's virtual CPUs.
  5. Save your changes.
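The Scheduling Affinity setting in the GUI maps to a .vmx scheduling option. A sketch, assuming a 4-vCPU FortiGate-VM pinned to host logical CPUs 2 through 5 (the CPU numbers are illustrative, chosen to match the one-entry-per-vCPU guidance above):

```
# .vmx fragment: run this VM's vCPUs only on host logical CPUs 2-5.
sched.cpu.affinity = "2,3,4,5"
```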
