FortiGate-VM on KVM


SR-IOV

FortiGate VMs installed on KVM platforms support single root I/O virtualization (SR-IOV), which gives FortiGate VMs direct access to physical network cards. With SR-IOV enabled, one PCIe network card appears to a FortiGate VM as multiple separate physical devices. SR-IOV reduces latency and improves CPU efficiency by allowing network traffic to pass directly between a FortiGate VM and a network card, bypassing the KVM host software and virtual switching.

FortiGate VMs benefit from SR-IOV because it improves network performance and reduces latency and CPU usage. FortiGate VMs do not use any KVM features that are incompatible with SR-IOV, so you can enable SR-IOV without negatively affecting your FortiGate VM.

Setting up SR-IOV on KVM involves creating a physical function (PF) for each physical network card in the hardware platform. Then you create virtual functions (VFs) that allow FortiGate VMs to communicate through the PF with the physical network card. VFs are actual PCIe hardware resources, and only a limited number of VFs are available for each PF.

SR-IOV implements an I/O memory management unit (IOMMU) to differentiate between different traffic streams and apply memory and interrupt translations between the PF and VFs.

SR-IOV hardware compatibility

SR-IOV requires that the hardware and operating system on which your KVM host is running have BIOS, physical NIC, and network driver support for SR-IOV.

To enable SR-IOV, your KVM platform must be running on hardware that is compatible with SR-IOV and with FortiGate-VMs. FortiGate-VMs require network cards that are compatible with the ixgbevf or i40evf drivers. In addition, the host hardware CPUs must support second level address translation (SLAT).
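
As a quick check, you can look for the SLAT-related CPU flags on the KVM host: ept on Intel CPUs and npt on AMD CPUs. This is a minimal sketch; how these flags are reported can vary with the kernel version:

# grep -Eo 'ept|npt' /proc/cpuinfo | sort -u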

For optimal SR-IOV support, install the most up-to-date ixgbevf or i40e/i40evf network drivers. Fortinet recommends the i40e/i40evf drivers because they provide four TxRx queues for each VF, while ixgbevf provides only two TxRx queues.
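
To see which driver and driver version a physical NIC on the host currently uses, you can query the interface with ethtool. The interface name eth0 is only a placeholder; substitute the name of your SR-IOV-capable interface:

# ethtool -i eth0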

Enable SR-IOV support for Intel systems

Use the following steps to enable SR-IOV support for KVM host systems that use Intel CPUs. These steps involve enabling and verifying Intel VT-d specifications in the BIOS and Linux kernel. You can skip these steps if VT-d is already enabled.

On an Intel host PC, Intel VT-d BIOS settings provide hardware support for directly assigning a physical device to a virtual machine.

  1. View the BIOS settings of the host machine and enable VT-d settings if they are not already enabled.
    You may have to review the manufacturer's documentation for details.
  2. Activate Intel VT-d in the Linux kernel by adding the intel_iommu=on parameter to the kernel line in the /boot/grub/grub.conf file. For example:

default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-330.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-330.x86_64 ro root=/dev/VolGroup00/LogVol00 rhgb quiet intel_iommu=on
initrd /initrd-2.6.32-330.x86_64.img

  3. Restart the system. After the restart, you can verify that VT-d is active, as shown below.
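
To confirm after the reboot that the kernel activated the IOMMU, you can search the boot messages for DMAR or IOMMU entries; the exact message text varies by kernel version:

# dmesg | grep -i -e DMAR -e IOMMU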

Enable SR-IOV support for AMD systems

Use the following steps to enable SR-IOV support for KVM host systems that use AMD CPUs. These steps involve enabling the AMD IOMMU specifications in the BIOS and Linux kernel. You can skip these steps if AMD IOMMU is already enabled.

On an AMD host PC, IOMMU BIOS settings provide hardware support for directly assigning a physical device to a virtual machine.

  1. View the BIOS settings of the host machine and enable IOMMU settings if they are not already enabled.
    You may have to review the manufacturer's documentation for details.
  2. Append amd_iommu=on to the kernel command line in /boot/grub/grub.conf so that AMD IOMMU specifications are enabled when the system starts up.
  3. Restart the system. After the restart, you can verify that the IOMMU is active, as shown below.
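
To confirm after the reboot that the AMD IOMMU was initialized, you can search the boot messages; most kernels log lines containing AMD-Vi:

# dmesg | grep -i AMD-Vi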

Verify that Linux and KVM can find SR-IOV-enabled PCI devices

You can use the lspci command to view the list of PCI devices and verify that your SR-IOV-capable network cards are on the list. The following example output shows entries for the Intel 82576 network card:

# lspci
03:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
03:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)

Optionally modify the SR-IOV kernel modules

If the device is supported, the kernel loads the driver kernel module automatically. You can set optional module parameters using the modprobe command. For example, the Intel 82576 network interface card uses the igb driver kernel module.
# modprobe igb [<option>=<VAL1>,<VAL2>,]
# lsmod |grep igb
igb 87592 0
dca 6708 1 igb
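
To see which optional parameters a driver module accepts before passing them to modprobe, you can list them with modinfo. The igb module is used here only as an example:

# modinfo -p igb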

Attaching an SR-IOV network device to a FortiGate VM

You can enable SR-IOV for a FortiGate VM by creating a Virtual Function (VF) and then attaching the VF to your FortiGate VM.

Activate and verify an SR-IOV VF

The max_vfs parameter of the igb module sets the maximum number of virtual functions (VFs); setting it causes the driver to spawn that number of VFs.

Before activating the maximum number of VFs, enter the following command to remove the igb module:
# modprobe -r igb

Restart the igb module with max_vfs set to the maximum supported by your device. For example, the valid range for the Intel 82576 network interface card is 0 to 7. To activate the maximum number of VFs supported by this device, enter:
# modprobe igb max_vfs=7

Make the VFs persistent by adding options igb max_vfs=7 to any file in /etc/modprobe.d. For example:
# echo "options igb max_vfs=7" >>/etc/modprobe.d/igb.conf

Verify the new VFs. For example, you can use the following lspci command to list the newly added VFs attached to the Intel 82576 network device. Alternatively, you can grep the lspci output for Virtual Function to find devices that support VFs.

# lspci | grep 82576
0b:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
0b:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
0b:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)

Use the -n parameter of the lspci command to find the identifier for the PCI device. The PFs correspond to 0b:00.0 and 0b:00.1. All VFs have Virtual Function in the description.
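
For example, adding the -nn option shows both the description and the numeric vendor and device identifiers. For this card the identifiers would be 8086:10c9 for the PFs and 8086:10ca for the VFs, matching the product and vendor IDs shown in the virsh output below; the exact output format may vary with your lspci version:

# lspci -nn | grep 82576
0b:00.0 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10c9] (rev 01)
0b:10.0 Ethernet controller [0200]: Intel Corporation 82576 Virtual Function [8086:10ca] (rev 01)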

Verify that the devices exist with virsh

The libvirt service must recognize a PCI device before you can add it to a virtual machine. libvirt uses a similar notation to the lspci output.

Use the virsh nodedev-list command and the grep command to filter the Intel 82576 network device from the list of available host devices. In the example, 0b is the filter for the Intel 82576 network devices. This may vary for your system and may result in additional devices.

# virsh nodedev-list | grep 0b
pci_0000_0b_00_0
pci_0000_0b_00_1
pci_0000_0b_10_0
pci_0000_0b_10_1
pci_0000_0b_10_2
pci_0000_0b_10_3
pci_0000_0b_10_4
pci_0000_0b_10_5
pci_0000_0b_10_6
pci_0000_0b_10_7
pci_0000_0b_11_0
pci_0000_0b_11_1
pci_0000_0b_11_2
pci_0000_0b_11_3
pci_0000_0b_11_4
pci_0000_0b_11_5

The PCI device identifiers for the virtual functions and physical functions should appear in the list.

Get device details with virsh

The device pci_0000_0b_00_0 is one of the PFs, and pci_0000_0b_10_0 is the first VF corresponding to that PF. Use the virsh nodedev-dumpxml command to get device details for both devices.

Example device details for the pci_0000_0b_00_0 PF device:

# virsh nodedev-dumpxml pci_0000_0b_00_0
<device>
   <name>pci_0000_0b_00_0</name>
   <parent>pci_0000_00_01_0</parent>
   <driver>
      <name>igb</name>
   </driver>
   <capability type='pci'>
      <domain>0</domain>
      <bus>11</bus>
      <slot>0</slot>
      <function>0</function>
      <product id='0x10c9'>82576 Gigabit Network Connection</product>
      <vendor id='0x8086'>Intel Corporation</vendor>
   </capability>
</device>

Example device details for the pci_0000_0b_10_0 VF device:

# virsh nodedev-dumpxml pci_0000_0b_10_0
<device>
   <name>pci_0000_0b_10_0</name>
   <parent>pci_0000_00_01_0</parent>
   <driver>
      <name>igbvf</name>
   </driver>
   <capability type='pci'>
      <domain>0</domain>
      <bus>11</bus>
      <slot>16</slot>
      <function>0</function>
      <product id='0x10ca'>82576 Virtual Function</product>
      <vendor id='0x8086'>Intel Corporation</vendor>
   </capability>
</device>

You use this information to specify the bus, slot, and function parameters when you add the VF to a FortiGate VM. A convenient way to do this is to create a temporary XML file and copy the following text into that file.

<interface type='hostdev' managed='yes'>
   <source>
      <address type='pci' domain='0' bus='11' slot='16' function='0'/>
   </source>
</interface>

You can also include additional information about the VF, such as a MAC address or VLAN tag. If you specify a MAC address, the VF always uses that MAC address. If you do not specify one, the system generates a new MAC address each time the FortiGate VM restarts.
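
For example, the following variation of the interface definition adds a fixed MAC address and a VLAN tag to the same VF. The MAC address and VLAN ID shown are placeholders, not values from this procedure:

<interface type='hostdev' managed='yes'>
   <mac address='52:54:00:00:00:01'/>
   <vlan>
      <tag id='100'/>
   </vlan>
   <source>
      <address type='pci' domain='0' bus='11' slot='16' function='0'/>
   </source>
</interface>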

Add the VF to a FortiGate VM

Enter the following command to add the VF to a FortiGate VM. This configuration attaches the new VF device immediately and saves it for subsequent FortiGate VM restarts.

# virsh attach-device MyFGTVM <temp-xml-file> --config

Where MyFGTVM is the name of the FortiGate VM for which to enable SR-IOV.

<temp-xml-file> is the temporary XML file containing the VF configuration.

After this configuration, when you start up the FortiGate VM it detects the SR-IOV VF as a new network interface.
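
To confirm from the KVM host that the VF is part of the FortiGate VM configuration, you can list the VM's interfaces or inspect its XML definition; the SR-IOV VF appears as an interface of type hostdev:

# virsh domiflist MyFGTVM
# virsh dumpxml MyFGTVM | grep -A 5 "type='hostdev'"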
