SR-IOV
FortiGate-VMs installed on OpenStack platforms support single root I/O virtualization (SR-IOV) to provide FortiGate-VMs with direct access to hardware devices. Enabling SR-IOV means that one PCIe device, such as a physical network card, can present itself to a FortiGate-VM as multiple separate virtual devices. SR-IOV reduces latency and improves CPU efficiency by allowing network traffic to pass directly between a FortiGate-VM and the network card without passing through the OpenStack kernel or using virtual switching.
FortiGate-VMs benefit from SR-IOV because it optimizes network performance and reduces latency. FortiGate-VMs do not use any OpenStack features that are incompatible with SR-IOV, so you can enable SR-IOV without negatively affecting your FortiGate-VM.
SR-IOV hardware compatibility
SR-IOV requires that the hardware on which your OpenStack host is running has BIOS, physical NIC, and network driver support for SR-IOV. To enable SR-IOV, your OpenStack platform must run on hardware that is compatible with SR-IOV and with FortiGate-VMs. FortiGate-VMs require network cards that are compatible with the ixgbevf or i40evf drivers. For optimal SR-IOV support, install the most up-to-date ixgbevf or i40evf network drivers.
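Before going further, you can check whether a NIC and its driver support SR-IOV from the Linux command line. This is a hedged sketch; eth3 is a placeholder interface name, and the exact output varies by card:
# lspci -vvv | grep -i "Single Root I/O Virtualization"
# ethtool -i eth3
The first command lists PCIe devices that advertise the SR-IOV capability, and the second shows the driver bound to the interface (the ixgbe PF driver pairs with the ixgbevf VF driver, and i40e with i40evf).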
To create SR-IOV VFs:
This section describes how to create virtual functions (VFs) for SR-IOV-compatible Intel network interfaces. An SR-IOV VF is a virtual PCIe device that you must add to OpenStack to allow your FortiGate-VM to use SR-IOV to communicate with a physical Ethernet interface or physical function (PF).
- Enable SR-IOV in the host system's BIOS by enabling VT-d.
- Enable IOMMU for Linux by adding intel_iommu=on to the kernel parameters. Do this by adding the following line to the /etc/default/grub file:
GRUB_CMDLINE_LINUX_DEFAULT="nomdmonddf nomdmonisw intel_iommu=on"
- Save your changes. From the Linux command line, enter the following commands to update grub and reboot the host device:
# update-grub
# reboot
- On each compute node, create VFs using the PCI SYS interface:
# echo '7' > /sys/class/net/eth3/device/sriov_numvfs
- If the previous command produces a Device or resource busy error message, you must set sriov_numvfs to 0 before setting it to the new value (see the sketch after this procedure).
- Optionally, determine the maximum number of VFs a PF can support:
# cat /sys/class/net/eth3/device/sriov_totalvfs
- Enter the following command to ensure an SR-IOV interface is up and verify its status:
# ip link set eth3 up
# ip link show eth3
- Enter the following command to verify that the VFs have been created:
# lspci | grep Ethernet
- Enter the following command to ensure the VFs are recreated when the system reboots:
# echo "echo '7' > /sys/class/net/eth3/device/sriov_numvfs" >> /etc/rc.local
To allowlist PCI devices:
You must allowlist SR-IOV devices so that their traffic can pass through OpenStack to the FortiGate-VM. The following example shows how to allowlist SR-IOV devices by editing the pci_passthrough_whitelist parameter in the configuration of the nova-compute service.
- To modify the nova-compute service, open the nova.conf file and add the following line. This setting adds traffic from eth3 to the physnet2 physical network and allows physnet2 traffic to pass through OpenStack to your FortiGate-VM:
pci_passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2"}
- After making this change, restart the nova-compute service.
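If you would rather match devices by PCI ID than by device name, the whitelist also accepts vendor/product pairs. A sketch using the 82599 virtual function IDs found in the next section (substitute the IDs from your own lspci output):
pci_passthrough_whitelist = { "vendor_id": "8086", "product_id": "10ed" }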
To configure neutron-server:
Use the following steps to configure OpenStack neutron-server to support SR-IOV:
- To add sriovnicswitch as a mechanism driver, edit the ml2_conf.ini file and add the following line:
mechanism_drivers = openvswitch,sriovnicswitch
- Find the vendor_id and product_id of the VFs that you created. For example:
# lspci -nn | grep -i ethernet
87:00.0 Ethernet controller [0200]: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection [8086:10f8] (rev 01)
87:10.1 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
87:10.3 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
- Add the following line to the ml2_conf_sriov.ini file on each controller:
supported_pci_vendor_devs = 8086:10ed
In this example, the vendor_id is 8086 and the product_id is 10ed. A consolidated example showing the section heading follows this procedure.
- Add ml2_conf_sriov.ini to the neutron-server daemon. Edit the initialization script to configure the neutron-server service to load the SR-IOV configuration file. Include the following lines:
--config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini
--config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini
- Restart the neutron-server service.
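Note that, depending on your OpenStack release, supported_pci_vendor_devs belongs under the [ml2_sriov] section heading, so the finished ml2_conf_sriov.ini contains a block like the following (using the IDs from the example above):
[ml2_sriov]
supported_pci_vendor_devs = 8086:10ed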
To configure the nova-scheduler controller:
To complete this step, on controllers running the nova-scheduler service, add PciPassthroughFilter to the scheduler_default_filters parameter and add the following new lines under the [DEFAULT] section in nova.conf:
[DEFAULT]
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
Restart the nova-scheduler service.
To enable the Neutron sriov-agent process:
To enable the sriov-agent process, on each compute node, edit the sriov_agent.ini file and add the following:
Under [securitygroup], add:
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
Under [sriov_nic], add:
physical_device_mappings = physnet2:eth3
exclude_devices =
The example physical_device_mappings setting includes one mapping between the physical network (physnet2) and one SR-IOV-enabled interface (eth3). If you have multiple interfaces connected to the same physical network, you can add them all using the following syntax, which shows how to map two interfaces to physnet2:
physical_device_mappings = physnet2:eth3,physnet2:eth4
Also in the example, exclude_devices is empty, so all VFs associated with eth3 may be configured by the agent. You can also use exclude_devices to exclude specific VFs, identified by PCI address; for example, to exclude two VFs each on eth1 and eth2:
exclude_devices = eth1:0000:07:00.2;0000:07:00.3,eth2:0000:05:00.1;0000:05:00.2
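Putting the fragments together, the sriov_agent.ini additions for this example look like the following:
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver

[sriov_nic]
physical_device_mappings = physnet2:eth3
exclude_devices =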
Enter the following command to verify that the neutron sriov-agent runs successfully:
# neutron-sriov-nic-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/sriov_agent.ini
Finally, enable the neutron sriov-agent service.
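On systemd-based distributions, the agent unit is typically named neutron-sriov-nic-agent; the exact name can vary by distribution, so treat this as a sketch:
# systemctl enable neutron-sriov-nic-agent.service
# systemctl start neutron-sriov-nic-agent.service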
To assign SR-IOV interfaces to a FortiGate-VM:
After SR-IOV has been added to your OpenStack host, you can launch FortiGate-VM instances with neutron SR-IOV ports. Use the following steps:
Use the following command to get the ID of the neutron network where you want the SR-IOV port to be created:
$ net_id=`neutron net-show net04 | grep "\ id\ " | awk '{ print $4 }'`
Use the following command to create the SR-IOV port. This command sets vnic_type=direct. Other options include normal, direct-physical, and macvtap:
$ port_id=`neutron port-create $net_id --name sriov_port --binding:vnic_type direct | grep "\ id\ " | awk '{ print $4 }'`
Create the VM. This example includes the SR-IOV port created in the previous step:
$ nova boot --flavor m1.large --image ubuntu_14.04 --nic port-id=$port_id test-sriov
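Once the instance is active, you can confirm that the port bound to a VF by inspecting it; on a successful SR-IOV binding, the port's binding:vif_type field typically reports hw_veb. This sketch reuses the $port_id variable set above:
$ neutron port-show $port_id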