Deploying FortiGate-VM using compute shapes that use Mellanox network cards
You can deploy paravirtualized FortiGate-VMs with SR-IOV and DPDK/vNP acceleration on OCI compute shapes that use Mellanox network cards.
To deploy the VM using a Mellanox card:
- Create an instance using a compute shape that supports the Mellanox network card, such as VM.Standard.E3.Flex.
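For reference, the following is a minimal OCI CLI sketch of such a launch; every OCID, the availability domain, and the display name are placeholders, and the same instance can equally be created from the OCI console:
# oci compute instance launch \
    --availability-domain <availability domain> \
    --compartment-id <compartment OCID> \
    --subnet-id <subnet OCID> \
    --image-id <FortiGate-VM image OCID> \
    --shape VM.Standard.E3.Flex \
    --shape-config '{"ocpus": 2, "memoryInGBs": 16}' \
    --display-name <instance name>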
- Using the OCI CLI, verify that the instance has SR-IOV enabled by checking its launch options:
# oci compute instance get --instance-id <instance ID>
{
  "data": {
    ...
    "launch-mode": "PARAVIRTUALIZED",
    "launch-options": {
      "boot-volume-type": "PARAVIRTUALIZED",
      "firmware": "BIOS",
      "is-consistent-volume-naming-enabled": false,
      "is-pv-encryption-in-transit-enabled": false,
      "network-type": "VFIO",
      "remote-data-volume-type": "PARAVIRTUALIZED"
    },
    ...
The network type is VFIO (as opposed to PARAVIRTUALIZED), which means that SR-IOV is enabled.
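If you only want this one field, the OCI CLI's JMESPath --query option can extract it directly (a sketch; the instance OCID is a placeholder, and the expected result on a correctly launched instance is VFIO):
# oci compute instance get --instance-id <instance ID> \
    --query 'data."launch-options"."network-type"' --raw-output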
- In FortiOS, verify that the boot succeeded and check the instance details:
# get system status
Version: FortiGate-VM64-OPC v6.4.3,build1776,201013 (interim)
...
Serial-Number: FGVMULTM20000xxx
IPS Malicious URL Database: 2.00798(2020-10-15 09:20)
License Status: Valid
License Expiration Date: 2021-07-01
VM Resources: 2 CPU, 16085 MB RAM
VM Instance ID: ocid1.instance.oc1.iad.xxxxxxxx
...
- Check the NIC driver to ensure that it is mlx5_core:
# diagnose hardware deviceinfo nic port1
Name: port1
Driver: mlx5_core
...
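The same check can be repeated for every port that will be enabled for DPDK; for example, port2 in this deployment should also report the mlx5_core driver (output abbreviated):
# diagnose hardware deviceinfo nic port2
Name: port2
Driver: mlx5_core
...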
- Enable DPDK:
config dpdk global
    set status enable
    set interface port2 port1
end
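The config dpdk global table also exposes the tuning values that appear later in the early-init configuration dump (multiqueue, sleep-on-idle, elasticbuffer, hugepage-percentage). The sketch below shows them at the values from that dump; option names and availability can vary by FortiOS build, so confirm them with set ? before relying on this:
config dpdk global
    set status enable
    set interface port2 port1
    set multiqueue disable
    set sleep-on-idle disable
    set elasticbuffer disable
    set hugepage-percentage 30
end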
- Reboot the FortiGate.
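The reboot can be issued from the FortiOS CLI (or from the GUI):
# execute reboot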
- Verify that the drivers are now net_mlx5, which indicates that the ports are DPDK-enabled:
# diagnose hardware deviceinfo nic port1
Name: port1
Driver: net_mlx5
...
- Verify that DPDK was initialized successfully:
# diagnose dpdk log show early-init
-----------------------------------------------------------------
DPDK early initialzation starts at 2020-10-16 00:37:36(UTC)
-----------------------------------------------------------------
Content of early configuration file:
status=1
multiqueue=0
sleep-on-idle=0
elasticbuffer=0
per-session-accounting=1
hugepage-percentage=30
nr_hugepages=2412
interfaces=port1 port2
cpus=0 1
rxcpus=0 1
vnpcpus=0 1
ipscpus=0 1
txcpus=0 1
Parse config file success!
Check CPU definitions 'cpus'
Check CPU definitions 'rxcpus'
Check CPU definitions 'ipscpus'
Check CPU definitions 'vnpcpus'
Check CPU definitions 'txcpus'
Check CPUs success!
Huge page allocation done
Ports enabled for DPDK: port1 port2
Port name to device name mapping:
    port1: eth0
    port2: eth1
    port3: eth2
    port4: eth3
...
Start enabling DPDK kernel driver for port 'port1'...
Getting PCI device info for eth0...
reading pci dev /sys/class/net/eth0
link path: ../../devices/pci0000:00/0000:00:03.0/net/eth0
Device info of eth0:
    dev_name: eth0
    macaddr: 00:00:17:02:3c:d9
    pci_vendor: 0x15b3
    pci_device: 0x101a
    pci_id: 0000:00:03.0
    pci_domain: 0
    pci_bus: 0
    pci_devid: 3
    pci_function: 0
    guid: n/a
Device eth0 is mlx5_core
name changed to slv0
Creating DPDK kernel driver for device eth0...
Add VNP dev: eth0 PCI: 0000:00:03.0, Succeeded
DPDK kernel driver for eth0 successfully created
DPDK kernel driver enabled for port 'port1' (device name 'eth0')
Start enabling DPDK kernel driver for port 'port2'...
Getting PCI device info for eth1...
reading pci dev /sys/class/net/eth1
link path: ../../devices/pci0000:00/0000:00:05.0/net/eth1
Device info of eth1:
    dev_name: eth1
    macaddr: 02:00:17:02:bd:df
    pci_vendor: 0x15b3
    pci_device: 0x101a
    pci_id: 0000:00:05.0
    pci_domain: 0
    pci_bus: 0
    pci_devid: 5
    pci_function: 0
    guid: n/a
Device eth1 is mlx5_core
name changed to slv1
Creating DPDK kernel driver for device eth1...
Add VNP dev: eth1 PCI: 0000:00:05.0, Succeeded
DPDK kernel driver for eth1 successfully created
DPDK kernel driver enabled for port 'port2' (device name 'eth1')
Bind ports success!
Make UIO nodes success!
DPDK sanity test passed
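As a quick sanity check on the log above, the hugepage figures are self-consistent if the standard 2 MB x86 hugepage size is assumed: nr_hugepages=2412 corresponds to roughly 2412 × 2 MB ≈ 4824 MB, which is about 30% (hugepage-percentage=30) of the 16085 MB of VM RAM reported by get system status.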
- Send traffic through the FortiGate and verify the statistics to ensure packets actually pass through DPDK:
# diagnose dpdk statistics show engine
# diagnose dpdk statistics show port
# diagnose dpdk statistics show vnp
# diagnose dpdk statistics show memory
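In the engine, port, and vNP statistics, the relevant signal is that the receive and transmit packet counters increase while test traffic is flowing; if the counters stay at zero, the traffic is most likely still being handled by the kernel path rather than by DPDK.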