Enhancing FortiGate-VM performance with DPDK and vNP offloading
DPDK and vNP enhance FortiGate-VM performance by offloading part of packet processing to user space through a kernel-bypass path within the operating system. You must enable and configure DPDK with FortiOS CLI commands.
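A minimal sketch of enabling DPDK from the CLI is shown below. It assumes a DPDK-capable FortiOS image; the interface names are placeholders, and the exact options available under `config dpdk global` can vary by build:

```
config dpdk global
    set status enable
    set interface "port1" "port2"
end
```

Enabling or disabling DPDK typically requires rebooting the FortiGate-VM before the change takes effect.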
FortiOS 7.6 supports DPDK for VMware ESXi environments.
FortiOS 7.6 supports IPv6 traffic with DPDK.
The current DPDK+vNP offloading-capable version of FortiOS only supports FortiGate-VM instances with multiple vCPUs. Minimum required RAM sizes differ from those of regular FortiGate-VM models without offloading, and allocating as much RAM as the licensed limit is recommended for maximum performance. FortiOS 7.6 does not restrict RAM size by license, so you can allocate as much memory as desired on 7.6-based DPDK-enabled FortiGate-VMs, as shown in the following table:
| Model name | RAM size (licensed limit) |
| --- | --- |
| FG-VM02(v) | No restriction |
| FG-VM04(v) | No restriction |
| FG-VM08(v) | No restriction |
| FG-VM16(v) | No restriction |
| FG-VM32(v) | No restriction |
You can enable DPDK on FortiGate-VM instances with up to 64 vCPUs.
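If your build exposes per-engine CPU assignment, the spread of DPDK work across vCPUs can be tuned. The `config dpdk cpus` table and the option names below are assumptions based on DPDK-capable FortiOS builds and may differ or be absent in your version; the CPU ranges are placeholders:

```
config dpdk cpus
    set rx-cpus "0-3"
    set vnp-cpus "4-7"
    set ips-cpus "8-11"
    set tx-cpus "12-15"
end
```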
With DPDK, FortiOS supports encrypted traffic for IPsec VPN but not for SSL VPN. When using SSL VPN features, disabling the DPDK option using the CLI (see the sketch after the notes below) or adopting regular FortiGate-VM builds is recommended. For encrypted traffic support for IPsec VPN with DPDK, FortiOS also adds support for the following:
Note: Under high load, enabling DPDK+vNP offloading may result in fewer concurrent sessions than when DPDK+vNP offloading is not enabled with the same FortiGate-VM license.
Note: Enabling DPDK in polling mode results in high CPU usage.
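For the SSL VPN case noted above, a sketch of disabling DPDK from the CLI follows (same assumptions as the enable example; a reboot is typically required for the change to take effect):

```
config dpdk global
    set status disable
end
```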