
Deploying FortiWLC Virtual Controllers with Linux KVM

This section describes the virtual controller deployment procedure on Linux KVM.

Prerequisites

To deploy and manage the Virtual Controller on Linux KVM, install the following third-party software.

  • Install Ubuntu v16.04 LTS server.
  • Install KVM on the Ubuntu LTS server.
  • Create an Open vSwitch bridge with KVM.
  • Install Virtual Machine Manager (virt-manager) to create and manage guest virtual machines.

Note: To complete these prerequisites, refer to the respective third-party documentation.

Downloading the Virtual Controller Package File

You can download the virtual controller packages from the Fortinet Customer Support website. To access the support website you need a Fortinet Customer Support account.

The file name is forti-x.x-xbuild-0-x86_64.img.KVM.zip, where x.x-x is the release version number, for example, 8.6.3.

Installing Linux KVM

Install Ubuntu 16.04.2 64-bit Desktop version.

  1. Run the apt-get install openssh-server command to install the OpenSSH server.
    You should now be able to SSH to the machine.
  2. Run the egrep -c '(vmx|svm)' /proc/cpuinfo command to check whether the system supports virtualization.
    If the output is 0, the system does not support virtualization. If the output is greater than 0, the system is ready for KVM installation.
  3. Run the apt-get install openvswitch-common openvswitch-switch and /etc/init.d/openvswitch-switch start commands to install and start Open vSwitch, which is used for tagging and untagging the created VLANs.
  4. Run the following commands to create a virtual bridge (a consolidated example is shown after this procedure).
    • ovs-vsctl add-br <bridge-name:(user-defined)>
    • ovs-vsctl add-port <bridge-name:(user-defined)> <eth-intf: name of the physical Ethernet port>
    • ovs-vsctl set port vnet0 trunks=0,168,169
      In this command, 168 and 169 are tagged VLANs and 0 is a mandatory argument that specifies the native VLAN.
    • dhclient <bridge-name:(user-defined)>
  5. Run the ovs-vsctl show command to see the virtual switch created. This is a sample command output:
    root@automation-HP-406-G1-MT:~# ovs-vsctl show
    52690264-a2da-4a63-86e9-c8ceabf9be72
        Bridge "N164-T168-T169"              (bridge name)
            Port "N164-T168-T169"            (port name)
                Interface "N164-T168-T169"
                    type: internal
            Port "enp3s0"                    (physical Ethernet port name)
                Interface "enp3s0"
            Port "vnet0"
                trunks: [0, 168, 169]
                Interface "vnet0"
        ovs_version: "2.5.0"
  6. Run the sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils command to install KVM.
  7. Run the sudo adduser `id -un` libvirtd command to ensure that your Ubuntu username is added to the libvirtd group.
  8. Run the sudo apt-get install virt-manager command to install the graphical user interface for KVM.
  9. After virt-manager is installed, run virt-manager to start the Virtual Machine Manager application.
  10. You can create a virtual instance using the GUI. In one of the windows, select the bridge interface vnet0.
  11. Create a virtual network:
    • Create a directory to store the virtual network XML file, for example, mkdir vmswitch-xml.
    • In this example, the XML file stored in the directory is named N164-T168-T169.xml.
    • The contents of the XML file are as follows:

      <network>
        <name>N164</name>
        <forward mode='bridge'/>
        <bridge name='N164-T168-T169'/>            <!-- created bridge name -->
        <virtualport type='openvswitch'/>
        <portgroup name='N164-T168-T169'>          <!-- created port name -->
          <vlan trunk='yes'>
            <tag id='164' nativeMode='untagged'/>
            <tag id='168'/>                        <!-- tagged VLAN -->
            <tag id='169'/>                        <!-- tagged VLAN -->
          </vlan>
        </portgroup>
      </network>

  12. Run the following commands to activate the created virtual network.
    • virsh net-define N164-T168-T169.xml
    • virsh net-start N164
  13. Copy the image to the specified path and run the VM through virt-manager (GUI).
    • cd /var/lib/libvirt/images/
    • wget -c http://10.34.224.254/release/8.6-3build/11/x86_64/x86_64/FWCVIR/forti-8.6-3build-11-x86_64.img.KVM.zip (this is a sample URL)
    • unzip forti-8.6-3build-11-x86_64.img.KVM.zip
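
The following is a consolidated sketch of steps 4, 5, 12, and 13, assuming the bridge name N164-T168-T169, the physical Ethernet port enp3s0, and the VLAN IDs shown in the sample output above; substitute your own names, VLAN IDs, and download URL.

  # Step 4: create the OVS bridge, attach the physical port, and set the trunk VLANs
  ovs-vsctl add-br N164-T168-T169
  ovs-vsctl add-port N164-T168-T169 enp3s0
  # vnet0 is the guest's tap port; it exists only after the VM has been created
  ovs-vsctl set port vnet0 trunks=0,168,169
  dhclient N164-T168-T169

  # Step 5: verify the bridge, ports, and trunks
  ovs-vsctl show

  # Step 12: define and start the libvirt virtual network from the XML file in step 11
  virsh net-define N164-T168-T169.xml
  virsh net-start N164

  # Step 13: place the controller image where libvirt expects it
  cd /var/lib/libvirt/images/
  wget -c <download URL of forti-8.6-3build-11-x86_64.img.KVM.zip>
  unzip forti-8.6-3build-11-x86_64.img.KVM.zip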

    Configuring the Virtual Controller

    Perform these steps to configure a virtual controller.

    1. Open virt-manager and select Import Existing Disk Image.
    2. Browse to the location of the downloaded package file and specify the OS type as Linux and Version as Ubuntu 16.04.
    3. Click Forward.

    4. Specify the memory and CPU setting as per the deployed virtual controller model.
    5. Click Forward.
    6. Specify the hostname, select the network adapter from the Network Selection drop-down, and specify the Portgroup.
    7. Click Finish.
    8. In the CPUs settings, configure the Model as Nehalem. Click Apply.
    9. In VirtIO Disk 1, under Advanced options, set the Disk bus to IDE. Click Apply.
    10. In the NIC settings, specify the Network source, Portgroup, and Device model as virtio. Click Apply.
    11. The Virtual Controller deployment is complete.
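
    After deployment, you can confirm and manage the guest from the host shell with virsh. The following is a minimal sketch, assuming the virtual machine was named fwc-vm in step 6 (use the name you actually specified):

    # List all defined guests and their current state
    virsh list --all

    # Start the controller VM if it is not already running
    virsh start fwc-vm

    # Attach to the guest console (requires a console device in the guest definition);
    # press Ctrl+] to detach
    virsh console fwc-vm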

    Recommended Linux KVM Host Settings

    Fortinet recommends the following host settings for enhanced Controller performance.

    • Disable offload settings such as GSO, GRO, TSO, and UFO for all ports. Run the ethtool -K <eth dev> gso off gro off lro off tso off ufo off command.
    • Set the ring descriptor size to the maximum limit (4096) for all ports, for example, ethtool -G <eth dev> rx 4096 tx 4096.
    • Set net.core.netdev_budget to 600 and net.core.netdev_max_backlog to 60000.
      The commands in the above steps can be added to /etc/rc.local so that the configuration is retained when the host reboots; a sketch is provided after the CPU-affinity script below. Based on the VM model, modify the guest XML file and add the following line to each interface:
      • FWC-VM-50: <driver name='vhost' txmode='iothread' ioeventfd='on' queues='2'/>
      • FWC-VM-200: <driver name='vhost' txmode='iothread' ioeventfd='on' queues='2'/>
      • FWC-VM-500: <driver name='vhost' txmode='iothread' ioeventfd='on' queues='4'/>
      • FWC-VM-1000: <driver name='vhost' txmode='iothread' ioeventfd='on' queues='8'/>
      • FWC-VM-3000: <driver queues='16'/>
        This is an example of FWC-VM-200 configuration.
    • On servers where the number of available physical cores (that is, half of the hyper-threaded CPUs) is greater than the number of vhost kernel threads, set the CPU affinity for the vhost kernel threads. For example, on the 1000D each port has 8 queues, so there are 32 vhost threads in total. Use this script to set the affinity of the vhost kernel threads for a 1000D VM on a Dell PowerEdge R730 (the configuration for other hosts will differ).

    #!/bin/bash

    # Collect the PIDs of all vhost kernel threads running on the host
    cpids=`ps -ef | grep '[v]host-' | awk '{ print $2 }' | xargs`

    echo $cpids

    # Pin each vhost thread to CPUs 36-71
    for cpid in $cpids
    do
        taskset -pc 36-71 $cpid
        echo $cpid
    done

    This script sets the CPU affinity of the vhost kernel threads to CPUs 36-71.
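
    As noted above, the offload, ring-descriptor, and net.core settings can be reapplied at every boot from /etc/rc.local. The following is a minimal sketch, assuming two hypothetical host ports named eth0 and eth1; substitute your actual interface names and ensure /etc/rc.local is executable:

    #!/bin/sh
    # /etc/rc.local - reapply the recommended host tuning at boot
    # (eth0 and eth1 are placeholder interface names)

    for dev in eth0 eth1; do
        # Disable offloads (GSO, GRO, LRO, TSO, UFO) on each port
        ethtool -K $dev gso off gro off lro off tso off ufo off
        # Raise the RX/TX ring descriptor sizes to the maximum (4096)
        ethtool -G $dev rx 4096 tx 4096
    done

    # net.core tuning recommended for the virtual controller host
    sysctl -w net.core.netdev_budget=600
    sysctl -w net.core.netdev_max_backlog=60000

    exit 0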

    Parameters                     FWC-VM-50   FWC-VM-200   FWC-VM-500   FWC-VM-1000   FWC-VM-3000
    CPU affinity                   Yes         Yes          Yes          Yes           No (applicable only if the host has more than 48 physical cores)
    Offload settings               Yes         Yes          Yes          Yes           Yes
    Ring descriptor size           4096        4096         4096         4096          4096
    Net.core sysctl parameters     Yes         Yes          Yes          Yes           Yes
    Guest Network configuration    <driver name='vhost' txmode='iothread' ioeventfd='on' queues='2'/> for FWC-VM-50, FWC-VM-200, FWC-VM-500, and FWC-VM-1000; <driver queues='16'/> for FWC-VM-3000
