
SLBC - Session-aware Load Balancing Cluster

The Session-aware Load Balancing Cluster (SLBC) protocol is used for clusters of FortiControllers that load balance both TCP and UDP sessions. As session-aware load balancers, FortiControllers with FortiASIC DP processors can direct any TCP or UDP session to any worker installed in the same chassis. Because sessions are tracked, the cluster can also load balance more complex networking features such as NAT, fragmented packets, and complex UDP protocols, including the Session Initiation Protocol (SIP), a communications protocol for signaling and controlling multimedia communication sessions.
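
The defining property of session-aware load balancing is that every packet of a session is steered to the same worker. The following minimal Python sketch is purely illustrative: the real steering is performed in FortiASIC DP hardware, and the worker slot names and hash policy below are assumptions, not the actual algorithm.

session_table = {}                      # 5-tuple -> worker
workers = ["slot3", "slot4", "slot5"]   # hypothetical worker slots

def dispatch(src_ip, src_port, dst_ip, dst_port, proto):
    key = (src_ip, src_port, dst_ip, dst_port, proto)
    if key not in session_table:
        # First packet of a new session: pick a worker and remember it.
        session_table[key] = workers[hash(key) % len(workers)]
    # Every later packet of the session goes to the same worker.
    return session_table[key]

print(dispatch("10.0.0.1", 51000, "203.0.113.5", 443, "tcp"))

Because the worker choice is remembered per session rather than recomputed per packet, stateful features such as NAT and SIP continue to work on whichever worker owns the session.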

Currently, only three FortiController models are available for SLBC: FortiController-5103B, FortiController-5903C, and FortiController-5913C. Supported workers include the FortiGate-5001B, 5001C, 5101C, and 5001D.

FortiGate-7000 series products also support SLBC.

Note

You cannot mix FGCP and SLBC clusters in the same chassis.

An SLBC with two FortiControllers can operate in active-passive mode or dual mode. In active-passive mode, if the active FortiController fails, traffic is transferred to the backup FortiController. In dual mode both FortiControllers load balance traffic and twice as many network interfaces are available.

SLBC clusters consisting of more than one FortiController use the following types of communication between FortiControllers to operate normally:

  • Heartbeat: Allows the FortiControllers in the cluster to find each other and share status information. If a FortiController stops sending heartbeat packets it is considered down by other cluster members. By default heartbeat traffic uses VLAN 999.
  • Base control: Communication between FortiControllers on subnet 10.101.11.0/255.255.255.0 using VLAN 301.
  • Base management: Communication between FortiControllers on subnet 10.101.10.0/255.255.255.0 using VLAN 101.
  • Session synchronization: If one FortiController fails, session synchronization allows another to take its place and maintain active communication sessions. FortiController-5103B session sync traffic uses VLAN 2000. FortiController-5903C and FortiController-5913C session sync traffic between the FortiControllers in slot 1 uses VLAN 1900 and between the FortiControllers in slot 2 uses VLAN 1901. You cannot change these VLANs.

Note that SLBC does not support session synchronization between workers in the same chassis. The FortiControllers in a cluster keep track of the status of the workers in their chassis and load balance sessions to the workers. If a worker fails the FortiController detects the failure and stops load balancing sessions to that worker. The sessions that the worker is processing when it fails are lost.

Changing the heartbeat VLAN

To change the heartbeat VLAN from the FortiController GUI, go to the System Information dashboard widget and, beside HA Status, select Configure. Change the VLAN to use for HA heartbeat traffic (1-4094) setting.

You can also change the heartbeat VLAN ID from the FortiController CLI. For example, to change the heartbeat VLAN ID to 333, enter the following:

config system ha
  set hbdev-vlan-id 333
end

Setting the mgmt interface as a heartbeat interface

To add the mgmt interface to the list of heartbeat interfaces on the FortiController-5103B, enter the following:

config system ha
  set hbdev b1 b2 mgmt
end

This example adds the mgmt interface to the heartbeat interface list along with the B1 and B2 interfaces. The B1 and B2 ports are recommended because they are 10G ports, while the mgmt interface is a 100Mbit interface.

Note

FortiController-5103B is currently the only model that allows its mgmt interface to be added to the heartbeat interfaces list.

Changing the heartbeat interface mode

By default, only the first heartbeat interface (usually B1) is used for heartbeat traffic. If this interface fails on any of the FortiControllers in a cluster, then the second heartbeat interface is used (B2).

To simultaneously use all heartbeat interfaces for heartbeat traffic, enter the following command:

config load-balance setting
  set base-mgmt-interface-mode active-active
end

Changing the base control subnet and VLAN

You can change the base control subnet and VLAN from the FortiController CLI. For example, to change the base control subnet to 10.122.11.0/255.255.255.0 and the VLAN ID to 320, enter the following:

config load-balance setting
  set base-ctrl-network 10.122.11.0 255.255.255.0
  config base-ctrl-interfaces
    edit b1
      set vlan-id 320
    next
    edit b2
      set vlan-id 320
  end
end

Changing the base management subnet and VLAN

You can change the base management subnet from the FortiController GUI under Load Balance > Config by changing the Internal Management Network.

You can also change the base management subnet and VLAN ID from the FortiController CLI. For example, to change the base management subnet to 10.121.10.0/255.255.255.0 and the VLAN to 131, enter the following:

config load-balance setting
  set base-mgmt-internal-network 10.121.10.0 255.255.255.0
  config base-mgmt-interfaces
    edit b1
      set vlan-id 131
    next
    edit b2
      set vlan-id 131
  end
end

If required, you can use different VLAN IDs for the B1 and B2 interfaces.

Changing this VLAN only changes the VLAN used for base management traffic between chassis. Within a chassis the default VLAN is used.

Enabling and configuring the session sync interface

To enable session synchronization in a two chassis configuration, enter the following command:

config load-balance setting
  set session-sync enable
end

You will then need to select the interface to use for session sync traffic. The following example sets the FortiController-5103B session sync interface to F4:

config system ha
  set session-sync-port f4
end

The FortiController-5903C and FortiController-5913C use B1 and B2 as the session sync interfaces, so no configuration changes are required.

FGCP to SLBC migration

You can convert an FGCP virtual cluster (with VDOMs) to an SLBC cluster. The conversion involves replicating the VDOM, interface, and VLAN configuration of the FGCP cluster on the SLBC cluster primary worker, then backing up the configuration of each FGCP cluster VDOM. Each of the VDOM configuration files is manually edited to adjust interface names. These modified VDOM configuration files are then restored to the corresponding SLBC cluster primary worker VDOMs.

For this migration to work, the FGCP cluster and the SLBC workers must be running the same firmware version, VDOMs must be enabled on the FGCP cluster, and the SLBC workers must be registered and licensed. However, the FGCP cluster units do not have to be the same model as the SLBC cluster workers.

Only VDOM configurations are migrated. You have to manually configure primary worker management and global settings.

Conversion steps

  1. Add VDOM(s) to the SLBC primary worker with names that match those of the FGCP cluster.
  2. Map FGCP cluster interface names to SLBC primary worker interface names. For example, you can map the FGCP cluster port1 and port2 interfaces to the SLBC primary worker fctl/f1 and fctl/f2 interfaces. You can also map FGCP cluster interfaces to SLBC trunks, and include aggregate interfaces.
  3. Add interfaces to the SLBC primary worker VDOMs according to your mapping. This includes moving SLBC physical interfaces into the appropriate VDOMs, creating aggregate interfaces, and creating SLBC trunks if required.
  4. Add VLANs to the SLBC primary worker that match VLANs in the FGCP cluster. They should have the same names as the FGCP VLANs, be added to the corresponding SLBC VDOMs and interfaces, and have the same VLAN IDs.
  5. Add inter-VDOM links to the SLBC primary worker that match the FGCP cluster.
  6. Back up the configurations of each FGCP cluster VDOM and of each SLBC primary worker VDOM.
  7. Use a text editor to replace the first four lines of each FGCP cluster VDOM configuration file with the first four lines of the corresponding SLBC primary worker VDOM configuration file. Here are example lines from an SLBC primary worker VDOM configuration file:

    #config-version=FG-5KB-5.02-FW-build670-150318:opmode=0:vdom=1:user=admin
    #conf_file_ver=2306222306838080295
    #buildno=0670
    #global_vdom=0:vd_name=VDOM1

  8. With the text editor, edit each FGCP cluster VDOM configuration file and replace all FGCP cluster interface names with the corresponding SLBC worker interface names, according to the mapping you created in step 2 (a scripted sketch of steps 7 and 8 follows this list).
  9. Set up a console connection to the SLBC primary worker to check for errors during the following steps.
  10. From the SLBC primary worker, restore each FGCP cluster VDOM configuration file to each corresponding SLBC primary worker VDOM.
  11. Check the following on the SLBC primary worker:
    • Make sure set type fctrl-trunk is enabled for SLBC trunk interfaces.
    • Enable the global and management VDOM features that you need, including SNMP, logging, connections to FortiManager, FortiAnalyzer, and so on.
    • If there is a FortiController in chassis slot 2, make sure the worker base2 interface status is up.
    • Remove snmp-index entries for each interface.
    • Since you can manage the workers from the FortiController, you can remove management-related configuration that uses the worker mgmt1 and mgmt2 interfaces (logging, SNMP, admin access, and so on) if you are not going to use these interfaces for management.
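
Steps 7 and 8 are mechanical enough to script. The following Python sketch shows one possible approach; the file names and the interface mapping are illustrative assumptions, so adapt them to your own step 2 mapping before use.

HEADER_LINES = 4  # the header lines swapped in step 7

# Step 2 mapping (FGCP name -> SLBC worker name). Quoted forms avoid
# false matches such as "port1" inside "port10".
INTERFACE_MAP = {'"port1"': '"fctl/f1"', '"port2"': '"fctl/f2"'}

def convert_vdom_config(fgcp_path, slbc_path, out_path):
    with open(fgcp_path) as f:
        fgcp = f.readlines()
    with open(slbc_path) as f:
        slbc = f.readlines()
    # Step 7: SLBC header lines, FGCP body.
    text = "".join(slbc[:HEADER_LINES] + fgcp[HEADER_LINES:])
    # Step 8: rename interfaces per the mapping.
    for old, new in INTERFACE_MAP.items():
        text = text.replace(old, new)
    with open(out_path, "w") as f:
        f.write(text)

# Example (hypothetical file names):
# convert_vdom_config("fgcp_VDOM1.conf", "slbc_VDOM1.conf", "restore_VDOM1.conf")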

How to set up SLBC with one FortiController-5103B

This example describes the basics of setting up a Session-aware Load Balancing Cluster (SLBC) that consists of one FortiController-5103B, installed in chassis slot 1, and three FortiGate-5001C workers, installed in chassis slots 3, 4, and 5.

This SLBC configuration can have up to eight 10Gbit network connections.

Configuring the hardware

  1. Install a FortiGate-5000 series chassis and connect it to power. Install the FortiController in slot 1. Install the workers in slots 3, 4, and 5. Power on the chassis.
  2. Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally. (For normal-operation LED status, see the FortiGate-5000 series documentation.)
  3. Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiController and on the workers. Get FortiController firmware from the Fortinet Support site. Select the FortiSwitch-ATCA product.

Configuring the FortiController

To configure the FortiController, connect to the FortiController GUI or CLI at the default IP address 192.168.1.99. Log in using the admin account (no password).

  1. Add a password for the admin account. Use the Administrators widget in the GUI, or enter the following CLI command:

    config admin user
      edit admin
        set password <password>
      next
    end

  2. Change the FortiController mgmt interface IP address. Use the Management Port widget in the GUI, or enter the following CLI command:

    config system interface
      edit mgmt
        set ip 172.20.120.151/24
      next
    end

  3. If you need to add a default route for the management IP address, enter the following command:

    config route static
      edit route 1
        set gateway 172.20.121.2
      next
    end

  4. To set the chassis type that you are using, enter the following CLI command:

    config system global
      set chassis-type fortigate-5140
    end

  5. Go to Load Balance > Config and add workers to the cluster by selecting Edit and moving the slots that contain workers to the Member list. The Config page shows the slots in which the cluster expects to find workers. Since the workers have not been configured yet, their status is Down.

    Configure the External Management IP/Netmask. Once the workers are connected to the cluster, you can use this IP address to manage and configure them.

  6. You can also enter the following CLI command to add slots 3, 4, and 5 to the cluster:

    config load-balance setting
      config slots
        edit 3
        next
        edit 4
        next
        edit 5
        next
      end
    end

  7. You can also enter the following command to configure the external management IP/netmask and management access:

    config load-balance setting
      set base-mgmt-external-ip 172.20.120.100 255.255.255.0
      set base-mgmt-allowaccess https ssh ping
    end

Adding the workers

Before you begin adding workers to the cluster, enter the execute factoryreset command in each worker's CLI so the workers are set to factory default settings. If the workers are going to run FortiOS Carrier, add the FortiOS Carrier license instead; applying the license also resets the worker to factory default settings.

Also make sure to register and apply licenses to each worker, including FortiClient licensing, FortiCloud activation, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs). You can also install any third-party certificates on the primary worker before forming the cluster. Once the cluster is formed, third-party certificates are synchronized to all of the workers. FortiToken licenses can be added at any time, which will also synchronize across all of the workers.

  1. Log in to each worker's CLI and enter the following CLI command to set the worker to operate in FortiController mode:

    config system elbc
      set mode forticontroller
    end

    Once the command is entered, the worker restarts and joins the cluster.

  2. On the FortiController, go to Load Balance > Status. You will see the workers appear in their appropriate slots. The worker in the lowest slot number usually becomes the primary unit.

You can now manage the workers in the same way as you would manage a standalone FortiGate. You can connect to the worker GUI or CLI using the External Management IP. If you configured the worker mgmt1 or mgmt2 interfaces, you can also connect to one of these addresses to manage the cluster.

To operate the cluster, connect networks to the FortiController front panel interfaces and connect to a worker GUI or CLI to configure the workers to process the traffic they receive. When you connect to the External Management IP you connect to the primary worker. When you make configuration changes they are synchronized to all workers in the cluster.

Managing the devices in an SLBC with the external management IP

The External Management IP address is used to manage each of the individual devices in an SLBC by adding a special port number. This special port number begins with the standard port number for the protocol you are using, followed by two digits that identify the chassis number and slot number. The port number can be calculated using the following formula:

service_port x 100 + (chassis_id - 1) x 20 + slot_id

Where:

  • service_port is the normal port number for the management service (80 for HTTP, 443 for HTTPS and so on).
  • chassis_id is the chassis ID specified as part of the FortiController HA configuration and can be 1 or 2.
  • slot_id is the number of the chassis slot.
Note

By default, chassis 1 is the primary chassis and chassis 2 is the backup chassis. However, the actual primary chassis is the one containing the primary FortiController, which can change independently of the chassis numbering. The chassis_id in the formula is always the configured chassis number, regardless of which chassis contains the primary FortiController.

Some examples:

  • HTTPS, chassis 1, slot 2: 443 x 100 + (1 - 1) x 20 + 2 = 44300 + 0 + 2 = 44302:

    browse to: https://172.20.120.100:44302

  • HTTP, chassis 2, slot 4: 80 x 100 + (2 - 1) x 20 + 4 = 8000 + 20 + 4 = 8024:

    browse to: http://172.20.120.100:8024

  • HTTPS, chassis 1, slot 10: 443 x 100 + (1 - 1) x 20 + 10 = 44300 + 0 + 10 = 44310:

    browse to: https://172.20.120.100:44310
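
The formula is easy to script. The following Python helper (the names are ours, for illustration) implements it and reproduces the worked examples above and the tables below:

SERVICE_PORTS = {"http": 80, "https": 443, "telnet": 23, "ssh": 22, "snmp": 161}

def special_mgmt_port(service, chassis_id, slot_id):
    """service_port x 100 + (chassis_id - 1) x 20 + slot_id"""
    return SERVICE_PORTS[service] * 100 + (chassis_id - 1) * 20 + slot_id

print(special_mgmt_port("https", 1, 2))   # 44302
print(special_mgmt_port("http", 2, 4))    # 8024
print(special_mgmt_port("ssh", 2, 14))    # 2234 (last row of the chassis 2 table)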

Single chassis or chassis 1 special management port numbers

Slot number HTTP (80) HTTPS (443) Telnet (23) SSH (22) SNMP (161)
Slot 1 8001 44301 2301 2201 16101
Slot 2 8002 44302 2302 2202 16102
Slot 3 8003 44303 2303 2203 16103
Slot 4 8004 44304 2304 2204 16104
Slot 5 8005 44305 2305 2205 16105
Slot 6 8006 44306 2306 2206 16106
Slot 7 8007 44307 2307 2207 16107
Slot 8 8008 44308 2308 2208 16108
Slot 9 8009 44309 2309 2209 16109
Slot 10 8010 44310 2310 2210 16110
Slot 11 8011 44311 2311 2211 16111
Slot 12 8012 44312 2312 2212 16112
Slot 13 8013 44313 2313 2213 16113
Slot 14 8014 44314 2314 2214 16114

Chassis 2 special management port numbers

Slot number HTTP (80) HTTPS (443) Telnet (23) SSH (22) SNMP (161)
Slot 1 8021 44321 2321 2221 16121
Slot 2 8022 44322 2322 2222 16122
Slot 3 8023 44323 2323 2223 16123
Slot 4 8024 44324 2324 2224 16124
Slot 5 8025 44325 2325 2225 16125
Slot 6 8026 44326 2326 2226 16126
Slot 7 8027 44327 2327 2227 16127
Slot 8 8028 44328 2328 2228 16128
Slot 9 8029 44329 2329 2229 16129
Slot 10 8030 44330 2330 2230 16130
Slot 11 8031 44331 2331 2231 16131
Slot 12 8032 44332 2332 2232 16132
Slot 13 8033 44333 2333 2233 16133
Slot 14 8034 44334 2334 2234 16134

For more detailed information regarding FortiController SLBC configurations, see the FortiController Session-Aware Load Balancing Cluster Guide.
