
FortiGate-7000 Handbook

HA configuration



Use the following steps to set up HA between two FortiGate-7000s (in this example, two FortiGate-7040Es or 7060Es). The steps for the FortiGate-7030E are similar, except that each FortiGate-7030E has only one FIM.

Each FIM must be configured for HA separately; the HA configuration is not synchronized among FIMs. Begin by setting up HA on both FIMs in the first FortiGate-7000, then do the same for the second FortiGate-7000.

To configure HA, you assign a chassis ID (1 or 2) to each FortiGate-7000. These IDs only identify the chassis; they do not influence primary unit selection.

Setting up HA on the FIMs in the first FortiGate-7000 (chassis 1)

  1. Log into the CLI of the FIM in slot 1 (FIM01) and enter the following command:

    config system ha

    set mode a-p

    set password <password>

    set group-id <id>

    set chassis-id 1

    set hbdev M1/M2

    end

    This adds basic HA settings to this FIM.

  2. Repeat this configuration on the FIM in slot 2 (FIM02).

    config system ha

    set mode a-p

    set password <password>

    set group-id <id>

    set chassis-id 1

    set hbdev M1/M2

    end

  3. From either FIM, enter the following command to confirm that the FortiGate-7000 is in HA mode:

    diagnose sys ha status

    The password and group-id are unique to each HA cluster and must be the same on all FIMs. If the cluster does not form, first verify that the group-id matches, then re-enter the password on both FIMs.
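
    For example, correcting a mismatched group ID and re-entering the password on a FIM looks like this (a sketch; substitute your own values):

    config system ha

    set group-id <id>

    set password <password>

    end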

Setting up HA on the FIMs in the second FortiGate-7000 (chassis 2)

  1. Repeat the same HA configuration on the FIMs in the second chassis, except set the chassis ID to 2.

    config system ha

    set mode a-p

    set password <password>

    set group-id <id>

    set chassis-id 2

    set hbdev M1/M2

    end

  2. From any FIM, enter the following command to confirm that the cluster has formed and all of the FIMs have been added to it:

    diagnose sys ha status

    The cluster has now formed. You can complete the configuration, connect network equipment, and start operating the cluster. You can also modify the HA configuration to suit your requirements.
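
    You can also confirm that the FIM configurations are synchronized. On FortiGate-7000 systems, the following command reports configuration synchronization status (the output format can vary with the FortiOS version):

    diagnose sys confsync status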

Verifying that the cluster is operating correctly

Enter the following CLI command to view the status of the cluster. You can enter this command from any module's CLI; the HA members may appear in a different order depending on which module's CLI you use.

If the cluster is operating properly, the command output indicates the primary and backup (master and slave) FortiGate-7000s as well as the primary and backup (master and slave) modules. For each module, the state portion of the output shows the parameters used to select the primary FIM: the number of failed FPMs that the FIM connects to, the status of any link aggregation group (LAG) interfaces in the configuration, the state of the FIM's interfaces, the FIM's traffic bandwidth score (a higher score means more interfaces are connected to networks), and the status of the management links.

diagnose  sys  ha  status
==========================================================================
Current slot: 1  Module SN: FIM04E3E16000085
Chassis HA mode: a-p

Chassis HA information:
[Debug_Zone HA information]
HA group member information: is_manage_master=1.
FG74E83E16000015:  Slave, serialno_prio=1, usr_priority=128, hostname=CH15
FG74E83E16000016: Master, serialno_prio=0, usr_priority=127, hostname=CH16

HA member information:
CH16(FIM04E3E16000085), Master(priority=0), uptime=78379.78, slot=1, chassis=2(2)
    slot: 1, chassis_uptime=145358.97,
    more: cluster_id:0, flag:1, local_priority:0, usr_priority:127, usr_override:0
    state: worker_failure=0/2, lag=(total/good/down/bad-score)=5/5/0/0,
           intf_state=(port up)=0, force-state(1:force-to-master)
           traffic-bandwidth-score=120, mgmt-link=1
    hbdevs: local_interface=      1-M1 best=yes
            local_interface=      1-M2 best=no
    ha_elbc_master: 3, local_elbc_master: 3

CH15(FIM04E3E16000074), Slave(priority=2), uptime=145363.64, slot=1, chassis=1(2)
    slot: 1, chassis_uptime=145363.64,
    more: cluster_id:0, flag:0, local_priority:2, usr_priority:128, usr_override:0
    state: worker_failure=0/2, lag=(total/good/down/bad-score)=5/5/0/0,
           intf_state=(port up)=0, force-state(-1:force-to-slave)
           traffic-bandwidth-score=120, mgmt-link=1
    hbdevs: local_interface=      1-M1 last_hb_time=145640.39   status=alive
            local_interface=      1-M2 last_hb_time=145640.39   status=alive

CH15(FIM10E3E16000040), Slave(priority=3), uptime=145411.85, slot=2, chassis=1(2)
    slot: 2, chassis_uptime=145638.51,
    more: cluster_id:0, flag:0, local_priority:3, usr_priority:128, usr_override:0
    state: worker_failure=0/2, lag=(total/good/down/bad-score)=5/5/0/0,
           intf_state=(port up)=0, force-state(-1:force-to-slave)
           traffic-bandwidth-score=100, mgmt-link=1
    hbdevs: local_interface=      1-M1 last_hb_time=145640.62   status=alive
            local_interface=      1-M2 last_hb_time=145640.62   status=alive

CH16(FIM10E3E16000062), Slave(priority=1), uptime=76507.11, slot=2, chassis=2(2)
    slot: 2, chassis_uptime=145641.75,
    more: cluster_id:0, flag:0, local_priority:1, usr_priority:127, usr_override:0
    state: worker_failure=0/2, lag=(total/good/down/bad-score)=5/5/0/0,
           intf_state=(port up)=0, force-state(-1:force-to-slave)
           traffic-bandwidth-score=100, mgmt-link=1
    hbdevs: local_interface=      1-M1 last_hb_time=145640.39   status=alive
            local_interface=      1-M2 last_hb_time=145640.39   status=alive
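
The selection parameters shown in the state lines above can be modeled as a simple ranking. The following sketch is conceptual only, not the actual FortiOS election algorithm; the names, field choices, and comparison order are illustrative assumptions based on the output above.

```python
# Conceptual sketch (not the actual FortiOS election algorithm) of how the
# "state:" parameters from diagnose sys ha status could rank FIMs when
# selecting a primary. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FimState:
    name: str
    worker_failures: int          # failed FPMs behind this FIM (fewer is better)
    lag_bad_score: int            # LAG members with a bad score (fewer is better)
    traffic_bandwidth_score: int  # higher means more connected network interfaces
    mgmt_link: int                # 1 = management link up

def ranking_key(fim):
    # Lower tuple sorts first, i.e. is preferred as primary. Signs are chosen
    # so that fewer failures, fewer bad LAG members, a higher bandwidth score,
    # and an up management link are all preferred.
    return (fim.worker_failures, fim.lag_bad_score,
            -fim.traffic_bandwidth_score, -fim.mgmt_link)

fims = [
    FimState("CH16 slot 1", 0, 0, 120, 1),
    FimState("CH15 slot 2", 0, 0, 100, 1),
]
primary = min(fims, key=ranking_key)
print(primary.name)  # CH16 slot 1 wins on traffic bandwidth score
```

In this model, as in the output above, the FIM with five good LAG members, no worker failures, and the higher traffic bandwidth score ranks first.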
