4.4.0

Optional: Using HA-Cluster

You can set up multiple FortiSandbox instances in a load-balancing HA (high availability) cluster.

For information on using HA clusters, see the FortiSandbox Administration Guide.

Launching an HA-Cluster

To launch FortiSandbox instances on GCP:
  1. On the GCP Launch Instances page, launch the FortiSandbox primary instance from the GCP Marketplace first.
  2. Edit the Network settings. Assign Network interface 1 to the FortiSandbox firmware subnet of port1 (for example, 10.0.1.x). Add two additional network interfaces under the dedicated subnets for all HA-Cluster nodes:
    1. Create Network interface 2 for local Windows clone communication (custom VMs).
    2. Create Network interface 3 for internal HA-Cluster communication.
  3. Edit the firewall policy for the FortiSandbox interface 3 VPC, and open the following ports for HA-Cluster communication:

    • TCP 2015 (Source: port3 subnet, for example, 10.0.3.0/24)
    • TCP 2018 (Source: port3 subnet, for example, 10.0.3.0/24)
    • TCP 443 (Source: port3 subnet, for example, 10.0.3.0/24)

    Note: In cluster mode, FortiSandbox uses TCP ports 2015 and 2018 for internal cluster communication. TCP 443 is used to access the dashboard of a worker node from the primary node. If the unit acts as a Collector to receive threat information from other units, it also uses TCP port 2443.

  4. Reserve a static internal IP address for the external HA-Cluster communication IP address (failover IP). This step is optional but recommended.

    1. On the GCP Console, go to VPC network > VPC networks.
    2. Select the VPC that is used for port1.
    3. Click the STATIC INTERNAL IP ADDRESSES tab.
    4. Click RESERVE STATIC ADDRESS.

  5. Follow the prompts to fill in the correct information and click RESERVE. The reserved IP address is used as the FortiSandbox external HA-Cluster communication IP address. It is attached to the primary node and transfers to the secondary node if the primary node goes down.
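The firewall rule and static address reservation above can also be scripted with the gcloud CLI instead of the console. The following is a sketch only: the resource names (fsa-ha-cluster, fsa-vpc-port3, fsa-port1-subnet, fsa-failover-ip) and the region are placeholders, not values defined by this guide; substitute the names and CIDR ranges from your own VPC layout.

```shell
# Open the HA-Cluster ports on the port3 VPC (step 3 above).
# Network name and source range are examples; adjust to your environment.
gcloud compute firewall-rules create fsa-ha-cluster \
    --network=fsa-vpc-port3 \
    --direction=INGRESS \
    --allow=tcp:2015,tcp:2018,tcp:443 \
    --source-ranges=10.0.3.0/24

# Reserve the internal failover IP in the port1 subnet (steps 4-5 above).
# 10.0.1.9 matches the example address used later in this guide.
gcloud compute addresses create fsa-failover-ip \
    --region=us-central1 \
    --subnet=fsa-port1-subnet \
    --addresses=10.0.1.9
```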

Configuring an HA-Cluster

If you are using HA-Cluster without failover, the secondary node is optional.

Ensure the HA-Cluster meets the following requirements:

  • Use the same scan environment on all nodes. For example, install the same set of Windows VMs on each node so that the same scan profiles can be used and controlled by the primary node.

  • Run the same firmware build on all nodes.

  • Set up a dedicated network interface (such as port2) for each node for custom VMs.

  • Set up a dedicated network interface (such as port3) for each node for internal HA-Cluster communication.

In this example, 10.0.1.9/24 is the external HA-Cluster communication (failover) IP address. It is in the same port1 subnet as the private IP addresses of the primary and secondary nodes.
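Before running the FortiSandbox CLI commands below, you can confirm from the GCP side that the reserved failover address exists and matches the value you plan to pass to hc-settings. The resource name fsa-failover-ip and the region are assumed placeholders from the reservation step, not names defined by this guide.

```shell
# Show the reserved internal address and its status (RESERVED or IN_USE).
# fsa-failover-ip and us-central1 are placeholders for your own values.
gcloud compute addresses describe fsa-failover-ip \
    --region=us-central1 \
    --format="value(address,status)"
```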

To configure an HA-Cluster with the CLI:
  1. Configure the primary node:

    hc-settings -sc -tM -n<PrimaryNodeName> -c<ClusterName> -p<AuthenticationCode> -iport3

    hc-settings -si -iport1 -a<FailoverIP/netmask>

    Example:

    hc-settings -sc -tM -nMyPrimaryNode -cMyCluster -p123456 -iport3

    hc-settings -si -iport1 -a10.0.1.9/24

  2. Configure the secondary node:

    hc-settings -sc -tP -n<SecondaryNodeName> -c<ClusterName> -p<AuthenticationCode> -iport3

    hc-worker -a -s<PrimaryNodePort3PrivateIP> -p<AuthenticationCode>

    Example:

    hc-settings -sc -tP -nMySecondaryNode -cMyCluster -p123456 -iport3

    hc-worker -a -s10.0.3.10 -p123456

  3. Configure the first worker node:

    hc-settings -sc -tR -n<WorkerNodeName1> -c<ClusterName> -p<AuthenticationCode> -iport3

    hc-worker -a -s<PrimaryNodePort3PrivateIP> -p<AuthenticationCode>

    Example:

    hc-settings -sc -tR -nWorkerNode1 -cMyCluster -p123456 -iport3

    hc-worker -a -s10.0.3.10 -p123456

  4. If necessary, configure additional worker nodes:

    hc-settings -sc -tR -n<WorkerNodeName2> -c<ClusterName> -p<AuthenticationCode> -iport3

    hc-worker -a -s<PrimaryNodePort3PrivateIP> -p<AuthenticationCode>

    Example:

    hc-settings -sc -tR -nWorkerNode2 -cMyCluster -p123456 -iport3

    hc-worker -a -s10.0.3.10 -p123456

To check the status of the HA-Cluster:

On the primary node, use the following CLI command to view the status of all units in the cluster.

hc-status -l

To use a custom VM on an HA-Cluster:
  1. Install the GCP local custom VMs from the primary node onto each worker node using the FortiSandbox CLI command vm-customized.
    Note

    All options must be the same when installing custom VMs on an HA-Cluster, including the VM name specified with -vn.

  2. In the FortiSandbox GUI, go to Scan Policy and Object > VM Settings and change Clone # to 1 for each node. After all VM clones on all nodes are configured, you can change the Clone # to a higher number.
  3. In a new CLI window, check the VM clone initialization using the diagnose-debug vminit command.
  4. In the FortiSandbox GUI, go to the Dashboard to verify there is a green checkmark beside Windows VM.
  5. To associate file extensions with the custom VM, go to Scan Policy > Scan Profile and select the VM Association tab.
  6. You can now submit scan jobs from the primary node. HA-Cluster supports VM Interaction on each node.
