FortiSandbox VM on AWS

Optional: Set up an HA-Cluster

You can set up multiple FortiSandbox instances in a load-balancing HA (high availability) cluster. For more information on using HA clusters, see the FortiSandbox Administration Guide.

Prepare the HA cluster in FortiSandbox

Note

The following operations are assumed to take place in the same VPC and the same Availability Zone.

  1. Prepare the following subnets:

     Subnet     Port on FortiSandbox     Function
     Subnet1    port1                    Management port.
     Subnet2    port2                    Port to communicate with the local customized VM.
     Subnet3    port3                    Port for cluster internal communication.
  2. Prepare the following security groups:

     Security-Group      For subnet    Description
     Security-group1     subnet1       The default ports recommended by Fortinet when launching the instance are usually sufficient.
     Security-group2     subnet2       Open at least TCP 21 for communication with the local customized VM.
     Security-group3     subnet3       Open TCP 2015 and TCP 2018 for cluster internal communication.

For detailed port information, see Port and access control information in the FortiSandbox Administration Guide.
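
As an illustration, security-group3 could be created along these lines with the AWS CLI. This is a sketch only: the VPC ID and CIDR are placeholder values, the calls are printed rather than executed, and `<sg-id>` stands in for the security-group ID that the create call would return.

```shell
# Sketch: build (but do not run) the AWS CLI calls for security-group3.
# VPC_ID and SUBNET3_CIDR are placeholders, not values from this guide.
VPC_ID="vpc-0123456789abcdef0"
SUBNET3_CIDR="10.20.3.0/24"

CMDS="aws ec2 create-security-group --group-name security-group3 \
  --description 'FortiSandbox HA cluster internal communication' \
  --vpc-id $VPC_ID"

# TCP 2015 and TCP 2018 are the cluster-internal ports from the table above.
for port in 2015 2018; do
  CMDS="$CMDS
aws ec2 authorize-security-group-ingress --group-id <sg-id> \
  --protocol tcp --port $port --cidr $SUBNET3_CIDR"
done

printf '%s\n' "$CMDS"
```

Security-group1 and security-group2 would follow the same pattern with their own port lists.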

Launching an HA-Cluster

A cluster comprises the following nodes:

  • One primary node

  • One secondary node

  • (Optional) One or more worker nodes

To launch FortiSandbox instances on AWS:
  1. Launch the FortiSandbox VMs: for example, one primary, one secondary, and one worker.
  2. For each FortiSandbox VM, follow the steps described in this AWS deployment guide, with the exception of the Network Settings:
    1. Under Firewall (security groups), choose Select existing security group and specify subnet1 for the network interface.

    2. Leave Common security groups empty.

    3. Click the Add network interface button to add two more network interfaces:
      • Specify subnet2 for interface 2

      • Specify security-group2 for interface 2

      • Specify subnet3 for interface 3

      • Specify security-group3 for interface 3

  3. Follow this guide and the on-screen instructions to finish launching the instances.
  4. Associate an Elastic IP (EIP) with interface 1 of each FortiSandbox VM.
  5. Log into each FortiSandbox HA-Cluster node using the EIP address. The initial password is the VM instance ID.
  6. Go to System > AWS Config, and configure the subnet2 information for the primary, secondary and worker nodes.
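
Step 4 above can also be done from the AWS CLI. The sketch below only prints the calls: the ENI ID is a placeholder for one node's port1 network interface, and `<eipalloc-id>` stands in for the allocation ID returned by the first call. Repeat per FortiSandbox VM.

```shell
# Sketch: print (do not run) the AWS CLI calls that allocate an Elastic IP
# and attach it to interface 1 (port1) of one node.
ENI_ID="eni-0123456789abcdef0"   # placeholder: port1 ENI of one node

EIP_CMDS="aws ec2 allocate-address --domain vpc
aws ec2 associate-address --allocation-id <eipalloc-id> \
  --network-interface-id $ENI_ID"

printf '%s\n' "$EIP_CMDS"
```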

Configuring an HA-Cluster

Ensure all the nodes meet the following requirements:

  • Use the same scan environment on all nodes. For example, install the same set of Windows VMs on each node so that the same scan profiles can be used and controlled by the primary node.
  • Run the same firmware build on all nodes.
  • Set up a dedicated network interface (such as port2) for each node for custom VMs.
  • Set up a dedicated network interface (such as port3) for each node for internal HA-Cluster communication.

In this example, 10.20.0.22/24 is an HA-Cluster failover IP address. It is configured as a secondary IP address for port1 of the primary node in the CLI commands below.

To configure an HA-Cluster using FortiSandbox CLI commands:
  1. Configure the primary node:
    • hc-settings -sc -tM -nMyHAPrimary -cClusterName -p123 -iport3
    • hc-settings -si -iport1 -a10.20.0.22/24
  2. Configure the secondary node:
    • hc-settings -sc -tP -nMyPWorker -cClusterName -p123 -iport3
    • hc-worker -a -sPrimary_Port3_private_IP -p123
  3. Configure the first worker node:
    • hc-settings -sc -tR -nMyRWorker1 -cClusterName -p123 -iport3
    • hc-worker -a -sPrimary_Port3_private_IP -p123
  4. If necessary, configure consecutive worker nodes:
    • hc-settings -sc -tR -nMyRWorker2 -cClusterName -p123 -iport3
    • hc-worker -a -sPrimary_Port3_private_IP -p123
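
The following recap of the flags is inferred from the examples above; consult the FortiSandbox CLI Reference for the authoritative flag descriptions.

```
# hc-settings -sc : configure this node's cluster settings
#   -tM / -tP / -tR : role — primary (-tM), secondary (-tP), worker (-tR)
#   -n<name>        : this node's name
#   -c<name>        : cluster name (the same value on every node)
#   -p<password>    : cluster password (the same value on every node)
#   -i<port>        : interface for cluster-internal traffic (port3 here)
# hc-settings -si -i<port> -a<IP/mask> : add the failover IP as a secondary IP
# hc-worker -a -s<primary_IP> -p<password> : join this node to the primary,
#   where <primary_IP> is the primary node's port3 private IP
```
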
To check the status of the HA-Cluster:

On the primary node, use this CLI command to view the status of all units in the cluster.

hc-status -l

To use a custom VM on an HA-Cluster:
  1. Install the same custom VM used by the primary node onto each worker node using the FortiSandbox CLI command vm-customized.

    All options must be the same when installing custom VMs on an HA-Cluster, including -vn[VM name].

  2. In the FortiSandbox AWS GUI, go to Scan Policy and Object > VM Settings and change Clone # to 1 for each node.

    After all VM clones on all nodes are configured, you can change the Clone # to a higher number.

  3. In a new CLI window, check the VM clone initialization using the diagnose-debug vminit command.
  4. In the FortiSandbox GUI, go to the Dashboard to verify there is a green checkmark beside Windows VM.
  5. To associate file extensions with the custom VM, go to Scan Policy > Scan Profile and open the VM Association tab.

You can now submit scan jobs from the primary node. HA-Cluster supports VM Interaction on each node.

Configuring an HA-Cluster on dual-zone

Set up an HA-Cluster with two FortiSandbox-AWS instances located in different AWS Availability Zones, where the nodes' internal IP addresses differ.

HA-Cluster requirements:
  • The subnets are in different Availability Zones under the same VPC.

  • The subnets reserved for a given FortiSandbox's interfaces are in the same Availability Zone.

  • All nodes are running the same firmware build.

  • There is a dedicated network interface (such as port3) on FortiSandbox for each node for internal HA-Cluster communication.
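
For the first requirement, the per-zone subnets could be created along these lines with the AWS CLI. This is a sketch only: the VPC ID, CIDRs, and Availability Zone names are placeholders, and the calls are printed rather than executed.

```shell
# Sketch: print (do not run) AWS CLI calls that create one port3 subnet
# per Availability Zone in the same VPC. All IDs, CIDRs, and AZ names
# below are placeholders.
VPC_ID="vpc-0123456789abcdef0"

SUBNET_CMDS=""
for spec in "10.20.3.0/24 us-east-1a" "10.20.13.0/24 us-east-1b"; do
  set -- $spec   # split "CIDR AZ" into $1 and $2
  SUBNET_CMDS="$SUBNET_CMDS
aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $1 --availability-zone $2"
done

printf '%s\n' "$SUBNET_CMDS"
```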

To configure an HA-Cluster on dual-zone:

In this example, FSA01 is set as the HA primary node and FSA02 as the HA secondary node. Additional HA nodes follow the same logic.

  1. On FSA01, go to System > Static route > Create new route to add a new route entry:

     Destination IP/Mask: FSA02's HA inter-communication interface IP (in this example, FSA02 port3's IP).
     Gateway: FSA01's HA inter-communication interface gateway (in this example, FSA01 port3's gateway).
     Device: FSA01's HA inter-communication interface (in this example, port3).
  2. On FSA02, go to System > Static route > Create new route and configure the new route:

     Destination IP/Mask: FSA01's HA interface IP (in this example, FSA01 port3's IP).
     Gateway: FSA02's HA interface gateway (in this example, FSA02 port3's gateway).
     Device: FSA02's HA interface (in this example, port3).
  3. From FSA01, ping FSA02's interfaces. The interfaces should be reachable.

  4. From FSA02, ping FSA01's interfaces. The interfaces should be reachable.

Note

After failover:

  • HA roles are switched and HA internal communication is re-established.

  • The HA-Cluster IP is lost.
