
Azure Administration Guide

HA for FortiGate-VM on Azure

You can use FortiGate-VM in different scenarios to protect assets that are deployed in Azure virtual networks (VNet):

  • Secure hybrid cloud

  • Cloud security services hub

  • Logical intent-based segmentation

  • Secure remote access

See Cloud Security for Microsoft Azure for a general overview of different public cloud use cases.

When designing a reliable architecture in Azure, you must take resiliency and high availability (HA) into account. See Microsoft's Overview of the reliability pillar. Running the FortiGate next-generation firewall inside Azure offers different reliability levels depending on the building blocks used.

Microsoft offers different SLAs on Azure based on the deployment that you use (see the downtime sketch after this list):

  • Availability Zone (AZ) (different datacenter in the same region): 99.99%

  • Availability Set (different rack and power): 99.95%

  • Single VM with premium SSD: 99.9%
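
To put these tiers in perspective, the following minimal sketch converts each SLA percentage into an approximate monthly downtime budget. It assumes a 30-day month for simplicity; Microsoft's SLA terms define the authoritative calculation and exclusions.

    # Approximate monthly downtime budget implied by each Azure SLA tier.
    # Assumes a 30-day month; Microsoft's SLA terms define the exact calculation.
    MINUTES_PER_MONTH = 30 * 24 * 60

    for deployment, sla_percent in [
        ("Availability Zone", 99.99),
        ("Availability Set", 99.95),
        ("Single VM with premium SSD", 99.9),
    ]:
        downtime_minutes = MINUTES_PER_MONTH * (1 - sla_percent / 100)
        print(f"{deployment}: {sla_percent}% -> up to {downtime_minutes:.1f} minutes of downtime per month")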

Building blocks

  • Single VM: A single FortiGate-VM processes all the traffic and becomes a single point of failure during operations and upgrades. You can also use this block in an architecture with multiple regions where a FortiGate-VM is deployed in each region. This setup provides an SLA of 99.9% when using a premium SSD disk. See Single FortiGate-VM deployment.

  • Active-passive with external and internal Azure load balancer (LB): This design deploys two FortiGate-VMs in active-passive mode, connected using the unicast FortiGate Clustering Protocol (FGCP). In this setup, the Azure LB handles traffic failover using a health probe towards the FortiGate-VMs. Failover time depends on the Azure LB health probe: two failed probes at a five-second interval, giving a maximum of about 15 seconds (see the probe sketch after this list). You configure the public IP addresses on the Azure LB, which provides ingress and egress flows with inspection by the FortiGate. Microsoft provides guidance on this architecture.

  • Active-passive HA with SDN connector failover: This design deploys two FortiGate-VMs in active-passive mode, connected using the unicast FGCP HA protocol, which synchronizes the configuration. On failover, the formerly passive FortiGate takes control and issues API calls to Azure to shift the public IP address and update the internal user-defined routes to point to itself (see the failover sketch after this list). Shifting the public IP address and the gateway IP addresses of the routes takes time for Azure to complete. Microsoft provides a general architecture; in FortiGate's case, the API call logic is built in instead of requiring additional outside logic such as Azure Functions or ZooKeeper nodes.

  • Active-active with external and internal Azure LB: This design deploys two FortiGate-VMs in active-active mode as two independent systems. In this setup, the Azure LB distributes traffic across both FortiGate-VMs and handles failover using a health probe. You configure the public IP addresses on the Azure LB, which provides ingress and egress flows with inspection by the FortiGate. You can use FortiManager or local configuration replication to keep the configurations synchronized in this setup. Microsoft provides guidance on this architecture.
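
The probe sketch below shows the Azure LB health probe settings described for the load balancer designs above, expressed with the azure-mgmt-network Python SDK. It is illustrative only: the marketplace offers and ARM templates create an equivalent probe for you, and the subscription ID, resource names, and probe port shown here are placeholders.

    # Illustrative only: the deployment templates create an equivalent probe automatically.
    # The subscription ID, resource names, and probe port are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import Probe

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Two failed probes at a five-second interval mark a FortiGate-VM as unhealthy,
    # so the LB stops steering new flows to it within roughly 10-15 seconds.
    probe = Probe(
        name="fgt-health-probe",
        protocol="Tcp",
        port=8008,  # port on which the FortiGate answers the probe (assumed value)
        interval_in_seconds=5,
        number_of_probes=2,
    )

    lb = client.load_balancers.get("<resource-group>", "<load-balancer-name>")
    lb.probes = [probe]
    client.load_balancers.begin_create_or_update(
        "<resource-group>", "<load-balancer-name>", lb
    ).result()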
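
For the SDN connector failover design, the failover sketch below shows the kind of Azure API operations involved: repointing a user-defined route at the new primary and moving the public IP address to its NIC. The FortiGate issues these calls itself through its built-in Azure SDN connector; this Python/azure-mgmt-network version only illustrates the equivalent operations, and every name and address in it is a placeholder.

    # Illustrative only: the FortiGate's built-in SDN connector performs these API calls
    # during failover. All names and IP addresses below are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import Route

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
    rg = "<resource-group>"

    # 1. Point the protected subnets' default route at the new primary's internal interface.
    client.routes.begin_create_or_update(
        rg, "<route-table-name>", "default-route",
        Route(
            address_prefix="0.0.0.0/0",
            next_hop_type="VirtualAppliance",
            next_hop_ip_address="<new-primary-internal-ip>",
        ),
    ).result()

    # 2. Detach the public IP from the failed unit's NIC and attach it to the new primary's NIC.
    old_nic = client.network_interfaces.get(rg, "<old-primary-external-nic>")
    old_nic.ip_configurations[0].public_ip_address = None
    client.network_interfaces.begin_create_or_update(rg, old_nic.name, old_nic).result()

    new_nic = client.network_interfaces.get(rg, "<new-primary-external-nic>")
    new_nic.ip_configurations[0].public_ip_address = client.public_ip_addresses.get(
        rg, "<cluster-public-ip>"
    )
    client.network_interfaces.begin_create_or_update(rg, new_nic.name, new_nic).result()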

Availability zones and availability sets are available as options in the Azure Marketplace and in the ARM templates on GitHub. You can select them during deployment.

Architecture

You can deploy the FortiGate-VM in Azure in different architectures. Each architecture has specific properties that can be advantages or disadvantages in your environment:

  • Single VNet: All building blocks above are ready to deploy in a new or existing VNet. Select your block to get started.

  • Cloud Security Services Hub (VNet peering): With VNet peering, you can run different islands that deploy different services, managed by different internal and/or external teams, while maintaining a single point of control toward on-premises networks, other clouds, or the public Internet. The VNets are connected in a hub-and-spoke setup where the hub controls all traffic. See VNET-Peering.

Note: In active-passive HA scenarios on Azure, you must set the physical interface IP address (port1) and local tunnel interface IP addresses manually on the secondary FortiGate; HA does not automatically synchronize these IP addresses. You must also manually copy the loopback interface configuration from the HA primary to the secondary FortiGate. Configuring a virtual domain exception for "system.interface" does not affect this behavior.
