HA for FortiGate-VM on Azure
You can use FortiGate-VM in different scenarios to protect assets that are deployed in Azure virtual networks:
- Secure hybrid cloud
- Cloud security services hub
- Logical intent-based segmentation
- Secure remote access
See Fortinet Use Cases for Microsoft Azure for a general overview of different public cloud use cases.
When designing a reliable architecture in Azure, you must take resiliency and high availability (HA) into account. See Microsoft's Overview of the reliability pillar. Running the FortiGate next-generation firewall inside Azure offers different levels of reliability depending on the building blocks used.
Microsoft offers different SLAs on Azure based on the deployment that you use:
- Availability Zone (AZ) (different datacenter in the same region): 99.99%
- Availability Set (different rack and power): 99.95%
- Single VM with premium SSD: 99.9%
Building blocks
- Single VM: A single FortiGate-VM processes all the traffic and becomes a single point of failure during operations and upgrades. You can also use this block in an architecture with multiple regions where a FortiGate is deployed in each region. This setup provides an SLA of 99.9% when using a premium SSD disk. See Single FortiGate-VM deployment.
- Active-passive with external and internal Azure load balancer: This design deploys two FortiGate-VMs in active-passive mode connected using the Unicast FGCP HA protocol. In this setup, the Azure load balancer handles traffic failover using a health probe towards the FortiGate-VMs. Failover times depend on the load balancer health probe: two failed probes at a five-second interval, with a maximum of roughly 15 seconds. The public IP addresses are configured on the Azure load balancer and provide ingress and egress flows with inspection by the FortiGate. Microsoft provides guidance on this architecture. A configuration sketch for the probe response and HA settings follows this list.
- Active-passive HA with SDN connector failover: This design deploys two FortiGate-VMs in active-passive mode connected using the Unicast FGCP HA protocol, which synchronizes the configuration. On failover, the passive FortiGate takes control and issues API calls to Azure to shift the public IP address and update the internal user-defined routes to point to itself. Shifting the public IP address and the gateway IP addresses of the routes takes time for Azure to complete. Microsoft provides a general architecture. In FortiGate's case, the API call logic is built in, so no additional outside logic such as Azure Functions or ZooKeeper nodes is required. An SDN connector sketch follows this list.
- Active-active with external and internal Azure load balancer: This design deploys two FortiGate-VMs in active-active mode as two independent systems. In this setup, the Azure load balancer handles traffic failover using a health probe towards the FortiGate-VMs. The public IP addresses are configured on the Azure load balancer and provide ingress and egress flows with inspection by the FortiGate. You can use a FortiManager or local replication to synchronize configuration in this setup. Microsoft provides guidance on this architecture.
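The following is a minimal FortiOS sketch of the configuration that the load balancer designs above rely on: the probe responder that answers the Azure load balancer health probe (used by both the active-passive and active-active load balancer designs) and the Unicast FGCP HA settings for the active-passive pair. The group name, heartbeat interface, peer heartbeat IP, and priority are placeholders; use the values from your deployment template.

```
# Answer the Azure load balancer health probe on TCP port 8008.
config system probe-response
    set mode http-probe
    set http-probe-value "OK"
    set port 8008
end

# Unicast FGCP HA for the active-passive pair. The interface name, peer
# heartbeat IP, and priority below are placeholders.
config system ha
    set group-name "AzureHA"
    set mode a-p
    set hbdev "port3" 100
    set session-pickup enable
    set unicast-hb enable
    set unicast-hb-peerip 172.16.136.69
    set priority 255
end
```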
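For the SDN connector failover design, the FortiGate uses an Azure SDN connector to move the public IP address and rewrite user-defined routes after a failover. The sketch below shows the general shape of that configuration; the subscription, resource group, credentials, NIC, IP configuration, route table, and route names, as well as the next-hop address, are all placeholders for your environment.

```
# Azure SDN connector used by the active-passive SDN connector failover design.
# All IDs, names, and addresses below are placeholders.
config system sdn-connector
    edit "AzureSDN"
        set type azure
        set subscription-id "<subscription-id>"
        set resource-group "<resource-group>"
        set tenant-id "<tenant-id>"
        set client-id "<app-registration-id>"
        set client-secret "<app-registration-secret>"
        # Public IP address that the new primary claims after failover.
        config nic
            edit "FGT-A-Nic1"
                config ip
                    edit "ipconfig1"
                        set public-ip "FGT-Cluster-PIP"
                    next
                end
            next
        end
        # User-defined routes whose next hop the new primary rewrites
        # to its own internal IP address.
        config route-table
            edit "ProtectedSubnetRouteTable"
                config route
                    edit "DefaultRoute"
                        set next-hop "172.16.136.68"
                    next
                end
            next
        end
    next
end
```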
AZs and availability sets are available as options in the Azure Marketplace and in the ARM templates on GitHub. You can select them during deployment.
Architecture
You can deploy the FortiGate-VM in Azure in different architectures. Each architecture has specific properties that can be advantages or disadvantages in your environment:
| Architecture | Description |
| --- | --- |
| Single VNet | All of the building blocks above are ready to deploy in a new or existing VNet. Select your block to get started. |
| Cloud Security Services Hub (VNet peering) | With VNet peering, you can have different islands deploying different services managed by different internal and/or external teams, while maintaining a single point of control going to on-premises networks, other clouds, or the public internet. The VNets are connected in a hub-and-spoke setup where the hub controls all traffic. See VNET-Peering. |
In active-passive HA scenarios on Azure, you must set the physical interface IP address (port1) and the local tunnel interface IP addresses manually on the secondary FortiGate, because HA does not automatically synchronize these IP addresses. You must also manually copy the loopback interface configuration from the HA primary to the secondary FortiGate. Configuring a VDOM exception for "system.interface" does not change this behavior.
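As a sketch of the manual steps described above, assuming placeholder addresses and interface names (and that the tunnel interface already exists from its IPsec phase 1 definition), you would run something like the following on the secondary FortiGate to set the port1 address, the local tunnel addresses, and a loopback copied from the primary.

```
# On the secondary FortiGate only. Addresses and interface names are
# placeholders; FGCP HA does not synchronize these settings.
config system interface
    edit "port1"
        set ip 172.16.136.69/26
    next
    # Local and remote tunnel addresses for an existing IPsec tunnel interface.
    edit "vpn-to-onprem"
        set ip 10.10.10.2 255.255.255.255
        set remote-ip 10.10.10.1 255.255.255.255
    next
    # Loopback configuration copied manually from the primary.
    edit "Loopback0"
        set vdom "root"
        set type loopback
        set ip 10.255.255.1/32
        set allowaccess ping
    next
end
```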