
Handling Node Lifecycle Events

The FortiADC Kubernetes Controller maintains a synchronized map of the cluster's physical topology. When worker nodes are added, removed, or experience failure, the controller automatically updates the FortiADC configuration to ensure traffic is only directed to healthy, available infrastructure.

Automated Node Synchronization

The controller monitors the Kubernetes API for three specific node-level events. Its behavior depends on whether the application is exposed via NodePort or ClusterIP.

  1. Node Addition (Scale-Out)

    When a new worker node joins the cluster:

    • For NodePort Services: The controller detects the new Node IP and automatically adds it as a new "Real Server" to the associated pools on the FortiADC.

    • For ClusterIP Services: The controller does not add the Node IP itself; instead, it tracks the Pods scheduled onto the new node. As each Pod becomes Ready, its IP is added to the FortiADC Real Server and Overlay Tunnel configurations.

  2. Node Deletion (Scale-In)

    When a node is decommissioned or removed from the cluster:

    • The controller immediately identifies the node's removal and purges the corresponding Real Server entry from all FortiADC pools.

    • This prevents the "blackholing" of traffic that would occur if the ADC attempted to send requests to an IP that no longer exists.

  3. Node Health Status (NotReady)

    The controller continuously monitors the NodeCondition of every worker node.

    • If a node enters a NotReady state (due to network partition, kubelet failure, or resource exhaustion), the controller treats this as a failure event.

    • Traffic Suspension: The controller will temporarily disable the Real Server associated with that node on the FortiADC until the node returns to a Ready status.
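
The three event types above can be summarized as a simple dispatch from event and service type to a FortiADC configuration change. The following is a minimal illustrative sketch, not the controller's actual implementation; the function, event names, and action strings are all hypothetical:

```python
# Hypothetical sketch of the controller's node-event dispatch logic.
# Event names, service types, and action tuples are illustrative only.

def sync_node_event(event, service_type, node_ip, pod_ips=()):
    """Map a node-level event to the FortiADC configuration changes
    the controller would perform, per service exposure type."""
    if service_type == "NodePort":
        if event == "NodeAdded":
            return [("add_real_server", node_ip)]
        if event == "NodeDeleted":
            return [("remove_real_server", node_ip)]
        if event == "NodeNotReady":
            return [("disable_real_server", node_ip)]
    elif service_type == "ClusterIP":
        if event == "NodeAdded":
            # The Node IP itself is not used; Pod IPs are added
            # only as the scheduled Pods become Ready.
            return [("add_real_server", ip) for ip in pod_ips]
        if event == "NodeDeleted":
            return [("remove_real_server", ip) for ip in pod_ips]
        if event == "NodeNotReady":
            # Individual Pod health checks on the FortiADC take over.
            return []
    return []
```

For example, `sync_node_event("NodeAdded", "NodePort", "10.0.0.5")` yields a single add action for the Node IP, while the same event for a ClusterIP service yields actions only for the Pod IPs passed in.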

Comparison of Lifecycle Handling

Event          | Impact on NodePort Services     | Impact on ClusterIP Services
New Node Added | New Node IP added to FADC pool. | No change until Pods are scheduled.
Node Deleted   | Node IP removed from FADC pool. | Pod IPs on that node removed from FADC.
Node NotReady  | Node IP disabled on FADC.       | FADC relies on individual Pod health checks.

Technical Considerations

  • Fake Nodes: Note that the "Fake Node" created for Calico or Flannel integration will always show as NotReady in Kubernetes. The controller is specifically programmed to ignore the health status of this fake node to ensure the VXLAN tunnel remains active.

  • Draining Nodes: If you run kubectl drain, the controller observes the Pod evictions and updates the FortiADC pools in real time, before the node actually shuts down.
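
The fake-node exception can be illustrated with a small health filter. This is a sketch under stated assumptions: the name used to identify the fake node below is hypothetical, not the controller's actual convention:

```python
# Sketch: skip failure handling for the CNI-integration "fake node",
# which is expected to report NotReady permanently. The name below is
# an assumed convention, not the controller's real identifier.

FAKE_NODE_NAMES = {"fadc-fake-node"}  # hypothetical name

def should_disable_real_server(node_name, node_ready):
    """Return True if the controller should disable this node's
    Real Server on the FortiADC due to a NotReady condition."""
    if node_name in FAKE_NODE_NAMES:
        # Ignore the fake node's health so the VXLAN tunnel stays up.
        return False
    return not node_ready
```

A real worker reporting NotReady is disabled; the fake node is never disabled, regardless of its reported condition.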
