
Kubernetes CNI Plugin

The Kubernetes cluster network model is implemented by Container Network Interface (CNI) plugins. Depending on your performance requirements and service types, you may choose to integrate FortiADC directly into this network.

Networking Connectivity Options

FortiADC supports two ways to reach your Kubernetes workloads. Your choice determines whether you need to configure an overlay tunnel.

Connectivity Method | Description | Tunnel Required?
NodePort | FortiADC sends traffic to the IP addresses of the Kubernetes nodes; the cluster then handles internal routing to the pods. | No
ClusterIP | FortiADC sends traffic directly to the Pod IPs. This provides better performance and visibility. | Yes
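As an illustration, the two connectivity methods correspond to the type field of a standard Kubernetes Service manifest. The names, labels, and ports below are placeholders, not values required by FortiADC:

```yaml
# NodePort: exposed on a port of every node; FortiADC targets node IPs (no tunnel).
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport        # placeholder name
spec:
  type: NodePort
  selector:
    app: web                # placeholder label
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080       # node port FortiADC would target
---
# ClusterIP: reachable only inside the cluster network; FortiADC needs a
# VXLAN tunnel to reach the pod IPs directly.
apiVersion: v1
kind: Service
metadata:
  name: web-clusterip       # placeholder name
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```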

Why an Overlay Tunnel is Required

FortiADC Kubernetes Controller version 1.0 supported only NodePort; version 2.0 and later also support ClusterIP.

Because ClusterIP services exist only within the cluster’s private virtual network, an overlay tunnel (VXLAN) is required to bridge the FortiADC appliance to the Kubernetes pods. This tunnel allows FortiADC to participate in the cluster’s overlay network and communicate directly with the pod endpoints tracked by EndpointSlices (the objects that list the Pods backing your Service).
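For reference, an EndpointSlice is simply a list of the pod IP addresses backing a Service; once tunneled into the overlay network, FortiADC can reach these addresses directly. The names and addresses below are illustrative:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: web-abc12                      # generated name (illustrative)
  labels:
    kubernetes.io/service-name: web    # ties the slice to its Service
addressType: IPv4
endpoints:
  - addresses:
      - 10.244.1.15                    # pod IP on the overlay network (illustrative)
    conditions:
      ready: true
ports:
  - name: http
    port: 8080
    protocol: TCP
```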

Supported CNI Plugins

If you choose to use ClusterIP services, you must establish a VXLAN tunnel based on your cluster's CNI plugin. FortiADC supports the following:

  • Flannel: Supported starting from FortiADC version 7.4.0.

  • Calico: Supported starting from FortiADC version 8.0.2.
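Before configuring the tunnel, it is worth confirming that your CNI is actually running in VXLAN mode. For Flannel, this is set in net-conf.json, typically stored in the kube-flannel-cfg ConfigMap; the namespace and pod CIDR below are Flannel's common defaults and may differ in your cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel      # kube-system in some older deployments
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```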

Configuration Guides (ClusterIP Only)

If you are using NodePort, you may skip this section. If you are using ClusterIP, follow the guide corresponding to your CNI plugin:

  • Flannel VXLAN CNI: Steps to configure the VXLAN tunnel for clusters using Flannel.

  • Calico VXLAN CNI: Steps to configure the VXLAN tunnel for clusters using Calico.
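Similarly, for Calico you can check (or set) VXLAN encapsulation on the cluster's IP pool. The pool name and CIDR below are Calico's common defaults, not requirements:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16        # default pod CIDR in many Calico installs
  vxlanMode: Always           # encapsulate all pod-to-pod traffic with VXLAN
  ipipMode: Never             # disable IP-in-IP when VXLAN is used
  natOutgoing: true
```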
