Kubernetes Agent and Node Status

Kubernetes Agent Status

After adding a new Kubernetes Cluster, the Kubernetes agent status shows the overall pod status on the Kubernetes Cluster nodes.

There are four Kubernetes agent statuses: Initializing, Unhealthy, Healthy, and Warning.

The kubernetes agent status is determined by the Kubernetes pods status returned by the command:

kubectl get pods -n fortinet

Example of the status returned by the get pods command:

Kubernetes Agent Status    Pod Status

Initializing               Pods are being deployed on the Kubernetes Cluster nodes.
Unhealthy                  The controller pod is in "Not Running" status.
Healthy                    All pods are in "Running" status.
Warning                    One or more pods are in "Not Running" status while the controller pod is in "Running" status.
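The rules in the table above can be sketched as a small decision function. This is an illustrative sketch only, assuming the evaluation order shown in the table; the function name and input shape are not the product's actual implementation.

```python
def agent_status(pod_statuses, controller_status, deploying=False):
    """Derive the Kubernetes agent status from the pod statuses
    reported by `kubectl get pods -n fortinet` (illustrative only).

    pod_statuses      -- statuses of all pods in the namespace
    controller_status -- status of the controller pod
    deploying         -- True while pods are still being deployed
    """
    if deploying:
        return "Initializing"
    if controller_status != "Running":
        # Controller pod down takes precedence over everything else.
        return "Unhealthy"
    if all(s == "Running" for s in pod_statuses):
        return "Healthy"
    # Controller is running but some other pod is not.
    return "Warning"

print(agent_status(["Running", "Running"], "Running"))           # Healthy
print(agent_status(["Running", "CrashLoopBackOff"], "Running"))  # Warning
print(agent_status(["Running"], "Pending"))                      # Unhealthy
```

Note that the controller pod check comes first: a cluster with a failed controller is Unhealthy even if every other pod is running.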

 

Node Status

Kubernetes Node Status shows the number of nodes that are running and the total number of nodes that are available.

For the example above, the cluster named "nde-deployment" has 3 nodes running out of a total of 3 nodes.

The Kubernetes node status is determined by the CLI command: 

kubectl get nodes

Example of the status returned by the get nodes command:

In this case, only the master node is in "Ready" status while the other 2 nodes are "Not Ready". The Node Status is shown as "1/3" in Container Protection.
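The "ready/total" value can be derived by counting node states in the get nodes output. A minimal sketch follows; the sample output and node names are hypothetical, not taken from the screenshot.

```python
# Hypothetical `kubectl get nodes` output matching the 1/3 example:
# one master node Ready, two worker nodes NotReady.
SAMPLE = """\
NAME       STATUS     ROLES           AGE   VERSION
master-1   Ready      control-plane   10d   v1.27.4
worker-1   NotReady   <none>          10d   v1.27.4
worker-2   NotReady   <none>          10d   v1.27.4
"""

def node_status(kubectl_output: str) -> str:
    """Return the Node Status string "ready/total" (illustrative sketch)."""
    rows = kubectl_output.strip().splitlines()[1:]   # skip the header row
    statuses = [line.split()[1] for line in rows]    # STATUS column
    ready = sum(1 for s in statuses if s == "Ready")
    return f"{ready}/{len(statuses)}"

print(node_status(SAMPLE))  # 1/3
```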

 

Auto Scaling

Auto Scaling happens when the container provider automatically adjusts the workload of the Kubernetes Clusters: when demand is high, the number of worker nodes increases, and when demand is low, the number of worker nodes decreases.

When a worker node is deactivated, the Kubernetes agent is removed automatically, and when a worker node is activated, the Kubernetes agent is installed automatically. This eliminates the need to manually install or uninstall the Kubernetes agent when auto-scaling happens.
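The automatic install/uninstall behavior amounts to keeping the set of agent installations in sync with the set of active worker nodes. A hedged sketch of that reconciliation, with hypothetical node names and no claim about how the product actually implements it:

```python
def reconcile(active_nodes: set, nodes_with_agent: set):
    """Return (nodes needing an agent install, nodes needing an uninstall).

    Illustrative only: compares the active worker nodes against the
    nodes that currently have the Kubernetes agent installed.
    """
    to_install = active_nodes - nodes_with_agent    # scale-up: new nodes
    to_uninstall = nodes_with_agent - active_nodes  # scale-down: removed nodes
    return to_install, to_uninstall

# Scale-up: worker-3 joins the node group.
print(reconcile({"worker-1", "worker-2", "worker-3"}, {"worker-1", "worker-2"}))
# Scale-down: worker-2 is deleted from the group.
print(reconcile({"worker-1"}, {"worker-1", "worker-2"}))
```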

 

Scenario 1: Auto-scale down when a worker node is deleted.

 

Scenario 2: Auto-scale up when a worker node is added to the group.