FAQ
1) What should I do if I find a Kubernetes API object is supported by multiple API groups?
You may encounter this scenario when upgrading your Kubernetes cluster and the higher Kubernetes version has deprecated some of the API groups.
For example, extensions/v1beta1/Ingress is removed in Kubernetes v1.22 in favor of networking.k8s.io/v1/Ingress. You may find that both extensions/v1beta1/Ingress and networking.k8s.io/v1/Ingress exist in your system when upgrading the Kubernetes cluster from v1.16 to v1.20.
In this case, you have to disable the API group extensions/v1beta1 to ensure the system uses the API group networking.k8s.io/v1, which is supported by FortiADC Kubernetes Controller. After disabling the API group, restart the Kubernetes API server to apply the change.
Follow the steps below:
- Edit /etc/kubernetes/manifests/kube-apiserver.yaml
- Under spec.containers.command, add --runtime-config=extensions/v1beta1=false
- Restart the Kubernetes API server: systemctl restart kubelet.service
For more information on enabling/disabling deprecated API groups, see https://kubernetes.io/docs/reference/using-api/#enabling-or-disabling.
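After the API server restarts, you can confirm the change took effect. A minimal check, assuming kubectl access to the upgraded cluster:

```shell
# The deprecated group should no longer be served (expect no output)
kubectl api-versions | grep '^extensions/v1beta1$'

# The replacement group supported by FortiADC Kubernetes Controller should be listed
kubectl api-versions | grep '^networking.k8s.io/v1$'
```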
2) Why does FortiADC Kubernetes Controller not spin up when failover occurs?
First, check whether the NotReady node carries either of the taints node.kubernetes.io/unreachable:NoExecute or node.kubernetes.io/not-ready:NoExecute.
If both taints are missing, check the following:
- If the Kubernetes version is earlier than 1.19.9, upgrade Kubernetes to avoid this issue. For more information, see https://github.com/kubernetes/kubernetes/issues/97100.
- If the percentage of NotReady nodes in the same zone is greater than the value of unhealthyZoneThreshold (default is 55%), then taints with the NoExecute effect may not be applied to the NotReady nodes. For details, see https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/.
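To inspect the taints described above, the following sketch may help (the node name is a placeholder):

```shell
# Show the taints on a specific NotReady node
kubectl describe node <node-name> | grep -A2 'Taints'

# Or list the taint keys for every node at once
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```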
3) How do I resolve a TLS Handshake error: bad certificate?
This error typically occurs during a reinstallation or upgrade. Because cert-manager automatically issues the webhook certificate, the old certificate may persist in the cluster after the controller is uninstalled, leading to a mismatch.
The controller logs will show a TLS handshake error and a remote error: tls: bad certificate.
To resolve this, uninstall the FortiADC Kubernetes Controller and manually delete the leftover TLS secrets in the controller's namespace before reinstalling:
- Run helm uninstall for the FortiADC Kubernetes Controller.
- Delete the leftover TLS secrets (webhook-tls and webhook-tls-ca). The secrets are stored in the same namespace as the FortiADC Kubernetes Controller.
- Perform a fresh helm install.
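As a sketch, the sequence above looks like the following; the release name, namespace, and chart reference are placeholders for your own values:

```shell
# 1. Uninstall the controller release
helm uninstall <release-name> -n <namespace>

# 2. Remove the stale webhook certificates left behind in the same namespace
kubectl delete secret webhook-tls webhook-tls-ca -n <namespace>

# 3. Reinstall so that a fresh certificate is issued
helm install <release-name> <chart> -n <namespace>
```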
4) How should I handle CRD deletion hangs or webhook timeouts?
Starting with v3.1, a Conversion Webhook Server checks all VirtualServer custom resource (CR) operations. If the controller is uninstalled while CRs still exist, Kubernetes may hang while waiting for a response from the offline webhook.
To avoid this, follow this specific deletion order:
- Delete all VirtualServer CRs created by the controller.
- Uninstall the controller via Helm:
  helm uninstall <release-name> -n <namespace>
- Delete the VirtualServer CRDs once all VirtualServer CRs and the controller are fully removed:
  kubectl delete crd <crd-name>
If a CRD is already stuck, you can force the deletion by patching the resource to remove its finalizers:
kubectl patch crd virtualservers.fadk8sctrl.fortinet.com \
-p '{"metadata":{"finalizers":[]}}' --type=merge
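Before deleting the CRD, it may help to verify that no CRs remain and to see whether finalizers are holding up deletion. A sketch, assuming the resource is exposed under the plural name virtualservers (matching the CRD name above):

```shell
# Confirm no VirtualServer CRs remain in any namespace
kubectl get virtualservers --all-namespaces

# If the CRD hangs in Terminating, show any finalizers still set on it
kubectl get crd virtualservers.fadk8sctrl.fortinet.com -o jsonpath='{.metadata.finalizers}'
```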
5) How do I resolve kubeadm upgrade errors related to Fake Nodes?
When using FortiADC as a Fake Node for CNI integration, kubeadm may fail to parse the kubelet version of that logical node during a cluster upgrade, resulting in a couldn't parse kubelet version error.
You can bypass this preflight validation by applying the --force flag to your upgrade command:
sudo kubeadm upgrade apply <version> --force
6) How do I handle node draining issues?
During a node drain, Kubernetes may block the process if the FortiADC Kubernetes Controller is using local storage (emptyDir) for temporary caching. The error will state that it cannot delete Pods with local storage.
Since this local storage is only used for temporary cache, it is safe to delete. Add the --delete-emptydir-data flag to the drain command to proceed:
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data