Deploying FortiSOAR Docker on an Amazon Elastic Kubernetes Service (EKS) Cluster
You can deploy the FortiSOAR Docker on an Amazon Elastic Kubernetes Service (EKS) cluster in the Amazon Web Services (AWS) Cloud. The product deployment solution facilitates an orchestrated deployment of the FortiSOAR components on EKS.
Required Terminology
The following table describes the important terms for FortiSOAR deployment on the EKS cluster.
| Term | Description |
| --- | --- |
| Pod | A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. For more information, see the Pods section in the Kubernetes documentation. |
| StatefulSet | A StatefulSet is the workload API object used to manage stateful applications; it represents a set of Pods with unique, persistent identities and stable hostnames. For more information, see the StatefulSet section in the Kubernetes documentation and the Statefulset section. |
| Service | A Service enables network access to a set of Pods in Kubernetes. For more information, see the Service section in the Kubernetes documentation and the Load Balancer (Service) section. |
| Persistent Volume Claim | A PersistentVolumeClaim (PVC) is a request for storage by a user. For more information, see the Persistent Volumes section in the Kubernetes documentation and the Persistent Volume Claim section. |
| Persistent Volume | A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using storage classes. For more information, see the Persistent Volumes section in the Kubernetes documentation and the Persistent Volume Claim section. |
| StorageClass | A StorageClass provides a way for administrators to describe the offered "classes" of storage. For more information, see the Storage Classes section in the Kubernetes documentation and the Storage Class section. |
| Namespace | Kubernetes supports multiple virtual clusters backed by the same physical cluster; these virtual clusters are called namespaces. For more information, see the Namespaces section in the Kubernetes documentation. |
Preparing the environment for installing FortiSOAR on EKS
EKS-specific Requirements
- Use Kubernetes version 1.23 or later.
- The AWS default CNI is used during cluster creation.
- Create a node group confined to a single availability zone (a node group in EKS must not span availability zones) that has at least one instance with 32GB RAM and 8 CPUs on x86_64/amd64 architecture, for example, t2.2xlarge.
- All the nodes in the node group must be running on the Linux operating system.
- While creating the node group, you need an IAM role with at least the following policies:
  - AmazonEKSWorkerNodePolicy
  - AmazonEC2ContainerRegistryReadOnly
  - AmazonEKS_CNI_Policy
- From Kubernetes version 1.24, Docker support ('dockershim') is removed. Ensure that your node group instances have the 'containerd' runtime installed. For more information on Docker removal from Kubernetes, see the dockershim deprecation document.
- Use an existing AWS Elastic Container Registry (ECR) or create a new one, and ensure that EKS has full access to pull images from the registry.
- Deploy an AWS load balancer controller add-on in the cluster. This is required for automatically creating a network load balancer when the FortiSOAR service is created. For more information on installing the add-on, see the Installing the AWS Load Balancer Controller add-on document.
- Ensure that the EBS CSI driver and its requirements are fulfilled. Depending on your Kubernetes version, you might need to install the EBS CSI driver. For more information, see the EBS CSI guide.
Host-specific Requirements
You can install the following tools on your host to ease Kubernetes administration:
- AWS CLI: For more information on installing the AWS CLI, see the Installing or updating the latest version of the AWS CLI document.
- Kubectl CLI: For more information on installing the Kubectl CLI, see the Installing kubectl document.
- Docker: Install and configure Docker to enable the push of the container images to the container registry. If you are using ECR, ensure that you are able to upload images to ECR correctly.
Apart from installing the tools, you also need to do the following:
- If an IAM role needs access to the EKS cluster, run the following command from a system that already has access to the EKS cluster:
kubectl edit -n kube-system configmap/aws-auth
For more information on creating an IAM role, see the Enabling IAM user and role access to your cluster document.
- Log in to the AWS environment and run the following command on the AWS CLI to access the Kubernetes cluster:
aws eks --region <region_name> update-kubeconfig --name <cluster_name>
- Ensure that you have free space of approximately 6GB at the location where the FortiSOAR Docker image will be downloaded and then uploaded to the Docker registry. Also, ensure that approximately 15GB of free space is available at the /var/lib/docker location so that the images can be loaded into the Docker cache before being pushed to the container registry.
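As a quick pre-flight check, you can verify both free-space requirements from the host before loading the image. This is a minimal sketch, not part of the product tooling; the thresholds mirror the figures above, and the paths are examples you may need to adjust:

```shell
# Sketch: verify free disk space before staging the FortiSOAR Docker image.
# Requirements from this section: ~6GB at the image download location and
# ~15GB under /var/lib/docker for the Docker image cache.
check_space() {
  path="$1"; need_gb="$2"
  # The fourth column of `df -Pk` is the available space in KiB.
  avail_kb=$(df -Pk "$path" | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -lt $((need_gb * 1024 * 1024)) ]; then
    echo "insufficient space at $path: need at least ${need_gb}GB"
    return 1
  fi
  echo "ok: $path has at least ${need_gb}GB free"
}

check_space "$PWD" 6 || echo "free up space before downloading the image"
if [ -d /var/lib/docker ]; then
  check_space /var/lib/docker 15 || echo "free up space under /var/lib/docker"
fi
```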
Recommendations for FortiSOAR deployment on EKS
- By default, FortiSOAR creates a storage class for EBS with type gp2. For better performance, you can override the default class of your EBS volume in the yaml file of your FortiSOAR storage class. For more information, see the Amazon EBS volume types document.
- Do not delete the Disk/EBS volumes linked with the PVs used in the FortiSOAR deployment, because this might result in data loss.
- Before creating the FortiSOAR deployment, you must configure the required DNS name for the load balancer.
Note: Changing the DNS names after the deployment is not recommended.
- The FortiSOAR Docker image is supported only on x86_64/amd64 machines. Therefore, ensure that the worker nodes on which the FortiSOAR Pods run are x86_64/amd64.
- Ensure that the FortiSOAR Pod has a resource request of 16GB RAM and 8 CPU cores and a limit of 32GB RAM and 8 CPU cores.
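In the statefulset manifest, the request/limit recommendation above corresponds to the container's resources stanza. The following fragment is illustrative only, built from the values stated above; the actual settings ship in fortisoar-statefulset.yaml in the deployment templates:

```yaml
resources:
  requests:
    memory: "16Gi"
    cpu: "8"
  limits:
    memory: "32Gi"
    cpu: "8"
```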
FortiSOAR EKS resource requirements
FortiSOAR EKS deployment uses the same Docker image that is designed for FMG/FAZ and enterprise Docker. For more information about deploying FortiSOAR on Docker platforms such as VMware ESX or AWS, see the Deploying FortiSOAR on a Docker Platform chapter. To learn more about the FortiAnalyzer MEA, see the FortiAnalyzer documentation; to learn more about the FortiManager MEA, see the FortiManager documentation.
Storage Class
A StorageClass provides a way for administrators to describe the offered "classes" of storage. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators. Kubernetes is agnostic about what the classes represent. This concept is sometimes called "profiles" in other storage systems. For more information, see the Storage Classes documentation.
FortiSOAR on EKS defines the following storage class:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fortisoar-storageclass-ebs
provisioner: ebs.csi.aws.com
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp2
  fsType: ext4
- reclaimPolicy 'Retain' specifies that the volume is retained even if the PVC is deleted. The default behavior is 'Delete'.
- allowVolumeExpansion 'true' specifies that the volume is allowed to expand when it is full. For example, if the /var/log volume is full, its associated PVC should be resized so that expanded disk space becomes available inside the pod (container). For information on expanding the PVC, see the Resizing Persistent Volumes using Kubernetes article.
Persistent Volume Claim
All Docker volumes use PVCs, which in turn use the storage class mentioned in the Storage Class section.
The following PVCs are created:
- /data (fortisoar-pvc-data, 15GB)
- /var/lib/rabbitmq (fortisoar-pvc-rabbitmq, 15GB)
- /var/lib/elasticsearch (fortisoar-pvc-elasticsearch, 40GB)
- /var/lib/pgsql (fortisoar-pvc-pgsql, 150GB)
- /var/log (fortisoar-pvc-log, 5GB)
- /home/csadmin (fortisoar-pvc-home-csadmin, 20GB)
- /opt/cyops/configs (fortisoar-pvc-configs, 15GB)
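Each of these claims follows the same shape; the log volume, for example, could be sketched as below. The claim name, size, namespace, and storage class name are taken from this document, but this fragment is only an illustration; the exact manifests ship as fortisoar-pvc.yaml in the deployment templates:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fortisoar-pvc-log
  namespace: fsr
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fortisoar-storageclass-ebs
  resources:
    requests:
      storage: 5Gi
```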
Load Balancer (Service)
AWS EKS offers ALB, NLB, and Classic load balancers. Their comparisons are listed at: https://aws.amazon.com/elasticloadbalancing/features/.
FortiSOAR has chosen the network load balancer (NLB) based on the following rationale:
- The Classic load balancer is an older version of the load balancer, and AWS itself does not recommend its usage.
- NLB supports long-lived TCP connections, which are needed to connect FortiSOAR to the FSR Agent. ALB does not support long-lived TCP connections.
- Websockets and preserving Source IP addresses are supported by NLB.
- FortiSOAR creates TCP listeners on the NLB for ports 443 and 5671. TCP listeners forward traffic as-is to the pod (container). The alternative is to create a TLS listener on the load balancer, which requires a TLS certificate deployed with the load balancer. Currently, TLS listeners on the AWS load balancer do not support mTLS. Therefore, if mTLS is required, use a plain TCP listener and implement mTLS in the application itself, which is what FortiSOAR does.
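As an illustration of that choice, a Service of type LoadBalancer exposing the two plain TCP listeners might look like the following sketch. The port numbers come from this section; the service name, namespace, and annotation are assumptions for illustration, and the actual manifest ships as fortisoar-service.yaml in the deployment templates:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fsr
  namespace: fsr
  annotations:
    # Handled by the AWS Load Balancer Controller add-on mentioned earlier
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: secure-message-exchange
      port: 5671
      targetPort: 5671
      protocol: TCP
```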
Statefulset
- FortiSOAR's statefulset creates only one pod. Scaling the statefulset, that is, increasing or decreasing the number of replicas, is not supported.
- The FortiSOAR Docker image is supported only on x86_64/amd64 machines. Therefore, you must ensure that the worker nodes on which the FortiSOAR Pods run are x86_64/amd64.
- The FortiSOAR Pod has a resource request of 16GB RAM and 8 CPU cores and a limit of 32GB RAM and 8 CPU cores.
- The FortiSOAR container in Kubernetes runs with 'privileged: true'. In future releases, the FortiSOAR container will be able to run in Kubernetes with limited capabilities.
Deploying the FortiSOAR Docker image on an Amazon Elastic Kubernetes (EKS) cluster
- Download the FortiSOAR Docker image from FortiCare, i.e., https://support.fortinet.com.
- Download the EKS deployment files (.zip) from https://github.com/fortinet-fortisoar/fortisoar-deployment-templates.
- Upload the downloaded FortiSOAR Docker image to your ECR or any other Docker repository that is accessible from within your Kubernetes cluster. For example:
# docker push <account-id>.dkr.ecr.<region>.amazonaws.com/fortisoar/fortisoar:7.4.0
- Create the namespace using the following command:
# kubectl apply -f fortisoar-namespace.yaml
- Create the storage class using the following command:
# kubectl apply -f fortisoar-storageclass.yaml
- Create the service using the following command:
# kubectl apply -f fortisoar-service.yaml
- Create the persistent volume claim using the following command:
# kubectl apply -f fortisoar-pvc.yaml
- Note the name of the load balancer created by the service:
# kubectl get svc -n fsr -o jsonpath="{.items[0].status.loadBalancer.ingress[0].hostname}"
- Update the load balancer hostname placeholder with the value returned in the previous step. For example:
# sed -i 's#@PLACEHOLDER_HOSTNAME_LOAD_BALANCER@#k8s-fsr-fsr-b6fc340816-<account-id-here>.elb.<region>.amazonaws.com#g' fortisoar-statefulset.yaml
- Update the repository hostname placeholder.
Note: Even if you do not have a private repository set, you still need to run the following command with the FortiSOAR public repository hostname, i.e., repo.fortisoar.fortinet.com:
# sed -i 's#@PLACEHOLDER_HOSTNAME_REPO@#repo.fortisoar.fortinet.com#g' fortisoar-statefulset.yaml
- Update the Docker image placeholder with the Docker image path. For example:
# sed -i 's#@PLACEHOLDER_DOCKER_IMAGE@#<account-id-here>.dkr.ecr.<region>.amazonaws.com/fortisoar/fortisoar:7.4.0#g' fortisoar-statefulset.yaml
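Before applying the statefulset, it can be useful to confirm that all three sed substitutions actually took effect. This is a minimal sketch, not part of the deployment templates; the @PLACEHOLDER_...@ token format is taken from the commands above:

```shell
# Sketch: report any @PLACEHOLDER_...@ tokens remaining in a manifest.
check_placeholders() {
  if grep -q '@PLACEHOLDER_' "$1" 2>/dev/null; then
    echo "unreplaced placeholders in $1:"
    grep -o '@PLACEHOLDER_[A-Z_]*@' "$1" | sort -u
    return 1
  fi
  echo "ok: no placeholders remain in $1"
}

check_placeholders fortisoar-statefulset.yaml || echo "re-run the sed commands above"
```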
- Create the statefulset using the following command:
# kubectl apply -f fortisoar-statefulset.yaml
- Access the FortiSOAR UI at https://<load-balancer-dns-name>/.
Note: A spinner is displayed while provisioning is in progress. Provisioning takes about 10 minutes, as it runs the FortiSOAR VM Config Wizard and configures the embedded Secure Message Exchange (SME) on first boot. If there are any provisioning failures, such as failures during the initial configuration phase using the FortiSOAR Configuration Wizard or while configuring the embedded Secure Message Exchange, appropriate error messages are displayed on the FortiSOAR UI, making it easier to understand the cause of the error.
Uninstalling FortiSOAR from the EKS cluster
When you uninstall FortiSOAR from EKS, the FortiSOAR statefulset and data are deleted from the EKS cluster. This data is lost and cannot be recovered.
- If your storage class uses the 'Retain' policy (by default, the FortiSOAR EKS files create the storage class with the 'Retain' policy), then you must note down the PVs that are associated with the FortiSOAR PVCs, so that you can delete them at the Kubernetes cluster level. Use the following command to get the PVs that are associated with the FortiSOAR PVCs:
# kubectl get pvc -n fsr # note down all the volumes
- Delete all resources in the namespace where FortiSOAR is deployed. The following command considers 'fsr' as the namespace, and deletes 'fsr' statefulset, service, and namespace:
# kubectl delete namespace fsr
- If your storage class uses the 'Retain' policy, then you must delete the EBS volumes on the Amazon console. You can also use the following command on the AWS CLI to delete the EBS volumes:
# aws ec2 delete-volume --volume-id <value>
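When several volumes were retained, the per-volume delete command above can be scripted. The following sketch only prints the commands for review (a dry-run pattern); the volume ids shown are placeholders, so substitute the ids you noted down before deleting the namespace:

```shell
# Sketch: preview `aws ec2 delete-volume` commands for the retained EBS
# volumes. Printing first (instead of executing) lets you double-check the
# ids; pipe the output to `sh` to actually delete the volumes.
delete_volume_cmds() {
  for vol in "$@"; do
    echo "aws ec2 delete-volume --volume-id $vol"
  done
}

# Placeholder ids for illustration; use the volume ids of the FortiSOAR PVs.
delete_volume_cmds vol-0123456789abcdef0 vol-0fedcba9876543210
```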
Limitations of FortiSOAR Docker on an EKS cluster
- Scaling a statefulset, i.e., increasing or decreasing the number of replicas, is not supported.
- High availability is not supported.
- Kubernetes Liveness and Readiness probes are not available for FortiSOAR.
Troubleshooting Tips
Logs and Services
- To view the first boot or subsequent boot provisioning logs:
# kubectl exec -ti fsr-0 -c fsr -n fsr -- bash
# vi /var/log/cyops/extension/boot.log
- To view the status of FortiSOAR services:
# kubectl exec -ti fsr-0 -c fsr -n fsr -- bash
# csadm service --status
How to restart a FortiSOAR Pod
To restart a FortiSOAR Pod, do the following:
- Scale the statefulset down to zero replicas:
# kubectl scale statefulset fsr -n fsr --replicas=0
- Ensure that no Pod is visible in the statefulset:
# kubectl get pods -n fsr
- Scale the statefulset back to one replica:
# kubectl scale statefulset fsr -n fsr --replicas=1
Note: You must specify --replicas=1 only. FortiSOAR does not support scaling a statefulset; hence, a value other than '1' can cause issues.
How to resolve the issue of Elasticsearch-based recommendations not working on a FortiSOAR Container deployed on an EKS Cluster?
By default, Elasticsearch-based recommendations do not work on a FortiSOAR container deployed on an EKS cluster due to size limitations. To know more about Elasticsearch-based recommendations, see the Recommendation Engine topic in the Application Editor chapter of the "Administration Guide".
To use Elasticsearch-based recommendations, you must increase the memory allocated to Elasticsearch to 4 GB, using the following steps:
- Update the value of the following parameters in the /etc/elasticsearch/jvm.options.d/fsr.options file to 4 GB:
-Xms4g
-Xmx4g
- Restart the Elasticsearch service using the following command:
systemctl restart elasticsearch
- Reindex Elasticsearch data using the following command:
sudo -u nginx php /opt/cyops-api/bin/console app:elastic:create --sync=true
Now, you should be able to view Elasticsearch-based recommendations on your FortiSOAR container deployed on your EKS Cluster.
The FortiSOAR login page displays a 'Device UUID Change Detected' message
If your EKS node group contains more than one node, and the FortiSOAR pod gets scheduled on a node different from the one on which it was previously running, then the login page displays a 'Device UUID Change Detected' message when you try to log in to your FortiSOAR instance:
Resolution:
Click Continue to Login on the FortiSOAR login page. This screen continues to appear until an FDN sync is performed after you have clicked Continue to Login. By default, FDN synchronization occurs hourly.
To dismiss this screen immediately, run the following two commands on the FortiSOAR CLI:
csadm license --refresh-device-uuid
systemctl restart cyops-auth