GDC Virtual support
FortiGate-VM supports the Google Distributed Cloud Virtual (GDC V) environment. In this example, a KVM build of the FortiGate-VM is deployed into a GDC V environment that runs on a cluster of Ubuntu VMs.
The following diagram depicts traffic sent from the client through the FortiGate-VM to the internet:
The document divides the configuration into two procedures:
- Configuring the GDC V environment. See To configure the GDC V environment:.
- Deploying and configuring the FortiGate-VM. See To deploy and configure the FortiGate-VM:.
To configure the GDC V environment:
- Create four Ubuntu VMs as Plan for a basic installation on your hardware describes.
- Create the admin and user clusters on top of the four VM nodes as Create basic clusters describes. The following shows the example values for the information that you must gather before creating the clusters (a sketch of the corresponding bmctl commands follows these values):
Basic cluster information
- The name of the admin cluster you're creating. The location and naming of cluster artifacts on the admin workstation are based on the cluster name. The cluster namespace is derived from the cluster name. Example value: admincluster
- The name of the user cluster you're creating. The location and naming of cluster artifacts on the admin workstation are based on the cluster name. The cluster namespace is derived from the cluster name. Example value: usercluster
- The version of bmctl that you downloaded. Example value: 1.30.100-gke.96
Account information
- The path to the SSH private key file on your admin workstation. By default, the path is /home/USERNAME/.ssh/id_rsa. Example value: /home/aturner/.ssh/id_rsa
- The ID of the Google Cloud project that you want to use for connecting your cluster to Google Cloud and viewing logs and metrics. This project is also referred to as the fleet host project. Example value: dev-project-001-166400
- The email address that is associated with your Google Cloud account. For example: alex@example.com. Example value: aturner@example.com
Node machine IP addresses
- One IP address for the admin cluster control plane node. Example value: 172.16.200.71
- One IP address for the user cluster control plane node. Example value: 172.16.200.72
- One IP address for the user cluster worker node. Example value: 172.16.200.73
VIP addresses
- VIP for the Kubernetes API server of the admin cluster. Example value: 172.16.200.74
- VIP for the Kubernetes API server of the user cluster. Example value: 172.16.200.75
- One VIP to use as the external address for the ingress proxy. Example value: 172.16.200.76
- Range of ten IP addresses for use as external IP addresses for Services of type LoadBalancer. Notice that this range includes the ingress VIP, which is required by MetalLB. No other IP addresses can overlap this range. Example value: 172.16.200.76-172.16.200.86
Pod and Service CIDRs
- Range of IP addresses in CIDR block notation for use by Pods on the admin cluster. The recommended starting value, which is pre-filled in the generated cluster configuration file, is 192.168.0.0/16. Example value: 192.168.0.0/16
- Range of IP addresses in CIDR block notation for use by Services on the admin cluster. The recommended starting value, which is pre-filled in the generated cluster configuration file, is 10.96.0.0/20. Example value: 10.96.0.0/20
- Range of IP addresses in CIDR block notation for use by Pods on the user cluster. The recommended starting value, which is pre-filled in the generated cluster configuration file and is the default value in the console, is 192.168.0.0/16. Example value: 192.168.0.0/16
- Range of IP addresses in CIDR block notation for use by Services on the user cluster. The recommended starting value, which is pre-filled in the generated cluster configuration file and is the default value in the console, is 10.96.0.0/20. Example value: 10.96.0.0/20
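After you fill in these values in the generated cluster configuration files, cluster creation itself follows the Create basic clusters guide. The following is a sketch of the bmctl commands typically involved; the exact flags and the kubeconfig path depend on your bmctl-workspace layout:
bmctl create config -c admincluster
bmctl create cluster -c admincluster
bmctl create config -c usercluster
bmctl create cluster -c usercluster --kubeconfig bmctl-workspace/admincluster/admincluster-kubeconfig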
- In the Google Cloud console, go to Clusters. Select the clusters that you created and confirm that they appear as connected on Google Kubernetes Engine (GKE).
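You can also confirm the fleet registration from the command line. A minimal sketch, assuming the gcloud CLI is installed and authenticated against the fleet host project listed above:
gcloud container fleet memberships list --project dev-project-001-166400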
- To enable multiple NICs for a pod or VM, you must enable the feature in usercluster.yaml as Configure multiple network interfaces for Pods describes. Specifically, include the following:
apiVersion: v1
multipleNetworkInterfaces: true
enableDataplaneV2: true
- On the admin workstation, run the following to enable vmruntime on the user cluster to allow VM virtualization:
bmctl enable vmruntime --kubeconfig bmctl-workspace/usercluster/usercluster-kubeconfig
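Optionally, confirm that the VM runtime is ready before creating VM resources. A minimal sketch, assuming the default VMRuntime custom resource name of vmruntime:
kubectl describe vmruntime vmruntime --kubeconfig bmctl-workspace/usercluster/usercluster-kubeconfig
# The status in the output should report the runtime as ready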
- Create a separate yaml file for the NetworkAttachmentDefinition (NAD) based on the following yaml. This creates a network definition that you can attach to pods or the FortiGate-VM so that they can communicate on the same internal subnet:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: test-bridge
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "br0",
    "ipam": {
      "type": "host-local",
      "subnet": "172.16.1.0/24"
    }
  }'
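After you apply this file in the later kubectl apply step, you can confirm that the network definition exists in the user cluster. A quick sketch:
kubectl get network-attachment-definitions
# test-bridge should appear in the list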
- Create the DataVolume for the FortiGate-VM in a separate yaml file. You must download the FortiGate-VM for KVM qcow2 image file from the Fortinet Support site and place it in a location that the cluster can reach over HTTP, such as an S3 or GCS bucket, for the image import to succeed:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: "fgt-boot-dv"
spec:
  source:
    http:
      url: "https://alextestbucket.s3.ap-southeast-1.amazonaws.com/fos3401.qcow2" # S3 or GCS
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: "5000Mi"
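Importing the qcow2 image can take several minutes. After applying the file, you can watch the import progress through the standard CDI resource; a sketch:
kubectl get datavolume fgt-boot-dv
# The phase should reach Succeeded before you start the VM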
- Create the FortiGate-VM for KVM instance using the boot disk created in step 7 and a secondary interface. The interface configuration in this yaml file uses multus=test-bridge, which is defined in step 6, for eth1, and the default network name bridge for eth0, which is a system default and should not be changed in this configuration file.
apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt/vm: fgt
  name: fgt
  namespace: default
spec:
  compute:
    cpu:
      vcpus: 2
    memory:
      capacity: 4Gi
  disks:
    - boot: true
      driver: virtio
      virtualMachineDiskName: fgt-boot-dv
  guestEnvironment: {}
  interfaces:
    - default: true
      name: eth0
      networkName: bridge
    - name: eth1
      networkName: multus=test-bridge
  osType: Linux
status: {}
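After you apply the manifests in the later kubectl apply step, you can confirm that the VirtualMachine object exists and starts. A minimal sketch; it uses the resource's full API group name, which is an assumption based on the kind and apiVersion above:
kubectl get virtualmachines.vm.cluster.gke.io fgt
# The kubectl get vmi step below confirms that the VM instance is running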
- Create an SSH server in a pod or container by creating a yaml file as follows:
apiVersion: v1
kind: Pod
metadata:
  name: ssh-pod
  labels:
    app: ssh-server
  annotations:
    k8s.v1.cni.cncf.io/networks: test-bridge
spec:
  containers:
    - name: ssh-server
      image: ubuntu:20.04
      command:
        - /bin/bash
        - -c
        - |
          apt-get update && \
          apt-get install -y openssh-server && \
          mkdir -p /run/sshd && \
          echo 'root:Fortinet123#' | chpasswd && \
          echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config && \
          echo 'PasswordAuthentication yes' >> /etc/ssh/sshd_config && \
          service ssh start && \
          while true; do sleep 3600; done
      ports:
        - containerPort: 22
      securityContext:
        privileged: true # Needed for sshd
---
apiVersion: v1
kind: Service
metadata:
  name: ssh-service
spec:
  type: NodePort
  selector:
    app: ssh-server
  ports:
    - port: 22
      targetPort: 22
      nodePort: 30022 # You can change this port. If you change it, you must specify the port number on your SSH connection string. For example: ssh root@172.16.200.73 -p 30022
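After applying the file, a quick way to confirm that the pod is running and that the NodePort service is exposed (a sketch; adjust the names if you changed them):
kubectl get pod ssh-pod -o wide
kubectl get service ssh-service
# From outside the cluster, connect with: ssh root@<worker-node-IP> -p 30022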
- From the admin workstation instance, apply the yaml files created in steps 6 through 9 using kubectl apply -f example.yaml. Applying the yaml files creates the resources that the files define.
- From the admin workstation instance, use kubectl get vmi to confirm that the VMs are visible and running, and that you can reach them from the worker node through their pod-network IP address:
aturner@adminworkstation:~$ kubectl get vmi
NAME                      AGE   PHASE     IP   NODENAME   READY
ssh-pod                   1/1   Running   0    8d
virt-launcher-fgt-6d5nh   2/2   Running   0    8d

aturner@userclusterworkernode:~$ ping 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=254 time=0.650 ms
^C
--- 192.168.2.202 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms

aturner@userclusterworkernode:~$ ping 192.168.2.29
PING 192.168.2.29 (192.168.2.29) 56(84) bytes of data.
64 bytes from 192.168.2.29: icmp_seq=1 ttl=63 time=0.438 ms
^C
--- 192.168.2.29 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms

aturner@userclusterworkernode:~$ ssh admin@192.168.2.202
admin@192.168.2.202's password:
FGVM08TM... # get sys stat
Version: FortiGate-VM64-KVM v7.6.0,build3401,240724 (GA.F)
First GA patch build date: 240724
Security Level: 2
Firmware Signature: certified
Virus-DB: 92.08924(2024-11-19 16:31)
Extended DB: 92.08924(2024-11-19 16:30)
Extreme DB: 1.00000(2018-04-09 18:07)
AV AI/ML Model: 3.01796(2024-11-19 15:50)
IPS-DB: 6.00741(2015-12-01 02:30)
IPS-ETDB: 6.00741(2015-12-01 02:30)
APP-DB: 6.00741(2015-12-01 02:30)
Proxy-IPS-DB: 6.00741(2015-12-01 02:30)
Proxy-IPS-ETDB: 6.00741(2015-12-01 02:30)
Proxy-APP-DB: 6.00741(2015-12-01 02:30)
FMWP-DB: 24.00111(2024-11-06 13:20)
IPS Malicious URL Database: 1.00001(2015-01-01 01:01)
IoT-Detect: 0.00000(2022-08-17 17:31)
OT-Detect-DB: 0.00000(2001-01-01 00:00)
OT-Patch-DB: 0.00000(2001-01-01 00:00)
OT-Threat-DB: 6.00741(2015-12-01 02:30)
IPS-Engine: 7.01014(2024-07-02 21:57)
Serial-Number: FGVM08TM...
License Status: Valid
License Expiration Date: 2025-08-24
VM Resources: 2 CPU/8 allowed, 3946 MB RAM
Log hard disk: Not available
Hostname: FGVM08TM...
Private Encryption: Disable
Operation Mode: NAT
Current virtual domain: root
Max number of virtual domains: 10
Virtual domains status: 1 in NAT mode, 0 in TP mode
Virtual domain configuration: disable
FIPS-CC mode: disable
Current HA mode: standalone
Branch point: 3401
Release Version Information: GA
FortiOS x86-64: Yes
System time: Tue Nov 19 17:07:49 2024
Last reboot reason: warm reboot
To deploy and configure the FortiGate-VM:
The test environment uses an SSH session to access the SSH server pod or container. Through that session, it triggers an EICAR test file download that flows through the FortiGate-VM, where a firewall policy applies UTM processing.
- Upload a license to the FortiGate-VM:
FortiGate-VM64-KVM # execute restore vmlicense ftp workingfolder/FGVM08TM....lic ...86.126
**omitted**
This operation will overwrite the current VM license and reboot the system!
Do you want to continue? (y/n)y
Please wait...
Connect to ftp server ...86.126 ...
Get VM license from ftp server OK.
VM license install succeeded.
Rebooting firewall.
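After the FortiGate-VM reboots, you can confirm that the license was applied from the FortiOS CLI; License Status in the output should show Valid, as in the get sys stat output shown earlier:
FortiGate-VM64-KVM # get system status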
- The primary interface obtains its IP address using DHCP, so the NAD-provided address is the only one that you must configure. Configure the IP address in FortiOS and on the Ubuntu pod using the IP address that the NAD provides:
kubectl describe vmi fgt
...
Ip Address:    172.16.1.250
Ip Addresses:  172.16.1.250
...

FGVM08TM24003117 (port2) # show
config system interface
    edit "port2"
        set vdom "root"
        set ip 172.16.1.250 255.255.255.0
        set allowaccess ping https ssh snmp http telnet fgfm radius-acct probe-response fabric ftm speed-test
        set type physical
        set snmp-index 2
        set mtu-override enable
    next
end

FGVM08TM24003117 # get router info routing-table all
Codes: K - kernel, C - connected, S - static, R - RIP, B - BGP
       O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       V - BGP VPNv4
       * - candidate default

Routing table for VRF=0
S*      0.0.0.0/0 [5/0] via 192.168.3.33, port1, [1/0]
C       172.16.1.0/24 is directly connected, port2
C       192.168.2.202/32 is directly connected, port1
S       192.168.3.33/32 [5/0] is directly connected, port1, [1/0]
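The port2 settings shown above can be entered from the FortiOS CLI. A minimal sketch that sets only the address and basic management access; the remaining values in the output above are defaults:
config system interface
    edit "port2"
        set ip 172.16.1.250 255.255.255.0
        set allowaccess ping https ssh
    next
end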
- Configure a firewall policy with unified threat management (UTM) and an antivirus (AV) profile:
config firewall policy
    edit 1
        set uuid 2864e7e4-a6d7-51ef-cc59-2a9e5ff5a48e
        set srcintf "port2"
        set dstintf "port1"
        set action accept
        set srcaddr "all"
        set dstaddr "all"
        set schedule "always"
        set service "ALL"
        set utm-status enable
        set av-profile "default"
        set nat enable
    next
end
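Optionally, while generating traffic from the pod, you can verify that client traffic enters on port2 and leaves on port1 with the built-in sniffer. A sketch; the HTTP port filter assumes the cURL test shown later:
FGVM08TM24003117 # diagnose sniffer packet any 'port 80' 4 10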
- Configure the Ubuntu SSH server pod with a route to the external server that points to the FortiGate port2 address. In the example, the external server IP address is ...86.126:
root@ssh-pod:~# ip route show
default via 192.168.3.33 dev eth0 mtu 1450
...86.126 via 172.16.1.250 dev net1
172.16.1.0/24 dev net1 proto kernel scope link src 172.16.1.253
192.168.3.33 dev eth0 scope link

root@ssh-pod:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: net1@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 2a:a9:65:6f:1c:bc brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.16.1.253/24 brd 172.16.1.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::28a9:65ff:fe6f:1cbc/64 scope link
       valid_lft forever preferred_lft forever
67: eth0@if68: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether be:d5:28:86:c2:27 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.3.179/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::bcd5:28ff:fe86:c227/64 scope link
       valid_lft forever preferred_lft forever
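The host route to the external server shown above can be added inside the pod with a standard iproute2 command. A sketch; replace the placeholder with the real external server address, which is elided as ...86.126 above:
root@ssh-pod:~# ip route add <external-server-IP>/32 via 172.16.1.250 dev net1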
- To test the configuration, use cURL to attempt to download an EICAR test file from the server. Confirm that the UTM and AV features are active and block the download of the EICAR file:
root@ssh-pod:~# curl http://...86.126/samplevirus/eicar.txt
<!DOCTYPE html>
<html lang="en">
**omitted**
<h1>High Security Alert</h1>
<p>You are not permitted to download the file "eicar.txt" because it is infected with the virus "EICAR_TEST_FILE".</p>
<table><tbody>
<tr>
  <td>URL</td>
  <td>http://...86.126/samplevirus/eicar.txt</td>
</tr>
<tr>
  <td>Quarantined File Name</td>
  <td></td>
</tr>
<tr>
  <td>Reference URL</td>
  <td><a href="https://fortiguard.com/encyclopedia/virus/2172">https://fortiguard.com/encyclopedia/virus/2172</A></td>
</tr>
</tbody></table>
</div></body>
</html>
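The blocked download should also produce an antivirus log entry on the FortiGate-VM. One way to check from the CLI is sketched below; it assumes UTM logging to memory is enabled, and the numeric category argument for virus logs may differ between releases:
FGVM08TM24003117 # execute log filter category 2
FGVM08TM24003117 # execute log display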