Private Cloud K8s SDN connector

FortiOS automatically updates dynamic and cluster IP addresses for Kubernetes (K8s) by using a K8s SDN connector, enabling FortiOS to manage K8s pods as global address objects, as with other connectors. This includes mapping the following attributes from K8s instances to dynamic address groups in FortiOS:

Filter       Description
Namespace    Filter service IP addresses in a given namespace.
ServiceName  Filter service IP addresses by the given service name.
NodeName     Filter node IP addresses by the given node name.
PodName      Filter IP addresses by the given pod name.
Label.XXX    Filter service or node IP addresses with the given label XXX. For example: K8S_Label.app=nginx.
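
In FortiOS, these filters are entered as address filter strings with a K8S_ prefix, as in the dynamic firewall address examples later in this topic. The values below are illustrative placeholders only, and the exact filter names available can be confirmed in the Filter dropdown list of the address dialog:

    K8S_Namespace=default
    K8S_ServiceName=nginx-service
    K8S_NodeName=worker-node-1
    K8S_PodName=nginx-7c5ddbdf54-abcde
    K8S_Label.app=nginx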

FortiOS 6.2.3 and later collects cluster IP addresses in addition to external IP addresses for exposed K8s services.

Note

There is no maximum limit on the number of IP addresses that the filters can populate.

To obtain the IP address, port, and secret token in K8s:
  1. When configuring the K8s SDN connector in FortiOS, you must provide the IP address and port that the K8s API server is running on. Run kubectl cluster-info to obtain them, and note them down (an example session is sketched after these steps). The following shows the IP address and port for a local cluster:

    The following shows the IP address and port for customer-managed K8s on Google Cloud Platform:

  2. Generate the authentication token:
    1. Create a service account to store the authentication token:
      1. Run the kubectl create serviceaccount <Service_account_name> command. For example, if the service account name is fortigateconnector, the command is kubectl create serviceaccount fortigateconnector.
      2. Run the kubectl get serviceaccounts command to verify that you created the service account. The account should show up in the service account list.
    2. Create a cluster role. K8s 1.6 and later versions allow you to configure role-based access control (RBAC). RBAC is an authorization mechanism to manage resource permissions on K8s. You must create a cluster role to grant the FortiGate permission to perform operations and retrieve objects:
      1. Create the yaml file by running the vi <filename>.yaml command. For example, if the yaml file name is fgtclusterrole, the command is vi fgtclusterrole.yaml. Paste the following:

        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRole
        metadata:
          # "namespace" omitted since ClusterRoles are not namespaced
          name: fgt-connector
        rules:
        - apiGroups: [""]
          resources: ["pods", "namespaces", "nodes", "services"]
          verbs: ["get", "watch", "list"]

        The resources list specifies the objects that FortiOS can retrieve. The verbs list specifies the operations that FortiOS can perform.

      2. Run the kubectl apply -f <filename>.yaml command to apply the yaml file and create the cluster role. In this example, the command is kubectl apply -f fgtclusterrole.yaml.
      3. Run the kubectl create clusterrolebinding fgt-connector --clusterrole=<cluster_rolename> --serviceaccount=default:<service_account_name> command to bind the cluster role to the service account. In this example, the command is kubectl create clusterrolebinding fgt-connector --clusterrole=fgt-connector --serviceaccount=default:fortigateconnector.
    3. Run the kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='fortigateconnector')].data.token}" | base64 --decode command to obtain the secret token. Because the token is Base64 encoded, the command pipes the output through base64 --decode to produce the usable key string. Note down the token.
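
For reference, the following is a minimal sketch of typical output from kubectl cluster-info and kubectl get serviceaccounts as used in the steps above, plus two optional checks (kubectl get clusterrole and kubectl get clusterrolebinding) to confirm that the RBAC objects exist. The addresses, ages, and timestamps are illustrative placeholders, not values from this deployment:

    $ kubectl cluster-info
    Kubernetes control plane is running at https://172.16.65.200:6443

    $ kubectl get serviceaccounts
    NAME                 SECRETS   AGE
    default              1         15d
    fortigateconnector   1         2m

    $ kubectl get clusterrole fgt-connector
    NAME            CREATED AT
    fgt-connector   2024-01-01T00:00:00Z

    $ kubectl get clusterrolebinding fgt-connector
    NAME            ROLE                        AGE
    fgt-connector   ClusterRole/fgt-connector   1m
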
To configure K8s SDN connector using the GUI:
  1. Configure the K8s SDN connector:
    1. Go to Security Fabric > External Connectors > Create New Connector.
    2. Select Kubernetes.
    3. In the IP field, enter the IP address that you obtained in To obtain the IP address, port, and secret token in K8s.
    4. In the Port field, select Specify, then enter the port that you obtained in To obtain the IP address, port, and secret token in K8s.
    5. In the Secret token field, enter the token that you obtained in To obtain the IP address, port, and secret token in K8s.
    6. Configure the other fields as desired.
  2. Create a dynamic firewall address for the configured K8s SDN connector:
    1. Go to Policy & Objects > Addresses.
    2. Click Create New, then select Address.
    3. Configure the address as shown, selecting the desired filter in the Filter dropdown list. In this example, the K8s SDN connector will automatically populate and update IP addresses only for node instances that match the specified node name:

  3. Ensure that the K8s SDN connector resolves dynamic firewall IP addresses:
    1. Go to Policy & Objects > Addresses.
    2. Hover over the address created in step 2 to see a list of IP addresses for node instances that match the node name configured in step 2:

To configure K8s SDN connector using CLI commands:
  1. Configure the K8s SDN connector:

    config system sdn-connector
        edit "kubernetes1"
            set type kubernetes
            set server "<IP address obtained in To obtain the IP address, port, and secret token in K8s>"
            set server-port <port obtained in To obtain the IP address, port, and secret token in K8s>
            set secret-token <secret token obtained in To obtain the IP address, port, and secret token in K8s>
            set update-interval 30
        next
    end

  2. Create a dynamic firewall address for the configured K8s SDN connector with the supported K8s filter. In this example, the K8s SDN connector will automatically populate and update IP addresses only for node instances that match the specified node name:

    config firewall address
        edit "k8s_nodename"
            set type dynamic
            set sdn "kubernetes1"
            set filter "K8S_NodeName=van-201669-pc1"
        next
    end

  3. Confirm that the K8s SDN connector resolves dynamic firewall IP addresses using the configured filter:

    config firewall address
        edit "k8s_nodename"
            set type dynamic
            set sdn "kubernetes1"
            set filter "K8S_NodeName=van-201669-pc1"
            config list
                edit "172.16.65.227"
                next
            end
        next
    end
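
After the address resolves, you can reference it in a firewall policy like any other address object. The following is a minimal sketch only; the policy name, interfaces, and service are placeholders to adapt to your environment:

    config firewall policy
        edit 0
            set name "to-k8s-nodes"
            set srcintf "port1"
            set dstintf "port2"
            set srcaddr "all"
            set dstaddr "k8s_nodename"
            set action accept
            set schedule "always"
            set service "ALL"
        next
    end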

To troubleshoot the connection:
  1. In FortiOS, run the following commands:

    diagnose debug application kubed -1

    diagnose debug enable

  2. Reset the connection on the web UI to generate logs and troubleshoot the issue. The following shows the output in the case of a failure:

    The following shows the output in the case of a success:
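
When troubleshooting is complete, you can turn the debug output back off with the standard FortiOS debug commands (not part of the original procedure):

    diagnose debug application kubed 0

    diagnose debug disable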
