Multi-Tenancy Support Guide

Troubleshooting

To troubleshoot issues that you might face with the distributed managed service provider model for multi-tenancy, use the postman.log file located at:

/var/log/cyops/cyops-routing-agent/postman.log
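
For example, to follow the log in real time, you can run:

# tail -f /var/log/cyops/cyops-routing-agent/postman.log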

You can change the logging level for postman.log by editing the configuration file using the following command:

# vi /opt/cyops-routing-agent/postman/config.ini

Set the required logging level in this file (a sample entry is shown after the following list of levels). By default, the logging level is set to WARN.

You can set the following logging levels:

  • DEBUG: Low-level system information for debugging purposes.
  • INFO: General system information.
  • WARN: Information describing a minor problem that has occurred.
  • ERROR: Information describing a major problem that has occurred.
  • CRITICAL: Information describing a critical problem that has occurred.
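
For illustration only, assuming a typical INI layout, the logging entry in config.ini might look similar to the following. The exact section and key names can vary by release, so modify the existing entry in your file rather than adding a new one:

[logging]
# hypothetical section and key names; set the value to one of the levels listed above
log_level = DEBUG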

Deployment Troubleshooting

While connecting to a secure message exchange, retries never stop if the DNS is not resolved

FortiSOAR continues to retry connecting to the secure message exchange for 10 minutes. A long retry window has been specified to ensure that you do not have to restart the service every time the network breaks, since all consumers stop after a long network break.

Resolution

You can configure the time for which FortiSOAR continues retrying to establish a connection with the secure message exchange by editing the following parameters in the config file, located at /opt/cyops-routing-agent/postman/config.ini:

heartbeat = 120
connection_attempts = 50
retry_delay = 10
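
With these values, the total retry window is approximately connection_attempts × retry_delay = 50 × 10 = 500 seconds, that is, a little over 8 minutes excluding the time each connection attempt itself takes, which is how the roughly 10-minute retry period mentioned above is reached.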

After adding a tenant on the master, you see a Retry button on the tenant node

This issue can occur for one of two reasons:

  • Incorrect secure message exchange configuration.
  • Incorrect selection of secure message exchange while configuring the tenant on the master node.

Resolution

Correct the secure message exchange configuration or selection and click the Retry button.

Configuration Troubleshooting

Failure while configuring master on a tenant node

This issue can occur if you have specified the wrong password or port while configuring the master on the tenant node.

Resolution

On the tenant node, click Master Configuration > Edit Configuration. In the Configure Master dialog, correct the password or TCP port number that you have specified.

If the Enabled button is still in the NO state and you want to enable data replication, toggle the Enabled button to YES.

Records created at the tenant node do not replicate to the master node even when data replication is turned on

In this case, when a field is non-mandatory on the tenant and mandatory on the master, the FortiSOAR logs on the master node contain the following message: "Error: Bad Request for url: <URL>"

This issue occurs if you have switched off replication of a required record field on the tenant node, or if the field does not exist in the same module defined at the tenant node. Either case causes record creation to fail on the master node.

Resolution

Ensure that replication is switched on for all required record fields. Also note that if you have made any schema changes to a module at the master node, you must ensure that the required fields are marked as required across all tenants that replicate that module.

A user is not able to view records and FortiSOAR displays a 500 error

Resolution

Any user created in a multi-tenant environment who needs to view records must have a minimum of read-only access to the Tenants module in their assigned role.

Therefore, ensure that you have assigned the user a role that has a minimum of read-only access for the Tenants module.

Tenant modules not getting displayed at the master node after initial configuration

After the initial configuration (addition) of a dedicated tenant node, you might not see any of its modules at the master node in Tenant Manager > Manage Modules if errors occurred while configuring the tenant.

Resolution

This issue occurs due to the tenant being in the "Verification Failed" state at the time of configuration. To make the MMD of the tenant visible at the master node, restart the "cyops-postman" service at the tenant node.
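
For example, on the tenant node:

# systemctl restart cyops-postman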

In a cluster of secure message exchanges, FortiSOAR is unable to connect to the primary secure message exchange even after the primary node has come back online after a failure

If your secure message exchanges are set up as a cluster for high availability and, after a failure, the primary secure message exchange node comes back online, the FortiSOAR node might still be unable to connect to the secure message exchange. In this case, you will see the following error in the postman logs:

2018-11-20 10:36:21,022 140437372753664 59b195ab94e35ef70e28ed129c58c804 ERROR pika.callback callback process(): Calling <bound method BlockingChannel._on_channel_closed of <BlockingChannel impl=<Channel number=1 CLOSED conn=<SelectConnection OPEN socket=('xxx.xxx.xx.xxx', 53378)->('xxx.xxx.xx.xxx', 52011) params=>>>> for "1:Channel.Close" failed
Traceback (most recent call last):
File "/opt/cyops-routing-agent/.env/lib/python3.4/site-packages/pika/callback.py", line 236, in process
callback(*args, **keywords)
File "/opt/cyops-routing-agent/.env/lib/python3.4/site-packages/pika/adapters/blocking_connection.py", line 1358, in _on_channel_closed
method.reply_text)
pika.exceptions.ChannelClosed: (404, "NOT_FOUND - home node 'rabbit@' of durable queue 'queue.postman.data.remoterequest.805553bfece2f6a5895c8db8c54b9ae0' in vhost 'vhost_59b195ab94e35ef70e28ed129c58c804' is down or inaccessible")

Resolution

Stop and start the rabbitmq app on the secure message exchange node using the following commands:

# rabbitmqctl stop_app

# rabbitmqctl start_app
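
Optionally, verify that the node has rejoined the cluster and that its queues are reachable:

# rabbitmqctl cluster_status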

Shifting the Secure Message Exchange of tenants leads to MMDs not being pushed to the tenants that have been shifted to the new Secure Message Exchange

If you shift any tenant in your multi-tenant configuration to a new Secure Message Exchange, remote MMD management stops working because the uwsgi service keeps the Secure Message Exchange details in memory and uses them to publish to the Secure Message Exchange. When the Secure Message Exchange is changed for a tenant, the change is notified using "rabbitmq publish", and the updated Secure Message Exchange setting is available only to the postman service; uwsgi still has the old Secure Message Exchange details.

Resolution

After you have shifted any tenant to a new Secure Message Exchange, you must update the router settings in the uwsgi service by restarting the postman and uwsgi services on the master node using the following commands:
# systemctl restart uwsgi
# systemctl restart cyops-postman

You also need to restart the postman service on the tenant node.
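
For example, on the tenant node:

# systemctl restart cyops-postman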

Tenant with basic authentication is stuck in the "Awaiting Remote Node Connection" state when a user removes and re-adds certificates

If a user deletes the certificate or key file by mistake and then re-adds the certificate on an 'Agent', the status of the agent changes to "Configuration Failed" and then to "Awaiting Remote Node Connection". If the user then configures multi-tenancy again by exporting the latest master configuration from the master and importing it on the tenant node, the tenant record on the master might remain stuck in the "Awaiting Remote Node Connection" state; however, the master configuration on the tenant node shows that the remote node is connected.

Resolution

  1. Open the tenant node and navigate to Settings > Master Configuration.
  2. Disable and then re-enable the master configuration. This updates the state of the tenant record on the master and displays the correct "Remote Node Connected" state.

Post-upgrade to 7.3.0, the status of the tenant displays "Remote Node Unreachable"

After you upgrade your MSSP setup from a release prior to 7.3.0 to 7.3.0 or later, the status of the tenant displays "Remote Node Unreachable".

Resolution

Restart the cyops-integrations-agent service on the tenant node.
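
For example:

# systemctl restart cyops-integrations-agent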
