What's New in 6.1.2

This document describes new and enhanced features for the FortiSIEM 6.1.2 (build 0119) release.

This is a hardware-appliance-only release supporting the FortiSIEM 3500F, 2000F, and 500F appliances. It has the same code content as the software-only release 6.1.1 (build 0118).

FortiSIEM 3500F Hardware Appliance Support

FortiSIEM 6.1.2 can be installed fresh on the 3500F hardware appliance. You can also migrate from 5.x or earlier releases.

For fresh installation and migration procedures, see the FortiSIEM 3500F Hardware Configuration Guide.

For an overview of the migration and upgrade process, see Upgrade Overview below.

FortiSIEM 2000F Hardware Appliance Support

FortiSIEM 6.1.2 can be installed fresh on the 2000F hardware appliance. You can also migrate from 5.x releases and upgrade from the 6.1.0 and 6.1.1 releases.

For fresh installation and migration procedures, see the FortiSIEM 2000F Hardware Configuration Guide.

For an overview of the migration and upgrade process, see Upgrade Overview below.

FortiSIEM 500F Hardware Appliance Support

The FortiSIEM 6.1.2 Collector can be installed fresh on the 500F appliance. You can also migrate from any previous release.

For fresh installation and migration procedures, see the FortiSIEM 500F Collector Configuration Guide.

For an overview of the migration and upgrade process, see Upgrade Overview.

Upgrade Overview

The following sections provide an overview of how to upgrade to release 6.1.2:

FortiSIEM 2000F and 3500F appliances run mostly as an all-in-one Supervisor. However, they can also be deployed as a cluster with external storage. The following instructions cover the general case. If you are not running Workers, skip the Worker-, NFS-, and Elasticsearch-related portions.

Migrate from pre-5.3.0 to 6.1.2

To migrate hardware from a pre-5.3.0 release to the 6.1.2 release, follow these steps:

  • Standalone – Complete steps 1b and 1c.
  • Standalone with Collectors – Complete steps 1b, 1c, 2a, 3, and 5.
  • General setup with Workers and Collectors – Complete all steps.
  1. Upgrade the Supervisor to 5.4.0:
    1. Delete Workers from Supervisor.
    2. Upgrade the Supervisor to 5.4.0 by following the FortiSIEM 5.4.0 upgrade instructions.
    3. Perform health check: log on to the Supervisor and make sure that it is displaying the correct version and all processes are up.
  2. Migrate to 6.1.2:
    1. Migrate the Supervisor from 5.4.0 to 6.1.2. Migration is platform specific.
    2. If you are using Elasticsearch, then go to ADMIN > Setup > Storage > Elasticsearch and click Test and Save.
    3. Install new 6.1.2 Workers and add them back to the Supervisor.
    4. Go to ADMIN > Settings > Event Worker and Query Worker and make sure that they are correct.
    5. Perform health checks. Old Collectors and Agents should work with 6.1.2 Supervisor and Workers.
  3. When you are ready to upgrade Collectors to 6.1.2, do the following (for migrating 500F Collectors, see FortiSIEM 500F Collector Configuration Guide; for migrating hypervisor-based collectors, see the respective platform upgrade documents):
    1. Copy the HTTP (hashed) passwords file from the old Collectors to the new Collectors (see the copy sketch after this list).
    2. Re-register with the update option and the same IP.
  4. Perform health checks. See Post Migration Health Check.
  5. Reinstall the Agents with the latest version when you are ready to upgrade them.
  6. Perform health checks: make sure Agent events are being received.
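
The passwords file copy in step 3a is normally done over SSH from the new Collector; the same applies to the corresponding step in the later migration and upgrade paths. A minimal sketch, where the Collector IP is a placeholder and the file path is an assumed example location, so verify the actual location of the HTTP (hashed) passwords file on your Collectors before copying:

  # Run on the new 6.1.2 Collector.
  # <old-collector-ip> is a placeholder; the path below is an assumed example location.
  scp root@<old-collector-ip>:/etc/httpd/accounts/passwds /etc/httpd/accounts/passwds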

Migrate from 5.3.x or 5.4.x to 6.1.2

To migrate hardware from 5.3.x or 5.4.x to the 6.1.2 release, follow these steps:

  • Standalone – Complete step 2a.
  • Standalone with Collectors – Complete steps 2a, 3, and 5.
  • General setup with Workers and Collectors – Complete all steps.
  1. Delete Workers from the Supervisor.
  2. Migrate the Supervisor to 6.1.2:
    1. Migration is platform specific.
    2. If you are using Elasticsearch, then go to ADMIN > Setup > Storage > Elasticsearch and click Test and Save.
    3. Install new 6.1.2 Workers and add them back to the Supervisor.
    4. Go to ADMIN > Settings > Event Worker and Query Worker and make sure that they are correct.
    5. Perform health checks. Old Collectors and Agents should work with 6.1.2 Supervisor and Workers.
  3. When you are ready to upgrade Collectors to 6.1.2, then do the following (for migrating 500F Collectors, see FortiSIEM 500F Collector Configuration Guide; for migrating hypervisor-based collectors, see the respective platform upgrade documents):
    1. Copy the HTTP (hashed) passwords file from the old Collectors to the new Collector.
    2. Re-register with the update option and the same IP.
  4. Perform health checks. See Post Migration Health Check.
  5. Reinstall the Agents with the latest version when you are ready to upgrade them.
  6. Perform health checks: make sure Agent events are being received.

Upgrade from 6.1.0 or 6.1.1 to 6.1.2

To upgrade hardware from 6.1.0 or 6.1.1 to the 6.1.2 release, follow these steps:

  1. Copy the upgrade.py script to the Supervisor. For instructions, see above.
  2. Upgrade the Supervisor to 6.1.2:
    • Standalone install:
      1. Upgrade the Supervisor to 6.1.2.
    • EventDB on NFS case:
      1. Stop Workers.
      2. Upgrade the Supervisor to 6.1.2.
    • Elasticsearch case:
      1. Delete Workers.
      2. Upgrade the Supervisor to 6.1.2.
      3. Go to ADMIN > Setup > Storage > Elasticsearch and click Test and Save.
  3. Upgrade Workers to 6.1.2:
    • EventDB on NFS case:
      1. Upgrade 6.1.0 or 6.1.1 Workers to 6.1.2.
    • Elasticsearch case:
      1. Install new 6.1.2 Workers and add them back to the Supervisor.
      2. Go to ADMIN > Settings > Event Worker and Query Worker and make sure that they are correct.
  4. Perform health checks: old Collectors should work with the 6.1.2 Supervisor and Workers.
  5. When you are ready to upgrade Collectors to 6.1.2:
    • Pre-6.1.0 Collectors (for migrating 500F Collectors, see FortiSIEM 500F Collector Configuration Guide; for migrating hypervisor-based collectors, see the respective platform upgrade documents):
      1. Copy the HTTP (hashed) passwords file from old Collectors to the new Collectors.
      2. Re-register with the update option and the same IP.
    • 6.1.0 or 6.1.1 Collectors:
      1. Upgrade from the GUI.
  6. Perform health checks. See Post Migration Health Check.
  7. Reinstall the Agents when you are ready to upgrade them.
  8. Perform health checks: make sure Agent events are being received.

Upgrade via Proxy

During the upgrade, Supervisor and Worker nodes and the FSM-2000F and FSM-3500F hardware appliances must be able to reach the CentOS OS repositories hosted by Fortinet (os-pkgs-cdn.fortisiem.fortinet.com and os-pkgs.fortisiem.fortinet.com) to download the latest OS packages. If these nodes reach the Internet through a proxy, follow these steps before initiating the upgrade.

  1. SSH to the node.
  2. Edit /etc/yum.conf as follows:
    • If your proxy does not require authentication, then add a line like this:
      • proxy=http://<proxy-ip-or-hostname>:<proxy-port>
    • If your proxy requires authentication, then also add proxy_username= and proxy_password= entries. For example, for a Squid proxy:
      • proxy_username=<user>
      • proxy_password=<pwd>
  3. Test that you can use the proxy to successfully communicate with the two sites: os-pkgs-cdn.fortisiem.fortinet.com and os-pkgs.fortisiem.fortinet.com (a sketch of the configuration and a test follows this list).
  4. Begin the upgrade.
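
For reference, a minimal sketch of the /etc/yum.conf additions and of a connectivity test run through the same proxy. The proxy host, port, and credentials are placeholder example values, and the test assumes the repositories are reachable over HTTPS:

  # /etc/yum.conf additions (placeholder example values)
  proxy=http://proxy.example.com:3128
  proxy_username=repouser
  proxy_password=repopass

  # Shell test of both repository sites through the same proxy
  curl --proxy http://proxy.example.com:3128 --proxy-user repouser:repopass -I https://os-pkgs-cdn.fortisiem.fortinet.com
  curl --proxy http://proxy.example.com:3128 --proxy-user repouser:repopass -I https://os-pkgs.fortisiem.fortinet.com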

Post Migration Health Check

After migration is complete, follow these steps to check the system's health.

  1. Check Cloud health and Collector health from the FortiSIEM GUI:
    • Versions display correctly.
    • All processes are up and running.
    • Resource usage is within limits.
  2. Check that Redis passwords match on the Supervisor and Workers (see the sketch after this list):
    • Supervisor: run the command phLicenseTool --showRedisPassword.
    • Worker: run the command grep -i auth /opt/node-rest-service/ecosystem.config.js.
  3. Check that database passwords match on the Supervisor and Workers:
    • Supervisor: run the command phLicenseTool --showDatabasePassword.
    • Worker: run the command grep Auth_PQ_dbpass /etc/httpd/conf/httpd.conf.
  4. Elasticsearch case: check the Elasticsearch health.
  5. Check that events are received correctly:
    1. Search All Events in the last 10 minutes and make sure there is data.
    2. Search for events from Collectors and Agents and make sure there is data. Both old and new Collectors and Agents must work.
    3. Search for events using CMDB Groups (Windows, Linux, Firewalls, etc.) and make sure there is data.
  6. Make sure there are no SVN authentication errors in CMDB when you click any device name.
  7. Make sure recent Incidents and their triggering events are displayed.
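
For steps 2 and 3 above, a quick way to compare is to run the commands on each node and check that the reported values match. A minimal sketch using only the commands listed above:

  # On the Supervisor
  phLicenseTool --showRedisPassword
  phLicenseTool --showDatabasePassword

  # On each Worker
  grep -i auth /opt/node-rest-service/ecosystem.config.js
  grep Auth_PQ_dbpass /etc/httpd/conf/httpd.conf

  # The Redis and database passwords reported on the Supervisor should match
  # the corresponding values found on every Worker.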

Known Issues

Shutting Down Hardware

On hardware appliances running FortiSIEM 6.6.0 or earlier, the FortiSIEM execute shutdown CLI command does not work correctly. Use the Linux shutdown command instead (for example, as shown below).
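
For example, from an SSH session as root:

  # Standard Linux shutdown; halts the appliance immediately
  shutdown -h now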

Remediation Steps for CVE-2021-44228

Two FortiSIEM modules (phFortiInsightAI and the 3rd party ThreatConnect SDK) use Apache log4j versions 2.11 and 2.8, respectively, for logging, and are therefore vulnerable to the recently discovered Remote Code Execution vulnerability (CVE-2021-44228) in FortiSIEM 6.1.x.

These instructions specify the steps needed to mitigate this vulnerability without upgrading Apache log4j to the latest stable version 2.16 or higher. Actions need to be taken on the Supervisor node only.

On Supervisor Node

  1. Log on via SSH as root.

  2. Mitigating 3rd party ThreatConnect SDK module:

    1. Delete these log4j jar files under /opt/glassfish/domains/domain1/applications/phoenix/lib:

      1. log4j-core-2.8.2.jar

      2. log4j-api-2.8.2.jar

      3. log4j-slf4j-impl-2.6.1.jar

  3. Mitigating phFortiInsightAI module:

    1. Delete these log4j jar files under /opt/fortiinsight-ai/lib/:

      1. log4j-api-2.11.1.jar

      2. log4j-core-2.11.1.jar

  4. Restart all Java processes by running: killall -9 java (a consolidated sketch of steps 2-4 follows this list).
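
The removal and restart steps above can be run as a short shell sequence on the Supervisor. A minimal sketch using only the paths and file names listed above:

  # Remove the vulnerable log4j jars used by the ThreatConnect SDK
  cd /opt/glassfish/domains/domain1/applications/phoenix/lib
  rm -f log4j-core-2.8.2.jar log4j-api-2.8.2.jar log4j-slf4j-impl-2.6.1.jar

  # Remove the vulnerable log4j jars used by phFortiInsightAI
  cd /opt/fortiinsight-ai/lib
  rm -f log4j-api-2.11.1.jar log4j-core-2.11.1.jar

  # Restart all Java processes
  killall -9 java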

Migration and Fresh Install Limitations

  1. Migration limitations: If migrating from 5.3.3 or 5.4.0 to 6.1.2, please be aware that the following features will not be available after migration.

    1. Pre-compute feature

    2. Elastic Cloud support

      If any of these features are critical to your organization, then please wait for a later version where these features are available after migration.

  2. Fresh Install limitations:
    1. Cannot be installed on Alibaba Cloud.
    2. Linux ISO image is not available.
    3. Does not install on IPv6 networks.
    4. Collector to Supervisor/Worker communication via Proxy is not supported.
    5. Disaster recovery is not supported, as PostgreSQL BDR is not yet available on the CentOS 8.2 release.
    6. Report Server is not supported.
  3. STIX/OTX Malware IOC Integration Error: If you see the error below when you log in to Glassfish, it is likely caused by the jsse.enableSNIExtension flag that was added to resolve an httpd issue in Java JDK 7. In JDK 8, there is no need to set this flag.
    Error:

    #|2020-09-10T12:30:00.535+0200|SEVERE|glassfish3.1.2|com.accelops.service.threatfeed.BaseOTXUpdateService|_ThreadID=218;_ThreadName=Thread-2;|org.springframework.web.client.ResourceAccessException: I/O error on GET request for "https://otx.alienvault.com/api/v1/pulses/subscribed?limit=20&modified_since=2020-09-03T12:30:00%2B02:00&":Unsupported record version Unknown-0.0; nested exception is javax.net.ssl.SSLException: Unsupported record version Unknown-0.0

    To resolve this issue, follow these steps:

    1. Log in to the Supervisor node.

    2. Run the command su - admin.

    3. Enter your Glassfish password and run this command: /opt/glassfish/bin/asadmin delete-jvm-options -Djsse.enableSNIExtension=false

    4. Run the command killall -9 java.

  4. Changing the Worker IP via configFSM.sh does not work. To change a Worker IP, delete the Worker from the Supervisor, change the IP using Linux commands, and add it back.

  5. A newly installed 5.x Collector cannot be registered to a 6.x Supervisor. Old Collectors will continue to work. For new installations, install 6.x Collectors.

  6. The following bugs have been discovered.

    • Malware Hash import from a CSV file fails when the CSV file contains 75,000 or more Malware Hash entries.

    • Scheduled bundle reports fail after migration.

    • Update Malware Hash via API does not work as expected, producing "duplicate" errors.

    • Cisco Meraki log discovery does not add devices to CMDB.

    • FortiSIEM does not recognize a UEBA perpetual license, so users with a UEBA perpetual license are unable to add UEBA for their devices.

    • For Elasticsearch cases with inline report mode set to 2, the ReportMaster memory may grow quickly.

    • Malware IP, Domain, and URL Group lookup performance is slower than expected.

    • Security incidents always indicate "System Cleared" after 24 hours, even if auto_clear_security_incidents=0 is set.

    • SSL communication sockets between rule worker and rule master are not always closed properly, leading to rules not triggering.

    • Rules with a pattern-based clearing condition do not always clear even if the condition is met. This is because the clear rule’s time window is sometimes read incorrectly.

Elasticsearch-Based Deployments Terms Query Limit

In Elasticsearch-based deployments, queries containing "IN Group X" are handled using an Elastic Terms Query. By default, the maximum number of terms that can be used in a Terms Query is 65,536. If a Group contains more than 65,536 entries, the query will fail.

The workaround is to change the "max_terms_count" setting for each event index. Fortinet has tested up to 1 million entries. The query response time will be proportional to the size of the group.

Case 1. For already existing indices, issue the following REST API call to update the setting:

PUT fortisiem-event-*/_settings
{
  "index" : {
    "max_terms_count" : "1000000"
  }
}

Case 2. For new indices that will be created in the future, update the fortisiem-event-template so that those new indices will have a higher max_terms_count setting:

  1. cd /opt/phoenix/config/elastic/7.7

  2. Add "index.max_terms_count": 1000000 (including the quotation marks around the key) to the "settings" section of the fortisiem-event-template.

    Example:

    ...

      "settings": {
        "index.max_terms_count": 1000000,

    ...

  3. Navigate to ADMIN > Storage > Online and perform Test and Deploy.

  4. Verify that new indices have the updated terms limit by executing the following REST API call (a curl sketch follows).

    GET fortisiem-event-*/_settings
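
Both the Case 1 update and the verification call above can be issued with curl from any host that can reach the Elasticsearch HTTP endpoint. A minimal sketch, assuming Elasticsearch listens on <es-host>:9200 without authentication; add credentials or TLS options as your cluster requires:

  # Case 1: raise max_terms_count on all existing event indices
  curl -X PUT "http://<es-host>:9200/fortisiem-event-*/_settings" \
    -H 'Content-Type: application/json' \
    -d '{ "index": { "max_terms_count": "1000000" } }'

  # Verify the setting (filter_path trims the response to the relevant field)
  curl "http://<es-host>:9200/fortisiem-event-*/_settings?filter_path=*.settings.index.max_terms_count"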
