What's New in 6.6.1

This document describes the additions for the FortiSIEM 6.6.1 release.

Rocky Linux 8.6 OS Updates

This release contains OS updates published through September 1, 2022. See the link below for the security advisories published by Red Hat and picked up by Rocky Linux.

https://access.redhat.com/errata-search/#/?q=&p=1&sort=portal_publication_date%20desc&rows=10&portal_advisory_type=Security%20Advisory&portal_product=Red%20Hat%20Enterprise%20Linux&portal_product_version=8.6

Bug Fixes and Minor Enhancements

| Bug ID | Severity | Module | Description |
| ------ | -------- | ------ | ----------- |
| 835978 | Major | App Server | After the 6.5.0 upgrade, custom rules defined for specific Orgs need to be re-enabled for them to trigger. |
| 831456 | Major | App Server | When there is a very large number of Malware IOCs (~2 million), upgrade may fail because Java runs out of memory. App Server restart may also fail for the same reason. |
| 824607 | Major | App Server | Incidents may not show after the 6.5.0 upgrade when there are Low severity Incidents. |
| 838600 | Minor | App Server | Device name change does not take effect on Collectors other than the one that discovers and monitors the device. |
| 830438 | Minor | App Server | Incidents may trigger from the 'System Collector Event Delayed' rule despite events being received. |
| 825752 | Minor | App Server | Malware Domain update does not work with AlienVault. |
| 821197 | Minor | App Server | The retention policy table still contains references to an Organization after that Organization is deleted. |
| 825764 | Minor | App Server, Query | For a large event archive database in NFS, a query on one Org may time out because all Org directories are scanned. |
| 835339 | Minor | App Server, Rule Engine | Security Incidents triggering from custom rules may be cleared by the system. |
| 729023 | Minor | App Server, ClickHouse | SQLite header and source version mismatch causes upgrade failure. |
| 837950 | Minor | ClickHouse | If the Supervisor IP changes after ClickHouse has been configured, the IP update to ClickHouse does not occur. |
| 821110 | Minor | Event Pulling Agents | CrowdStrike Falcon Data Replicator is unable to ingest logs because of an incompletely unzipped package. |
| 829644 | Minor | GUI | The ADMIN > Health > Collector Health page hangs when sorting by Organization. |
| 826450 | Minor | GUI | Unable to validate or save a cloned system parser that contains the '&' character. |
| 825383 | Minor | GUI | Unable to export configurations of a FortiGate device from the CMDB. |
| 825068 | Minor | GUI | In HTTP(S) notification, the protocol https is incorrectly parsed as https:, which causes the request to default to http. |
| 827264 | Minor | Query Engine | A query using the IN operator does not return proper results when the name contains '-'. |
| 833618 | Minor | System | Missing dos2unix package causes configuration discovery to fail on some devices (e.g., H3C). |
| 833411 | Minor | System | On hardware appliances, the "execute shutdown" command may sometimes fail when run repeatedly. |
| 823098 | Enhancement | Data | Checkpoint devices are discovered as Linux because Checkpoint sysObjectIDs are not built in. The workaround is to define them from the GUI. |

Known Issues

  1. Currently, Policy-based retention for EventDB does not cover two event categories: (a) System events with phCustId = 0, e.g., a FortiSIEM External Integration Error, a FortiSIEM process crash, etc., and (b) Super/Global customer audit events with phCustId = 3, e.g., an audit log generated by a Super/Global user running an ad hoc query. These events are purged when disk usage reaches the high watermark.

  2. When retention policies are defined, a memory corruption issue in the Parser module can cause it to crash or consume high memory. This does not always happen. When it does, event parsing may be delayed, events may be missed, or the Supervisor GUI may be slow. From the Cloud Health page, you can see whether the phParser process is down or has high CPU on any node. This issue has been resolved in release 6.6.2.

  3. This applies only if you are upgrading from 6.5.0 and using ClickHouse. FortiSIEM 6.5.0 ran ClickHouse on a single node and used the Merge Tree engine. FortiSIEM 6.6.0 onwards runs the Replicated Merge Tree engine, even if replication is not turned on. So after upgrading to FortiSIEM 6.6.0, you will need to perform the following steps to migrate the event data previously stored in Merge Tree to Replicated Merge Tree. Without these steps, events stored in 6.5.0 will not be searchable in 6.6.0. Once you are on a post-6.5.0 release, you will not need to do this procedure again.

    To upgrade your FortiSIEM from 6.5.0 to 6.6.0 or later, take the following steps.

    1. Navigate to ADMIN > Settings > Database > ClickHouse Config.

    2. Click Test, then click Deploy to enable the ClickHouse Keeper service which is new in 6.6.0.

    3. Migrate the event data in 6.5.0 to 6.6.0 by running the script /opt/phoenix/phscripts/clickhouse/clickhouse-migrate-650.sh. (An example invocation is sketched below.)
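
    The following is a minimal command line sketch of step 3, assuming you are logged into the Supervisor as root and have already completed the Test and Deploy in step 2; the log file path used with tee is only illustrative.

      # Run the 6.5.0 ClickHouse migration script and keep a copy of its output
      bash /opt/phoenix/phscripts/clickhouse/clickhouse-migrate-650.sh 2>&1 | tee /tmp/clickhouse-migrate-650.log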

  4. This applies only if you are upgrading from 6.5.0 and using ClickHouse. If you go to Storage > Online Settings and click Test, it will fail. Fortinet introduced a new disk attribute called "Mounted On" to facilitate disk addition and deletion; this attribute was not present in 6.5.0. Follow these steps to fix the problem.

    1. Go to ADMIN > Setup > Storage > Online. ClickHouse should be the selected database.

    2. For the Hot tier, and for every configured disk within the tier, do the following:

      1. The existing disk should have empty Mounted On.

      2. Click + to add a disk. For the new disk, Disk Path should be empty and Mounted On set to /data-clickhouse-hot-1.

      3. Copy the Disk Path from the existing disk into this newly added disk. The new disk should now have the proper Disk Path and Mounted On fields.

      4. Delete the first disk with empty Mounted On.


        Do this for all disks you have configured in 6.5.0. After your changes, the disks should be ordered /data-clickhouse-hot-1, /data-clickhouse-hot-2, /data-clickhouse-hot-3 from top to bottom.

    3. Repeat the same steps for the Warm tier (if one was configured in 6.5.0), except that the Mounted On fields should be /data-clickhouse-warm-1, /data-clickhouse-warm-2, /data-clickhouse-warm-3 from top to bottom.

    4. When done, click Test, then click Deploy. (An optional command line check of the mount points is sketched below.)
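
    Optionally, you can confirm from the command line that the mount points referenced above exist on the node. This is a hedged sketch run as root on the Supervisor (or the relevant Worker); it only lists mounted filesystems and does not change anything.

      # List the ClickHouse data mount points (e.g., /data-clickhouse-hot-1)
      df -h | grep data-clickhouse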

  5. In Elasticsearch based deployments, queries containing "IN Group X" are handled using Elastic Terms Query. By default, the maximum number of terms that can be used in a Terms Query is set to 65,536. If a Group contains more than 65,536 entries, the query will fail.

    The workaround is to change the "max_terms_count" setting for each event index. Fortinet has tested up to 1 million entries. The query response time will be proportional to the size of the group.

    Case 1. For already existing indices, issue the following REST API call to update the setting. (A curl form of the same call is sketched after it.)

    PUT fortisiem-event-*/_settings
    {
      "index" : {
        "max_terms_count" : "1000000"
      }
    }
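
    If you prefer to issue the call from the command line instead of a dev tools console, the following curl sketch is equivalent; the host, port, and authentication options are assumptions, so substitute the values for your Elasticsearch deployment.

    # Same PUT request via curl; replace <es-host> and add -u/-k options as needed
    curl -s -X PUT "https://<es-host>:9200/fortisiem-event-*/_settings" \
         -H "Content-Type: application/json" \
         -d '{ "index": { "max_terms_count": "1000000" } }'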
    

    Case 2. For new indices that will be created in the future, update fortisiem-event-template so those new indices will have a higher max_terms_count setting.

    1. cd /opt/phoenix/config/elastic/7.7

    2. Add "index.max_terms_count": 1000000 (including the quotation marks around the key) to the "settings" section of the fortisiem-event-template.

      Example:

      ...
        "settings": {
          "index.max_terms_count": 1000000,
      ...

    3. Navigate to ADMIN > Storage > Online and perform Test and Deploy.

    4. Verify that new indices have the updated terms limit by executing the following simple REST API call (a curl form is sketched below it).

      GET fortisiem-event-*/_settings
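
      A curl form of the same check is sketched below; the host and authentication details are assumptions, and the grep simply confirms that the new value appears in the response.

        # Verify the setting on the indices; replace <es-host> and add -u/-k options as needed
        curl -s "https://<es-host>:9200/fortisiem-event-*/_settings" | grep -o '"max_terms_count":"1000000"'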

  6. If you set up Disaster Recovery (DR) on FortiSIEM 6.5.0, then upgrading the Secondary to 6.6.0 will fail. If you set up DR on older versions, this issue does not occur. The following workarounds are available, depending on your situation.

    If you have not started the upgrade to 6.6.0 on DR yet, take the following steps:

    Instructions before Upgrading Secondary in DR Configuration

    1. Step 1: Back up Primary Glassfish Key into Secondary

      1. ssh into the Primary Supervisor as root.

      2. Run the following command.

        phLicenseTool --showDatabasePassword
      3. Note the password.

      4. Run the following command.

        cat /opt/glassfish/domains/domain1/config/admin-keyfile
      5. Note the output. (Optionally, capture both values to files as sketched below.)
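
      If it is more convenient, the same two values can be captured into files for transfer to the Secondary, as in the sketch below; the file names under /root are only illustrative.

        # Capture the database password and the Glassfish admin-keyfile on the Primary
        phLicenseTool --showDatabasePassword > /root/primary-db-password.txt
        cat /opt/glassfish/domains/domain1/config/admin-keyfile > /root/primary-admin-keyfile.txt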

    2. Step 2: Insert into Secondary before Upgrade

      1. Log into the Secondary Supervisor as root.

      2. Run phsecondary2primary

        Note: This disables disaster recovery for the time being.

      3. Run the following command to modify the admin-keyfile file.

        vi /opt/glassfish/domains/domain1/config/admin-keyfile
      4. Paste the output from Step 1, sub-step 5 into the admin-keyfile file, replacing the current entry, and save. (An optional check of the result is sketched below.)
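
      As a quick check, you can display the file again and confirm that it now matches the output noted from the Primary in Step 1; this sketch only reads the file.

        # Confirm the Secondary's admin-keyfile now matches the Primary's
        cat /opt/glassfish/domains/domain1/config/admin-keyfile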

    3. Step 3: Verify that the Key has Taken Effect on Secondary Supervisor

      1. Run the following commands.

        su - admin; /opt/glassfish/bin/asadmin stop-domain
        /opt/glassfish/bin/asadmin start-domain
        /opt/glassfish/bin/asadmin login
        

        For user name, enter admin

        For password, enter the password from Step 1, sub-step 3.

        Example of a successful login:

        /opt/glassfish/bin/asadmin login
        Enter admin user name [Enter to accept default]> admin
        Enter admin password> 
        Login information relevant to admin user name [admin] for host [localhost] and admin port [4848] stored at [/opt/phoenix/bin/.gfclient/pass] successfully.
        Make sure that this file remains protected. Information stored in this file will be used by administration commands to manage associated domain.
        Command login executed successfully.

        Note: Step 3 must work in order to proceed to Step 4.

    4. Step 4: Upgrade to 6.6.0 on Secondary

      1. Follow the Upgrade Guide.

    5. Step 5: Add Secondary Back into the System

      1. Log into the Primary Supervisor’s GUI.

      2. Navigate to ADMIN > License > Nodes.

      3. Select the Secondary entry in the GUI and click on Edit.

      4. Click on Save to re-establish connection to the Secondary.

    Instructions if you are Already in a Failed State in the Secondary

    1. Step 1: Grab the Supervisor's DB Password

      1. Log into the Primary Supervisor as root.

      2. Run the following command.
        phLicenseTool --showDatabasePassword
      3. Note the password.

      4. Run the following command.

        cat /opt/glassfish/domains/domain1/config/admin-keyfile
      5. Note the output.

    2. Step 2: Update the Glassfish Password on Secondary to Continue with the Upgrade

      1. Log into the Secondary Supervisor as root.

      2. Run the following commands to modify the admin-keyfile file.

        cd /opt/glassfish/domains/domain1/config/
        cp -a admin-keyfile admin-keyfile.bad
        vi /opt/glassfish/domains/domain1/config/admin-keyfile
      3. Paste the output from Step 1, sub-step 5 into the admin-keyfile, replacing the current entry, and save.

    3. Step 3: Deploy appserver Manually after Admin Passkey has Changed

      1. Run the following command.

        vi /opt/phoenix/deployment/deploy-fresh.sh
      2. Find the following line and replace the password there with the password from Step 1, sub-step 3.

        echo "AS_ADMIN_PASSWORD="$dbpasswd > $DEPLOYMENR_HOME/glassfish-pwd.txt

        Example:

        echo 'AS_ADMIN_PASSWORD=ad1$dnsk%' > $DEPLOYMENR_HOME/glassfish-pwd.txt

        Note the use of the single quote character (') in the replacement, versus the double quote character (") in the original line.

      3. Save the file deploy-fresh.sh.

      4. Run the following commands. (An optional check that the deployment succeeded is sketched after this step.)

        su - admin
        /opt/phoenix/deployment/deploy-fresh.sh /opt/phoenix/deployment/phoenix.ear
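
      Optionally, you can confirm that the application deployed before moving on to Step 4. The sketch below uses the standard Glassfish asadmin tool referenced elsewhere in these steps to list deployed applications; the expectation that the application deployed from phoenix.ear appears in the list (typically named phoenix) is an assumption.

        # List deployed applications as the admin user; the phoenix application should be listed
        su - admin -c "/opt/glassfish/bin/asadmin list-applications"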
    4. Step 4: Modify Ansible Playbook to Finish Upgrade

      1. Run the following command.

        vi /usr/local/upgrade/post-upgrade.yml
      2. Remove only the following:

         - configure
         - update_configs
         - fortiinsight-integration
         - setup-python
         - setup-node
         - setup-clickhouse
         - setup-zookeeper
         - setup-redis
         - migrate-database
         - appserver
      3. Save the modification.

      4. Resume the upgrade by running the following command.

        ansible-playbook /usr/local/upgrade/post-upgrade.yml | tee -a /usr/local/upgrade/logs/ansible_upgrade_continued.log
      5. Reboot the Supervisor if the system does not reboot itself. (An optional post-reboot health check is sketched below.)
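
      After the Supervisor is back up, the sketch below is an optional way to confirm that FortiSIEM processes are running again; it assumes the standard phstatus tool is available on the Supervisor.

        # Check FortiSIEM process status as the admin user
        su - admin -c "phstatus"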

  7. FortiSIEM uses dynamic mapping for Keyword fields to save Cluster state. Elasticsearch needs to encounter some events containing these fields before it can determine their type. For this reason, queries containing a group-by on any of these fields will fail if Elasticsearch has not yet seen any event containing these fields. The workaround is to first run a non-group-by query with these fields to make sure that they have non-null values. (A quick way to check whether Elasticsearch has already learned a field's mapping is sketched below.)
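
    The sketch below is one hedged way to check, from the command line, whether Elasticsearch has already learned a mapping for a given attribute; <es-host> and <attribute> are placeholders, and authentication options are omitted. An empty mapping in the response means you should first run a non-group-by query that includes the attribute.

      # Check whether a field mapping exists yet for the attribute
      curl -s "https://<es-host>:9200/fortisiem-event-*/_mapping/field/<attribute>?pretty"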
