What's New in 6.7.1

New Features

Diode Collector

This release enables you to deploy a Diode Collector with the following capabilities:

  • Install without Internet connectivity

  • Work without registering with Supervisor node

  • Collect syslog, SNMP traps, and Windows logs via the WMI/OMI protocols using local configuration

  • Send events to another Collector or Worker via syslog over UDP/514

A Diode Collector requires only strictly one-way communication from itself to another Collector or Worker.

There are two deployment modes:

  1. Diode Collector -> Worker

  2. Diode Collector -> Regular Collector -> Worker

A Diode Collector sends events to a Worker or Regular Collector via syslog over UDP/514, while a Regular Collector uses HTTPS over TCP/443 to register with the Supervisor and to send events to Workers.

For details on configuring a Diode Collector, see the Diode Collector Installation Guide.
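As a quick smoke test of the one-way path described above, you can emit a test syslog message over UDP/514 toward the downstream node. This is a hedged sketch, not part of the product: the destination address is a placeholder, and the message format is a minimal RFC 3164 line.

```python
import socket
from datetime import datetime

def diode_test_message(host="10.0.0.20", port=514, tag="diode-test"):
    """Build and send a minimal RFC 3164 syslog line over UDP.

    host/port are placeholders for the downstream Collector or Worker;
    on a diode link, no reply is ever expected.
    """
    # <134> = facility local0 (16) * 8 + severity info (6)
    ts = datetime.now().strftime("%b %d %H:%M:%S")
    msg = f"<134>{ts} diode-host {tag}: one-way UDP check"
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg.encode(), (host, port))  # fire-and-forget; UDP never waits
    sock.close()
    return msg
```

Because the transport is UDP, the sender cannot tell whether the message arrived; confirm receipt by searching for the `diode-test` tag on the downstream node.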

Key Enhancements

System Upgrade

This release runs Rocky Linux 8.7.

Install on IPV6 Networks

You can now install Supervisor (Leader, Follower), Worker, Collector, and Manager nodes on IPv6 networks. This feature was broken in 6.7.0.

Multiple ClickHouse Keeper Deletion

You can now delete multiple ClickHouse Keeper nodes at once. This simplifies the disaster recovery steps for ClickHouse; see Step 3 under Disaster Recovery Operations in the 6.7.1 High Availability and Disaster Recovery Procedures - ClickHouse Guide.

FortiSIEM Collector as Management Extension Application (MEA) on FortiAnalyzer

You can now run FortiSIEM Collector 6.7.1 as a Management Extension Application (MEA) image on FortiAnalyzer 7.2.2 or higher. A FortiSIEM MEA Collector works the same way as a regular virtual machine-based FortiSIEM Collector or a hardware appliance, but the setup and upgrade processes are slightly different.

For general setup, troubleshooting, event collection, discovery, and performance monitoring using a FortiSIEM MEA Collector, see the FortiSIEM MEA Collector Administration Guide in the FortiAnalyzer 7.x docs.

Bug Fixes and Minor Enhancements

Bug ID | Severity | Module | Description
882706 | Major | App Server | Windows or Linux agent image upgrade may not work in an Active-Active Supervisor Cluster setup.
881856 | Major | App Server | Content update may not work for Follower and Worker in an Active-Active Supervisor Cluster setup.
869378 | Major | App Server | Incidents cannot be searched from FortiSIEM Manager.
882317 | Minor | App Server | For ServiceNow integration in a Service Provider scenario, Organization mapping with mixed case results in an empty organization field in ServiceNow Incidents.
874046 | Minor | App Server | After upgrading from 6.6.0 to 6.7.0, Content update version shows the versions available in 6.6.0. This is a display cache issue; older versions are never imported.
867468 | Minor | App Server | If you register the Supervisor to FortiSIEM Manager and add a Follower node, then 'FortiSIEM super cluster FQDN/IP' shows blank on the Supervisor and Follower UI.
876301 | Minor | GUI | Queries from the ANALYTICS > Shortcut report folder may not apply the right values in the Filter field.
873203 | Minor | Query | QueryMaster crashes when a lookup table schema uses a column that has the same name as an event attribute.
880604 | Minor | System | Sometimes, ClickHouse does not start after a machine reboot.
870202 | Minor | System | In a Disaster Recovery (DR) environment, queries may fail because the Secondary Worker does not get the correct Redis password during DR setup.
867999 | Minor | System | Changing the Supervisor IP address using configFSM.sh causes svn_url to change to repos/cmdb/.
876509 | Enhancement | System | AWS deployments now use AWS NTP servers.
874179 | Enhancement | System | Provide a method to back up and restore ClickHouse event data from the command line.
870006 | Enhancement | System | Allow removal of more than one ClickHouse Keeper node from the GUI (needed for DR).
868464 | Enhancement | System | Enhance the EventDB to ClickHouse event import tool to map an event org in EventDB to another org in ClickHouse.
829791 | Enhancement | System | Enable verbose Ansible output during upgrades.

Known Issues

System Related

In 6.7.1, the mod_security module was added to throttle brute-force attacks. However, log rotation was not configured for it, which may cause the root disk to fill up and affect the availability of the FortiSIEM node. Apply the following workaround on every FortiSIEM node (Supervisor, Worker, and Collector).

  1. SSH as root.

  2. Delete the following two files.

    /var/log/httpd/modsec_audit.log

    /var/log/httpd/modsec_debug.log

  3. Download the script auth_fail-throttle.sh from here.

  4. Run auth_fail-throttle.sh.

  5. Add the following lines to the two files /etc/logrotate_phoenix_semihourly.conf and /etc/logrotate_phoenix.conf.

    /var/log/httpd/modsec_audit.log {
        rotate 7
        size 100M
        copytruncate
        create 644 root root
        postrotate
            /sbin/service httpd reload > /dev/null 2>/dev/null || true
        endscript
    }

    /var/log/httpd/modsec_debug.log {
        rotate 7
        size 50M
        copytruncate
        create 644 root root
        postrotate
            /sbin/service httpd reload > /dev/null 2>/dev/null || true
        endscript
    }
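Since step 5 must be applied identically on every node, it can be scripted so repeated runs do not duplicate the stanza. This is an illustrative sketch (the helper name and idempotence check are my own, not part of the product); the stanza shown is the modsec_audit.log one from the steps above.

```python
import os

# Stanza from step 5 above (modsec_audit.log; modsec_debug.log is analogous).
STANZA = """\
/var/log/httpd/modsec_audit.log {
    rotate 7
    size 100M
    copytruncate
    create 644 root root
    postrotate
        /sbin/service httpd reload > /dev/null 2>/dev/null || true
    endscript
}
"""

def ensure_stanza(conf_path, stanza=STANZA):
    """Append the logrotate stanza to conf_path unless it is already present.

    Returns True if the file was modified, False if it was already configured.
    """
    existing = ""
    if os.path.exists(conf_path):
        with open(conf_path) as f:
            existing = f.read()
    first_line = stanza.splitlines()[0]
    if first_line in existing:
        return False  # already configured; do nothing
    with open(conf_path, "a") as f:
        f.write("\n" + stanza)
    return True
```

After editing, `logrotate -d /etc/logrotate_phoenix.conf` dry-runs the configuration and reports syntax errors without rotating anything.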
    

ClickHouse Related

  1. If you are running the ClickHouse event database and want Active-Active Supervisor failover, then your Supervisor should not be the only ClickHouse Keeper node. Otherwise, once the Supervisor is down, the ClickHouse cluster will be down and inserts will fail. It is recommended to run 3 ClickHouse Keeper nodes on Workers.

  2. If you are running ClickHouse, then during a Supervisor upgrade to FortiSIEM 6.7.0 or later, instead of shutting down Worker nodes, you need to stop the backend processes by running the following command from the command line.

    phtools --stop all

  3. If you are running Elasticsearch or FortiSIEM EventDB and switch to ClickHouse, then you need to follow two steps to complete the database switch.

    1. Set up the disks on each node in ADMIN > Setup > Storage and ADMIN > License > Nodes.

    2. Configure ClickHouse topology in ADMIN > Settings > Database > ClickHouse Config.

  4. In a ClickHouse environment, queries will not return results if, within some shard, none of the query nodes is reachable from the Supervisor and responsive. In other words, query results are returned only if at least 1 query node in every shard is healthy and responds to queries. To avoid this condition, make sure all Query Worker nodes are healthy.
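The per-shard condition in item 4 can be stated compactly: every shard must have at least one healthy query node. A small sketch (the dict-of-flags topology representation is mine, purely for illustration):

```python
def shards_queryable(shards):
    """Given {shard_id: [node_healthy_flag, ...]}, report whether every
    shard has at least one healthy, responsive query node -- the condition
    under which ClickHouse queries return results."""
    return all(any(nodes) for nodes in shards.values())

# Shard 1 has one healthy node, shard 2 has two: queries succeed.
ok = shards_queryable({1: [True, False], 2: [True, True]})
# Shard 1 is entirely down: queries fail, even though shard 2 is healthy.
bad = shards_queryable({1: [False, False], 2: [True, True]})
```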

Elasticsearch Related

  1. In Elasticsearch based deployments, queries containing "IN Group X" are handled using Elastic Terms Query. By default, the maximum number of terms that can be used in a Terms Query is set to 65,536. If a Group contains more than 65,536 entries, the query will fail.

    The workaround is to change the "max_terms_count" setting for each event index. FortiSIEM has been tested with up to 1 million entries; query response time is proportional to the size of the group.

    Case 1. For already existing indices, issue this REST API call to update the setting:

    PUT fortisiem-event-*/_settings
    {
      "index" : {
        "max_terms_count" : "1000000"
      }
    }
    

    Case 2. For new indices created in the future, update fortisiem-event-template so those new indices get the higher max_terms_count setting:

    1. cd /opt/phoenix/config/elastic/7.7

    2. Add "index.max_terms_count": 1000000 (including the quotation marks) to the "settings" section of fortisiem-event-template.

      Example:

      ...

        "settings": {
          "index.max_terms_count": 1000000,
      

      ...

    3. Navigate to ADMIN > Storage > Online and perform Test and Deploy.

    4. Verify that new indices have the updated terms limit by executing the following REST API call.

      GET fortisiem-event-*/_settings

  2. FortiSIEM uses dynamic mapping for Keyword fields to save cluster state. Elasticsearch needs to encounter some events containing these fields before it can determine their type. For this reason, queries that group by any of these fields will fail if Elasticsearch has not yet seen any event containing them. The workaround is to first run a non-group-by query with these fields to make sure they have non-null values.
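The Case 1 settings call above can be issued from any host with access to the cluster; a stdlib-only sketch follows. The endpoint URL is a placeholder, and the two-function split exists only so the request body can be inspected separately; adapt auth and TLS handling to your cluster.

```python
import json
import urllib.request

def max_terms_body(limit=1_000_000):
    """JSON body for PUT fortisiem-event-*/_settings (Case 1 above)."""
    return json.dumps({"index": {"max_terms_count": str(limit)}})

def raise_max_terms(es_url="http://127.0.0.1:9200", limit=1_000_000):
    """Apply the higher max_terms_count to all existing event indices.

    es_url is a placeholder for your Elasticsearch endpoint.
    """
    req = urllib.request.Request(
        f"{es_url}/fortisiem-event-*/_settings",
        data=max_terms_body(limit).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:  # raises on non-2xx status
        return resp.status
```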

EventDB Related

Currently, policy-based retention for EventDB does not cover two event categories: (a) system events with phCustId = 0, e.g., a FortiSIEM External Integration Error, FortiSIEM process crash, etc., and (b) Super/Global customer audit events with phCustId = 3, e.g., an audit log generated by a Super/Global user running an ad hoc query. These events are purged when disk usage reaches the high watermark.
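The rule above reduces to a simple predicate on phCustId; a sketch (the function name is mine, for illustration only):

```python
def retention_policy_applies(ph_cust_id):
    """Per the note above: policy-based retention skips system events
    (phCustId == 0) and Super/Global audit events (phCustId == 3);
    those two categories are purged only at the disk high watermark."""
    return ph_cust_id not in (0, 3)
```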

HDFS Related

If you are running real-time Archive with HDFS and have added Workers after the real-time Archive was configured, then you will need to perform Test and Deploy for HDFS Archive again from the GUI, so that HDFSMgr learns about the newly added Workers.

High Availability Related

If you make changes to the following files on any node in the FortiSIEM Cluster, then you will have to manually copy these changes to other nodes.

  1. FortiSIEM Config file (/opt/phoenix/config/phoenix_config.txt): If you make a Supervisor-related (respectively Worker- or Collector-related) change in this file, then the modified file should be copied to all Supervisors (respectively Workers, Collectors).

  2. FortiSIEM Identity and Location Configuration file (/opt/phoenix/config/identity_Def.xml): This file should be identical in Supervisors and Workers. If you make a change to this file on any Supervisor or Worker, then you need to copy this file to all other Supervisors and Workers.

  3. FortiSIEM Profile file (ProfileReports.xml): This file should be identical in Supervisors and Workers. If you make a change to this file on any Supervisor or Worker, then you need to copy this file to all other Supervisors and Workers.

  4. SSL Certificate (/etc/httpd/conf.d/ssl.conf): This file should be identical in Supervisors and Workers. If you make a change to this file on any Supervisor or Worker, then you need to copy this file to all other Supervisors and Workers.

  5. Java SSL Certificates (files cacerts.jks, keyfile and keystore.jks under /opt/glassfish/domains/domain1/config/): If you change these files on a Supervisor, then you have to copy these files to all Supervisors.

  6. Log pulling External Certificates: Copy all log pulling external certificates to each Supervisor.

  7. Event forwarding Certificates defined in the FortiSIEM Config file (/opt/phoenix/config/phoenix_config.txt): If you change them on one node, you must change them on all nodes.

  8. Custom cron job: If you change this file on a Supervisor, then you have to copy this file to all Supervisors.
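For the files that must stay identical across Supervisors and Workers (items 2-4 above), the copy step can be generated mechanically. This is a hedged sketch: the hostnames are placeholders, root-to-root scp is an assumption about your access model, and the role-specific rules for phoenix_config.txt (item 1) are deliberately left out.

```python
# Files from items 2-4 above, which must be identical on all
# Supervisors and Workers.
SYNC_FILES = (
    "/opt/phoenix/config/identity_Def.xml",
    "ProfileReports.xml",
    "/etc/httpd/conf.d/ssl.conf",
)

def sync_commands(changed_file, peers):
    """Return scp commands pushing changed_file to every peer Supervisor
    or Worker; peers is a list of placeholder hostnames."""
    if changed_file not in SYNC_FILES:
        return []  # item 1, 5-8 files have their own role-specific rules
    return [f"scp {changed_file} root@{host}:{changed_file}" for host in peers]
```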
