What's New in 6.7.6

Important Notes

  1. FortiSIEM 6.7.5 and later API documentation is transitioning to https://fndn.fortinet.net/index.php?/fortiapi/2627-fortisiem/. Fortinet recommends checking this link first for the latest API updates.

  2. 5.x Collectors will not work with FortiSIEM 6.7.2 or later. This change was made for improved security. Follow these steps to make the 5.x Collectors operational after upgrading.

    1. Upgrade the Supervisor to the latest version: 7.0.0 or higher.

    2. Copy phProvisionCollector.collector from the Supervisor to all 5.x Collectors.

      1. Login to Supervisor.

      2. Run the following command.

        scp /opt/phoenix/phscripts/bin/phProvisionCollector.collector root@<Collector_IP>:/opt/phoenix/bin/phProvisionCollector

    3. Update 5.x Collector password.

      1. SSH to the Collector.

      2. Run the following command.

        phProvisionCollector --update <Organization-user-name> <Organization-user-password> <Supervisor-IP> <Organization-name> <Collector-name>

      3. Make sure the Collector ID and password are present in the file /etc/httpd/accounts/passwds on Supervisors and Workers.

    4. Reboot the Collector.
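With more than a few 5.x Collectors, steps 2-4 above become repetitive. The sketch below is a dry run that only prints the copy and re-provision commands for a list of Collectors; the IP addresses are hypothetical placeholders, and the <Organization-...> values must be filled in exactly as in step 3.

```shell
# Dry-run helper: print the re-provisioning commands for each 5.x Collector.
# The IPs below are hypothetical placeholders -- substitute your own, and fill
# in the <Organization-*> values exactly as in the manual steps above.
COLLECTORS="10.0.0.11 10.0.0.12"
SUPERVISOR_IP="10.0.0.1"
for ip in $COLLECTORS; do
  echo "scp /opt/phoenix/phscripts/bin/phProvisionCollector.collector root@$ip:/opt/phoenix/bin/phProvisionCollector"
  echo "ssh root@$ip \"phProvisionCollector --update <Organization-user-name> <Organization-user-password> $SUPERVISOR_IP <Organization-name> <Collector-name>\""
  echo "ssh root@$ip reboot"
done > /tmp/reprovision_collectors.txt
cat /tmp/reprovision_collectors.txt
```

Review the generated file, then run the commands one Collector at a time, verifying /etc/httpd/accounts/passwds on the Supervisors and Workers after each update.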

Key Enhancements

This release includes the published Rocky Linux OS updates through May 5, 2023. The list of updates can be found at https://errata.rockylinux.org/. The FortiSIEM Rocky Linux repositories (os-pkgs-cdn.fortisiem.fortinet.com and os-pkgs-r8.fortisiem.fortinet.com) have also been updated to the latest versions. Therefore, FortiSIEM customers on versions 6.4.1 and above can update just their Rocky Linux packages by following the procedures described in FortiSIEM OS Update Procedure.

Bug Fixes and Enhancements

This release resolves the following issues:

Each fix is listed with its Bug Id, Severity, Module, and Description.

914571 (Minor, Agent Manager): phAgentManager had a memory leak while receiving Kafka events, caused by a leak in the librdkafka module. This module has been upgraded to the latest version. Fortinet tests show that a FortiSIEM Collector with 8 vCPUs and 24 GB memory can collect up to 4K EPS from Kafka.

921351 (Minor, App Server): Multiple Incident REST API issues are fixed:

  • JSON APIs return error responses in JSON format, instead of XML format.

  • POST API filtering allows these event attributes: eventSeverity, eventSeverityCat, phIncidentCategory, incidentStatus, customer, phCustId, incidentReso, incidentId.

  • Trigger event queries now require two parameters, timeFrom and timeTo, so that responses return in a reasonable time. These two parameters must not be more than 1 day apart.

For details, see FortiSIEM REST API.
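As an illustration of the new timeFrom/timeTo requirement, the snippet below computes a query window exactly 1 day wide. The time unit used here (epoch seconds) is an assumption; the exact parameter format and endpoint are defined in the FortiSIEM REST API documentation.

```shell
# Compute a timeFrom/timeTo pair at most 1 day apart (epoch seconds here;
# check the FortiSIEM REST API docs for the format the endpoint expects).
timeTo=$(date +%s)
timeFrom=$((timeTo - 86400))   # exactly 1 day (86400 s) earlier
echo "timeFrom=$timeFrom timeTo=$timeTo"
```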

921190 (Minor, App Server): Rule Tests time out when run on an Enterprise deployment.

918854 (Minor, App Server): App Server incorrectly invalidates older log integrity XML data, resulting in these files not being written to the database.

917625 (Minor, App Server): The Windows GUID is now ignored during CMDB merge for Windows Agents. This prevents two different Windows Servers with different names but the same IP or GUID from being merged into the same CMDB entry.

916014 (Minor, App Server): Incident trigger event lookup in the GUI is optimized for long-running incidents. In previous releases, trigger events were searched over the window from First Seen Time to Last Seen Time, which can be very large if the incident keeps triggering and is never resolved. In such cases, the GUI may fail to display trigger events. In the new design, the latest 100 trigger events for an incident are shown over a maximum 30-day period. Additionally, for ClickHouse, the eventType field is stored for every trigger event and used in the queries. Since eventType is a ClickHouse primary index, queries are faster (see ClickHouse Index Design for more information), though the additional speedup mainly benefits newer incidents. Consider these examples:

  • If 100 trigger events occur in the last 1 day, then only these trigger events are shown.

  • If 50 trigger events occur in each of the last 2 days, then only these trigger events over the last 2 days are shown.

  • If 1 trigger event occurs on each of the last 100 days, then 30 trigger events are shown.
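The selection rule above (events within a 30-day window, capped at the latest 100) can be sanity-checked with a quick shell sketch; the event ages below are synthetic data matching the third example.

```shell
# Model each trigger event by its age in days: one event per day for 100 days.
seq 1 100 > /tmp/trigger_ages.txt
# Selection rule: keep events within the last 30 days, newest first, cap at 100.
shown=$(awk '$1 <= 30' /tmp/trigger_ages.txt | head -100 | grep -c '')
echo "$shown"   # 30 events shown, matching the third example
```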

914009 (Minor, App Server): LDAP discovery with a sub-domain sends users to CMDB > Users > Ungrouped.

588863 (Minor, App Server): Windows Servers monitored via Agent, OMI/WMI, and Ping do not show all Methods in CMDB.

918590 (Minor, ClickHouse): Unable to add the Disaster Recovery Supervisor as a ClickHouse Keeper and Data Node.

921662 (Minor, Data Purger): Excessive logging by phDataPurger when it hits a Value Group lookup error fills up the /opt disk.

916318 (Enhancement, App Server): App Server now limits discovery threads to prevent multiple simultaneous large discoveries from consuming all resources and blocking GUI logins.

918654 (Enhancement, Parser): phParser end-of-line character recognition is now configurable for TLS syslog. The following phoenix_config.txt entry is added:

tcp_syslog_delimiter=0x0a  # or 0x00,0x0a

743793 (Enhancement, Parser): SASL_SSL (authentication plus encryption) is now supported for the Kafka producer and consumer. In this release, there is no GUI support for this; choose SASL_PLAINTEXT in the GUI and configure the following in phoenix_config.txt.

sasl_ssl_ca_cert=/etc/pki/kafka/ca-cert
sasl_ssl_cert_file=/etc/pki/kafka/client_client.pem
sasl_ssl_key_file=/etc/pki/kafka/client_client.key
sasl_ssl_password=
sasl_ssl_verify=true

See the Appendix > Configuration Notes > Editing phoenix_config.txt File for guidance on changing the file. Specifically, on the Collector, you need to make the same change in 2 places:

  • Change the /opt/config/phoenix_config.txt file on the Collector and restart the Collector.

  • Make the same change on /opt/phoenix/config/collector_config_template.txt. This ensures that new Collectors registering will get the new parameters and the changes are preserved across upgrades.
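The two-place edit above can be scripted. The sketch below appends the SASL_SSL settings to copies of both files under /tmp so it is safe to run as-is; point CFG and TPL at the real paths (/opt/config/phoenix_config.txt and /opt/phoenix/config/collector_config_template.txt) on the Collector when applying it for real, and restart the Collector afterwards.

```shell
# Append the SASL_SSL settings to both config files (demo copies in /tmp;
# replace with the real Collector paths when applying for real).
CFG=/tmp/phoenix_config.txt
TPL=/tmp/collector_config_template.txt
: > "$CFG"; : > "$TPL"
for f in "$CFG" "$TPL"; do
  cat >> "$f" <<'EOF'
sasl_ssl_ca_cert=/etc/pki/kafka/ca-cert
sasl_ssl_cert_file=/etc/pki/kafka/client_client.pem
sasl_ssl_key_file=/etc/pki/kafka/client_client.key
sasl_ssl_password=
sasl_ssl_verify=true
EOF
done
```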

Known Issues

ClickHouse Related

  1. If you are running the ClickHouse event database and want Active-Active Supervisor failover, then your Supervisor should not be the only ClickHouse Keeper node. Otherwise, once the Supervisor is down, the ClickHouse cluster will be down and inserts will fail. It is recommended that you run 3 ClickHouse Keeper nodes on Workers.

  2. If you are running ClickHouse, then during a Supervisor upgrade to FortiSIEM 6.7.0 or later, instead of shutting down Worker nodes, you need to stop the backend processes by running the following command from the command line.

    phtools --stop all

  3. If you are running Elasticsearch or FortiSIEM EventDB and switch to ClickHouse, then you need to follow two steps to complete the database switch.

    1. Set up the disks on each node in ADMIN > Setup > Storage and ADMIN > License > Nodes.

    2. Configure ClickHouse topology in ADMIN > Settings > Database > ClickHouse Config.

  4. In a ClickHouse environment, queries will not return results if, within any shard, no query node is reachable from the Supervisor and responsive. In other words, query results are returned only if at least 1 query node in every shard is healthy and responds to queries. To avoid this condition, make sure all Query Worker nodes are healthy.
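The Worker step in item 2 above can be scripted across a cluster. The sketch below is a dry run that only prints the commands; the Worker IPs are hypothetical placeholders.

```shell
# Dry run: print the command to stop backend processes on each Worker before
# upgrading the Supervisor (hypothetical Worker IPs -- substitute your own).
WORKERS="10.0.0.21 10.0.0.22"
for w in $WORKERS; do
  echo "ssh root@$w 'phtools --stop all'"
done > /tmp/stop_workers.txt
cat /tmp/stop_workers.txt
```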

Discovery Related

Test Connectivity & Discovery may get stuck at "Database update 0%" when several discoveries are running.

Elasticsearch Related

  1. In Elasticsearch based deployments, queries containing "IN Group X" are handled using Elastic Terms Query. By default, the maximum number of terms that can be used in a Terms Query is set to 65,536. If a Group contains more than 65,536 entries, the query will fail.

    The workaround is to change the “max_terms_count” setting for each event index. FortiSIEM has been tested up to 1 million entries. The query response time will be proportional to the size of the group.

    Case 1. For already existing indices, issue the REST API call to update the setting

    PUT fortisiem-event-*/_settings
    {
      "index" : {
        "max_terms_count" : "1000000"
      }
    }
    

    Case 2. For new indices that will be created in the future, update fortisiem-event-template so those new indices will have a higher max_terms_count setting.

    1. cd /opt/phoenix/config/elastic/7.7

    2. Add "index.max_terms_count": 1000000 (including quotations) to the “settings” section of the fortisiem-event-template.

      Example:

      ...

        "settings": {
          "index.max_terms_count": 1000000,
      

      ...

    3. Navigate to ADMIN > Storage > Online and perform Test and Deploy.

    4. Test new indices have the updated terms limit by executing the following simple REST API call.

      GET fortisiem-event-*/_settings

  2. FortiSIEM uses dynamic mapping for Keyword fields to save cluster state. Elasticsearch needs to encounter some events containing these fields before it can determine their type. For this reason, queries containing a group by on any of these fields will fail if Elasticsearch has not seen any event containing them. The workaround is to first run a non-group-by query with these fields to make sure that they have non-null values.
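The max_terms_count update and verification calls in item 1 above can be issued with curl. The sketch below writes the payload and prints the calls as a dry run; ES_URL is an assumed local endpoint (adjust it, and add authentication, to match your Elasticsearch cluster).

```shell
# Dry run: build the settings payload and print the curl calls for the
# max_terms_count workaround (ES_URL is an assumed local endpoint).
ES_URL="${ES_URL:-http://localhost:9200}"
cat > /tmp/max_terms.json <<'EOF'
{ "index": { "max_terms_count": "1000000" } }
EOF
echo "curl -X PUT '$ES_URL/fortisiem-event-*/_settings' -H 'Content-Type: application/json' -d @/tmp/max_terms.json"
echo "curl -X GET '$ES_URL/fortisiem-event-*/_settings'"
```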

EventDB Related

Currently, policy-based retention for EventDB does not cover two event categories: (a) System events with phCustId = 0, e.g., a FortiSIEM External Integration Error or a FortiSIEM process crash, and (b) Super/Global customer audit events with phCustId = 3, e.g., an audit log generated by a Super/Global user running an ad hoc query. These events are purged when disk usage reaches the high watermark.

HDFS Related

If you are running real-time Archive with HDFS, and have added Workers after the real-time Archive has been configured, then you will need to perform a Test and Deploy for HDFS Archive again from the GUI. This will enable HDFSMgr to know about the newly added Workers.

High Availability Related

If you make changes to the following files on any node in the FortiSIEM Cluster, then you will have to manually copy these changes to other nodes.

  1. FortiSIEM Config file (/opt/phoenix/config/phoenix_config.txt): If you make a Supervisor (respectively Worker, Collector) related change in this file, then the modified file should be copied to all Supervisors (respectively Workers, Collectors).

  2. FortiSIEM Identity and Location Configuration file (/opt/phoenix/config/identity_Def.xml): This file should be identical in Supervisors and Workers. If you make a change to this file on any Supervisor or Worker, then you need to copy this file to all other Supervisors and Workers.

  3. FortiSIEM Profile file (ProfileReports.xml): This file should be identical in Supervisors and Workers. If you make a change to this file on any Supervisor or Worker, then you need to copy this file to all other Supervisors and Workers.

  4. SSL Certificate (/etc/httpd/conf.d/ssl.conf): This file should be identical in Supervisors and Workers. If you make a change to this file on any Supervisor or Worker, then you need to copy this file to all other Supervisors and Workers.

  5. Java SSL Certificates (files cacerts.jks, keyfile and keystore.jks under /opt/glassfish/domains/domain1/config/): If you change these files on a Supervisor, then you have to copy these files to all Supervisors.

  6. Log pulling External Certificates: Copy all log pulling external certificates to each Supervisor.

  7. Event forwarding certificates defined in the FortiSIEM Config file (/opt/phoenix/config/phoenix_config.txt): If you change them on one node, you need to change them on all nodes.

  8. Custom cron job: If you change this file on a Supervisor, then you have to copy this file to all Supervisors.
