What's New in 6.6.1
This document describes the additions for the FortiSIEM 6.6.1 release.
Rocky Linux 8.6 OS Updates
This release contains OS updates published up to September 1, 2022. See the list below for the patches released by Red Hat and picked up by Rocky Linux.
Bug Fixes and Minor Enhancements
| Bug ID | Severity | Module | Description |
|---|---|---|---|
| 835978 | Major | App Server | After the 6.5.0 upgrade, custom rules defined for specific Orgs need to be re-enabled before they will trigger. |
| 831456 | Major | App Server | When there is a very large number of Malware IOCs (~2 million), the upgrade may fail because Java runs out of memory. App Server restart may also fail for the same reason. |
| 824607 | Major | App Server | Incidents may not show after the 6.5.0 upgrade when there are Low severity Incidents. |
| 838600 | Minor | App Server | A device name change does not take effect on Collectors other than the one that discovers and monitors the device. |
| 830438 | Minor | App Server | Incidents may trigger from the 'System Collector Event Delayed' rule even though events are being received. |
| 825752 | Minor | App Server | Malware Domain update does not work with AlienVault. |
| 821197 | Minor | App Server | The retention policy table still contains references to an Organization after that Organization is deleted. |
| 825764 | Minor | App Server, Query | For a large event archive database on NFS, a query on one Org may time out because all Org directories are scanned. |
| 835339 | Minor | App Server, Rule Engine | Security Incidents triggered by custom rules may be cleared by the system. |
| 729023 | Minor | App Server, ClickHouse | An SQLite header and source version mismatch causes upgrade failure. |
| 837950 | Minor | ClickHouse | If the Supervisor IP changes after ClickHouse has been configured, the IP update to ClickHouse does not occur. |
| 821110 | Minor | Event Pulling Agents | CrowdStrike Falcon Data Replicator is unable to ingest logs because it unzips an incomplete package. |
| 829644 | Minor | GUI | The Admin > Health > Collector Health page hangs when sorting by organization. |
| 826450 | Minor | GUI | Unable to validate or save a cloned system parser that contains the '&' character. |
| 825383 | Minor | GUI | Unable to export configurations of a FortiGate device from the CMDB. |
| 825068 | Minor | GUI | In HTTP(S) notification, the protocol https is incorrectly parsed as https:, which causes the request to default to http. |
| 827264 | Minor | Query Engine | A query using the IN operator does not return proper results when the name contains '-'. |
| 833618 | Minor | System | A missing dos2unix package causes configuration discovery to fail on some devices (H3C). |
| 833411 | Minor | System | On hardware appliances, the "execute shutdown" command may sometimes fail when run repeatedly. |
| 823098 | Enhancement | Data | Checkpoint devices are discovered as Linux because Checkpoint sysObjectIDs are not built in. The workaround is to define them from the GUI. |
Known Issues
- Currently, policy-based retention for EventDB does not cover two event categories: (a) System events with phCustId = 0 (e.g., a FortiSIEM External Integration Error or a FortiSIEM process crash), and (b) Super/Global customer audit events with phCustId = 3 (e.g., the audit log generated when a Super/Global user runs an ad hoc query). These events are purged when disk usage reaches the high watermark.
- When retention policies are defined, a memory corruption issue in the Parser module can cause the Parser module to crash or consume high memory. This may not always happen, but it can result in event parsing being delayed, events being missed, or the Supervisor GUI being slow. From the Cloud Health page, you can see whether the phParser process is down or has high CPU on any node. This issue is resolved in release 6.6.2.
- This applies only if you are upgrading from 6.5.0 and using ClickHouse. FortiSIEM 6.5.0 ran ClickHouse on a single node and used the MergeTree engine. From FortiSIEM 6.6.0 onwards, ClickHouse runs the ReplicatedMergeTree engine, even if Replication is not turned on. After upgrading to FortiSIEM 6.6.0, you will need to take the following steps to migrate the event data previously stored in MergeTree to ReplicatedMergeTree. Without these steps, old 6.5.0 events will not be searchable in 6.6.0. Once you are on a post-6.5.0 release, you will not need to repeat this procedure.
To upgrade your FortiSIEM from 6.5.0 to 6.6.0 or later, take the following steps.
  - Navigate to ADMIN > Settings > Database > ClickHouse Config.
  - Click Test, then click Deploy to enable the ClickHouse Keeper service, which is new in 6.6.0.
  - Migrate the 6.5.0 event data to 6.6.0 by running the script /opt/phoenix/phscripts/clickhouse/clickhouse-migrate-650.sh.
- This applies only if you are upgrading from 6.5.0 and using ClickHouse. If you go to Storage > Online Settings and click Test, it will fail. Fortinet introduced a new disk attribute called "Mounted On" to facilitate disk addition/deletion; it was not present in 6.5.0. Follow these steps to fix the problem.
  - Go to ADMIN > Setup > Storage > Online. ClickHouse should be the selected database.
  - For the Hot tier, and for every configured disk within the tier, do the following:
    - The existing disk should have an empty Mounted On field.
    - Click + to add a disk. For the new disk, leave Disk Path empty and set Mounted On to /data-clickhouse-hot-1.
    - Copy the Disk Path from the existing disk into the newly added disk. The new disk should now have both the Disk Path and Mounted On fields set.
    - Delete the first disk, the one with the empty Mounted On field.
    Do this for all disks you configured in 6.5.0. After your changes, the disks should be ordered /data-clickhouse-hot-1, /data-clickhouse-hot-2, /data-clickhouse-hot-3 from top to bottom.
  - Repeat the same steps for the Warm tier (if one was configured in 6.5.0), except that the Mounted On fields should be /data-clickhouse-warm-1, /data-clickhouse-warm-2, /data-clickhouse-warm-3 from top to bottom.
  - When done, click Test, then click Deploy.
- In Elasticsearch-based deployments, queries containing "IN Group X" are handled using an Elasticsearch Terms Query. By default, the maximum number of terms that can be used in a Terms Query is 65,536. If a Group contains more than 65,536 entries, the query will fail. The workaround is to change the "max_terms_count" setting for each event index. Fortinet has tested up to 1 million entries. The query response time is proportional to the size of the group.
  Case 1: For existing indices, issue the following REST API call to update the setting:
    PUT fortisiem-event-*/_settings { "index" : { "max_terms_count" : "1000000" } }
  Case 2: For new indices that will be created in the future, update fortisiem-event-template so those new indices get the higher max_terms_count setting:
  - cd /opt/phoenix/config/elastic/7.7
  - Add "index.max_terms_count": 1000000 (including the quotation marks) to the "settings" section of the fortisiem-event-template. Example:
    ...
    "settings": { "index.max_terms_count": 1000000,
    ...
  - Navigate to ADMIN > Storage > Online and perform Test and Deploy.
  - Verify that new indices have the updated terms limit by executing the following REST API call:
    GET fortisiem-event-*/_settings
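The Case 1 settings call above can also be built programmatically before sending it with curl or any HTTP client. The following is a minimal sketch using only the Python standard library; the endpoint URL is an assumption (adjust it to your Elasticsearch cluster), and the sketch only constructs and prints the request rather than sending it.

```python
import json

# Placeholder endpoint -- an assumption, adjust to your Elasticsearch cluster.
ES_URL = "http://localhost:9200"
INDEX_PATTERN = "fortisiem-event-*"

def max_terms_update(limit=1_000_000):
    """Build the PUT request that raises max_terms_count on existing indices."""
    path = f"{ES_URL}/{INDEX_PATTERN}/_settings"
    # Elasticsearch accepts the value as a string, matching the REST call above.
    body = json.dumps({"index": {"max_terms_count": str(limit)}})
    return "PUT", path, body

method, path, body = max_terms_update()
print(method, path)
print(body)
```

The printed method, path, and body map directly onto the `PUT fortisiem-event-*/_settings` call shown above.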
- If you set up Disaster Recovery (DR) on FortiSIEM 6.5.0, then upgrading the Secondary to 6.6.0 will fail. If you set up DR on older versions, this issue does not occur. The following workarounds are available, depending on your situation.
  If you have not yet started the upgrade to 6.6.0 on DR, take the following steps.
  Instructions before Upgrading Secondary in DR Configuration
  - Step 1: Back up Primary Glassfish Key into Secondary
  - Step 2: Insert into Secondary before Upgrade
    - Log into the Secondary Supervisor as root.
    - Run phsecondary2primary. Note: This disables disaster recovery for the time being.
    - Run the following command to modify the admin-keyfile file:
      vi /opt/glassfish/domains/domain1/config/admin-keyfile
    - Paste the output from Step 1 v. into the admin-keyfile file, replacing the current entry, and save.
  - Step 3: Verify that the Key has Taken Effect on Secondary Supervisor
    - Run the following commands:
      su - admin
      /opt/glassfish/bin/asadmin stop-domain
      /opt/glassfish/bin/asadmin start-domain
      /opt/glassfish/bin/asadmin login
    - For the user name, enter admin. For the password, enter the password from Step 1 iii.
      Example of a successful login:
      /opt/glassfish/bin/asadmin login
      Enter admin user name [Enter to accept default]> admin
      Enter admin password>
      Login information relevant to admin user name [admin] for host [localhost] and admin port [4848] stored at [/opt/phoenix/bin/.gfclient/pass] successfully. Make sure that this file remains protected. Information stored in this file will be used by administration commands to manage associated domain.
      Command login executed successfully.
    Note: Step 3 must succeed before you proceed to Step 4.
  - Step 4: Upgrade to 6.6.0 on Secondary
    - Follow the Upgrade Guide.
  - Step 5: Add Secondary Back into the System
    - Log into the Primary Supervisor's GUI.
    - Navigate to ADMIN > License > Nodes.
    - Select the Secondary entry in the GUI and click Edit.
    - Click Save to re-establish the connection to the Secondary.
Instructions if you are Already in a Failed State in the Secondary
  - Step 1: Grab the Supervisor's DB Password
  - Step 2: Update the Glassfish Password on Secondary to Continue with the Upgrade
    - Log into the Secondary Supervisor as root.
    - Run the following commands to modify the admin-keyfile file:
      cd /opt/glassfish/domains/domain1/config/
      cp -a admin-keyfile admin-keyfile.bad
      vi /opt/glassfish/domains/domain1/config/admin-keyfile
    - Paste the output from Step 1 v. into the admin-keyfile, replacing the current entry, and save.
  - Step 3: Deploy appserver Manually after the Admin Passkey has Changed
    - Run the following command:
      vi /opt/phoenix/deployment/deploy-fresh.sh
    - Replace the password with the password from Step 1 iii: find the following line and make the replacement there.
      echo "AS_ADMIN_PASSWORD="$dbpasswd > $DEPLOYMENR_HOME/glassfish-pwd.txt
      Example:
      echo 'AS_ADMIN_PASSWORD=ad1$dnsk%' > $DEPLOYMENR_HOME/glassfish-pwd.txt
      Note the use of the single quote character (') in the replacement instead of the double quote character (").
    - Save the file deploy-fresh.sh.
    - Run the following commands:
      su - admin
      /opt/phoenix/deployment/deploy-fresh.sh /opt/phoenix/deployment/phoenix.ear
  - Step 4: Modify the Ansible Playbook to Finish the Upgrade
    - Run the following command:
      vi /usr/local/upgrade/post-upgrade.yml
    - Remove only the following entries:
      - configure
      - update_configs
      - fortiinsight-integration
      - setup-python
      - setup-node
      - setup-clickhouse
      - setup-zookeeper
      - setup-redis
      - migrate-database
      - appserver
    - Save the modification.
    - Resume the upgrade by running the following command:
      ansible-playbook /usr/local/upgrade/post-upgrade.yml | tee -a /usr/local/upgrade/logs/ansible_upgrade_continued.log
    - Reboot the Supervisor if the system does not reboot itself.
- FortiSIEM uses dynamic mapping for Keyword fields to save cluster state. Elasticsearch needs to encounter some events containing these fields before it can determine their type. For this reason, queries containing a group by on any of these fields will fail if Elasticsearch has not yet seen any event containing them. The workaround is to first run a non-group-by query with these fields to make sure they have non-null values.
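The workaround order can be illustrated with two raw Elasticsearch query bodies. This is a hedged sketch: the field name srcAction is a hypothetical keyword attribute (not taken from these release notes), and the bodies are only built and printed here, not sent to a cluster.

```python
import json

FIELD = "srcAction"  # hypothetical keyword field -- substitute your own

# Step 1: a plain, non-group-by query. If this returns hits, Elasticsearch has
# indexed events carrying the field, so a dynamic mapping for it now exists.
probe = {"size": 1, "query": {"exists": {"field": FIELD}}}

# Step 2: only after step 1 returns results, run the group-by, which maps to a
# terms aggregation and fails when the field has never been mapped.
group_by = {
    "size": 0,
    "aggs": {"by_field": {"terms": {"field": FIELD, "size": 100}}},
}

print(json.dumps(probe))
print(json.dumps(group_by))
```

Running the probe first mirrors the workaround above: the non-group-by query both confirms non-null values and ensures the field's mapping exists before the aggregation runs.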