What's New in 7.1.3
This release fixes the following issues:
- An important security issue described in Fortinet PSIRT Advisory FG-IR-23-130 impacting Supervisor and Worker nodes.
- A known issue in FortiSIEM 7.1.2 where Supervisor and Worker nodes may fail to restart properly on VMs running ClickHouse or EventDB with local disk. For more detailed information, see 7.1.2 Installation / Upgrade Related Known Issue.
Known Issue
After upgrading to 7.1.1 or later, in ClickHouse-based deployments, searching on IP fields (namely, Reporting IP, Source IP, Destination IP, and Host IP) does not show correct results for events stored prior to the upgrade. The events are still stored in ClickHouse, but searches on events stored before the upgrade return no results, while searches on events stored after the upgrade work correctly. All other searches work correctly.
This issue is related to a recent change in ClickHouse version 23.3 in how IPv6 fields are represented. See the following URLs for more information.
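To confirm which ClickHouse version a deployment is running, a minimal sketch from clickhouse-client is shown below. This is a standard ClickHouse query, not a FortiSIEM-specific command.

-- Print the running ClickHouse server version; 23.3 and later use the new IPv6 representation.
SELECT version()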
Workaround
The workaround requires recreating the old indices involving Reporting IP, Source IP, Destination IP, and Host IP that were created before the 7.1.1 upgrade. In Fortinet's testing, no event loss or FortiSIEM service interruption was observed during this process.
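Before starting, it can help to confirm that the four affected indices exist with the expected names. The query below is an illustrative sketch against the standard ClickHouse system.data_skipping_indices table; the index names match those used in Step 5 of the workaround.

-- List the data-skipping indices on the events table; the four recreated by this
-- workaround are index_reptDevIpAddr_bloom_filter, index_srcIpAddr_bloom_filter,
-- index_destIpAddr_bloom_filter, and index_hostIpAddr_bloom_filter.
SELECT name, type, expr, granularity FROM system.data_skipping_indices WHERE database = 'fsiem' AND table = 'events_replicated'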
1. Go to the root shell by running the following command:
sudo -s

2. Change directory to /tmp by running the following command:
cd /tmp

3. Run the following command:
clickhouse-client

4. Ensure that /data-clickhouse-hot-1 has at least 10% free disk space. This space is required during index re-creation (Step 5 below). If free disk space is less than 10%, run the following SQL command (4a) to get the list of the oldest ClickHouse partitions residing on the /data-clickhouse-hot-1 disk, and either move them to another disk or tier, or delete them, until /data-clickhouse-hot-1 has at least 10% free disk space. These commands need to be run on ALL data nodes in every shard. The first command (4a) identifies the largest partitions on the /data-clickhouse-hot-1 disk. The remaining commands enable you to move the data to another tier (4b), move it to another disk (4c), or delete it (4d). A sketch for checking current free disk space follows the example below.

a. Identify the largest ClickHouse partitions on the Hot node:
SELECT disk_name, partition, extract(partition, '\(\d+,(\d+)\)') as date, formatReadableSize(sum(bytes_on_disk)), formatReadableSize(sum(data_uncompressed_bytes)) FROM system.parts WHERE (table = 'events_replicated') AND path LIKE '%hot-1%' AND active GROUP BY disk_name, partition ORDER BY disk_name ASC, date ASC LIMIT 10

b. Move the data to another tier:
ALTER TABLE fsiem.events_replicated MOVE PARTITION <partition expression from (a)> TO VOLUME <next tier>

c. Move the data to another disk:
ALTER TABLE fsiem.events_replicated MOVE PARTITION <partition expression from (a)> TO DISK <another disk>

d. Delete the data:
ALTER TABLE fsiem.events_replicated DROP PARTITION <partition expression from (a)>
Example:
Output from command in 4a.:
To move the first partition (size 3.98 GiB) to the Warm tier, issue the following command as shown in 4b:
ALTER TABLE fsiem.events_replicated MOVE PARTITION (18250, 20240115) TO VOLUME 'warm'

To move the first partition (size 3.98 GiB) to another disk in the Hot tier, issue the following command as shown in 4c:
ALTER TABLE fsiem.events_replicated MOVE PARTITION (18250, 20240116) TO DISK 'data_clickhouse_hot_2'

To delete the first partition (size 3.98 GiB), issue the following command as shown in 4d:
ALTER TABLE fsiem.events_replicated DROP PARTITION (18250, 20240116)
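To check how much free space each disk currently has for the 10% threshold in Step 4, the following query can be run from clickhouse-client. It is a minimal sketch using the standard ClickHouse system.disks table; the disk names and paths it returns depend on your deployment's storage configuration.

-- Show free vs. total space for every disk ClickHouse knows about; verify that the
-- disk backing /data-clickhouse-hot-1 reports at least 10% free before Step 5.
SELECT name, path, formatReadableSize(free_space) AS free, formatReadableSize(total_space) AS total, round(100.0 * free_space / total_space, 1) AS pct_free FROM system.disks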
5. Run the following commands sequentially. This drops, re-adds, and materializes all affected indices (Reporting IP, Source IP, Destination IP, and Host IP) within ClickHouse. These commands need to be run on only one data node per shard. Note that the first command (drop) for each index may take some time to complete. Wait until each command completes before issuing the next one; a sketch for confirming that the background work has finished follows the commands.

alter table fsiem.events_replicated drop index index_reptDevIpAddr_bloom_filter
alter table fsiem.events_replicated add INDEX index_reptDevIpAddr_bloom_filter reptDevIpAddr TYPE bloom_filter GRANULARITY 5 AFTER index_customer_set
alter table fsiem.events_replicated materialize index index_reptDevIpAddr_bloom_filter
alter table fsiem.events_replicated drop index index_srcIpAddr_bloom_filter
alter table fsiem.events_replicated add INDEX index_srcIpAddr_bloom_filter metrics_ip.value[indexOf(metrics_ip.name, 'srcIpAddr')] TYPE bloom_filter GRANULARITY 5 AFTER collectorId_set
alter table fsiem.events_replicated materialize index index_srcIpAddr_bloom_filter
alter table fsiem.events_replicated drop index index_destIpAddr_bloom_filter
alter table fsiem.events_replicated add INDEX index_destIpAddr_bloom_filter metrics_ip.value[indexOf(metrics_ip.name, 'destIpAddr')] TYPE bloom_filter GRANULARITY 5 AFTER index_srcIpAddr_bloom_filter
alter table fsiem.events_replicated materialize index index_destIpAddr_bloom_filter
alter table fsiem.events_replicated drop index index_hostIpAddr_bloom_filter
alter table fsiem.events_replicated add INDEX index_hostIpAddr_bloom_filter metrics_ip.value[indexOf(metrics_ip.name, 'hostIpAddr')] TYPE bloom_filter GRANULARITY 5 AFTER index_user_bloom_filter
alter table fsiem.events_replicated materialize index index_hostIpAddr_bloom_filter