
What's New in 7.1.2

This release fixes an important security issue described in Fortinet PSIRT Advisory FG-IR-23-130 impacting Supervisor and Worker nodes.

Known Issues

Installation / Upgrade Related

FortiSIEM Supervisor and Worker nodes may fail to come up properly after a restart in certain situations. This impacts VM-based installations where the storage type is ClickHouse or EventDB on local disk. Hardware platforms are not affected by this issue. The root cause is that the scripts sanitized as part of the security fix fail to insert the disk UUID into the corresponding /etc/fstab entries.
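
To check whether a node is affected before rebooting, you can look for /etc/fstab entries whose UUID value is empty. This is a quick sketch; the exact mount points depend on your storage configuration.

    grep -n 'UUID= ' /etc/fstab    # affected entries have nothing between UUID= and the mount point

If this command prints any lines, apply the appropriate workaround below before rebooting the node.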

The failing scenarios are as follows:

Case 1: Perform a fresh install of 7.1.2, save ClickHouse or EventDB local storage, and then try to reboot.

Case 2: Upgrade to 7.1.2, save ClickHouse or EventDB local storage, and then try to reboot.


Workarounds are as follows:

Case 1: If you have not yet rebooted the nodes, log in via SSH as root and make the following changes:

ClickHouse:

The empty UUIDs in the /etc/fstab entries must be replaced with the actual values from the blkid command output for the corresponding disks.

  1. Use the following command to print the UUID of the disk.

    blkid <devicename> | awk '-F"' '{ print $2; }'

    For example:

    blkid /dev/sde | awk '-F"' '{ print $2; }'

    cc111789-ead1-485c-9880-138db408b7c1

    blkid /dev/sdf | awk '-F"' '{ print $2; }'

    c877e6a7-b16c-4fd2-988e-5450b963d7c7

  2. Edit the /etc/fstab file and replace the empty UUIDs with the values from Step 1, for example:

    UUID=cc111789-ead1-485c-9880-138db408b7c1 /data-clickhouse-hot-1 xfs defaults,nodev,noatime,inode64 0 0

    UUID=c877e6a7-b16c-4fd2-988e-5450b963d7c7 /data-clickhouse-warm-1 xfs defaults,nodev,noatime,inode64 0 0

  3. Once the above changes are made, run the following command:

    systemctl daemon-reload

    Now, it is safe to reboot the system.
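
    Optionally, before rebooting, you can double-check the result. This is a sketch; /dev/sde and /dev/sdf are the hypothetical hot and warm devices from the example above.

    # Print the UUIDs directly (equivalent to the blkid | awk commands above)
    blkid -s UUID -o value /dev/sde
    blkid -s UUID -o value /dev/sdf

    # Verify that /etc/fstab parses cleanly and the mount targets are resolvable
    findmnt --verify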


EventDB on Local Disk:

The empty UUID in the /etc/fstab entry must be replaced with the actual value from the blkid command output for the corresponding disk.

  1. Use the following command to print the UUID of the disk.

    blkid <devicename> | awk '-F"' '{ print $2; }'

  2. Edit the /etc/fstab file and insert the UUID from Step 1 into the empty entry, which looks like the following:

    UUID= /data xfs defaults,nodev,noatime,inode64 0 0

  3. Once the above changes are made, run the following command:

    systemctl daemon-reload

    Now, it is safe to reboot the system.
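
    The same optional pre-reboot check applies here (a sketch; <devicename> is whatever disk backs /data):

    blkid -s UUID -o value <devicename>   # the value to place after UUID= in the /data entry
    findmnt --verify                      # confirm /etc/fstab parses cleanly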


Case 2: If you have already rebooted and the system appears to hang, perform the following steps while the system is hung:

  1. Send Ctrl+Alt+Delete or reset the VM from the vCenter or hypervisor console.

  2. On the VM console, when you reach the list of boot options (Rocky Linux ..., Rescue ...), the default is the first entry.

  3. Before the system boots automatically into the first entry, press the 'e' key to edit the GRUB options. This opens a simple GRUB options editor.

  4. Scroll down to the line that starts with "linux" and go to the end of the line with Ctrl+E.

  5. If you are running on a hypervisor platform such as VMware vCenter, remove the options "quiet" and "console=ttyS0". On AWS, Azure, or GCP, if you are using the serial console, remove only the "quiet" option, since output is already correctly directed to the serial console ttyS0.

  6. Press Ctrl+X to boot with the modified options.

  7. The system now boots in verbose mode and hangs for a few minutes while it attempts to mount the ClickHouse hot and warm disks, or the EventDB local disk, as the case may be. Once the mount fails, you are brought to the prompt "Give root password for maintenance".

  8. Enter the root password. You are now at a root prompt.

  9. Use the following command to print the UUID of each disk. You need to know which disks you provided as the hot and warm disks (or the EventDB local disk).

    blkid <devicename> | awk '-F"' '{ print $2; }'

    For example:

    blkid /dev/sde | awk '-F"' '{ print $2; }'

    cc111789-ead1-485c-9880-138db408b7c1

    blkid /dev/sdf | awk '-F"' '{ print $2; }'

    c877e6a7-b16c-4fd2-988e-5450b963d7c7

  10. Edit /etc/fstab and insert the appropriate UUID into the hot disk entry, the warm disk entry (if present), or the local EventDB entry.

    For example:

    UUID=cc111789-ead1-485c-9880-138db408b7c1 /data-clickhouse-hot-1 xfs defaults,nodev,noatime,inode64 0 0

    UUID=c877e6a7-b16c-4fd2-988e-5450b963d7c7 /data-clickhouse-warm-1 xfs defaults,nodev,noatime,inode64 0 0

  11. Run the following command:

    systemctl daemon-reload

  12. Reboot.
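
After the node comes back up, you can confirm that the event storage disks mounted as expected. This is a sketch; the mount points shown are the defaults used in the examples above.

    df -h | grep /data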

ClickHouse Related

After upgrading to 7.1.1 or later in ClickHouse-based deployments, searches on IP fields (namely Reporting IP, Source IP, Destination IP, and Host IP) do not return correct results for events stored prior to the upgrade. The events are still stored in ClickHouse, but searches over pre-upgrade events return no results, while searches over events stored after the upgrade work correctly. All other searches work correctly.

This issue is related to a recent change in ClickHouse version 23.3 in how IPv6 fields are represented. See the following URLs for more information.

Workaround

The workaround requires recreating the old indices involving Reporting IP, Source IP, Destination IP, and Host IP that were created before the 7.1.1 upgrade. In Fortinet's testing, no event loss or FortiSIEM service interruption has been observed during this process.

  1. Switch to a root shell by running the following command:

    sudo -s

  2. Change directory to /tmp by running the following command:

    cd /tmp

  3. Run the following command:

    clickhouse-client

  4. Ensure that /data-clickhouse-hot-1 has at least 10% free disk space. This space is required during index re-creation (Step 5 below). If free disk space is less than 10%, run the SQL command in 4a. to get the list of the oldest ClickHouse partitions residing on the /data-clickhouse-hot-1 disk, and either move them to another disk or tier, or delete them, until /data-clickhouse-hot-1 has at least 10% free disk space. These commands need to be run on ALL data nodes in every shard. The first command (4a.) lists these partitions on the /data-clickhouse-hot-1 disk along with their sizes. The remaining commands let you move the data to another tier (4b.), move it to another disk (4c.), or delete it (4d.). A quick disk-space check is sketched after the examples below.

    a. Identify the largest ClickHouse partitions on the Hot node:

      SELECT disk_name, partition, extract(partition, '\(\d+,(\d+)\)') as date, formatReadableSize(sum(bytes_on_disk)), formatReadableSize(sum(data_uncompressed_bytes)) FROM system.parts WHERE (table = 'events_replicated') AND path LIKE '%hot-1%' AND active GROUP BY disk_name, partition ORDER BY disk_name ASC, date ASC limit 10

    b. Move the data to another tier:

      ALTER TABLE fsiem.events_replicated MOVE PARTITION <partition expression from (a)> TO VOLUME <next tier>

    c. Move the data to another disk:

      ALTER TABLE fsiem.events_replicated MOVE PARTITION <partition expression from (a)> TO DISK <another disk>

    d. Delete the data:

      ALTER TABLE fsiem.events_replicated DROP PARTITION <partition expression from (a)>

      Example:

      The output from the command in 4a. lists the matching partitions with their sizes; in this example, the first partition has size 3.98 GiB.

      To move the first partition (size 3.98 GiB) to the Warm tier, issue the command from 4b. as follows:

      ALTER TABLE fsiem.events_replicated MOVE PARTITION (18250, 20240115) TO VOLUME 'warm'

      To move the first partition (size 3.98 GiB) to another disk in the Hot tier, issue the command from 4c. as follows:

      ALTER TABLE fsiem.events_replicated MOVE PARTITION (18250, 20240116) TO DISK 'data_clickhouse_hot_2'

      To delete the first partition (size 3.98 GiB), issue the command from 4d. as follows:

      ALTER TABLE fsiem.events_replicated DROP PARTITION (18250, 20240116)
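
      A quick way to check current free space is sketched below. It assumes the default hot tier mount point used above; adjust the path if yours differs. The second command queries the ClickHouse system.disks table for per-disk free and total space.

      df -h /data-clickhouse-hot-1
      clickhouse-client --query "SELECT name, formatReadableSize(free_space) AS free, formatReadableSize(total_space) AS total FROM system.disks"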

  5. Run the following commands sequentially. They drop, re-add, and materialize all affected indices (Reporting IP, Source IP, Destination IP, and Host IP) within ClickHouse. These commands need to be run on only one data node per shard. Note that the first command (drop) for each index may take some time to complete. Wait until each command completes before issuing the next one.

    alter table fsiem.events_replicated drop index index_reptDevIpAddr_bloom_filter
    alter table fsiem.events_replicated add INDEX index_reptDevIpAddr_bloom_filter reptDevIpAddr TYPE bloom_filter GRANULARITY 5 AFTER index_customer_set
    alter table fsiem.events_replicated materialize index index_reptDevIpAddr_bloom_filter
    
    alter table fsiem.events_replicated drop index index_srcIpAddr_bloom_filter
    alter table fsiem.events_replicated add INDEX index_srcIpAddr_bloom_filter metrics_ip.value[indexOf(metrics_ip.name, 'srcIpAddr')] TYPE bloom_filter GRANULARITY 5 AFTER collectorId_set
    alter table fsiem.events_replicated materialize index index_srcIpAddr_bloom_filter
    
    alter table fsiem.events_replicated drop index index_destIpAddr_bloom_filter
    alter table fsiem.events_replicated add INDEX index_destIpAddr_bloom_filter metrics_ip.value[indexOf(metrics_ip.name, 'destIpAddr')] TYPE bloom_filter GRANULARITY 5 AFTER index_srcIpAddr_bloom_filter
    alter table fsiem.events_replicated materialize index index_destIpAddr_bloom_filter
    
    alter table fsiem.events_replicated drop index index_hostIpAddr_bloom_filter
    alter table fsiem.events_replicated add INDEX index_hostIpAddr_bloom_filter metrics_ip.value[indexOf(metrics_ip.name, 'hostIpAddr')] TYPE bloom_filter GRANULARITY 5 AFTER index_user_bloom_filter
    alter table fsiem.events_replicated materialize index index_hostIpAddr_bloom_filter
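
    To confirm that the rebuilds have finished on a node, you can watch for outstanding mutations on the events table; in ClickHouse, materialize index runs asynchronously as a mutation. This is a sketch using the table name from the commands above.

    clickhouse-client --query "SELECT command, is_done FROM system.mutations WHERE table = 'events_replicated' AND NOT is_done"

    When this query returns no rows, all index operations on that node have completed.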
    
