What's New in 6.1.2
This document describes new and enhanced features for the FortiSIEM 6.1.2 (build 0119) release.
This is a hardware-appliance-only release supporting the FortiSIEM 3500F, 2000F, and 500F appliances. It has the same code content as the software-only 6.1.1 release (build 0118).
- FortiSIEM 3500F Hardware Appliance Support
- FortiSIEM 2000F Hardware Appliance Support
- FortiSIEM 500F Hardware Appliance Support
- Upgrade Overview
- Known Issues
FortiSIEM 3500F Hardware Appliance Support
FortiSIEM 6.1.2 can be installed fresh on the 3500F hardware appliance. You can also migrate from 5.x or earlier releases.
For fresh installation and migration procedures, see the FortiSIEM 3500F Hardware Configuration Guide.
For an overview of the migration and upgrade process, see Upgrade Overview below.
FortiSIEM 2000F Hardware Appliance Support
FortiSIEM 6.1.2 can be installed fresh on the 2000F hardware appliance. You can also migrate from 5.x releases or upgrade from the 6.1.0 and 6.1.1 releases.
For fresh installation and migration procedures, see the FortiSIEM 2000F Hardware Configuration Guide.
For an overview of the migration and upgrade process, see Upgrade Overview below.
FortiSIEM 500F Hardware Appliance Support
FortiSIEM 6.1.2 Collector can be installed fresh on the 500F appliance. You can also migrate from any previous release.
For fresh installation and migration procedures, see the FortiSIEM 500F Collector Configuration Guide.
For an overview of the migration and upgrade process, see Upgrade Overview.
Upgrade Overview
The following sections provide an overview of how to upgrade to release 6.1.2:
- Migrate from pre-5.3.0 to 6.1.2
- Migrate from 5.3.x or 5.4.x to 6.1.2
- Upgrade from 6.1.0 or 6.1.1 to 6.1.2
- Upgrade via Proxy
- Post Migration Health Check
FortiSIEM 2000F and 3500F appliances mostly run as an all-in-one Supervisor. However, they can also be deployed as a cluster with external storage. The following instructions cover the general case. If you are not running Workers, skip the Worker, NFS, and Elasticsearch-related portions.
Migrate from pre-5.3.0 to 6.1.2
To migrate hardware from a pre-5.3.0 to the 6.1.2 release, follow these steps:
- Standalone – Complete steps 1b and 1c.
- Standalone with Collectors - Complete steps 1b, 1c, 2a, 3, and 5.
- General setup with Workers and Collectors – Complete all steps.
- Upgrade the Supervisor to 5.4.0:
- Delete Workers from Supervisor.
- Upgrade Supervisor to 5.4.0: follow the instructions here.
- Perform health check: log on to the Supervisor and make sure that it is displaying the correct version and all processes are up.
- Migrate to 6.1.2:
- Migrate the Supervisor from 5.4.0 to 6.1.2. Migration is platform specific.
- If you are using Elasticsearch, then go to ADMIN > Setup > Storage > Elasticsearch and click Test and Save.
- Install new 6.1.2 Workers and add them back to the Supervisor.
- Go to ADMIN > Settings > Event Worker and Query Worker and make sure that they are correct.
- Perform health checks. Old Collectors and Agents should work with 6.1.2 Supervisor and Workers.
- When you are ready to upgrade Collectors to 6.1.2, do the following (for migrating 500F Collectors, see FortiSIEM 500F Collector Configuration Guide; for migrating hypervisor-based collectors, see the respective platform upgrade documents):
- Copy the HTTP (hashed) passwords file from the old Collectors to the new Collectors.
- Re-register with the update option and the same IP.
- Perform health checks. See Post Migration Health Check.
- Reinstall the Agents with the latest version when you are ready to upgrade them.
- Perform health checks: make sure Agent events are being received.
Migrate from 5.3.x or 5.4.x to 6.1.2
To migrate hardware from 5.3.x or 5.4.x to the 6.1.2 release, follow these steps:
- Standalone – Complete step 2a.
- Standalone with Collectors - Complete steps 2a, 3, and 5.
- General setup with Workers and Collectors – Complete all steps.
- Delete Workers from the Supervisor.
- Migrate the Supervisor to 6.1.2:
- Migration is platform specific.
- If you are using Elasticsearch, then go to ADMIN > Setup > Storage > Elasticsearch and click Test and Save.
- Install new 6.1.2 Workers and add them back to the Supervisor.
- Go to ADMIN > Settings > Event Worker and Query Worker and make sure that they are correct.
- Perform health checks. Old Collectors and Agents should work with 6.1.2 Supervisor and Workers.
- When you are ready to upgrade Collectors to 6.1.2, do the following (for migrating 500F Collectors, see FortiSIEM 500F Collector Configuration Guide; for migrating hypervisor-based collectors, see the respective platform upgrade documents):
- Copy the HTTP (hashed) passwords file from the old Collectors to the new Collector.
- Re-register with the update option and the same IP.
- Perform health checks. See Post Migration Health Check.
- Reinstall the Agents with the latest version when you are ready to upgrade them.
- Perform health checks: make sure Agent events are being received.
Upgrade from 6.1.0 or 6.1.1 to 6.1.2
To upgrade hardware from 6.1.0 or 6.1.1 to the 6.1.2 release, follow these steps:
- Standalone – Complete step 2.i (Standalone).
- Standalone with Collectors - Complete steps 2 (EventDB on NFS) or 2 (Elasticsearch), 3, and 5.
- General setup with Workers and Collectors – Complete all steps.
- Copy the upgrade.py script to the Supervisor. For instructions, see above.
- Upgrade the Supervisor to 6.1.2:
- Upgrade Workers to 6.1.2:
- EventDB on NFS case:
- Upgrade 6.1.0 or 6.1.1 Workers to 6.1.2.
- Elasticsearch case:
- Install new 6.1.2 Workers and add them back to the Supervisor.
- Go to ADMIN > Settings > Event Worker and Query Worker and make sure that they are correct.
- EventDB on NFS case:
- Perform health checks: old Collectors should work with the 6.1.2 Supervisor and Workers.
- When you are ready to upgrade Collectors to 6.1.2:
- Pre-6.1.0 Collectors (for migrating 500F Collectors, see FortiSIEM 500F Collector Configuration Guide; for migrating hypervisor-based collectors, see the respective platform upgrade documents):
- Copy the HTTP (hashed) passwords file from old Collectors to the new Collectors.
- Re-register with update option and the same IP.
- 6.1.0 or 6.1.1 Collectors:
- Upgrade from the GUI.
- Perform health checks. See Post Migration Health Check.
- Reinstall the Agents when you are ready to upgrade them.
- Perform health checks: make sure Agent events are being received.
Upgrade via Proxy
During the upgrade, Supervisor/Worker nodes and the FSM-2000F and FSM-3500F hardware appliances must be able to communicate with the CentOS OS repositories hosted by Fortinet (os-pkgs-cdn.fortisiem.fortinet.com and os-pkgs.fortisiem.fortinet.com) to get the latest OS packages. Follow these steps to set up this communication via proxy before initiating the upgrade.
- SSH to the node.
- Edit /etc/yum.conf as follows:
  - If your proxy does not require authentication, add a line like this:
    proxy=http://<proxy-ip-or-hostname>:<proxy-port>
  - If your proxy requires authentication, also add proxy_username= and proxy_password= entries. For example, for a squid proxy:
    proxy_username=<user>
    proxy_password=<pwd>
- Test that you can use the proxy to successfully communicate with the two sites: os-pkgs-cdn.fortisiem.fortinet.com and os-pkgs.fortisiem.fortinet.com.
- Begin the upgrade.
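The yum.conf edits above can be staged safely by appending to a scratch copy first and reviewing it before touching /etc/yum.conf itself. A minimal sketch; the proxy host, port, and credentials below are placeholder values you must replace with your own:

```shell
# Stage the proxy settings in a scratch copy of yum.conf for review.
# proxy.example.com:3128, myuser, and mypassword are placeholders.
CONF=/tmp/yum.conf.proxy-test
cp /etc/yum.conf "$CONF" 2>/dev/null || : > "$CONF"
cat >> "$CONF" <<'EOF'
proxy=http://proxy.example.com:3128
proxy_username=myuser
proxy_password=mypassword
EOF
grep '^proxy' "$CONF"   # review the lines before copying them into /etc/yum.conf

# Reachability check through the proxy (run on the node; needs network access):
# curl -x http://proxy.example.com:3128 -sI https://os-pkgs-cdn.fortisiem.fortinet.com
# curl -x http://proxy.example.com:3128 -sI https://os-pkgs.fortisiem.fortinet.com
```

Once the lines look right, add the same entries to /etc/yum.conf on each node and run the curl checks before starting the upgrade.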
Post Migration Health Check
After migration is complete, follow these steps to check the system's health.
- Check Cloud health and Collector health from the FortiSIEM GUI:
- Versions display correctly.
- All processes are up and running.
- Resource usage is within limits.
- Check that Redis passwords match on the Supervisor and Workers:
  - Supervisor: run the command phLicenseTool --showRedisPassword.
  - Worker: run the command grep -i auth /opt/node-rest-service/ecosystem.config.js.
- Check that database passwords match on the Supervisor and Workers:
  - Supervisor: run the command phLicenseTool --showDatabasePassword.
  - Worker: run the command grep Auth_PQ_dbpass /etc/httpd/conf/httpd.conf.
- Elasticsearch case: check the Elasticsearch health.
- Check that events are received correctly:
- Search All Events in last 10 minutes and make sure there is data.
- Search for events from Collector and Agents and make sure there is data. Both old and new collectors and agents must work.
- Search for events using CMDB Groups (Windows, Linux, Firewalls, etc.) and make sure there is data.
- Make sure there are no SVN authentication errors in CMDB when you click any device name.
- Make sure recent Incidents and their triggering events are displayed.
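The Redis and database password checks above reduce to comparing the value reported on the Supervisor with the value found on each Worker. A minimal comparison helper, with placeholder values standing in for the real command output:

```shell
# Compare Supervisor and Worker password values collected during the health check.
# The sample strings are placeholders: substitute the real output of
# "phLicenseTool --showRedisPassword" / "--showDatabasePassword" (Supervisor)
# and the corresponding grep output from each Worker.
check_match() {
  # $1 = label, $2 = Supervisor value, $3 = Worker value
  if [ "$2" = "$3" ]; then
    echo "$1: OK"
  else
    echo "$1: MISMATCH"
  fi
}

check_match "Redis password"    "sample-redis-pass" "sample-redis-pass"   # prints "Redis password: OK"
check_match "Database password" "sample-db-pass"    "sample-db-pass"      # prints "Database password: OK"
```

A MISMATCH on either check means the Worker was not registered correctly; delete it from the Supervisor and add it back.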
Known Issues
Shutting Down Hardware
On hardware appliances running FortiSIEM 6.6.0 or earlier, the FortiSIEM execute shutdown CLI command does not work correctly. Use the Linux shutdown command instead.
Remediation Steps for CVE-2021-44228
Two FortiSIEM modules (phFortiInsightAI and the 3rd party ThreatConnect SDK) use Apache log4j versions 2.11 and 2.8, respectively, for logging, and hence are vulnerable to the recently discovered remote code execution vulnerability (CVE-2021-44228) in FortiSIEM 6.1.x.
These instructions specify the steps needed to mitigate this vulnerability without upgrading Apache log4j to the latest stable version (2.16 or higher). Actions need to be taken on the Supervisor node only.
On Supervisor Node
- Log on via SSH as root.
- Mitigate the 3rd party ThreatConnect SDK module: delete these log4j jar files under /opt/glassfish/domains/domain1/applications/phoenix/lib:
  - log4j-core-2.8.2.jar
  - log4j-api-2.8.2.jar
  - log4j-slf4j-impl-2.6.1.jar
- Mitigate the phFortiInsightAI module: delete these log4j jar files under /opt/fortiinsight-ai/lib/:
  - log4j-api-2.11.1.jar
  - log4j-core-2.11.1.jar
- Restart all Java processes by running: killall -9 java
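The removal steps above can be consolidated into one loop. A sketch using only the paths and filenames listed in this section; run it as root on the Supervisor:

```shell
# Remove the vulnerable log4j jars named in this section. Jars that are
# already absent are reported and skipped, so the script is safe to re-run.
GF_LIB=/opt/glassfish/domains/domain1/applications/phoenix/lib
AI_LIB=/opt/fortiinsight-ai/lib
for jar in "$GF_LIB/log4j-core-2.8.2.jar" \
           "$GF_LIB/log4j-api-2.8.2.jar" \
           "$GF_LIB/log4j-slf4j-impl-2.6.1.jar" \
           "$AI_LIB/log4j-api-2.11.1.jar" \
           "$AI_LIB/log4j-core-2.11.1.jar"; do
  if [ -f "$jar" ]; then
    rm -f "$jar" && echo "removed: $jar"
  else
    echo "absent:  $jar"
  fi
done

# After the jars are gone, restart all Java processes:
# killall -9 java
```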
Migration and Fresh Install Limitations
- Migration limitations: If migrating from 5.3.3 or 5.4.0 to 6.1.2, be aware that the following features will not be available after migration:
  - Pre-compute feature
  - Elastic Cloud support
  If any of these features are critical to your organization, wait for a later version where these features are available after migration.
- Fresh install limitations:
  - Cannot be installed on Alibaba Cloud.
  - Linux ISO image is not available.
  - Does not install on IPv6 networks.
  - Collector to Supervisor/Worker communication via proxy is not supported.
  - Disaster recovery is not supported because PostgreSQL BDR is not yet available on the CentOS 8.2 release.
  - Report Server is not supported.
- STIX/OTX Malware IOC Integration Error: If you see the error below when you log in to Glassfish, it is likely caused by the jsse.enableSNIExtension flag that was added to resolve an httpd issue in Java JDK 7. In JDK 8, there is no need to set this flag.
Error:
#|2020-09-10T12:30:00.535+0200|SEVERE|glassfish3.1.2|com.accelops.service.threatfeed.BaseOTXUpdateService|_ThreadID=218;_ThreadName=Thread-2;|org.springframework.web.client.ResourceAccessException: I/O error on GET request for "https://otx.alienvault.com/api/v1/pulses/subscribed?limit=20&modified_since=2020-09-03T12:30:00%2B02:00&": Unsupported record version Unknown-0.0; nested exception is javax.net.ssl.SSLException: Unsupported record version Unknown-0.0
To resolve this issue, follow these steps:
  - Log in to the Supervisor node.
  - Run the command su - admin.
  - Enter your Glassfish password, then run this command: /opt/glassfish/bin/asadmin delete-jvm-options -Djsse.enableSNIExtension=false
  - Run the command killall -9 java.
- Changing the Worker IP via configFSM.sh does not work. To change a Worker's IP, delete the Worker from the Supervisor, change the IP using Linux commands, and add it back.
- A newly installed 5.x Collector cannot be registered to a 6.x Supervisor. Old Collectors will continue to work. For new installations, install 6.x Collectors.
- The following bugs have been discovered:
  - Malware Hash import from a CSV file fails when the CSV file contains 75,000 or more Malware Hash entries.
  - Scheduled bundle reports fail after migration.
  - Updating Malware Hash via API does not work as expected, producing "duplicate" errors.
  - Cisco Meraki log discovery does not add devices to CMDB.
  - FortiSIEM does not recognize a UEBA perpetual license, so users with a UEBA perpetual license are unable to add UEBA for their devices.
  - For Elasticsearch cases with inline report mode set to 2, the ReportMaster memory may grow quickly.
  - Malware IP, Domain, and URL Group lookup performance is slower than expected.
  - Security incidents always indicate "System Cleared" after 24 hours, even if auto_clear_security_incidents=0 is set.
  - SSL communication sockets between the rule worker and rule master are not always closed properly, leading to rules not triggering.
  - Rules with a pattern-based clearing condition do not always clear even if the condition is met, because the clear rule's time window is sometimes read incorrectly.
- Elasticsearch Based Deployments Terms Query Limit: In Elasticsearch based deployments, queries containing "IN Group X" are handled using an Elasticsearch Terms Query. By default, the maximum number of terms that can be used in a Terms Query is 65,536. If a Group contains more than 65,536 entries, the query will fail. The workaround is to change the "max_terms_count" setting for each event index. Fortinet has tested up to 1 million entries. The query response time will be proportional to the size of the group.
Case 1. For already existing indices, issue this REST API call to update the setting:
PUT fortisiem-event-*/_settings
{ "index": { "max_terms_count": "1000000" } }
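The same Case 1 update can be issued from the shell with curl. A sketch; the Elasticsearch endpoint below is a placeholder for your cluster, and the JSON body is validated locally before anything is sent:

```shell
# Apply the max_terms_count setting to existing event indices via curl.
# elastic.example.com:9200 is a placeholder for your Elasticsearch endpoint.
ES=http://elastic.example.com:9200
BODY='{ "index": { "max_terms_count": "1000000" } }'

# Validate the body locally so a typo fails before the request is sent.
echo "$BODY" | python3 -m json.tool > /dev/null && echo "body OK"

# Uncomment to apply (requires network access to the cluster):
# curl -s -X PUT "$ES/fortisiem-event-*/_settings" \
#      -H 'Content-Type: application/json' -d "$BODY"
```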
Case 2. For new indices that will be created in the future, update fortisiem-event-template so those new indices have a higher max_terms_count setting:
- cd /opt/phoenix/config/elastic/7.7
- Add "index.max_terms_count": 1000000 (including quotation marks) to the "settings" section of fortisiem-event-template. Example:
  ...
  "settings": {
    "index.max_terms_count": 1000000,
  ...
- Navigate to ADMIN > Storage > Online and perform Test and Deploy.
- Test that new indices have the updated terms limit by executing this REST API call:
  GET fortisiem-event-*/_settings
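Before performing Test and Deploy, it can help to confirm the edited template is still valid JSON, since a stray comma would make the deploy fail. A sketch; a scratch file stands in for the real template here, so on the appliance point TEMPLATE at the file under /opt/phoenix/config/elastic/7.7:

```shell
# Validate the edited event template before Test and Deploy.
# /tmp/... is a stand-in path; use the real template file on the appliance.
TEMPLATE=/tmp/fortisiem-event-template.json
cat > "$TEMPLATE" <<'EOF'
{ "settings": { "index.max_terms_count": 1000000 } }
EOF
python3 -m json.tool "$TEMPLATE" > /dev/null && echo "template JSON OK"
```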