Important Notes for 6.2.0 to 6.2.1 Upgrade
- For your Supervisor and Workers, do not use the upgrade menu item in configFSM.sh to upgrade from 6.2.0 to 6.2.1. That option is deprecated and will not work. Use the new method as instructed in this guide (see Upgrade Supervisor for your appropriate deployment in Upgrading From 6.1.x or 6.2.0 to 6.2.1).
- Before upgrading Collectors to 6.2.1, you will need to copy the phcollectorimageinstaller.py file from your Supervisor to your Collectors. See steps 1-3 in Upgrade Collectors.
- In 6.1.x releases, new 5.x Collectors could not register to the Supervisor. This restriction has been removed in 6.2.x as long as the Supervisor is running in non-FIPS mode. However, 5.x Collectors are not recommended because CentOS 6 has reached End of Life.
- Remember to clear the browser cache after logging on to the 6.2.1 GUI and before performing any operations.
- Make sure to follow the listed upgrade order.
  - Upgrade the Supervisor first. It must be upgraded before any Workers or Collectors.
  - Upgrade all existing Workers next, after upgrading the Supervisor. The Supervisor and Workers must be on the same version.
  - Older Collectors will work with the upgraded Supervisor and Workers. You can choose to upgrade Collectors after you have upgraded all Workers to get the full 6.2.1 feature set.
Important Notes for 6.1.x to 6.2.1 Upgrade
- For your Supervisor and Workers, do not use the upgrade menu item in configFSM.sh to upgrade from 6.1.x to 6.2.1. That option is deprecated and will not work. Use the new method as instructed in this guide (see Upgrade Supervisor for your appropriate deployment in Upgrading From 6.1.x or 6.2.0 to 6.2.1).
- Before upgrading Collectors to 6.2.1, you will need to copy the phcollectorimageinstaller.py file from your Supervisor to your Collectors. See steps 1-3 in Upgrade Collectors.
- In 6.1.x releases, new 5.x Collectors could not register to the Supervisor. This restriction has been removed in 6.2.x as long as the Supervisor is running in non-FIPS mode. However, 5.x Collectors are not recommended because CentOS 6 has reached End of Life.
- The 6.2.1 upgrade will attempt to migrate existing SVN files (stored in /svn) from the old svn format to the new svn-lite format. During this process, it will first export /svn to /opt and then import the files back to /svn in the new svn-lite format. If /svn uses a large amount of disk space and /opt does not have enough free space, the migration will fail. Fortinet recommends performing the following steps before upgrading (see the example commands after this list):
  - Check /svn usage.
  - Check whether there is enough free disk space in /opt to accommodate /svn.
  - Expand /opt by the size of /svn.
  - Begin the upgrade.
  See Steps for Expanding /opt Disk for more information.
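To check these values, standard Linux tools are sufficient; for example, the following commands (a generic sketch, not FortiSIEM-specific utilities) report the current size of /svn and the free space available in /opt:
  du -sh /svn
  df -h /opt
If the Avail value reported for /opt is smaller than the size reported for /svn, expand /opt before starting the upgrade.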
- If you are using AWS Elasticsearch, then after upgrading to 6.2.1, take the following steps:
  - Go to ADMIN > Setup > Storage > Online.
  - Select "ES-type" and re-enter the credential.
- If you have more than 5 Workers, Fortinet recommends using at least 16 vCPU for the Supervisor and increasing the number of notification threads for RuleMaster (see the sizing guide for more information). To do this, SSH to the Supervisor and take the following steps (a scripted example follows these steps):
  - Modify the phoenix_config.txt file, located at /opt/phoenix/config/, to contain:
    #notification will open threads to accept connections
    #FSM upgrade preserves customer changes to the parameter value
    notification_server_thread_num=50
    Note: The default notification_server_thread_num is 20.
  - Restart phRuleMaster.
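A minimal scripted version of this change is sketched below. It assumes the notification_server_thread_num line already exists in phoenix_config.txt (append the three lines shown above if it does not), and that phRuleMaster can be stopped and started with phtools in the same way as the other ph processes in this guide; verify the edited line and the process status afterward.
  sed -i 's/^notification_server_thread_num=.*/notification_server_thread_num=50/' /opt/phoenix/config/phoenix_config.txt
  grep notification_server_thread_num /opt/phoenix/config/phoenix_config.txt
  phtools --stop phRuleMaster
  phtools --start phRuleMaster
  phstatus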
- Upgrading Elasticsearch Transport Client usage - The Transport Client option has been removed because Elasticsearch no longer supports that client. If you are using Transport Client in a pre-6.2.1 release, you will need to modify the existing URL by adding "http://" or "https://" in front of the URL field after upgrading, as displayed in ADMIN > Setup > Storage > Online with Elasticsearch selected.
  - When upgrading to 6.2.1, the Elasticsearch Cluster IP/Host field changes to the URL field. In the URL field, add "http://" or "https://" in front of your IP address.
  - Next, select Test to confirm functionality, and select Save to save the updated settings.
- Prior to upgrading, ensure that the hot node and warm node counts are both greater than the number of replicas. Failure to do so will result in Test and Save operation failure after the upgrade. This basic requirement check has been added for version 6.2.0 and later. A sketch of how to check these counts from the command line follows this list.
- Remember to clear the browser cache after logging on to the 6.2.1 GUI and before performing any operations.
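If you want to confirm the node and replica counts outside the FortiSIEM GUI, the standard Elasticsearch REST APIs can be queried directly. The following is only a generic sketch: <es-host> and <event-index-pattern> are placeholders for your environment, and your cluster may require HTTPS or authentication options on the curl commands.
  curl -s 'http://<es-host>:9200/_cat/nodes?v'
  curl -s 'http://<es-host>:9200/<event-index-pattern>/_settings?filter_path=*.settings.index.number_of_replicas'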
Known Issue after 6.2.1 Upgrade
As part of a fix for excessive SSL communication errors between phReportWorker and phReportMaster (Bug 710109) in 6.2.1, the value of count_distinct_precision in the /opt/phoenix/config/phoenix_config.txt file is reduced from 14 to 9 on the Supervisor and Worker nodes. While a fresh 6.2.1 install sets the value correctly, a 6.2.1 upgrade may keep the old value (14) and fail to set the new value (9). Because of this, you may still see excessive SSL communication errors between phReportWorker and phReportMaster, and it may appear that Bug 710109 is not fixed.
To fix this issue, follow the instructions in Modification on the Supervisor and Modification on each Worker for your Supervisor and Workers.
Modification on the Supervisor
- SSH into the Supervisor as root and edit the /opt/phoenix/config/phoenix_config.txt file (a non-interactive alternative to the edit is sketched after this procedure).
  ssh root@<supervisor FQDN/IP>
  su admin
  vi /opt/phoenix/config/phoenix_config.txt
- Find "count_distinct_precision=".
- Modify the value to that shown here.
  count_distinct_precision=9 # in range 4-18
- Save the configuration.
- Stop phReportMaster/Worker by running the following commands.
  phtools --stop phReportWorker
  phtools --stop phReportMaster
- Start phReportMaster/Worker by running the following commands.
  phtools --start phReportMaster
  phtools --start phReportWorker
- Monitor stability by running the following command.
  phstatus
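If you prefer a non-interactive edit instead of vi, a sed one-liner such as the following makes the same change. This is only a convenience sketch; run it as the admin user (after su admin) so file ownership is preserved, and verify the resulting line before restarting the processes.
  sed -i 's/^count_distinct_precision=.*/count_distinct_precision=9 # in range 4-18/' /opt/phoenix/config/phoenix_config.txt
  grep count_distinct_precision /opt/phoenix/config/phoenix_config.txt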
Modification on each Worker
- SSH into each Worker as root and edit the /opt/phoenix/config/phoenix_config.txt file by running the following commands (a short verification example follows these steps).
  ssh root@<Worker FQDN/IP>
  su admin
  vi /opt/phoenix/config/phoenix_config.txt
- Find "count_distinct_precision=".
- Modify the value to that shown here.
  count_distinct_precision=9 # in range 4-18
- Save the configuration.
- Stop phReportWorker by running the following command.
  phtools --stop phReportWorker
- Start phReportWorker by running the following command.
  phtools --start phReportWorker
- Monitor stability by running the following command.
  phstatus
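To confirm on each Worker that the new value is in place and that phReportWorker is running again, you can, for example, run:
  grep count_distinct_precision /opt/phoenix/config/phoenix_config.txt
  phstatus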
Pre-Upgrade Checklist
To perform an upgrade, the following prerequisites must be met.
- Carefully consider the known issues, if any, in the Release Notes.
- Make sure the Supervisor processes are all up (see the example check after this list).
- Make sure you can log in to the FortiSIEM GUI and successfully discover your devices.
- Take a snapshot of the running FortiSIEM instance.
- Make sure the FortiSIEM license is not expired.
- Make sure the Supervisor, Workers, and Collectors can connect to the Internet on port 443 to the CentOS OS repositories (os-pkgs-cdn.fortisiem.fortinet.com and os-pkgs.fortisiem.fortinet.com) hosted by Fortinet, to get the latest OS packages. Connectivity can be either direct or via a proxy. For proxy-based upgrades, see Upgrade via Proxy. If Internet connectivity is not available, follow the Offline Upgrade Guide.
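For example, you can record the running version and confirm that all processes are up from the Supervisor shell using commands that appear elsewhere in this guide:
  phshowVersion.sh
  phstatus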
Upgrade Overview
The general upgrade paths are:
- Upgrading From Pre-5.3.0
- Upgrading From 5.3.x or 5.4.0
- Upgrading From 6.1.x or 6.2.0 to 6.2.1
Upgrading From Pre-5.3.0
If you are running a FortiSIEM version earlier than 5.3.0, take the following steps:
- Upgrade to 5.4.0 by using the 5.4.0 Upgrade Guide: Single Node Deployment / Cluster Deployment.
- Perform a health check to make sure the system has upgraded to 5.4.0 successfully.
- Migrate to 6.1.x:
  If you are running a Software Virtual Appliance, you must migrate to 6.1.1. Because the base OS changed from CentOS 6 to CentOS 8, the steps are platform specific. Use the appropriate 6.1.1 guide and follow the migration instructions.
  If you are running a hardware appliance (3500G, 3500F, 2000F, 500F), you must migrate to 6.1.2. Because the base OS changed from CentOS 6 to CentOS 8, the steps are platform specific. Follow the "Migrating from 5.3.x or 5.4.x to 6.1.2" instructions from the appropriate appliance-specific documents listed here.
  Note: If you are upgrading from a 2000F, 3500F, or 3500G appliance, make sure to follow the instructions at Fix After Upgrading 2000F, 3500F, or 3500G from 5.3.x or 5.4.0 to 6.1.2 after migrating to 6.1.2.
- Perform a health check to make sure the system is upgraded to 6.1.1 or 6.1.2 successfully.
- Upgrade to 6.2.x by following the steps in Upgrading From 6.1.x.
Upgrading From 5.3.x or 5.4.0
Start at step 3 of Upgrading From Pre-5.3.0, and follow the remaining steps from there.
Note: If you are upgrading from a 2000F, 3500F, or 3500G appliance, make sure to follow the instructions at Fix After Upgrading 2000F, 3500F, or 3500G From 5.3.x or 5.4.0 to 6.1.2 after migrating to 6.1.2.
Upgrading From 6.1.x or 6.2.0 to 6.2.1
Note: Prior to your 6.1.x to 6.2.x upgrade, if a proxy is needed for the FortiSIEM Supervisor, Workers, or hardware appliances (FSM-2000F, 3500F, and 3500G) to access the Internet, refer to Upgrade via Proxy before starting.
After completing your upgrade, follow the appropriate steps in Post Upgrade Health Check.
After upgrading to 6.1.x, there are two possible upgrade paths, depending on your FortiSIEM setup. Follow the steps for single node deployment or cluster deployment, as appropriate.
Upgrade via Proxy
During the upgrade, the FortiSIEM Supervisor, Workers, or hardware appliances (FSM-2000F, 3500F, and 3500G) must be able to communicate with the CentOS OS repositories (os-pkgs-cdn.fortisiem.fortinet.com and os-pkgs.fortisiem.fortinet.com) hosted by Fortinet, to get the latest OS packages. Follow these steps to set up this communication via proxy before initiating the upgrade.
- SSH to the node.
- Create the file /etc/profile.d/proxy.sh with the following content and then save the file.
  PROXY_URL="<proxy-ip-or-hostname>:<proxy-port>"
  export http_proxy="$PROXY_URL"
  export https_proxy="$PROXY_URL"
  export ftp_proxy="$PROXY_URL"
  export no_proxy="127.0.0.1,localhost"
- Run source /etc/profile.d/proxy.sh.
- Test that you can use the proxy to successfully communicate with the two sites here (see the example after these steps):
  os-pkgs-cdn.fortisiem.fortinet.com
  os-pkgs.fortisiem.fortinet.com
- Begin the upgrade.
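As an example of the test step, once proxy.sh has been sourced you can confirm that the proxy variables are exported and that both repository hosts answer over HTTPS; any HTTP status line in the response indicates the proxy path is working. This is a generic sketch using standard tools:
  env | grep -i _proxy
  curl -sI https://os-pkgs-cdn.fortisiem.fortinet.com | head -n 1
  curl -sI https://os-pkgs.fortisiem.fortinet.com | head -n 1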
Post Upgrade Health Check
- Check Cloud health and Collector health from the FortiSIEM GUI:
- Versions display correctly.
- All processes are up and running.
- Resource usage is within limits.
- Check that the Redis passwords match on the Supervisor and Workers (an example of running these password checks from the Supervisor follows this list):
  - Supervisor: run the command phLicenseTool --showRedisPassword
  - Worker: run the command grep -i auth /opt/node-rest-service/ecosystem.config.js
- Check that the database passwords match on the Supervisor and Workers:
  - Supervisor: run the command phLicenseTool --showDatabasePassword
  - Worker: run the command grep Auth_PQ_dbpass /etc/httpd/conf/httpd.conf
- Elasticsearch case: check the Elasticsearch health.
- Check that events are received correctly:
  - Search All Events in the last 10 minutes and make sure there is data.
  - Search for events from Collectors and Agents and make sure there is data. Both old and new Collectors and Agents must work.
  - Search for events using CMDB Groups (Windows, Linux, Firewalls, etc.) and make sure there is data.
- Make sure there are no SVN authentication errors in CMDB when you click any device name.
- Make sure recent Incidents and their triggering events are displayed.
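As an example, both password comparisons can be driven from the Supervisor shell. This is only a convenience sketch that runs the Worker-side commands listed above over SSH so you can compare the values manually; <Worker FQDN/IP> is a placeholder for each Worker.
  phLicenseTool --showRedisPassword
  ssh root@<Worker FQDN/IP> "grep -i auth /opt/node-rest-service/ecosystem.config.js"
  phLicenseTool --showDatabasePassword
  ssh root@<Worker FQDN/IP> "grep Auth_PQ_dbpass /etc/httpd/conf/httpd.conf"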
Upgrade Single Node Deployment
Upgrading a single node deployment requires upgrading your Supervisor. If you have any Collectors, upgrade your Supervisor first, then upgrade the Collectors.
Upgrade Supervisor
To upgrade your Supervisor, take the following steps.
- Make sure Workers are shut down. Collectors can remain up and running.
- Log in to the Supervisor via SSH.
- Create the /opt/upgrade directory.
  mkdir -p /opt/upgrade
- Download the upgrade zip package FSM_Upgrade_All_6.2.1_build0223.zip, then upload it to your Supervisor node under the /opt/upgrade/ folder.
- Go to /opt/upgrade.
  cd /opt/upgrade
- Unzip the upgrade zip package.
  unzip FSM_Upgrade_All_6.2.1_build0223.zip
- Go to the FSM_Upgrade_All_6.2.1_build0223 directory.
  cd FSM_Upgrade_All_6.2.1_build0223
- Run a screen session.
  screen -S upgrade
  Note: This is intended for situations where network connectivity is less than favorable. If there is any connection loss, log back into the SSH console and return to the virtual screen by using the following command.
  screen -r
- Start the upgrade process by entering the following.
  sh upgrade.sh
- After the process is completed, perform a basic health check. All processes should be up and running.
Upgrade Collectors
To upgrade your Collectors, take the following steps.
Extra Upgrade Steps from 6.1.x to 6.2.1
If you are upgrading from 6.1.x to 6.2.1, then take the following steps first. If you are already on 6.2.1, these steps are not required.
- Log in to the Collector via SSH as root.
- Copy /opt/phoenix/phscripts/bin/phcollectorimageinstaller.py from your Supervisor by running the following command. (Note: this is copied from the 6.2.1 upgraded Supervisor.)
  scp root@<SupervisorIP>:/opt/phoenix/phscripts/bin/phcollectorimageinstaller.py /opt/phoenix/phscripts/bin/
- Change the file permissions by running the following command.
  chmod 755 /opt/phoenix/phscripts/bin/phcollectorimageinstaller.py
Main Upgrade Steps
- Log in to the Supervisor via SSH as root.
- Prepare the Collector upgrade image by running the following command on the Supervisor.
  phSetupCollectorUpgrade.sh /opt/upgrade/FSM_Upgrade_All_6.2.1_build0223.zip <SupervisorFQDN>
  Note: Replace <SupervisorFQDN> with the fully qualified domain name of the Supervisor. For reference, running the script without arguments displays its usage in the shell:
  # phSetupCollectorUpgrade.sh
  Usage:
  /opt/phoenix/phscripts/bin/phSetupCollectorUpgrade.sh coImageZipPath superFQDN
- Go to ADMIN > Health > Collector Health and select a Collector.
- Download the image by clicking Download Image.
- Upgrade the image by clicking Install Image, then make sure the Collector and all its processes are up.
- Repeat steps 3 through 5 for all Collectors.
Upgrade Cluster Deployment
It is critical to review the Overview below before following the detailed steps to upgrade your FortiSIEM cluster.
Overview
- Shut down all Workers.
- Collectors can be up and running.
- Upgrade the Supervisor first, while all Workers are shut down.
- After the Supervisor upgrade is complete, verify it is up and running.
- Upgrade each Worker individually.
- If your online storage is Elasticsearch, take the following steps:
- Navigate to ADMIN > Setup > Storage > Online.
- Click Test to verify the space.
- Click Save to save.
- Upgrade each Collector individually.
Step 1 prevents the accumulation of report files while the Supervisor is unavailable during its upgrade. If these steps are not followed, the Supervisor may not come up after the upgrade because of excessive unprocessed report file accumulation.
Note: The Supervisor and Workers must be on the same FortiSIEM version, otherwise various software modules may not work properly. Collectors, however, can be one version older. These Collectors will work, but they may not have the latest discovery and performance monitoring features offered in the latest Supervisor/Worker versions. Fortinet recommends that you upgrade your Collectors as soon as possible. If you have Collectors in your deployment, make sure you have configured an image server to use as a repository for them.
Detailed Steps
Take the following steps to upgrade your FortiSIEM cluster.
- Shut down all Worker nodes.
  # shutdown now
- Upgrade your Supervisor using the steps in Upgrade Supervisor. Make sure the Supervisor is running the version you have upgraded to and that all processes are up and running.
  # phshowVersion.sh
  # phstatus
- If you are running Elasticsearch, and upgrading from 6.1.x to 6.2.1, then take the following steps; otherwise, skip this step and proceed to Step 4.
  - Navigate to ADMIN > Storage > Online Elasticsearch.
  - Verify that the Elasticsearch cluster has enough nodes (the number of nodes of each type >= number of replicas + 1).
  - Go to ADMIN > Setup > Storage > Online.
  - Select "ES-type" and re-enter the credential of the Elasticsearch cluster.
  - Click Test and Save. This important step pushes the latest event attribute definitions to Elasticsearch.
- Upgrade each Worker one by one, using the procedure in Upgrade Workers.
- Log in to the Supervisor and go to ADMIN > Health > Cloud Health to ensure that all Workers and the Supervisor have been upgraded to the intended version.
  Note: The Supervisor and Workers must be on the same version.
- Upgrade Collectors using the steps in Upgrade Collectors.
Upgrade Supervisor
To upgrade your Supervisor, take the following steps.
- Make sure Workers are shut down. Collectors can remain up and running.
- Log in to the Supervisor via SSH.
- Create the /opt/upgrade directory.
  mkdir -p /opt/upgrade
- Download the upgrade zip package FSM_Upgrade_All_6.2.1_build0223.zip, then upload it to your Supervisor node under the /opt/upgrade/ folder.
- Go to /opt/upgrade.
  cd /opt/upgrade
- Unzip the upgrade zip package.
  unzip FSM_Upgrade_All_6.2.1_build0223.zip
- Go to the FSM_Upgrade_All_6.2.1_build0223 directory.
  cd FSM_Upgrade_All_6.2.1_build0223
- Run a screen session.
  screen -S upgrade
  Note: This is intended for situations where network connectivity is less than favorable. If there is any connection loss, log back into the SSH console and return to the virtual screen by using the following command.
  screen -r
- Start the upgrade process by entering the following.
  sh upgrade.sh
- After the process is completed, perform a basic health check. All processes should be up and running.
Upgrade Workers
To upgrade your Workers, take the following steps for each Worker.
- Log in to a Worker via SSH.
- Create the /opt/upgrade directory.
  mkdir -p /opt/upgrade
- Download the upgrade zip package FSM_Upgrade_All_6.2.1_build0223.zip to /opt/upgrade.
- Go to /opt/upgrade.
  cd /opt/upgrade
- Unzip the upgrade zip package.
  unzip FSM_Upgrade_All_6.2.1_build0223.zip
- Go to the FSM_Upgrade_All_6.2.1_build0223 directory.
  cd FSM_Upgrade_All_6.2.1_build0223
- Run a screen session.
  screen -S upgrade
  Note: This is intended for situations where network connectivity is less than favorable. If there is any connection loss, log back into the SSH console and return to the virtual screen by using the following command.
  screen -r
- Start the upgrade process by entering the following.
  sh upgrade.sh
- After the process is completed, perform a basic health check. All processes should be up and running.
Upgrade Collectors
Extra Upgrade Steps from 6.1.x to 6.2.1
If you are upgrading from 6.1.x to 6.2.1, then take the following steps first. If you are already on 6.2.1, these steps are not required.
- Log in to the Collector via SSH as root.
- Copy /opt/phoenix/phscripts/bin/phcollectorimageinstaller.py from your Supervisor by running the following command. (Note: this is copied from the 6.2.1 upgraded Supervisor.)
  scp root@<SupervisorIP>:/opt/phoenix/phscripts/bin/phcollectorimageinstaller.py /opt/phoenix/phscripts/bin/
- Change the file permissions by running the following command.
  chmod 755 /opt/phoenix/phscripts/bin/phcollectorimageinstaller.py
Main Upgrade Steps
To upgrade your Collectors, take the following steps.
- Log in to the Supervisor via SSH as root.
- Prepare the Collector upgrade image by running the following command on the Supervisor.
  phSetupCollectorUpgrade.sh /opt/upgrade/FSM_Upgrade_All_6.2.1_build0223.zip <SupervisorFQDN>
  Note: Replace <SupervisorFQDN> with the fully qualified domain name of the Supervisor. For reference, running the script without arguments displays its usage in the shell:
  # phSetupCollectorUpgrade.sh
  Usage:
  /opt/phoenix/phscripts/bin/phSetupCollectorUpgrade.sh coImageZipPath superFQDN
- Go to ADMIN > Health > Collector Health and select a Collector.
- Download the image by clicking Download Image.
- Upgrade the image by clicking Install Image, then make sure the Collector and all its processes are up.
- Repeat steps 3 through 5 for all Collectors.
Upgrade Log
The 6.2.1_build0223 upgrade Ansible log file is located at /usr/local/upgrade/logs/ansible.log.
Errors can be found at the end of the file.
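For example, a quick, generic way to surface error lines near the end of the log (standard tail and grep, not a FortiSIEM utility) is:
  tail -n 200 /usr/local/upgrade/logs/ansible.log | grep -iE 'error|fatal|failed'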
Migrate Log
The 5.3.x/5.4.x to 6.1.x migration Ansible log file is located at /usr/local/migrate/logs/ansible.log.
Errors can be found at the end of the file.
Reference
Steps for Expanding /opt Disk
- Go to the Hypervisor and increase the /opt disk by the size of the /svn disk.
- SSH into the Supervisor as root.
- # lsblk
  NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  ...
  sdb      8:16   0  100G  0 disk             << old size
  ├─sdb1   8:17   0 22.4G  0 part [SWAP]
  └─sdb2   8:18   0 68.9G  0 part /opt
  ...
- # yum -y install cloud-utils-growpart gdisk
- # growpart /dev/sdb 2
  CHANGED: partition=2 start=50782208 old: size=144529408 end=195311616 new: size=473505759 end=524287967
- # lsblk
  The disk size has been changed to 250GB in this example:
  NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  ...
  sdb      8:16   0  250G  0 disk             <<< NOTE the new size for the disk in /opt
  ├─sdb1   8:17   0 22.4G  0 part [SWAP]
  └─sdb2   8:18   0 68.9G  0 part /opt
  ...
- # xfs_growfs /dev/sdb2
  meta-data=/dev/sdb2              isize=512    agcount=4, agsize=4516544 blks
           =                       sectsz=512   attr=2, projid32bit=1
           =                       crc=1        finobt=1, sparse=1, rmapbt=0
           =                       reflink=1
  data     =                       bsize=4096   blocks=18066176, imaxpct=25
           =                       sunit=0      swidth=0 blks
  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
  log      =internal log           bsize=4096   blocks=8821, version=2
           =                       sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0
  data blocks changed from 18066176 to 59188219
- # df -h
  Filesystem      Size  Used Avail Use% Mounted on
  ...
  /dev/sdb2       226G  6.1G  220G   3% /      << NOTE the new disk size
Fix After Upgrading 2000F, 3500F, or 3500G from 5.3.x or 5.4.0 to 6.1.2
After upgrading hardware appliances 2000F, 3500F, or 3500G from 5.3.x or 5.4.0 to 6.1.2, the swap space is reduced from 24GB to 2GB, which will impact performance. (The upgrade from 6.1.2 to 6.2.x does not have this problem.) To fix this issue, take the following steps.
- First, run the following command based on your hardware appliance model.
  For 2000F: swapon -s /dev/mapper/FSIEM2000F-phx_swap
  For 3500F: swapon -s /dev/mapper/FSIEM3500F-phx_swap
  For 3500G: swapon -s /dev/mapper/FSIEM3500G-phx_swap
- Add the following line to /etc/fstab for the above swap partition, based on your hardware appliance model.
  For 2000F: /dev/FSIEM2000F/phx_swap /swapfile swap defaults 0 0
  For 3500F: /dev/FSIEM3500F/phx_swap /swapfile swap defaults 0 0
  For 3500G: /dev/FSIEM3500G/phx_swap /swapfile swap defaults 0 0
- Reboot the hardware appliance.
- Run the command swapon --show and make sure there are 2 swap partitions mounted instead of just 1, as shown here.
[root@sp5753 ~]# swapon --show
NAME TYPE SIZE USED PRIO
/dev/dm-5 partition 30G 0B -3
/dev/dm-0 partition 2.5G 0B -2
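As an additional check, the standard free command reports the combined swap size, which after the fix should be roughly the sum of the two partitions shown above:
  free -h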