Migration limitations: If migrating from 5.3.3 or 5.4.0 to 6.1.1, please be aware that the following features will not be available after migration.
- Elastic Cloud support
If any of these features are critical to your organization, then please wait for a later version where these features are available after migration.
This section describes how to upgrade from FortiSIEM 5.3.x or 5.4.0 to 6.1.1.
To perform the migration, the following prerequisites must be met:
- Delete the Worker from the Super GUI.
- Stop/Shutdown the Worker.
- Create a /svn/53x-settings directory and symlink it to /images. For FSM running on Hyper-V, you only need a tiny amount of space to back up the 5.3.x or 5.4.0 system settings, so use the /svn partition (a partition other than root) instead of a new disk. The following screen shot illustrates this:
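The directory-and-symlink prerequisite can be sketched as shell commands. This is a hedged illustration run in a scratch directory so it can be tried safely; on the real 5.3.x or 5.4.0 instance the paths are /svn/53x-settings and /images, and the commands must be run as root.

```shell
# Sketch of the prerequisite setup, using a scratch directory so it can be
# tried without touching the real system; on the actual instance, ROOT
# would be / and the commands would run as root.
ROOT=$(mktemp -d)

mkdir -p "$ROOT/svn/53x-settings"                 # backup directory on the /svn partition
ln -sfn "$ROOT/svn/53x-settings" "$ROOT/images"   # symlink /images -> /svn/53x-settings

ls -ld "$ROOT/images"                             # shows the symlink and its target
```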
Download the FortiSIEM Hyper-V backup script to start migration. Follow these steps:
- Download the file FSM_Backup_5.3_Files_6.1.1_build0118.zip from the support site.
- Copy the file to the 5.3.x or 5.4.0 Hyper-V instance that you are planning to migrate to 6.1.1 (for example, to /svn/53x-settings).
- Unzip the file:
# cd /svn/53x-settings
# unzip FSM_Backup_5.3_Files_6.1.1_build0118.zip
Follow these steps to run the backup script:
- Go to the directory that contains the backup-config file, for example:
# cd /svn/53x-settings/fsm-53x-backup-config
- Run the sh backup script to back up the 5.3.x or 5.4.0 settings that will be migrated later into the new 6.1 OS.
# sh backup
- Shut down the system.
# shutdown -h now
- Download and Uncompress the 6.1.1 Hyper-V Root VHDX
- Edit the 5.3.x or 5.4.0 Instance to Use the New VHDX
- Migrate to FortiSIEM 6.1.1
Download the compressed FortiSIEM Hyper-V root VHDX for migration. Follow these steps:
- Download the file FortiSIEM-HyperV-6.1.1.0118.zip from the support site.
- Copy the file to the Hyper-V host that is currently running the 5.3.x or 5.4.0 instance.
- Use unzip tools to uncompress the .zip file to obtain the FortiSIEM-HyperV-6.1.1.0118.vhdx file. Store it in the same folder that contains your 5.3.x or 5.4.0 disks.
- Open the Hyper-V Manager and select your 5.3.x or 5.4.0 VM.
- Right-click on the VM, then click Settings.
- Navigate to the first hard drive under IDE Controller 0. Click Browse and select the new 6.1 VHDX you just uncompressed. Click Open.
- Navigate to Processor, change 8 vCPUs to 16.
- Navigate to Memory, change 16GB to 64GB. Click Apply.
- Click SCSI Controller, then Hard Drive, and click Add. Similar to Fresh Install steps 12-19, add a new hard drive of size 100GB for the /opt partition. Below is a screen shot of the final screen of Add new hard drive.
- Click OK on the VM settings screen to complete making changes to the VM for migration.
- Connect to the VM Console and Start the VM from Hyper-V Manager.
- The system will start with the FortiSIEM 6.1 OS.
- The system will boot up. When the command prompt window opens, log in with the default login credentials: user:
- You will be required to change the password. Remember this password for future use.
- Find the device name of the original 5.3.x or 5.4.0 SVN volume using fdisk -l and mount it to /mnt. This volume contains the backup of the 5.3.x or 5.4.0 system settings that will be used during migration. Copy the 5.3.x or 5.4.0 settings that were previously backed up, and then unmount /mnt, for example:
# mount /dev/sdb1 /mnt
# mkdir /restore-53x-settings
# cd /restore-53x-settings
# rsync -av /mnt/53x-settings/. .
# ln -sf /restore-53x-settings /images
# umount /mnt
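The mount-and-restore sequence above can be simulated end to end in a scratch directory to see what it does. This is a hedged sketch: the real commands run as root against /mnt, /restore-53x-settings, and /images, and the settings file name below is a hypothetical stand-in for the actual backup contents.

```shell
# Simulate the restore: the mounted volume is replaced by a plain
# directory, and settings.conf is a made-up stand-in for the real backup.
WORK=$(mktemp -d)
mkdir -p "$WORK/mnt/53x-settings"
echo "example-setting=1" > "$WORK/mnt/53x-settings/settings.conf"   # hypothetical file

mkdir -p "$WORK/restore-53x-settings"
cd "$WORK/restore-53x-settings"
# The guide uses `rsync -av /mnt/53x-settings/. .`; cp -a is used here as a
# portable stand-in that copies the same tree.
cp -a "$WORK/mnt/53x-settings/." .

ln -sf "$WORK/restore-53x-settings" "$WORK/images"
```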
- Run the configFSM.sh command to configure the migration via a GUI:
- In the Configure TIMEZONE screen of the GUI select 2 No.
- Select your node type: Supervisor, Worker, or Collector. This step is usually performed on Supervisor. Press Next.
- On the Configure Supervisor screen, select the operation 6 migrate_6_1_1. Press Next.
- Test network connectivity by entering a host name that can be resolved by your DNS Server (entered in the previous step) and responds to ping. The host can either be an internal host or a public domain host like google.com. Press Next.
- Click Run on the confirmation page once you make sure all the values are correct. The options for the configureFSM.py script are described in the table here.
- Wait for the operations to complete, and system to reboot.
- Log in to the system after a few minutes. Wait several more minutes for all processes to start up. You can check the status of the processes with the phstatus command, for example:
- Remove the restored settings directories because you no longer need them, for example:
# rm -rf /restore-53x-settings
# rm -rf /svn/53x-settings
# rm -f /images
This section provides instructions on how to migrate Supervisor, Workers, and Collectors separately in a cluster environment:
- Delete Workers
- Migrate Supervisor
- Install New Worker(s)
- Register Workers
- Set Up Collector-to-Worker Communication
- Working with Pre-6.1.0 Collectors
- Install 6.1.1 Collectors
- Register 6.1.1 Collectors
- Log in to the Supervisor.
- Go to Admin > License > Nodes and delete the Workers one-by-one.
- Go to the Admin > Cloud Health page and make sure that the Workers are not present.
Note that the Collectors will buffer events while the Workers are down.
- Shut down the Workers. SSH to the Workers one-by-one and shut them down.
Follow the steps in Migrate All-in-one Installation to migrate the supervisor node. Note: FortiSIEM 6.1.1 does not support Worker or Collector migration.
Follow the steps in Cluster Installation > Install Workers to install new Workers. You can either keep the same IP address or change the address.
Follow the steps in Cluster Installation > Register Workers to register the newly created 6.1.1 Workers to the 6.1.1 Supervisor. The 6.1.1 FortiSIEM Cluster is now ready.
- Go to Admin > Systems > Settings.
- Add the Workers to the Event Worker or Query Worker as appropriate.
- Click Save.
Pre-6.1.0 Collectors and agents will work with the 6.1.1 Supervisor and Workers. You can install 6.1.1 Collectors at your convenience.
FortiSIEM does not support Collector migration to 6.1.1. You can install new 6.1.1 Collectors and register them to 6.1.1 Supervisor in a specific way so that existing jobs assigned to Collectors and Windows agent associations are not lost. Follow these steps:
- Copy the http hashed password file (/etc/httpd/accounts/passwds) from the old Collector.
- Disconnect the pre-6.1.1 Collector.
- Install the 6.1.1 Collector with the old IP address by following the steps in Cluster Installation > Install Collectors.
- Copy the saved http hashed password file (/etc/httpd/accounts/passwds) from the old Collector to the 6.1.1 Collector.
This step is needed for Agents to work seamlessly with 6.1.1 Collectors. The reason for this step is that when the Agent registers, a password for Agent-to-Collector communication is created and the hashed version is stored in the Collector. During 6.1.1 migration, this password is lost.
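The effect of preserving the hashed-password file can be sketched in a scratch directory. The entry below is a fabricated htpasswd-style line, not real data; the real file is /etc/httpd/accounts/passwds, and between hosts the copy is typically done with a tool such as scp.

```shell
# Simulate carrying the Agent password hash from the old Collector to the
# new one. The passwds entry is a made-up example, not a real hash.
WORK=$(mktemp -d)
mkdir -p "$WORK/old-collector/etc/httpd/accounts" \
         "$WORK/new-collector/etc/httpd/accounts"
printf '%s\n' 'agentuser:$apr1$fakesalt$fakehash' \
  > "$WORK/old-collector/etc/httpd/accounts/passwds"

# On real systems this copy happens between two hosts (e.g. via scp).
cp "$WORK/old-collector/etc/httpd/accounts/passwds" \
   "$WORK/new-collector/etc/httpd/accounts/passwds"
```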
Follow the steps in Cluster Installation > Register Collectors, with the following difference: in the phProvisionCollector command, use the --update option instead of --add. Other than this, use exactly the same parameters that were used to register the pre-6.1.1 Collector. Specifically, use this form of the phProvisionCollector command to register a 6.1.1 Collector and keep the old associations:
# /opt/phoenix/bin/phProvisionCollector --update <user> '<password>' <Super IP or Host> <Organization> <CollectorName>
The password should be enclosed in single quotes to ensure that any non-alphanumeric characters are escaped.
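As a quick illustration of why the single quotes matter, the following sketch (with a made-up password) shows that shell-special characters such as $ and ! pass through unexpanded inside single quotes:

```shell
# A made-up password containing shell-special characters ($, !). Inside
# single quotes the shell passes it through literally, with no variable
# expansion or history expansion.
PASSWORD='pa$$w0rd!X'
printf '%s\n' "$PASSWORD"   # prints pa$$w0rd!X
```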
Re-install new Windows Agents with the old InstallSettings.xml file. Both the migrated and the new agents will work. The new Linux Agent and migrated Linux Agent will also work.