Migrating from FortiSIEM 5.3.x or 5.4.0
Migration limitations: If migrating from 5.3.3 or 5.4.0 to 6.1.1, please be aware that the following features will not be available after migration:
- Pre-compute feature
- Elastic Cloud support
If any of these features are critical to your organization, please wait for a later version where these features are available after migration.
This section describes how to migrate from FortiSIEM 5.3.x or 5.4.0 to FortiSIEM 6.1.1.
Pre-Migration Checklist
To perform the migration, the following prerequisites must be met:
- Release 6.1.1 requires at least ESX 6.5, and ESX 6.7 Update 2 is recommended.
- Ensure that your system can connect to the network. You will be asked to provide a DNS Server and a host that can be resolved by the DNS Server and can respond to a ping. The host can either be an internal host or a public domain host like google.com.
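For example, a quick way to confirm both DNS resolution and reachability from the console is to ping the test host by name (google.com is used here as the test host):
# ping -c 3 google.com
If the name resolves and replies come back, this connectivity prerequisite is met.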
- Make sure you are running 5.3.x or 5.4.0, since migration to 6.1.1 is only supported from these versions. If you are running a version earlier than 5.3.0, upgrade to one of these versions first (5.4.0 is recommended) and then follow the procedures below.
- Take a snapshot of the running FortiSIEM instance.
- Delete the Worker from the Supervisor GUI.
- Stop/shut down the Worker.
- Make sure the root directory (/) has at least 1 GB of available space.
- Right-click the FortiSIEM OVA in VCenter and choose Edit Settings.
- In the VM Hardware tab, click Add New Device > Hard Disk to add a disk with 25GB of space. Repeat this process to add disks with 50GB and 100GB of space. There should be a total of 7 disks: 4 existing disks using local storage and the 3 disks you just added.
Click OK when you are finished.
You can find detailed information about installing FortiSIEM and configuring disks in Fresh Installation.
- Review the list of Datastores and click Apply.
- In VCenter, right click the FortiSIEM VM and select Power On.
- In the VCenter Summary tab, click Launch Web Console.
- Log in to the console as user root, with password ProspectHills.
- In the console, run fdisk -l, for example:
# fdisk -l
Note the list of the partition tables, the disk names, their approximate sizes and the UUID value. You will need this information for a later step.
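Note that fdisk -l itself does not print UUIDs. To list the UUIDs of all block devices (you will need a UUID later if the disk name changes), you can use the standard blkid tool:
# blkid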
- Mount the ~50GB disk to the /images directory. In the console, enter these commands and options:
  - Enter # fdisk /dev/<your_50GB_disk>. Press Return.
  - Enter n to add a new partition. Press Return.
  - Enter p to choose primary partition. Press Return.
  - Enter 1 to choose the partition number. Press Return.
  - Press Return to accept the default first sector.
  - Press Return to accept the default last sector.
  - Enter w to write the table to disk and exit. Press Return.
- Enter the mkfs.ext4 /dev/sdf1 command (where sdf1 is the 50GB disk) to make a file system.
- Enter the mkdir -p /images command to create an /images directory.
- Enter the mount /dev/sdf1 /images command to mount the 50GB disk to the /images directory. Or, using the UUID if the disk name changed, for example:
# blkid /dev/sdf1
/dev/sdf1: UUID="d4a5b82f-6e73-456b-ab08-d6e6d845d1aa" TYPE="ext4"
# mount -U d4a5b82f-6e73-456b-ab08-d6e6d845d1aa /images
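Taken together, and assuming the new 50GB disk appears as /dev/sdf (verify the device name with fdisk -l first), the sequence looks like this:
# fdisk /dev/sdf     (interactive: n, p, 1, Return, Return, w)
# mkfs.ext4 /dev/sdf1
# mkdir -p /images
# mount /dev/sdf1 /images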
- Enter the df -h command to get the file system disk space usage, for example:
# df -h
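If the mount succeeded, the output should include a line for the new file system, along these lines (sizes are illustrative):
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdf1        49G   53M   47G   1% /images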
- Copy the FSM_Full_All_RAW_VM-6.1.1_build0118.zip file to the /images directory. Use unzip to extract the 6.1.1 FortiSIEM image:
# unzip FSM_Full_All_RAW_VM-6.1.1_build0118.zip
Note: The image size is about 25GB after extracting.
- Create a soft link to the image folder, for example:
# ln -sf /images/FortiSIEM-RAW-VM-6.1.1.0118.img /images/latest
- Enter the ll command to ensure the latest link is defined, for example:
# ll
Migrate All-in-one Installation
- Download the Bootloader
- Prepare the Bootloader
- Load the FortiSIEM 6.1.1 Image
- Prepare the FortiSIEM VM for 6.1.1
- Migrate to FortiSIEM 6.1.1
- Finishing Up
Download the Bootloader
Install and configure the FortiSIEM bootloader to start migration. Follow these steps:
- Download the bootloader FSM_Bootloader_6.1.1_build0118.zip from the support site and copy it to the /images directory.
- Unzip the file, for example:
# unzip FSM_Bootloader_6.1.1_build0118.zip
Prepare the Bootloader
Follow these steps to run the prepare_bootloader script:
- Go to the bootloader directory, for example:
# cd /images/FSM_Bootloader_6.1.1_build0118
- Run the prepare_bootloader script to install and configure the bootloader. This script installs, configures, and reboots the system. The script may take a few minutes to complete.
# sh prepare_bootloader
- The script will open the FortiSIEM bootloader shell.
Note: you might have to reboot the system manually if auto-reboot does not work.
- In the FortiSIEM bootloader shell, choose FortiSIEM Boot Loader. Press Return.
Load the FortiSIEM 6.1.1 Image
Follow these steps to load the FortiSIEM image:
- Log in to the bootloader shell as user root with password ProspectHills.
- Create and mount the /images directory:
  - Create a /images directory if it is not already present, for example:
# mkdir -p /images
  - Mount sdf1 (the 50GB disk) to the /images directory, for example:
# mount /dev/sdf1 /images
Or, using the UUID if the disk name changed:
# mount -U d4a5b82f-6e73-456b-ab08-d6e6d845d1aa /images
  - Change to the /images directory, for example:
# cd /images
  - Run the ll command to verify that the image files are present, for example:
# ll
- Run the load_image script to replace the old image with the new image:
  - Change to the root directory and check the contents, for example:
# cd /
# ll
  - Run the load_image script, for example:
# sh load_image
When the script completes, press Return.
- Press Return again to end the load_image script.
- Run the fdisk -l command to check that the disks have been configured, for example:
# fdisk -l
- In VCenter, power off the VM after load_image completes.
Prepare the FortiSIEM VM for 6.1.1
With the machine powered off, follow these steps from the ESXi console to prepare the FortiSIEM VM.
- In VCenter, right-click the FortiSIEM VM and select Compatibility > Upgrade VM Compatibility.
- In the VM Compatibility Upgrade screen, click Yes.
- In the Configure VM Compatibility screen, select ESXi 6.7 Update 2 and later from the Compatible with: drop-down list. Click OK.
- Right-click the FortiSIEM VM in VCenter and choose Edit Settings.
- In the Edit Settings dialog box, click the VM Options tab.
- In Guest OS, select Linux from the drop-down list.
- In Guest OS Version, select CentOS 8 (64-bit) from the drop-down list.
- Click OK.
- Open the Virtual Hardware tab.
- Open the section for the 25GB disk.
- Note the SCSI device number of the 25GB disk, for example, SCSI(0:4). You will need this information for a later step.
- Click OK.
- In the VM Options tab, open the Boot Options section.
- In Force BIOS setup, select During the next boot, force entry into the BIOS setup screen.
- Click OK.
- In VCenter, right-click the FortiSIEM VM and select Power > Power On.
- In the Summary tab for the VM, click the Launch Web Console link.
The Phoenix Setup Utility will open.
- In the Phoenix Setup Utility, use the arrow keys to go to the Boot tab.
Identify your SCSI hard drive (in this case, VMware Virtual SCSI Hard Drive (0:4)).
- Select the new disk (in this case, VMware Virtual SCSI Hard Drive (0:4)) and use the + key to move it to the top of the list of virtual hard drives.
- Select Save and Exit (F10) to save your changes and exit the Phoenix Setup Utility.
- The VM will restart automatically and present a login screen.
Migrate to FortiSIEM 6.1.1
Follow these steps to complete the migration process:
- Log in to the bootloader shell as user root with password ProspectHills. You will immediately be asked to change your password.
. You will immediately be asked to change your password. - Create and mount the
/images
directory:- Change directory to
root
, for example:# cd /
- Create the
/images
directory, for example:# mkdir -p /images
- Mount the
sdf1
(the 50GB disk) to/images
, for example:# mount /dev/sdf1 /images
Or using the UUID if the disk name changed:
# mount -U d4a5b82f-6e73-456b-ab08-d6e6d845d1aa /images
- Change directory to
- Run the configFSM.sh command to configure the migration via a GUI, for example:
# configFSM.sh
- In the first screen of the GUI, select 1 Yes to set a timezone. Press Next.
- Select a region for the timezone. In this example, US is selected. Press Next.
- Select a timezone in the selected region. In this example, Pacific is selected. Press Next.
- Select a target to configure. In this example, the Supervisor is selected. Press Next.
- Select option 6, migrate_6_1_1, as the Operation. Press Next.
- Test network connectivity by entering a host name that can be resolved by your DNS Server (entered in the previous step) and responds to ping. The host can either be an internal host or a public domain host like google.com. Press Next.
- Press Run to complete the migration.
- The script will take several minutes to run. When it is finished, migration is complete.
- To ensure phMonitor is running, execute the phstatus command, for example:
# phstatus
The options for the configureFSM.py script are described in the table here.
Finishing Up
After successfully migrating to 6.1.1, two unmounted disks will be present in the Supervisor node.
- SDA: 80 GB: previous version root partition (unmounted).
- SDE: 50 GB: installation images (unmounted).
These disks are kept so that you can recover the VM from a disaster or an upgrade/migration failure. If everything is up and running after the upgrade or migration, you can remove them from the VM.
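Before removing them, you can confirm from the console that neither disk is mounted; lsblk is available on CentOS 8 and shows an empty mount point for unmounted disks:
# lsblk /dev/sda /dev/sde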
Migrate Cluster Installation
This section provides instructions on how to migrate the Supervisor, Workers, and Collectors separately in a cluster environment:
- Delete Workers
- Migrate Supervisor
- Install New Worker(s)
- Register Workers
- Set Up Collector-to-Worker Communication
- Working with Pre-6.1.0 Collectors
- Install 6.1.1 Collectors
- Register 6.1.1 Collectors
Delete Workers
- Log in to the Supervisor.
- Go to Admin > License > Nodes and delete the Workers one-by-one.
- Go to the Admin > Cloud Health page and make sure that the Workers are not present.
Note that the Collectors will buffer events while the Workers are down.
- Shut down the Workers.
SSH to the Workers one-by-one and shut them down.
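For example, from a host that can reach the Workers (the Worker IP is a placeholder):
# ssh root@<worker-ip> shutdown -h now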
Migrate Supervisor
Follow the steps in Migrate All-in-one Installation to migrate the supervisor node. Note: FortiSIEM 6.1.1 does not support Worker or Collector migration.
Install New Worker(s)
Follow the steps in Cluster Installation > Install Workers to install new Workers. You can either keep the same IP address or change the address.
Register Workers
Follow the steps in Cluster Installation > Register Workers to register the newly created 6.1.1 Workers to the 6.1.1 Supervisor. The 6.1.1 FortiSIEM Cluster is now ready.
Set Up Collector-to-Worker Communication
- Go to Admin > Systems > Settings.
- Add the Workers to the Event Worker or Query Worker as appropriate.
- Click Save.
Working with Pre-6.1.0 Collectors
Pre-6.1.0 Collectors and agents will work with 6.1.1 Supervisor and Workers. You can install 6.1.1 collectors at your convenience.
Install 6.1.1 Collectors
FortiSIEM does not support Collector migration to 6.1.1. You can install new 6.1.1 Collectors and register them to 6.1.1 Supervisor in a specific way so that existing jobs assigned to Collectors and Windows agent associations are not lost. Follow these steps:
- Copy the http hashed password file (/etc/httpd/accounts/passwds) from the old Collector.
- Disconnect the pre-6.1.1 Collector.
- Install the 6.1.1 Collector with the old IP address by following the steps in Cluster Installation > Install Collectors.
- Copy the saved http hashed password file (/etc/httpd/accounts/passwds) from the old Collector to the 6.1.1 Collector. This step is needed for Agents to work seamlessly with 6.1.1 Collectors. When an Agent registers, a password for Agent-to-Collector communication is created and its hashed version is stored on the Collector; this password is lost during 6.1.1 migration.
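One way to carry the file over is scp; the IP addresses and the /tmp staging path below are placeholders, not values from your deployment:
# scp root@<old-collector-ip>:/etc/httpd/accounts/passwds /tmp/passwds
# scp /tmp/passwds root@<new-collector-ip>:/etc/httpd/accounts/passwds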
Register 6.1.1 Collectors
Follow the steps in Cluster Installation > Register Collectors, with the following difference: in the phProvisionCollector command, use the --update option instead of --add. Other than this, use exactly the same parameters that were used to register the pre-6.1.1 Collector. Specifically, use this form of the phProvisionCollector command to register a 6.1.1 Collector and keep the old associations:
# /opt/phoenix/bin/phProvisionCollector --update <user> '<password>' <Super IP or Host> <Organization> <CollectorName>
The password should be enclosed in single quotes to ensure that any non-alphanumeric characters are escaped.
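For example, with purely hypothetical values (admin, Admin#123, 10.10.1.5, ORG1, and collector-1 are placeholders for the parameters used when the pre-6.1.1 Collector was registered):
# /opt/phoenix/bin/phProvisionCollector --update admin 'Admin#123' 10.10.1.5 ORG1 collector-1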
Re-install new Windows Agents with the old InstallSettings.xml file. Both the migrated and the new agents will work. The new Linux Agent and migrated Linux Agent will also work.