Migrating from FortiSIEM 5.3.x or 5.4.0

Migration limitations: If migrating from 5.3.3 or 5.4.0 to 6.1.1, please be aware that the following features will not be available after migration.

  • Pre-compute feature

  • Elastic Cloud support

If any of these features are critical to your organization, then please wait for a later version where these features are available after migration.

This section describes how to migrate from FortiSIEM 5.3.x or 5.4.0 to FortiSIEM 6.1.1. FortiSIEM performs migration in-place. The migration process backs up some important information from the original 5.3.x or 5.4.0 root disk, and then changes the root disk to boot up from a new 6.1.1 root disk. There is no need to copy disks. The instance identity remains the same.

Pre-Migration Checklist

To perform the migration, the following prerequisites must be met:

  • Ensure that your system can connect to the network. You will be asked to provide a DNS Server and a host that can be resolved by the DNS Server and responds to ping. The host can either be an internal host or a public domain host like google.com.
  • Delete the Worker from the Super GUI.
  • Stop/Shutdown the Worker.
  • Note the /svn partition by running the df -h command. This partition holds the /svn/53x-settings backup directory, and you will need to mount it in a later step.
  • Create a /svn/53x-settings directory and symlink /images to it. The /svn partition should have enough space to hold /opt/phoenix from your current system. Typically, 10 GB is enough. See the following example:
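
    A minimal sketch of this step, assuming /svn is already mounted (device names and free space will differ on your system):

    # df -h /svn

    # mkdir -p /svn/53x-settings

    # ln -sf /svn/53x-settings /images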

Migrate All-in-one Installation

Download the Backup Script

Download the FortiSIEM Azure backup script to start migration. Follow these steps:

  1. Download the FSM_Backup_5.3_Files_6.1.1_Build0118.zip file from the support site and copy it to the 5.3.x or 5.4.0 Azure instance that you are planning to migrate to 6.1.1 (for example, to /svn/53x-settings). One way to copy the file is shown after these steps.
  2. Unzip the .zip file, for example:

    # unzip FSM_Backup_5.3_Files_6.1.1_Build0118.zip
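
    If you downloaded the archive to your workstation first, one way to copy it to the instance is scp; the host name below is a placeholder:

    # scp FSM_Backup_5.3_Files_6.1.1_Build0118.zip root@<instance-ip>:/svn/53x-settings/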

Run the Backup Script and Shutdown System

Follow these steps to run the backup script and shut down the system:

  1. Go to the directory where you downloaded the backup script, for example:

    # cd /svn/53x-settings/FSM_Backup_5.3_Files_6.1.1_Build0118

  2. Run the backup script with the sh backup command to back up the 5.3.x or 5.4.0 settings that will be migrated later into the new 6.1.1 OS. For example:

    # sh backup

  3. Run the shutdown command to shut down the FortiSIEM instance, for example:

    # shutdown -h now

Create a New 6.1.1 Root Disk

Follow these steps to create a new 6.1.1 root disk from the Azure portal.

  1. Log in to the Azure portal, select the Home > Disks service, and then click Add.

  2. Fill in the disk details, choose Storage blob as the Source type, and find the 6.1.1 OS VHD (refer to the earlier section on how to upload this VHD).
    Note: The root disk must be 25 GB, and its size must not be changed.
  3. Click Review + Create after filling in the rest of the details if necessary.

  4. Verify that all of the details are correct, then click Create.

  5. Wait for the deployment to complete. Click Go to resource and note the name of the resource.
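
    If you prefer the Azure CLI to the portal, a rough equivalent of this disk creation looks like the following sketch; the resource group, disk name, and blob URL are all placeholders for your own deployment:

    # az disk create --resource-group <resource-group> --name fsm-611-root --source https://<storage-account>.blob.core.windows.net/vhds/FortiSIEM-6.1.1.vhd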

Swap 6.1.1 OS Disk on Your 5.3.x or 5.4.0 Instance

Follow these steps to swap the OS disk from the 5.3.x or 5.4.0 disk to the 6.1.1 disk on the instance that you are migrating.

  1. Navigate to the 5.3.x or 5.4.0 VM and open Disks in the sidebar. Click Swap OS disk.

  2. Choose the 6.1.1 root disk you just created, fill in the confirmation box and click OK.

  3. Wait for the OS disk swap complete notification to appear.

  4. Navigate to the VM Overview, then click Disks in the sidebar. Click Create and attach a new disk.

  5. Add a 100 GiB opt disk and click OK.

  6. On the Disks page, select Read-only under Host caching, and click Save.
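
    As with disk creation, the OS disk swap and data-disk attach can also be scripted with the Azure CLI; a rough sketch, with all names as placeholders (the VM must be stopped, as in the steps above):

    # az vm update --resource-group <resource-group> --name <vm-name> --os-disk fsm-611-root

    # az vm disk attach --resource-group <resource-group> --vm-name <vm-name> --name fsm-opt-disk --new --size-gb 100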

Boot up the 5.3.x or 5.4.0 Instance and Migrate to 6.1.1

Follow these steps to complete the migration process:

  1. Navigate to the VM Overview, click Refresh, and Start the virtual machine.

  2. At the end of booting, log in with the default credentials: User root and Password ProspectHills. You must use root to log in to the system after booting up the 6.1.1 OS disk; the user name configured in 5.3.x or 5.4.0 cannot be used.
  3. You will be required to change the password. Remember this password for future use.
  4. Mount the /svn partition noted earlier to /mnt. It contains the backup of the 5.3.x or 5.4.0 system settings that will be used during migration. Copy the previously backed-up 5.3.x or 5.4.0 settings, then unmount /mnt. For example:

    # mount /dev/sdb1 /mnt

    # mkdir /restore-53x-settings

    # cd /restore-53x-settings

    # rsync -av /mnt/53x-settings/ .

    # ln -sf /restore-53x-settings /images

    # umount /mnt

  5. Run the configFSM.sh script to open the configuration GUI:
    1. Select 2 No in the Configure TIMEZONE dialog and then click Next.

    2. In Config Target, select your node type: Supervisor, Worker, or Collector. This step is usually performed on the Supervisor (1 Supervisor). Click Next.

    3. On the Configure Supervisor screen, select the 6 migrate_6_1_1 operation and then click Next.

    4. Test network connectivity by entering a host name that can be resolved by your DNS Server (entered in the previous step) and responds to ping. The host can either be an internal host or a public domain host like google.com. For the migration to complete, the system also needs HTTPS connectivity to the FortiSIEM OS update servers: os-pkgs-cdn.fortisiem.fortinet.com and os-pkgs-c8.fortisiem.fortinet.com. A quick way to verify connectivity from the shell is shown after these steps. Click Next.

    5. Select Run on the confirmation page once you have verified that all the values are correct. The options are described in the table here.

    6. Wait for the operations to complete and the system to reboot.
    7. Wait for about 2 minutes before logging into the system. Wait another 5-10 minutes for all of the processes to start up. Then, execute the phstatus command to see the status of FortiSIEM processes.

    8. Remove the restored settings directories because you no longer need them, for example:

      # rm -rf /restore-53x-settings

      # rm -rf /svn/53x-settings

      # rm -f /images
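
    As a quick sanity check after the reboot, you can confirm process health and update-server reachability from the shell; the host name here is just the example used earlier, and the server is one of the update servers named in step 4:

    # phstatus

    # ping -c 3 google.com

    # curl -sI https://os-pkgs-cdn.fortisiem.fortinet.com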

Migrate Cluster Installation

This section provides instructions on how to migrate the Supervisor, Workers, and Collectors separately in a cluster environment.

Delete Workers

  1. Log in to the Supervisor.
  2. Go to Admin > License > Nodes and delete the Workers one-by-one.
  3. Go to the Admin > Cloud Health page and make sure that the Workers are not present.

    Note that the Collectors will buffer events while the Workers are down.

  4. Shut down the Workers.

    SSH to the Workers one-by-one and shut them down.

Migrate Supervisor

Follow the steps in Migrate All-in-one Installation to migrate the Supervisor node. Note: FortiSIEM 6.1 does not support Worker or Collector migration.

Install New Worker(s)

Follow the steps in Cluster Installation > Install Workers to install new Workers. You can either keep the same IP address or change the address.

Register Workers

Follow the steps in Cluster Installation > Register Workers to register the newly created 6.1.1 Workers to the 6.1.1 Supervisor. The 6.1.1 FortiSIEM Cluster is now ready.

Set Up Collector-to-Worker Communication

  1. Go to Admin > Systems > Settings.
  2. Add the Workers to the Event Worker or Query Worker as appropriate.
  3. Click Save.

Working with Pre-6.1.0 Collectors

Pre-6.1.0 Collectors and agents will work with the 6.1.1 Supervisor and Workers. You can install 6.1.1 Collectors at your convenience.

Install 6.1.1 Collectors

FortiSIEM does not support Collector migration to 6.1.1. You can install new 6.1.1 Collectors and register them to the 6.1.1 Supervisor in a specific way so that existing jobs assigned to Collectors and Windows agent associations are not lost. Follow these steps:

  1. Copy the http hashed password file (/etc/httpd/accounts/passwds) from the old Collector.
  2. Disconnect the pre-6.1.1 Collector.
  3. Install the 6.1.1 Collector with the old IP address by following the steps in Cluster Installation > Install Collectors.
  4. Copy the saved http hashed password file (/etc/httpd/accounts/passwds) to the 6.1.1 Collector, as shown in the sketch after these steps. This step is needed for Agents to work seamlessly with 6.1.1 Collectors. When an Agent registers, a password for Agent-to-Collector communication is created and its hashed version is stored on the Collector; this password is lost during 6.1.1 migration.
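
    A minimal sketch of steps 1 and 4, staging the file on your workstation; both host names are placeholders:

    # scp root@<old-collector>:/etc/httpd/accounts/passwds ./passwds.bak

    # scp ./passwds.bak root@<new-collector>:/etc/httpd/accounts/passwds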

Register 6.1.1 Collectors

Follow the steps in Cluster Installation > Register Collectors, with the following difference: in the phProvisionCollector command, use the --update option instead of --add. Other than this, use exactly the same parameters that were used to register the pre-6.1.1 Collector. Specifically, use this form of the phProvisionCollector command to register a 6.1.1 Collector and keep the old associations:

# /opt/phoenix/bin/phProvisionCollector --update <user> '<password>' <Super IP or Host> <Organization> <CollectorName>

The password should be enclosed in single quotes to ensure that any non-alphanumeric characters are escaped.
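
For instance, a hypothetical invocation might look like the following; all values are placeholders for your own deployment:

# /opt/phoenix/bin/phProvisionCollector --update admin 'S3cret!pass' 10.0.1.10 Org1 collector-nyc-01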

Re-install new Windows Agents with the old InstallSettings.xml file. Both migrated and new Windows Agents will work, as will both new and migrated Linux Agents.
