
Disk Encryption for FortiSIEM Virtual Machine

This document provides instructions for encrypting disks on a FortiSIEM Virtual Machine.

Key Notes:

  1. Do not encrypt the root disk: providing a passphrase at every boot is an operational challenge. The root disk contains binaries and some internal system and application logs, not data.

  2. Disk encryption key management is an operational challenge. For strong security, the encryption keys must be protected with a passphrase, which requires a human to type the passphrase and mount the “opened” disks. The less secure alternative is to use keys that are not protected by a passphrase and are stored in a file on the root partition.

Follow these general steps:

Step 0: Initialize FortiSIEM Node

Step 1: Download the Encryption Package

Step 2: Backup Data Disks

Step 3: Encrypt each Data Disk

Step 4: Restore Backed Up Data Disks

Step 5: Reboot FortiSIEM Node

There are some differences depending on:

  • Encrypting Supervisor, Worker or Collector

  • Fresh FortiSIEM install or an existing install running for some time

  • Event storage is local (e.g. EventDB on local storage or ClickHouse) or external (e.g. EventDB on NFS, Elasticsearch).

Incorporate the following differences in the steps below:

  • If you are encrypting a Supervisor:

    • Back up, encrypt, and restore the SVN, CMDB, and OPT disks.

    • If event storage is EventDB on local disk, also back up, encrypt, and restore the /data disk.

    • If event storage is ClickHouse, also back up, encrypt, and restore the ClickHouse data disks. If ClickHouse has a warm tier or multiple disks, apply the same steps to each of those disks.

  • If you are encrypting a Worker:

    • Back up, encrypt, and restore the OPT disk.

    • If event storage is ClickHouse, also back up, encrypt, and restore the ClickHouse data disks. If ClickHouse has a warm tier or multiple disks, apply the same steps to each of those disks.

  • If you are encrypting a Collector, back up, encrypt, and restore the OPT disk only.

  • On a fresh install, the backup can be stored in /tmp. For an existing installation, /tmp may not have enough capacity; in that case, use an external location with sufficient storage for the backup and restore.
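
When /tmp is too small, the backup can also be streamed straight to a remote host over SSH so that nothing is staged locally. A minimal sketch; the helper name, the host backup.example.com, and the /backup directory are placeholders, not part of the FortiSIEM procedure:

```shell
# Build (and print) the tar-over-ssh pipeline for one source directory so it
# can be reviewed before running; nothing is written to /tmp.
remote_backup_cmd() {
  local src="$1" host="$2" destdir="$3"
  printf 'tar czf - %s | ssh %s "cat > %s/%s.tgz"\n' \
    "$src" "$host" "$destdir" "$(basename "$src")"
}

remote_backup_cmd /opt root@backup.example.com /backup
# tar czf - /opt | ssh root@backup.example.com "cat > /backup/opt.tgz"
```

Run the printed pipeline once it looks right; a restore reverses it with ssh ... "cat /backup/opt.tgz" | tar xzf -.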

Step 0: Initialize FortiSIEM Node

This involves the following steps:

  1. Run configFSM.sh as usual, and complete the system install.

  2. Upload a license to the system (for Supervisor).

  3. Configure FortiSIEM with appropriate storage.

Step 1: Download the Encryption Package

Step 1a: Download and Install

The cryptsetup package is not included in FortiSIEM. Take the following steps to install this package. This assumes network connectivity to FortiSIEM OS update servers listed in the user guide.

  • To install, run the following command:

    dnf install cryptsetup -y

  • To verify that the package is installed, run the following command:

    dnf list installed cryptsetup

Step 1b: Shut Down All Services

This step is needed so that disks have no read/write activity and backups can be taken.

# systemctl stop crond

# systemctl stop phxctl.service

# systemctl stop phFortiInsightAI.service

# systemctl stop svnlite.service

# systemctl stop rsyslog

# phxctl stop

# killall -9 node

# killall -9 redis-server

For ClickHouse deployments:

# systemctl stop phClickHouseMonitor.service

# systemctl stop ClickHouseKeeper.service

# systemctl stop clickhouse-server

Step 1c: Keep Record of Key Directory Permissions and Ownerships

For the directories to be encrypted (e.g. /svn, /cmdb, /opt on the Supervisor), it is important to keep a record of ownership and permissions. If any of these change, FortiSIEM may not work correctly, and these records will help you take corrective action.

# ls -Rla <path> > /tmp/<path>.out

For root path:

# ls -la / | grep "opt\|svn\|cmdb" > /tmp/rootpath.out


Example:

# ls -Rla /opt > /tmp/opt.out

# ls -Rla /svn > /tmp/svn.out

# ls -Rla /cmdb > /tmp/cmdb.out

Example output:

[root@sup700 tmp]# ls -lh *.out

-rw-r--r-- 1 root root 163K Feb 27 11:54 cmdb.out

-rw-r--r-- 1 root root 604K Feb 27 11:54 data-clickhouse-hot-1.out

-rw-r--r-- 1 root root 8.5M Feb 27 11:54 opt.out

-rw-r--r-- 1 root root 268 Feb 27 11:54 svn.out

Step 2: Backup Data Disks

Back up the appropriate directories, since the encryption steps require these disks to be wiped before being encrypted.

  • If you are encrypting a Supervisor:

    • Back up the SVN, CMDB, and OPT disks.

    • If event storage is EventDB on local disk, also back up the /data disk.

    • If event storage is ClickHouse, also back up the ClickHouse data disks.

  • If you are encrypting a Worker:

    • Back up the OPT disk.

    • If event storage is ClickHouse, also back up the ClickHouse data disks. If ClickHouse has a warm tier or multiple disks, apply the same steps to each of those disks.

  • If you are encrypting a Collector, back up the OPT disk only.

  • On a fresh install, the backup can be stored in /tmp. For an existing installation, /tmp may not have enough capacity; in that case, use an external location with sufficient storage for the backup and restore.

# tar cvzf /tmp/svn.tgz /svn

# tar cvzf /tmp/cmdb.tgz /cmdb

# tar cvzf /tmp/opt.tgz /opt

For local EventDB:

# tar cvzf /tmp/data.tgz /data


For ClickHouse:

# tar cvzf /tmp/data-clickhouse-hot-1.tgz /data-clickhouse-hot-1

If ClickHouse has multiple disks:

# tar cvzf /tmp/data-clickhouse-hot-2.tgz /data-clickhouse-hot-2

# tar cvzf /tmp/data-clickhouse-warm-1.tgz /data-clickhouse-warm-1

# tar cvzf /tmp/data-clickhouse-warm-2.tgz /data-clickhouse-warm-2


Example output:

[root@sup700 tmp]# ls -lh *.tgz

-rw-r--r-- 1 root root 87M Feb 27 12:20 cmdb.tgz

-rw-r--r-- 1 root root 740K Feb 27 12:20 data-clickhouse-hot-1.tgz

-rw-r--r-- 1 root root 3.1G Feb 27 12:20 opt.tgz

-rw-r--r-- 1 root root 148 Feb 27 12:20 svn.tgz

Step 3: Encrypt each Data Disk

Repeat the instructions below (3a – 3j) to encrypt the appropriate disks:

  • If you are encrypting a Supervisor:

    • Encrypt the SVN, CMDB, and OPT disks.

    • If event storage is EventDB on local disk, also encrypt the /data disk.

    • If event storage is ClickHouse, also encrypt the ClickHouse data disks.

  • If you are encrypting a Worker:

    • Encrypt the OPT disk.

    • If event storage is ClickHouse, also encrypt the ClickHouse data disks.

  • If you are encrypting a Collector, encrypt the OPT disk only.

Step 3a: Get Linux Partition Information for Swap

# lsblk

# fdisk -l /dev/sdd

Step 3b: Unmount the Filesystem and Turn off Swap

Use umount to unmount each disk. For /opt, the swap partition needs to be turned off.

# umount <disk name>

# swapoff -a

Examples:

# umount /svn

# umount /cmdb

# umount /opt

# umount /data

For ClickHouse:

# umount /data-clickhouse-hot-1

If ClickHouse has multiple disks:

# umount /data-clickhouse-hot-2

# umount /data-clickhouse-warm-1

# umount /data-clickhouse-warm-2
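
Before wiping, it is worth confirming that none of the targets is still mounted; findmnt exits non-zero when a path is not a mount point. A small sketch (adjust the list of mount points to the disks being encrypted on this node):

```shell
# Warn if any target path is still a mount point before wipefs is run.
for m in /svn /cmdb /opt /data-clickhouse-hot-1; do
  if findmnt "$m" >/dev/null 2>&1; then
    echo "$m is still mounted -- unmount it before wiping"
  else
    echo "$m is not mounted"
  fi
done
```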

Step 3c: Wipe the Disks of Previous Filesystem and Partition Information

Use wipefs to clear the existing filesystem and partition information from each disk.

# wipefs --all <disk partition name>

# wipefs --all <disk name>

Examples:

For /svn:

# wipefs --all /dev/sdb1

# wipefs --all /dev/sdb

For /cmdb:

# wipefs --all /dev/sdc1

# wipefs --all /dev/sdc

For /opt:

# wipefs --all /dev/sdd1

# wipefs --all /dev/sdd2

# wipefs --all /dev/sdd

For local EventDB (/data):

# wipefs --all /dev/sde

For ClickHouse:

# wipefs --all /dev/sde

For ClickHouse with additional disks (multiple hot tier or warm tier disks):

# wipefs --all /dev/sdf

# wipefs --all /dev/sdg

# wipefs --all /dev/sdh

# wipefs --all /dev/sdi

Step 3d: Create Swap and Opt Partitions

This step is only needed for /opt.

# parted -a optimal --script /dev/sdd mklabel gpt mkpart primary linux-swap 1G 25G && sleep 5

# parted -a optimal --script /dev/sdd mkpart primary xfs 26G 100G && sleep 5

Step 3e: Format LUKS Disk and Add one Key

Run the following command to LUKS-format each disk and add a passphrase-protected key to default keyslot 0.

# cryptsetup luksFormat <disk name>

Examples:

# cryptsetup luksFormat /dev/sdb

# cryptsetup luksFormat /dev/sdc

# cryptsetup luksFormat /dev/sdd1

# cryptsetup luksFormat /dev/sdd2

For local disk EventDB or ClickHouse:

# cryptsetup luksFormat /dev/sde

For ClickHouse with additional disks (multiple hot tier or warm tier disks):

# cryptsetup luksFormat /dev/sdf

# cryptsetup luksFormat /dev/sdg

# cryptsetup luksFormat /dev/sdh

# cryptsetup luksFormat /dev/sdi

WARNING!
========

This will overwrite data on /dev/sdb irrevocably.
Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for /dev/sdb: Verify passphrase:

LUKS2 provides up to 32 keyslots, which can be used to give multiple administrators the ability to unlock the disks and to rotate keys periodically.
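
Periodic rotation can be done safely by adding the replacement key to a free slot before destroying the old one, so the disk always has at least one valid key. A sketch for /dev/sdb; the .new suffix and the slot number are examples, so confirm the actual slot with luksDump before killing it:

```shell
# Generate a replacement key file, enroll it, then retire the old slot.
dd if=/dev/random of=/etc/encsvnkey.new bs=512 count=1
cryptsetup luksAddKey /dev/sdb /etc/encsvnkey.new --key-file /etc/encsvnkey
cryptsetup luksDump /dev/sdb            # confirm which slot holds the old key
cryptsetup luksKillSlot /dev/sdb 1 --key-file /etc/encsvnkey.new
mv /etc/encsvnkey.new /etc/encsvnkey    # /etc/crypttab keeps pointing here
```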

The following command can be used to dump information about different slots.

# cryptsetup luksDump /dev/<disk name>

Examples:

# cryptsetup luksDump /dev/sdb


# cryptsetup luksDump /dev/sdc

# cryptsetup luksDump /dev/sdd1

# cryptsetup luksDump /dev/sdd2

For local EventDB or ClickHouse:

# cryptsetup luksDump /dev/sde

For ClickHouse with multiple disks:

# cryptsetup luksDump /dev/sdf

# cryptsetup luksDump /dev/sdg

# cryptsetup luksDump /dev/sdh

# cryptsetup luksDump /dev/sdi

Step 3f: Add a New Key to LUKS Disk

Create a new random key for keyslot 1 by running the following command.

# dd if=/dev/random of=/etc/enc<mount_name>key bs=512 count=1

Examples:

For opt:

# dd if=/dev/random of=/etc/encoptkey bs=512 count=1

For swap:

# dd if=/dev/random of=/etc/encswapkey bs=512 count=1

For cmdb:

# dd if=/dev/random of=/etc/enccmdbkey bs=512 count=1

For svn:

# dd if=/dev/random of=/etc/encsvnkey bs=512 count=1

For local EventDB or ClickHouse:

# dd if=/dev/random of=/etc/encdatakey bs=512 count=1

For multiple ClickHouse disks if available:

# dd if=/dev/random of=/etc/encdatahot1key bs=512 count=1

# dd if=/dev/random of=/etc/encdatahot2key bs=512 count=1

# dd if=/dev/random of=/etc/encdatawarm1key bs=512 count=1

# dd if=/dev/random of=/etc/encdatawarm2key bs=512 count=1
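
Because these key files unlock the disks without a passphrase, it is worth restricting them to root read-only. This chmod is a hardening suggestion, not a step from the original procedure:

```shell
# Anyone who can read a key file can decrypt the corresponding disk.
chmod 0400 /etc/enc*key
```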

Use the cryptsetup command with the luksAddKey option to add the new key file to the target disk. After this command, two keyslots are enabled: keyslot 0 holds the passphrase-protected key from Step 3e, and keyslot 1 holds the key file created above.

# cryptsetup luksAddKey /dev/<DISK_NAME> /etc/enc<mount_name>key

Examples:

# cryptsetup luksAddKey /dev/sdb /etc/encsvnkey

# cryptsetup luksAddKey /dev/sdc /etc/enccmdbkey

# cryptsetup luksAddKey /dev/sdd1 /etc/encswapkey

# cryptsetup luksAddKey /dev/sdd2 /etc/encoptkey

For local disk EventDB or ClickHouse:

# cryptsetup luksAddKey /dev/sde /etc/encdatakey

For ClickHouse that has multiple disks:

# cryptsetup luksAddKey /dev/sdf /etc/encdatahot1key

# cryptsetup luksAddKey /dev/sdg /etc/encdatahot2key

# cryptsetup luksAddKey /dev/sdh /etc/encdatawarm1key

# cryptsetup luksAddKey /dev/sdi /etc/encdatawarm2key

The following command can be used to dump the target volume information and keys.

# cryptsetup luksDump /dev/<disk_name>

Examples:

# cryptsetup luksDump /dev/sdb

# cryptsetup luksDump /dev/sdc

# cryptsetup luksDump /dev/sdd1

# cryptsetup luksDump /dev/sdd2

For local disk EventDB or ClickHouse:

# cryptsetup luksDump /dev/sde

For ClickHouse that has multiple disks:

# cryptsetup luksDump /dev/sdf

# cryptsetup luksDump /dev/sdg

# cryptsetup luksDump /dev/sdh

# cryptsetup luksDump /dev/sdi

Step 3g: Open the Encrypted Disk

Use the cryptsetup command with the luksOpen option to open the encrypted target disk and provide a new encrypted volume name.

# cryptsetup luksOpen /dev/<DISK_NAME> <ENCRYPTED_VOLUME_NAME> --key-file /etc/enc<mount_name>key

Examples:

# cryptsetup luksOpen /dev/sdb encryptedSvn --key-file /etc/encsvnkey

# cryptsetup luksOpen /dev/sdc encryptedCmdb --key-file /etc/enccmdbkey

# cryptsetup luksOpen /dev/sdd1 encryptedSwap --key-file /etc/encswapkey

# cryptsetup luksOpen /dev/sdd2 encryptedOpt --key-file /etc/encoptkey

For local EventDB or ClickHouse:

# cryptsetup luksOpen /dev/sde encryptedData --key-file /etc/encdatakey

For multiple ClickHouse disks:

# cryptsetup luksOpen /dev/sdf encryptedDatahot1 --key-file /etc/encdatahot1key

# cryptsetup luksOpen /dev/sdg encryptedDatahot2 --key-file /etc/encdatahot2key

# cryptsetup luksOpen /dev/sdh encryptedDatawarm1 --key-file /etc/encdatawarm1key

# cryptsetup luksOpen /dev/sdi encryptedDatawarm2 --key-file /etc/encdatawarm2key

Step 3h: Allow the Encrypted Disk to Open on Boot

Create an entry in /etc/crypttab, which will open the encrypted disk at boot time using the keyslot 1 key file you saved above.

# echo "<ENCRYPTED_VOLUME_NAME> /dev/<DISK_NAME> /etc/enc<mount_name>key luks" >> /etc/crypttab

Examples:

# echo "encryptedSvn /dev/sdb /etc/encsvnkey luks" >> /etc/crypttab

# echo "encryptedCmdb /dev/sdc /etc/enccmdbkey luks" >> /etc/crypttab

# echo "encryptedSwap /dev/sdd1 /etc/encswapkey luks" >> /etc/crypttab

# echo "encryptedOpt /dev/sdd2 /etc/encoptkey luks" >> /etc/crypttab

For local EventDB or ClickHouse:

# echo "encryptedData /dev/sde /etc/encdatakey luks" >> /etc/crypttab

For ClickHouse with multiple disks:

# echo "encryptedDatahot1 /dev/sdf /etc/encdatahot1key luks" >> /etc/crypttab

# echo "encryptedDatahot2 /dev/sdg /etc/encdatahot2key luks" >> /etc/crypttab

# echo "encryptedDatawarm1 /dev/sdh /etc/encdatawarm1key luks" >> /etc/crypttab

# echo "encryptedDatawarm2 /dev/sdi /etc/encdatawarm2key luks" >> /etc/crypttab
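
A typo in /etc/crypttab or a missing key file will leave a disk locked at boot. The sketch below assumes the simple four-field entries written above; it checks that every referenced key file exists and warns when one is not root read-only:

```shell
# Validate a crypttab: each entry's key file must exist, and ideally be
# mode 400. Takes the file path as an argument so a backup copy can be
# checked as well.
check_crypttab() {
  local tab="$1" name dev key opts rc=0
  while read -r name dev key opts; do
    case "$name" in ''|\#*) continue ;; esac
    if [ ! -f "$key" ]; then
      echo "ERROR: $name: key file $key is missing"
      rc=1
    elif [ "$(stat -c '%a' "$key")" != "400" ]; then
      echo "WARN: $name: $key is not mode 400"
    fi
  done < "$tab"
  return $rc
}
# Usage: check_crypttab /etc/crypttab
```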

Step 3i: Create swap and xfs Filesystem on the “Opened” Encrypted Disk

Use the mkfs.xfs command to create an xfs file system on the disk.

# mkfs.xfs /dev/mapper/<ENCRYPTED_VOLUME_NAME>

For swap:

# mkswap /dev/mapper/encryptedSwap

Examples:

# mkfs.xfs /dev/mapper/encryptedOpt

# mkswap /dev/mapper/encryptedSwap

# mkfs.xfs /dev/mapper/encryptedCmdb

# mkfs.xfs /dev/mapper/encryptedSvn


For local EventDB or ClickHouse:

# mkfs.xfs /dev/mapper/encryptedData

For ClickHouse with multiple disks:

# mkfs.xfs /dev/mapper/encryptedDatahot1

# mkfs.xfs /dev/mapper/encryptedDatahot2

# mkfs.xfs /dev/mapper/encryptedDatawarm1

# mkfs.xfs /dev/mapper/encryptedDatawarm2

Step 3j: Replace the Mount Point

Configure /etc/fstab to mount the new encrypted volumes in place of the old volume names. It is important to make a backup of the original file before modifying it.

  1. Run the following command to back up your file.

    # cp -a /etc/fstab /etc/fstab.original

  2. Next, open the file in vi or another text editor.

    # vi /etc/fstab

  3. Find the target disk: e.g. /cmdb, /svn, /opt etc.

  4. Replace each line with an entry of the form:

    /dev/mapper/<ENCRYPTED VOLUME NAME> <MOUNT POINT> xfs defaults,nodev 0 0

    Examples:

    Original:

    /dev/mapper/rl-root     /                       xfs     defaults        0 0
    UUID=0f1f822d-eb21-4b62-a36b-037830399b17 /boot                   xfs     defaults        0 0
    /dev/mapper/rl-swap     none                    swap    defaults        0 0
    #UUID=29abfbbf-0f63-4398-be77-0850d406e37e /opt xfs defaults 0 0
    #UUID=7b33e40f-3e80-44de-a646-b1305ffff021 /cmdb xfs defaults 0 0
    #UUID=0c7d5bc0-791b-4930-b565-da149071d552 /svn xfs defaults 0 0
    #UUID=951b1f24-3ea8-4ec5-bc9c-5b804b2d0a47    swap    swap  defaults 0 0
    #UUID=9163ed63-608e-4aa1-a611-98ab2fc8dd8b	/data-clickhouse-hot-1			xfs	defaults,nodev,noatime,inode64	 0	0
    

    Change to:

    /dev/mapper/encryptedSwap swap swap defaults 0 0
    /dev/mapper/encryptedOpt /opt xfs defaults,nodev 0 0
    /dev/mapper/encryptedCmdb /cmdb xfs defaults,nodev 0 0
    /dev/mapper/encryptedSvn /svn xfs defaults,nodev 0 0

    For local EventDB:

    /dev/mapper/encryptedData /data xfs defaults,nodev,noatime,inode64 0 0


    For ClickHouse:

    /dev/mapper/encryptedData /data-clickhouse-hot-1 xfs defaults,nodev,noatime,inode64 0 0

    For ClickHouse with multiple disks:

    /dev/mapper/encryptedDatahot1 /data-clickhouse-hot-1 xfs defaults,nodev,noatime,inode64 0 0

    /dev/mapper/encryptedDatahot2 /data-clickhouse-hot-2 xfs defaults,nodev,noatime,inode64 0 0

    /dev/mapper/encryptedDatawarm1 /data-clickhouse-warm-1 xfs defaults,nodev,noatime,inode64 0 0

    /dev/mapper/encryptedDatawarm2 /data-clickhouse-warm-2 xfs defaults,nodev,noatime,inode64 0 0


Mounting the Encrypted Disk

Use the mount command to remount each encrypted volume at its mount point.

# mount <MOUNT_POINT>

Examples:

# mount /opt

# mount /cmdb

# mount /svn

# mount /data


For ClickHouse:

# mount /data-clickhouse-hot-1

For ClickHouse with multiple disks:

# mount /data-clickhouse-hot-1

# mount /data-clickhouse-hot-2

# mount /data-clickhouse-warm-1

# mount /data-clickhouse-warm-2

Check that the volumes are mounted correctly by running the df command.

# df <mount_point>

Examples:

Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/encryptedCmdb   60G  1.1G   59G   2% /cmdb
/dev/mapper/encryptedOpt    69G  7.4G   62G  11% /opt
/dev/mapper/encryptedSvn    60G  461M   60G   1% /svn
/dev/mapper/encryptedData  105G  3.1G  102G   3% /data-clickhouse-hot-1

Step 4: Restore Backed Up Data Disks

Restore the disks backed up in Step 2. Run the following as root.

# cd /

# mv /tmp/opt.tgz /

# tar xvzf opt.tgz

# mv /tmp/data.tgz /

# tar xvzf data.tgz

# mv /tmp/svn.tgz /

# tar xvzf svn.tgz

# mv /tmp/cmdb.tgz /

# tar xvzf cmdb.tgz

For ClickHouse:

# cd /

# mv /tmp/data-clickhouse-hot-1.tgz /

# tar xvzf data-clickhouse-hot-1.tgz

Validate the restored file structure, ownership, and permissions by comparing the restored data with the files recorded in Step 1c, located under /tmp.

/tmp:

-rw-r--r-- 1 root root 122 Feb 8 20:26 cmdb.out

-rw-r--r-- 1 root root 6300087 Feb 8 20:26 opt.out

-rw-r--r-- 1 root root 902 Feb 8 20:32 data.out

-rw-r--r-- 1 root root 113 Feb 8 20:27 svn.out
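
The comparison can be automated by regenerating the listing for each restored directory and diffing it against the Step 1c snapshot. A small helper (a sketch; files written after the snapshot was taken will legitimately show up as differences to review):

```shell
# Compare a directory's current ownership/permission listing against the
# snapshot taken in Step 1c. Prints a confirmation when they match.
verify_restore() {
  local dir="$1" snapshot="$2"
  ls -Rla "$dir" | diff "$snapshot" - \
    && echo "$dir matches $snapshot"
}
# Usage: verify_restore /opt /tmp/opt.out
```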

Step 5: Reboot FortiSIEM Node

Lastly, reboot the FortiSIEM appliance to verify persistent mounting of the encrypted disks.

# reboot

# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                    12G     0   12G   0% /dev
tmpfs                       12G  236K   12G   1% /dev/shm
tmpfs                       12G   17M   12G   1% /run
tmpfs                       12G     0   12G   0% /sys/fs/cgroup
/dev/mapper/rl-root         22G  8.3G   14G  39% /
/dev/sda1                 1014M  698M  317M  69% /boot
/dev/mapper/encryptedCmdb   60G  1.1G   59G   2% /cmdb
/dev/mapper/encryptedOpt    69G  7.4G   62G  11% /opt
/dev/mapper/encryptedSvn    60G  461M   60G   1% /svn
/dev/mapper/encryptedData  105G  3.1G  102G   3% /data-clickhouse-hot-1
tmpfs                      2.4G     0  2.4G   0% /run/user/500
tmpfs                      2.4G     0  2.4G   0% /run/user/0

# lsblk

NAME              MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0   25G  0 disk  
├─sda1              8:1    0    1G  0 part  /boot
└─sda2              8:2    0   24G  0 part  
  ├─rl-swap       253:0    0  2.5G  0 lvm   [SWAP]
  └─rl-root       253:1    0 21.5G  0 lvm   /
sdb                 8:16   0   60G  0 disk  
└─encryptedSvn    253:5    0   60G  0 crypt /svn
sdc                 8:32   0   60G  0 disk  
└─encryptedCmdb   253:3    0   60G  0 crypt /cmdb
sdd                 8:48   0  100G  0 disk  
├─sdd1              8:49   0 22.4G  0 part  
│ └─encryptedSwap 253:2    0 22.3G  0 crypt [SWAP]
└─sdd2              8:50   0 68.9G  0 part  
  └─encryptedOpt  253:4    0 68.9G  0 crypt /opt
sde                 8:64   0  105G  0 disk  
└─encryptedData   253:6    0  105G  0 crypt /data-clickhouse-hot-1

# phstatus

Every 1.0s: /opt/phoenix/bin/phstatus.py                                                                       sp671: Mon Feb 27 18:10:52 2023

System uptime:  18:10:53 up 46 min,  1 user,  load average: 2.38, 1.93, 1.63
Tasks: 30 total, 0 running, 30 sleeping, 0 stopped, 0 zombie
Cpu(s): 8 cores, 11.0%us, 0.7%sy, 0.0%ni, 87.8%id, 0.1%wa, 0.2%hi, 0.1%si, 0.0%st
Mem: 24463776k total, 13246504k used, 7401772k free, 5544k buffers
Swap: 26042360k total, 0k used, 26042360k free, 3483372k cached


PROCESS                  UPTIME         CPU%           VIRT_MEM       RES_MEM

phParser                 41:49          0              3283m          1511m
phQueryMaster            42:07          0              997m           92m
phRuleMaster             42:07          0              1526m          851m
phRuleWorker             42:07          0              1537m          370m
phQueryWorker            42:07          0              1505m          381m
phDataManager            42:07          0              1813m          476m
phDiscover               42:07          0              561m           64m
phReportWorker           42:07          0              1561m          99m
phReportMaster           42:07          0              667m           54m
phIpIdentityWorker	  42:07          0              1119m          69m
phIpIdentityMaster	  42:07          0              524m           48m
phAgentManager           42:07          0              2341m          735m
phCheckpoint             42:07          0              321m           42m
phPerfMonitor            42:07          0              841m           81m
phDataPurger             42:07          0              599m           69m
phEventForwarder         42:07          0              550m           39m
phMonitor                45:47          0              1415m          671m
Apache                   45:56          0              319m           18m
Rsyslogd                 45:55          0              192m           4128k
Node.js-charting         45:49          0              642m           80m
Node.js-pm2              45:48          0              638m           54m
phFortiInsightAI         45:56          0              13986m         340m
AppSvr                   45:45          9              11149m         4337m
DBSvr                    45:56          0              425m           37m
phAnomaly                41:37          0              982m           60m
SVNLite                  45:56          0              9851m          417m
phClickHouseMonitor	  45:26          0              1559m          25m
ClickHouseServer         45:54          0              6631m          782m
ClickHouseKeeper         45:54          0              1304m          272m
Redis                    45:49          0              246m           80m

Disk Encryption for FortiSIEM Virtual Machine

Disk Encryption for FortiSIEM Virtual Machine

This document provides instructions for encrypting disks on a FortiSIEM Virtual Machine.

Key Notes:

  1. Do not encrypt the root disk as it presents an operational challenge during boot up to provide a passphrase. The root disk contains binaries and some internal system and application logs, not data.

  2. Disk encryption key management is an operational challenge. If you want strong security, then you must protect encryption keys with a passphrase and that requires a human to type them and mount the “opened” disks. The less secure alternative is to use keys that are not protected by a passphrase and stored in a file on the root partition.

Follow these general steps:

Step 0: Initialize FortiSIEM Node

Step 1: Download the Encryption Package

Step 2: Backup Data Disks

Step 3: Encrypt each Data Disk

Step 4: Restore Backed Up Data Disks

Step 5: Reboot FortiSIEM Node

There are some differences depending on:

  • Encrypting Supervisor, Worker or Collector

  • Fresh FortiSIEM install or an existing install running for some time

  • Event storage is local (e.g. EventDB on local storage or ClickHouse) or external (e.g. EventDB on NFS, Elasticsearch).

Incorporate the following differences in the steps below:

  • If you are encrypting Supervisor,

    • you need to backup, encrypt, restore SVN, CMDB and OPT disks.

    • If event storage is EventDB on local disk, then you need to backup, encrypt, restore /data disk.

    • If event storage is ClickHouse, then you need to backup, encrypt, restore ClickHouse data disks. If ClickHouse contains a warm tier or multiple disks, those individual disks will also follow the same steps in respect to its disk.

  • If you are encrypting Worker,

    • you need to backup, encrypt, restore OPT disk.

    • If event storage is ClickHouse, then you need to backup, encrypt, restore ClickHouse data disks. If ClickHouse contains a warm tier or multiple disks, those individual disks will also follow the same steps in respect to its disk.

  • If you are encrypting Collector, then you need to backup, encrypt, restore OPT disk only.

  • On a fresh install, backup can be taken in /tmp. However, for an existing installation, /tmp may not have enough storage capacity. In that case, use an external location with sufficient storage for backup and restore.

Step 0: Initialize FortiSIEM Node

This involves the following steps:

  1. Run configFSM.sh as usual, and complete the system install.

  2. Upload a license to the system (for Supervisor).

  3. Configure FortiSIEM with appropriate storage.

Step 1: Download the Encryption Package

Step 1a: Download and Install

The cryptsetup package is not included in FortiSIEM. Take the following steps to install this package. This assumes network connectivity to FortiSIEM OS update servers listed in the user guide.

  • To install, run the following command:

    dnf install cryptsetup -y

  • To verify if the package is installed, run the following command:

    dnf search cryptsetup

Step 1b: Shutdown All Services

This step is needed so that disks have no read/write activity and backups can be taken.

# systemctl stop crond

# systemctl stop phxctl.service

# systemctl stop phFortiInsightAI.service

# systemctl stop svnlite.service

# systemctl stop rsyslog

# phxctl stop

# killall -9 node

# killall -9 redis-server

For ClickHouse deployments:

# systemctl stop phClickHouseMonitor.service

# systemctl stop ClickHouseKeeper.service

# systemctl stop clickhouse-server

Step 1c: Keep Record of Key Directory Permissions and Ownerships

For the to-be-encrypted directories (e.g. /svn, /cmdb, /opt on Supervisor), it is important to keep records of ownership and permissions. If there are any changes, then FortiSIEM may not work correctly, and the following reference will help to take corrective actions.

# ls -Rla <path> > /tmp/<path>.out

For root path:

# ls -la / | grep "opt\|svn\|cmdb" > /tmp/rootpath.out


Example:

# ls -Rla /opt > /tmp/opt.out

# ls -Rla /svn > /tmp/svn.out

# ls -Rla /cmdb > /tmp/cmdb.out

Example output:

[root@sup700 tmp]# ls -lh *.out

-rw-r--r-- 1 root root 163K Feb 27 11:54 cmdb.out

-rw-r--r-- 1 root root 604K Feb 27 11:54 data-clickhouse-hot-1.out

-rw-r--r-- 1 root root 8.5M Feb 27 11:54 opt.out

-rw-r--r-- 1 root root 268 Feb 27 11:54 svn.out

Step 2: Backup Data Disks

Back up the appropriate directories, since the encryption steps require these disks to be wiped before being encrypted.

  • If you are encrypting Supervisor,

    • you need to backup SVN, CMDB and OPT disks.

    • If event storage is EventDB on local disk, then you need to backup /data disk.

    • If event storage is ClickHouse, then you need to backup ClickHouse data disks.

  • If you are encrypting Worker,

    • you need to backup OPT disk.

    • If event storage is ClickHouse, then you need to backup ClickHouse data disks. If ClickHouse contains a warm tier or multiple disks, those individual disks will also follow the same steps in respect to its disk.

  • If you are encrypting Collector, then you need to backup OPT disk only.

  • On a fresh install, backup can be taken in /tmp. However, for an existing installation, /tmp may not have enough storage capacity. In that case, use an external location with sufficient storage for backup and restore.

# tar cvzf /tmp/svn.tgz /svn

# tar cvzf /tmp/cmdb.tgz /cmdb

# tar cvzf /tmp/opt.tgz /opt

For local EventDB:

# tar cvzf /tmp/data.tgz /data


For ClickHouse:

# tar cvzf /tmp/data-clickhouse-hot-1.tgz /data-clickhouse-hot-1

If ClickHouse has multiple disks:

# tar cvzf /tmp/data-clickhouse-hot-2.tgz /data-clickhouse-hot-2

# tar cvzf /tmp/data-clickhouse-warm-1.tgz /data-clickhouse-warm-1

# tar cvzf /tmp/data-clickhouse-warm-1.tgz /data-clickhouse-warm-2


Example output:

[root@sup700 tmp]# ls -lh *.tgz

-rw-r--r-- 1 root root 87M Feb 27 12:20 cmdb.tgz

-rw-r--r-- 1 root root 740K Feb 27 12:20 data-clickhouse-hot-1.tgz

-rw-r--r-- 1 root root 3.1G Feb 27 12:20 opt.tgz

-rw-r--r-- 1 root root 148 Feb 27 12:20 svn.tgz

Step 3: Encrypt each Data Disk

Repeat the instructions below (3a – 3h) to encrypt the appropriate disks:

  • If you are encrypting Supervisor,

    • you need to encrypt SVN, CMDB and OPT disks.

    • If event storage is EventDB on local disk, then you need to encrypt /data disk.

    • If event storage is ClickHouse, then you need to encrypt ClickHouse data disks.

  • If you are encrypting Worker,

    • you need to encrypt OPT disk.

    • If event storage is ClickHouse, then you need to encrypt ClickHouse data disks.

  • If you are encrypting Collector, then you need to encrypt OPT disk only.

Step 3a: Get Linux Partition Information for Swap

# lsblk

# fdisk -l /dev/sdd

Step 3b: Unmount the Filesystem and Turn off Swap

Use umount to unmount each disk. For /opt, the swap partition needs to be turned off.

# umount <disk name>

# swapoff -a

Examples:

# umount /svn

# umount /cmdb

# umount /opt

# umount /data

For ClickHouse:

# umount /data-clickhouse-hot-1

If ClickHouse has multiple disks:

umount /data-clickhouse-hot-2

umount /data-clickhouse-warm-1

umount /data-clickhouse-warm-2

Step 3c: Wipe the Disks of Previous Filesystem and Partition Information

Use wipefs to clear the existing filesystem and partition information from each disk.

# wipefs --all <disk partition name>

# wipefs --all <disk name>

Examples:

For /svn:

# wipefs --all /dev/sdb1

# wipefs --all /dev/sdb

For /cmdb:

# wipefs --all /dev/sdc1

# wipefs --all /dev/sdc

For /opt:

# wipefs --all /dev/sdd1

# wipefs --all /dev/sdd2

# wipefs --all /dev/sdd

For local EventDB (/data):

# wipefs –all /dev/sde

For ClickHouse:

# wipefs --all /dev/sde

For ClickHouse with additional disks (multiple hot tier or warm tier disks):

# wipefs --all /dev/sdf

# wipefs --all /dev/sdg

# wipefs --all /dev/sdh

# wipefs --all /dev/sdi

Step 3d: Create Swap and Opt Partitions

This step is only needed for /opt.

# parted -a optimal --script /dev/sdd mklabel gpt mkpart primary linux-swap 1G 25G && sleep 5

# parted -a optimal --script /dev/sdd mkpart primary xfs 26G 100G && sleep 5

Step 3e: Format LUKS Disk and Add one Key

Run the following command to format the LUKS Disk and add the default encryption/decryption key with passphrase to default slot 0.

# cryptsetup luksFormat <disk name>

Examples:

# cryptsetup luksFormat /dev/sdb

# cryptsetup luksFormat /dev/sdc

# cryptsetup luksFormat /dev/sdd1

# cryptsetup luksFormat /dev/sdd2

For local disk EventDB or ClickHouse:

# cryptsetup luksFormat /dev/sde

For ClickHouse with additional disks (multiple hot tier or warm tier disks):

# cryptsetup luksFormat /dev/sdg

# cryptsetup luksFormat /dev/sdf

# cryptsetup luksFormat /dev/sdh

# cryptsetup luksFormat /dev/sdi

WARNING!
========

This will overwrite data on /dev/sdb irrevocably.
Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for /dev/sdb:
Verify passphrase:

LUKS2 provides up to 32 keyslots, so additional keys can be added to give multiple administrators the ability to unlock the disks and to support periodic key rotation.

The following command can be used to dump information about different slots.

# cryptsetup luksDump /dev/<disk name>

Examples:

# cryptsetup luksDump /dev/sdb

# cryptsetup luksDump /dev/sdc

# cryptsetup luksDump /dev/sdd1

# cryptsetup luksDump /dev/sdd2

For local EventDB or ClickHouse:

# cryptsetup luksDump /dev/sde

For ClickHouse with multiple disks:

# cryptsetup luksDump /dev/sdf

# cryptsetup luksDump /dev/sdg

# cryptsetup luksDump /dev/sdh

# cryptsetup luksDump /dev/sdi

Step 3f: Add a New Key to LUKS Disk

Create a new random key for keyslot 1 by running the following command.

# dd if=/dev/random of=/etc/enc<mount_name>key bs=512 count=1

Examples:

For opt:

# dd if=/dev/random of=/etc/encoptkey bs=512 count=1

For swap:

# dd if=/dev/random of=/etc/encswapkey bs=512 count=1

For cmdb:

# dd if=/dev/random of=/etc/enccmdbkey bs=512 count=1

For svn:

# dd if=/dev/random of=/etc/encsvnkey bs=512 count=1

For local EventDB or ClickHouse:

# dd if=/dev/random of=/etc/encdatakey bs=512 count=1

For multiple ClickHouse disks if available:

# dd if=/dev/random of=/etc/encdatahot1key bs=512 count=1

# dd if=/dev/random of=/etc/encdatahot2key bs=512 count=1

# dd if=/dev/random of=/etc/encdatawarm1key bs=512 count=1

# dd if=/dev/random of=/etc/encdatawarm2key bs=512 count=1
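
The key files on the root partition are the weak point of the passphrase-less approach, so it is worth restricting them to root only. This hardening step is not mandated by the procedure above; a sketch demonstrated on a scratch file (apply the same chmod, and chown root:root, to each real /etc/enc*key file):

```shell
# Lock down a key file so only the owner can read it. /tmp/demo.key
# stands in for a real key file such as /etc/encsvnkey.
keyfile=/tmp/demo.key
dd if=/dev/urandom of="$keyfile" bs=512 count=1 2>/dev/null
chmod 0400 "$keyfile"          # owner read-only, no group/other access
# On the real system, also run: chown root:root /etc/encsvnkey
stat -c '%a %n' "$keyfile"
```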

Use the cryptsetup command with the luksAddKey option to add the new key file to the target disk. After this command runs, two keyslots are enabled: keyslot 0 holds the passphrase-protected key from Step 3e, and keyslot 1 holds the key file created above.

# cryptsetup luksAddKey /dev/<DISK_NAME> /etc/enc<mount_name>key

Examples:

# cryptsetup luksAddKey /dev/sdb /etc/encsvnkey

# cryptsetup luksAddKey /dev/sdc /etc/enccmdbkey

# cryptsetup luksAddKey /dev/sdd1 /etc/encswapkey

# cryptsetup luksAddKey /dev/sdd2 /etc/encoptkey

For local disk EventDB or ClickHouse:

# cryptsetup luksAddKey /dev/sde /etc/encdatakey

For ClickHouse that has multiple disks:

# cryptsetup luksAddKey /dev/sdf /etc/encdatahot1key

# cryptsetup luksAddKey /dev/sdg /etc/encdatahot2key

# cryptsetup luksAddKey /dev/sdh /etc/encdatawarm1key

# cryptsetup luksAddKey /dev/sdi /etc/encdatawarm2key

The following command can be used to dump the target volume information and keys.

# cryptsetup luksDump /dev/<disk_name>

Examples:

# cryptsetup luksDump /dev/sdb

# cryptsetup luksDump /dev/sdc

# cryptsetup luksDump /dev/sdd1

# cryptsetup luksDump /dev/sdd2

For local disk EventDB or ClickHouse:

# cryptsetup luksDump /dev/sde

For ClickHouse that has multiple disks:

# cryptsetup luksDump /dev/sdf

# cryptsetup luksDump /dev/sdg

# cryptsetup luksDump /dev/sdh

# cryptsetup luksDump /dev/sdi

Step 3g: Open the Encrypted Disk

Use the cryptsetup command with the luksOpen option to open the encrypted target disk and provide a new encrypted volume name.

# cryptsetup luksOpen /dev/<DISK_NAME> <ENCRYPTED_VOLUME_NAME> --key-file /etc/enc<mount_name>key

Examples:

# cryptsetup luksOpen /dev/sdb encryptedSvn --key-file /etc/encsvnkey

# cryptsetup luksOpen /dev/sdc encryptedCmdb --key-file /etc/enccmdbkey

# cryptsetup luksOpen /dev/sdd1 encryptedSwap --key-file /etc/encswapkey

# cryptsetup luksOpen /dev/sdd2 encryptedOpt --key-file /etc/encoptkey

For local EventDB or ClickHouse:

# cryptsetup luksOpen /dev/sde encryptedData --key-file /etc/encdatakey

For multiple ClickHouse disks:

# cryptsetup luksOpen /dev/sdf encryptedDatahot1 --key-file /etc/encdatahot1key

# cryptsetup luksOpen /dev/sdg encryptedDatahot2 --key-file /etc/encdatahot2key

# cryptsetup luksOpen /dev/sdh encryptedDatawarm1 --key-file /etc/encdatawarm1key

# cryptsetup luksOpen /dev/sdi encryptedDatawarm2 --key-file /etc/encdatawarm2key

Step 3h: Allow the Encrypted Disk to Open on Boot

Create an entry in /etc/crypttab, which will open the encrypted disk at boot time using the keyslot 1 key file you saved above.

# echo "<ENCRYPTED_VOLUME_NAME> /dev/<DISK_NAME> /etc/enc<mount_name>key luks" >> /etc/crypttab

Examples:

# echo "encryptedSvn /dev/sdb /etc/encsvnkey luks" >> /etc/crypttab

# echo "encryptedCmdb /dev/sdc /etc/enccmdbkey luks" >> /etc/crypttab

# echo "encryptedSwap /dev/sdd1 /etc/encswapkey luks" >> /etc/crypttab

# echo "encryptedOpt /dev/sdd2 /etc/encoptkey luks" >> /etc/crypttab

For local EventDB or ClickHouse:

# echo "encryptedData /dev/sde /etc/encdatakey luks" >> /etc/crypttab

For ClickHouse with multiple disks:

# echo "encryptedDatahot1 /dev/sdf /etc/encdatahot1key luks" >> /etc/crypttab

# echo "encryptedDatahot2 /dev/sdg /etc/encdatahot2key luks" >> /etc/crypttab

# echo "encryptedDatawarm1 /dev/sdh /etc/encdatawarm1key luks" >> /etc/crypttab

# echo "encryptedDatawarm2 /dev/sdi /etc/encdatawarm2key luks" >> /etc/crypttab
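
With many disks it is easy to mistype one of these entries, so the echo lines can equally be generated from a single table. A sketch (it writes to a temp file here; on the real system redirect to /etc/crypttab and adjust the device names to match your deployment):

```shell
# Generate crypttab entries from a name / device / key-file table.
out=/tmp/crypttab.demo
: > "$out"
while read -r name dev key; do
    printf '%s %s %s luks\n' "$name" "$dev" "$key" >> "$out"
done <<'EOF'
encryptedSvn /dev/sdb /etc/encsvnkey
encryptedCmdb /dev/sdc /etc/enccmdbkey
encryptedSwap /dev/sdd1 /etc/encswapkey
encryptedOpt /dev/sdd2 /etc/encoptkey
EOF
cat "$out"
```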

Step 3i: Create swap and xfs Filesystem on the “Opened” Encrypted Disk

Use the mkfs.xfs command to create an xfs file system on each opened data volume, and mkswap for the swap volume.

# mkfs.xfs /dev/mapper/<ENCRYPTED_VOLUME_NAME>

For swap:

# mkswap /dev/mapper/encryptedSwap

Examples:

# mkfs.xfs /dev/mapper/encryptedOpt

# mkswap /dev/mapper/encryptedSwap

# mkfs.xfs /dev/mapper/encryptedCmdb

# mkfs.xfs /dev/mapper/encryptedSvn


For local EventDB or ClickHouse:

# mkfs.xfs /dev/mapper/encryptedData

For ClickHouse with multiple disks:

# mkfs.xfs /dev/mapper/encryptedDatahot1

# mkfs.xfs /dev/mapper/encryptedDatahot2

# mkfs.xfs /dev/mapper/encryptedDatawarm1

# mkfs.xfs /dev/mapper/encryptedDatawarm2

Step 3j: Replace the Mount Point

Configure /etc/fstab to mount the new encrypted volumes in place of the old volume entries. Make a backup of the original file before modifying it.

  1. Run the following command to back up your file.

    # cp -a /etc/fstab /etc/fstab.original

  2. Next, run vi, or some other text editor.

    # vi /etc/fstab

  3. Find the target disk: e.g. /cmdb, /svn, /opt etc.

  4. Replace the line for each target disk with an entry in the following format:

    /dev/mapper/<ENCRYPTED_VOLUME_NAME> <MOUNT_POINT> xfs defaults,nodev 0 0

    Examples:

    Original:

    /dev/mapper/rl-root     /                       xfs     defaults        0 0
    UUID=0f1f822d-eb21-4b62-a36b-037830399b17 /boot                   xfs     defaults        0 0
    /dev/mapper/rl-swap     none                    swap    defaults        0 0
    #UUID=29abfbbf-0f63-4398-be77-0850d406e37e /opt xfs defaults 0 0
    #UUID=7b33e40f-3e80-44de-a646-b1305ffff021 /cmdb xfs defaults 0 0
    #UUID=0c7d5bc0-791b-4930-b565-da149071d552 /svn xfs defaults 0 0
    #UUID=951b1f24-3ea8-4ec5-bc9c-5b804b2d0a47    swap    swap  defaults 0 0
    #UUID=9163ed63-608e-4aa1-a611-98ab2fc8dd8b	/data-clickhouse-hot-1			xfs	defaults,nodev,noatime,inode64	 0	0
    

    Change to:

    /dev/mapper/encryptedSwap swap swap defaults 0 0
    /dev/mapper/encryptedOpt /opt xfs defaults,nodev 0 0
    /dev/mapper/encryptedCmdb /cmdb xfs defaults,nodev 0 0
    /dev/mapper/encryptedSvn /svn xfs defaults,nodev 0 0

    For local EventDB:

    /dev/mapper/encryptedData /data xfs defaults,nodev,noatime,inode64 0 0


    For ClickHouse:

    /dev/mapper/encryptedData /data-clickhouse-hot-1 xfs defaults,nodev,noatime,inode64 0 0

    For ClickHouse with multiple disks:

    /dev/mapper/encryptedDatahot1 /data-clickhouse-hot-1 xfs defaults,nodev,noatime,inode64 0 0

    /dev/mapper/encryptedDatahot2 /data-clickhouse-hot-2 xfs defaults,nodev,noatime,inode64 0 0

    /dev/mapper/encryptedDatawarm1 /data-clickhouse-warm-1 xfs defaults,nodev,noatime,inode64 0 0

    /dev/mapper/encryptedDatawarm2 /data-clickhouse-warm-2 xfs defaults,nodev,noatime,inode64 0 0


Mounting the Encrypted Disk

Use the mount command to remount the encrypted volume.

# mount <MOUNT_POINT>

Examples:

# mount /opt

# mount /cmdb

# mount /svn

# mount /data


For ClickHouse:

# mount /data-clickhouse-hot-1

For ClickHouse with multiple disks:

# mount /data-clickhouse-hot-1

# mount /data-clickhouse-hot-2

# mount /data-clickhouse-warm-1

# mount /data-clickhouse-warm-2

Check that each volume is mounted correctly by running the df command.

# df -h <mount_point>

Examples:

Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/encryptedCmdb   60G  1.1G   59G   2% /cmdb
/dev/mapper/encryptedOpt    69G  7.4G   62G  11% /opt
/dev/mapper/encryptedSvn    60G  461M   60G   1% /svn
/dev/mapper/encryptedData  105G  3.1G  102G   3% /data-clickhouse-hot-1

Step 4: Restore Backed Up Data Disks

Restore the disks backed up in Step 2. Run the following as root.

# cd /

# mv /tmp/opt.tgz /

# tar xvzf opt.tgz

# mv /tmp/data.tgz /

# tar xvzf data.tgz

# mv /tmp/svn.tgz /

# tar xvzf svn.tgz

# mv /tmp/cmdb.tgz /

# tar xvzf cmdb.tgz

For ClickHouse:

# cd /

# mv /tmp/data-clickhouse-hot-1.tgz /

# tar xvzf data-clickhouse-hot-1.tgz

Validate the file structure, ownership, and permissions of the restored data by comparing it with the listings recorded in Step 1c, located under /tmp.

/tmp:

-rw-r--r-- 1 root root 122 Feb 8 20:26 cmdb.out

-rw-r--r-- 1 root root 6300087 Feb 8 20:26 opt.out

-rw-r--r-- 1 root root 902 Feb 8 20:32 data.out

-rw-r--r-- 1 root root 113 Feb 8 20:27 svn.out
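
The comparison itself can be scripted: take a fresh recursive listing after the restore and diff it against the listing recorded in Step 1c. A sketch using a scratch directory (on the real system the "before" file is, e.g., /tmp/opt.out and the tree is /opt):

```shell
# Verify a restore by diffing before/after recursive listings.
tree=/tmp/restore-demo
mkdir -p "$tree"
echo sample > "$tree/file1"
ls -lR "$tree" > /tmp/before.out    # what Step 1c would have recorded
# ... backup, encryption, and restore happen in between ...
ls -lR "$tree" > /tmp/after.out     # listing of the restored tree
diff /tmp/before.out /tmp/after.out && echo "restore verified"
```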

Step 5: Reboot FortiSIEM Node

Lastly, reboot the FortiSIEM appliance to verify persistent mounting of the encrypted disks.

# reboot

# df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                    12G     0   12G   0% /dev
tmpfs                       12G  236K   12G   1% /dev/shm
tmpfs                       12G   17M   12G   1% /run
tmpfs                       12G     0   12G   0% /sys/fs/cgroup
/dev/mapper/rl-root         22G  8.3G   14G  39% /
/dev/sda1                 1014M  698M  317M  69% /boot
/dev/mapper/encryptedCmdb   60G  1.1G   59G   2% /cmdb
/dev/mapper/encryptedOpt    69G  7.4G   62G  11% /opt
/dev/mapper/encryptedSvn    60G  461M   60G   1% /svn
/dev/mapper/encryptedData  105G  3.1G  102G   3% /data-clickhouse-hot-1
tmpfs                      2.4G     0  2.4G   0% /run/user/500
tmpfs                      2.4G     0  2.4G   0% /run/user/0

# lsblk

NAME              MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0   25G  0 disk  
├─sda1              8:1    0    1G  0 part  /boot
└─sda2              8:2    0   24G  0 part  
  ├─rl-swap       253:0    0  2.5G  0 lvm   [SWAP]
  └─rl-root       253:1    0 21.5G  0 lvm   /
sdb                 8:16   0   60G  0 disk  
└─encryptedSvn    253:5    0   60G  0 crypt /svn
sdc                 8:32   0   60G  0 disk  
└─encryptedCmdb   253:3    0   60G  0 crypt /cmdb
sdd                 8:48   0  100G  0 disk  
├─sdd1              8:49   0 22.4G  0 part  
│ └─encryptedSwap 253:2    0 22.3G  0 crypt [SWAP]
└─sdd2              8:50   0 68.9G  0 part  
  └─encryptedOpt  253:4    0 68.9G  0 crypt /opt
sde                 8:64   0  105G  0 disk  
└─encryptedData   253:6    0  105G  0 crypt /data-clickhouse-hot-1

# phstatus

Every 1.0s: /opt/phoenix/bin/phstatus.py                                                                       sp671: Mon Feb 27 18:10:52 2023

System uptime:  18:10:53 up 46 min,  1 user,  load average: 2.38, 1.93, 1.63
Tasks: 30 total, 0 running, 30 sleeping, 0 stopped, 0 zombie
Cpu(s): 8 cores, 11.0%us, 0.7%sy, 0.0%ni, 87.8%id, 0.1%wa, 0.2%hi, 0.1%si, 0.0%st
Mem: 24463776k total, 13246504k used, 7401772k free, 5544k buffers
Swap: 26042360k total, 0k used, 26042360k free, 3483372k cached


PROCESS                  UPTIME         CPU%           VIRT_MEM       RES_MEM

phParser                 41:49          0              3283m          1511m
phQueryMaster            42:07          0              997m           92m
phRuleMaster             42:07          0              1526m          851m
phRuleWorker             42:07          0              1537m          370m
phQueryWorker            42:07          0              1505m          381m
phDataManager            42:07          0              1813m          476m
phDiscover               42:07          0              561m           64m
phReportWorker           42:07          0              1561m          99m
phReportMaster           42:07          0              667m           54m
phIpIdentityWorker	  42:07          0              1119m          69m
phIpIdentityMaster	  42:07          0              524m           48m
phAgentManager           42:07          0              2341m          735m
phCheckpoint             42:07          0              321m           42m
phPerfMonitor            42:07          0              841m           81m
phDataPurger             42:07          0              599m           69m
phEventForwarder         42:07          0              550m           39m
phMonitor                45:47          0              1415m          671m
Apache                   45:56          0              319m           18m
Rsyslogd                 45:55          0              192m           4128k
Node.js-charting         45:49          0              642m           80m
Node.js-pm2              45:48          0              638m           54m
phFortiInsightAI         45:56          0              13986m         340m
AppSvr                   45:45          9              11149m         4337m
DBSvr                    45:56          0              425m           37m
phAnomaly                41:37          0              982m           60m
SVNLite                  45:56          0              9851m          417m
phClickHouseMonitor	  45:26          0              1559m          25m
ClickHouseServer         45:54          0              6631m          782m
ClickHouseKeeper         45:54          0              1304m          272m
Redis                    45:49          0              246m           80m