
FortiSOAR Performance Benchmarking for v7.3.0

This document details the performance benchmark tests conducted in Fortinet labs. The performance benchmarking tests were performed on FortiSOAR version 7.3.0 Build 2034.

The objective of this performance test is to measure the time taken to create alerts in FortiSOAR, and complete the execution of corresponding playbooks on the created alerts in the following cases:

  • Single-node FortiSOAR appliance

  • Cluster setup of FortiSOAR

The data from this benchmark test can help you in determining your scaling requirements for a FortiSOAR instance to handle the expected workload in your environment.

Single Invocation Test for the single-node FortiSOAR appliance

Environment

FortiSOAR Virtual Appliance Specifications

Component Specifications
CPU 16 CPUs
Memory 30 GB
Storage 300 GB virtual disk, gp3 volume type with 3000 IOPS, attached to an AWS instance

Instance Type

c4.4xlarge

Operating System Specifications

Operating System Kernel Version
Rocky Linux 8.6 4.18.0-372.32.1.el8_6.x86_64

External Tools Used

Tool Name Version
FortiMonitor Cloud
Internal Script to gather data

Pre-test conditions on both the standalone FortiSOAR machine and the FortiSOAR High Availability (HA) cluster

At the start of each test run:

  • The test environment contained zero alerts.

  • The test environment contained only the FortiSOAR built-in connectors such as IMAP, Utilities, etc.

  • System playbooks, such as alert assignment notification and SLA calculation, were deactivated, and there were no running playbooks.

  • The playbook execution logs were purged.

  • Configured the following tunables:

    • Set the Celery workers to 16

    • Set the Elasticsearch heap size to 8 GB

    • Changes related to PostgreSQL:

      • max_connections = 1200
      • shared_buffers = 5GB
      • effective_cache_size = 15GB
      • maintenance_work_mem = 2GB
      • checkpoint_completion_target = 0.9
      • wal_buffers = 16MB
      • default_statistics_target = 500
      • random_page_cost = 4
      • effective_io_concurrency = 2
      • work_mem = 546kB
      • min_wal_size = 4GB
      • max_wal_size = 16GB
      • max_worker_processes = 8
      • max_parallel_workers_per_gather = 4
      • max_parallel_workers = 8
      • max_parallel_maintenance_workers = 4
    • Changes related to NGINX:
      • worker_processes auto;
        # 'auto' matches the number of CPU cores; verify the count with 'grep processor /proc/cpuinfo | wc -l'
      • worker_connections 1024;
        # default is 768; 'ulimit -n' shows the per-process open-file limit, an upper bound for this value
      • access_log off;
        # disabling access logging reduces disk I/O; NGINX stops writing each request to 'access.log'
      • keepalive_timeout 15;
        # default is 65;
        # server will close the connection after this time (in seconds)

      • Configured an NGINX proxy cache for API responses:

        sudo mkdir -p /data/nginx/cache

        proxy_cache_path /data/nginx/cache keys_zone=my_zone:10m inactive=1d;

        server {
            ...
            location /api-endpoint/ {
                proxy_cache my_zone;
                proxy_cache_key "$host$request_uri$http_authorization";
                proxy_cache_valid 404 302 1m;
                proxy_cache_valid 200 1d;
                add_header X-Cache-Status $upstream_cache_status;
            }
            ...
        }
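For convenience, the PostgreSQL tunables listed above can be gathered into a single postgresql.conf fragment for review or reuse. The following is a minimal sketch; the list simply restates the values given above, and how you apply the fragment depends on your installation:

```python
# Render the benchmark's PostgreSQL tunables (as listed above) into a
# postgresql.conf-style fragment. Purely a convenience sketch.
PG_TUNABLES = [
    ("max_connections", "1200"),
    ("shared_buffers", "5GB"),
    ("effective_cache_size", "15GB"),
    ("maintenance_work_mem", "2GB"),
    ("checkpoint_completion_target", "0.9"),
    ("wal_buffers", "16MB"),
    ("default_statistics_target", "500"),
    ("random_page_cost", "4"),
    ("effective_io_concurrency", "2"),
    ("work_mem", "546kB"),
    ("min_wal_size", "4GB"),
    ("max_wal_size", "16GB"),
    ("max_worker_processes", "8"),
    ("max_parallel_workers_per_gather", "4"),
    ("max_parallel_workers", "8"),
    ("max_parallel_maintenance_workers", "4"),
]

fragment = "\n".join(f"{name} = {value}" for name, value in PG_TUNABLES)
print(fragment)
```

The fragment can then be appended to postgresql.conf (or included from it), followed by a PostgreSQL restart.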
Note

In a production environment the network bandwidth, especially for outbound connections, to applications such as VirusTotal might vary, which could affect the observations.

Test setup for the single-node FortiSOAR appliance

This test has been invoked on a standalone FortiSOAR machine with configurations mentioned in the Environment section.

Tests performed

Test 1: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook

This test is executed by manually triggering FortiSIEM Ingestion playbook that creates alerts in FortiSOAR.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook. You can download the JSON for the sample playbooks (PerfBenchmarking_Test01_PB_Collection_7_3.zip) to run the same tests in your environment and see the performance on your version and hardware platform; you can also tweak the playbooks to make additions specific to your environment.
  2. Once the alerts were created, measured the total time taken to create all the alerts in FortiSOAR.
Observations

The data in the following table outlines the number of alerts ingested and the total time taken to ingest those alerts.

Single Invocation Test run on a single-node FortiSOAR appliance
Number of alerts created    Total time (in seconds) to create all alerts    Total number of playbooks executed
  1      0.28      1
  5      0.45      1
 10      0.69      1
 25      1.44      1
 50      2.66      1
100      5.74      1

Note: Once this test is completed, refer to the pre-test conditions before starting a new test.
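From the figures in the table above, the effective ingestion rate can be derived; a minimal sketch using the observed values:

```python
# Derive per-run throughput from the Test 1 observations above:
# {number of alerts: total creation time in seconds}.
observations = {1: 0.28, 5: 0.45, 10: 0.69, 25: 1.44, 50: 2.66, 100: 5.74}

for alerts, seconds in observations.items():
    rate = alerts / seconds  # alerts created per second
    print(f"{alerts:>4} alerts: {seconds:>5.2f}s total, {rate:5.1f} alerts/s")
```

Note that throughput improves with batch size: creating 100 alerts in one invocation is far faster per alert than creating one alert at a time.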

Test 2: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook after the alerts are created and after triggering "Extraction" playbooks

This test is executed by manually triggering FortiSIEM Ingestion playbook that creates alerts in FortiSOAR. Once the alerts are created in FortiSOAR, an "Extraction" playbook is triggered and the total time taken for all the extraction playbooks to complete their execution is calculated.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook.
  2. Once the alerts are created, the "Extraction" playbooks are triggered. You can download the JSON for the sample playbooks (PerfBenchmarking_Test02_PB_Collection_7_3.zip) to run the same tests in your environment and see the performance on your version and hardware platform; you can also tweak the playbooks to make additions specific to your environment.
    The playbooks perform the following steps:
    1. Declares variables using the "Set Variable Step".
    2. Updates the existing indicator list using mapping.
    3. Retrieves indicators from the source data of the alert.
    4. Creates indicators in the "Indicators" Module.
    5. Links alerts to the indicators.
    6. Updates the state of the alerts.
Observations

The data in the following table outlines the number of alerts ingested, the total time taken to ingest those alerts, and the total time taken for all the triggered playbooks to complete their execution.

Single Invocation Test run on a single-node FortiSOAR appliance
Number of alerts created    Total time (in seconds) to create all alerts    Total number of playbooks executed
  1      1.29      2
  5      2.20      6
 10      2.85     11
 25      6.80     26
 50     10.20     51
100     16.39    101

Note: Once this test is completed, refer to the pre-test conditions before starting a new test.

Test 3: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook after the alerts are created and after triggering "Extraction" and "Enrichment" playbooks

The test was executed using an automated testbed that starts FortiSIEM ingestion, which in turn creates alerts in FortiSOAR. Once the alerts are created in FortiSOAR, "Extraction" and "Enrichment" playbooks are triggered and the total time taken for all the extraction and enrichment playbooks to complete their execution is calculated.

Note

The setup for this test is the same as for the previous tests; however, this test additionally requires the "VirusTotal" connector to be configured.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook.
  2. Once the alerts are created, the "Extraction" playbooks are triggered. You can download the JSON for the sample playbooks (PerfBenchmarking_Test03_PB_Collection_7_3.zip) to run the same tests in your environment and see the performance on your version and hardware platform; you can also tweak the playbooks to make additions specific to your environment.
    The playbooks perform the following steps:
    1. Declares variables using the "Set Variable Step".
    2. Updates the existing indicator list using mapping.
    3. Retrieves indicators from the source data of the alert.
    4. Creates indicators in the "Indicators" Module.
    5. Links alerts to the indicators.
    6. Updates the state of the alerts.
  3. Once the indicators are extracted, the "Enrichment" playbooks are triggered and they perform the following steps:
    1. Matches the IP against an internal subnet using the "Utilities" connector.
    2. Validates whether the IP is Private or Public.
    3. Performs enrichment using the "Utilities" connector, if the IP is "Private".
    4. Performs enrichment using the "VirusTotal" connector, if the IP is "Public".
    5. Updates the indicator status based on the IP’s vulnerability.
    6. Updates the state of the indicator.
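The Private/Public check in step 2, which drives the connector choice in steps 3 and 4, can be sketched with Python's standard ipaddress module. Here, classify_ip is an illustrative helper, not part of the shipped playbooks:

```python
import ipaddress

def classify_ip(ip: str) -> str:
    """Return 'Private' or 'Public', mirroring the enrichment branch:
    private IPs are enriched via the Utilities connector, while public
    IPs are sent to the VirusTotal connector."""
    return "Private" if ipaddress.ip_address(ip).is_private else "Public"

print(classify_ip("10.1.2.3"))   # RFC 1918 range -> Private
print(classify_ip("8.8.8.8"))    # globally routable -> Public
```

Because the public branch calls out to VirusTotal over the Internet, the share of public IPs in the ingested alerts directly affects the enrichment times reported below.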
Observations

The data in the following table outlines the number of alerts ingested, the total time taken to ingest those alerts, and the total time taken for all the triggered playbooks to complete their execution.

Single Invocation Test run on a single-node FortiSOAR appliance
Number of alerts created    Total time (in seconds) to create all alerts    Total number of playbooks executed
  1      4.23      8
  5      6.16     36
 10     11.41     71
 25     25.25    176
 50     46.27    351
100     86 (1 min 26 s)    700

Note: Once this test is completed, refer to the pre-test conditions before starting a new test. Also, note that the enrichment playbooks make API calls over the Internet, and the playbook execution times in this table include this network time.

Sustained Invocation Test for the single-node FortiSOAR appliance

A sustenance test was also conducted with the configuration as defined in "Test 2", i.e., the test is executed by manually triggering the FortiSIEM Ingestion playbook, “FortiSIEM -> Ingest 100 Alerts”, which creates alerts in FortiSOAR. Once the alerts are created in FortiSOAR, an "Extraction" playbook is triggered and the total time taken for all the extraction playbooks to complete their execution is calculated.

Number of alerts: 100/min

Duration: 12 hours

Playbooks configured: As defined in Test 2, comprising the "Ingestion" and "Indicator Extraction" playbooks.

Total number of playbooks executed: 72720

Results

The system performed well under the sustained load. All 72000 alerts were successfully ingested and all the extraction playbooks were successfully completed without any queuing.
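The totals above are consistent with the test parameters, assuming one ingestion playbook run per minute and one extraction playbook per alert; a quick arithmetic check:

```python
alerts_per_minute = 100
duration_minutes = 12 * 60               # 12-hour sustained run

ingestion_runs = duration_minutes        # one "Ingest 100 Alerts" run per minute
total_alerts = alerts_per_minute * duration_minutes
extraction_runs = total_alerts           # one extraction playbook per alert

total_playbooks = ingestion_runs + extraction_runs
print(total_alerts, total_playbooks)     # 72000 72720
```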

Graphs

The following graphs plot the vital statistics of the system under test during the test run.

CPU Load Average Utilization Graph

Analysis of CPU load average utilization when the test run was in progress on the appliance:

CPU Load Average Utilization Graph for single FSR Node

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the CPU utilization was normal and the performance of the system did not get impacted.

Memory Utilization Graph

Analysis of memory utilization when the test run was in progress on the appliance:

Memory Utilization Graph for single FSR Node

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the Memory utilization was around 65%.

RabbitMQ PB Queue Count Graph

Analysis of RabbitMQ PB Queue Count when the test run was in progress on the appliance:

RabbitMQ PB Queue Count Graph for single FSR Node

Using the system resources specified in the "Environment" section and the tunables configured as mentioned in the "Pre-Test Conditions" section, it was observed that during the "Sustenance Test" the rabbitmq_pb_queue peaked at 60 at times but eventually returned to 0. This means that no playbooks remained in the queue and all the required playbooks associated with the alerts were completed, i.e., alerts were created and their indicators were extracted and enriched within a minute.

IO Wait Graph

Analysis of IO Wait when the test run was in progress on the appliance:

IO Wait Graph for single FSR Node

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the IO Wait time was normal, with the average IO Wait being around 2% of the CPU idle time.

Read/Write IO Wait Graph for ElasticSearch

Analysis of Read/Write IO Wait for ElasticSearch when the test run was in progress on the appliance:

Read/Write IO Wait Graph for ElasticSearch for single FSR Node

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the "Read" Wait for the ElasticSearch disk was almost 0 millisecond. The "Write" Wait for the ElasticSearch disk averaged around 1 millisecond, with the maximum wait of 8 milliseconds and the minimum wait of 0.5 millisecond.

Read/Write IO Wait Graph for PostgreSQL

Analysis of Read/Write IO Wait for PostgreSQL when the test run was in progress on the appliance:

Read/Write IO Wait Graph for PostgreSQL for single FSR Node

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running, the "Read" Wait for the PostgreSQL disk averaged around 1 millisecond, with the maximum wait of 2.5 milliseconds and the minimum wait of 0 milliseconds. The "Write" Wait for the PostgreSQL disk averaged around 1 millisecond, with the maximum wait of 1.8 milliseconds and the minimum wait of 0.7 millisecond.

Single Invocation Test for the High Availability (HA) active-active cluster of two FortiSOAR nodes

Test setup for the HA active-active cluster of two FortiSOAR nodes

This test was invoked on the following setup:

  • Cluster of two FortiSOAR machines that are joined in the Active-Active state using the FortiSOAR HA feature.

  • The machines that form the HA cluster must be in the same network subnet.

Tests performed

Test 1: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook

This test is executed by manually triggering FortiSIEM Ingestion playbook that creates alerts in FortiSOAR.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook. You can download the JSON for the sample playbooks (PerfBenchmarking_Test01_PB_Collection_7_3.zip) to run the same tests in your environment and see the performance on your version and hardware platform; you can also tweak the playbooks to make additions specific to your environment.
  2. Once the alerts were created, measured the total time taken to create all the alerts in FortiSOAR.
Observations

The data in the following table outlines the number of alerts ingested and the total time taken to ingest those alerts.

Single Invocation Test run on a two-node active-active FortiSOAR appliance
Number of alerts created    Total time (in seconds) to create all alerts    Total number of playbooks executed
  1      0.31      1
  5      0.48      1
 10      0.70      1
 25      1.69      1
 50      3.81      1
100      6.36      1
Note: Once this test is completed, refer to the pre-test conditions before starting a new test.

Test 2: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook after the alerts are created and after triggering "Extraction" playbooks

This test is executed by manually triggering FortiSIEM Ingestion playbook that creates alerts in FortiSOAR. Once the alerts are created in FortiSOAR, an "Extraction" playbook is triggered and the total time taken for all the extraction playbooks to complete their execution is calculated.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook.
  2. Once the alerts are created, the "Extraction" playbooks are triggered. You can download the JSON for the sample playbooks (PerfBenchmarking_Test02_PB_Collection_7_3.zip) to run the same tests in your environment and see the performance on your version and hardware platform; you can also tweak the playbooks to make additions specific to your environment.
    The playbooks perform the following steps:
    1. Declares variables using the "Set Variable Step".
    2. Updates the existing indicator list using mapping.
    3. Retrieves indicators from the source data of the alert.
    4. Creates indicators in the "Indicators" Module.
    5. Links alerts to the indicators.
    6. Updates the state of the alerts.
Observations

The data in the following table outlines the number of alerts ingested, the total time taken to ingest those alerts, and the total time taken for all the triggered playbooks to complete their execution.

Single Invocation Test run on a two-node active-active FortiSOAR appliance
Number of alerts created    Total time (in seconds) to create all alerts    Total number of playbooks executed
  1      1.25      2
  5      1.88      6
 10      2.57     11
 25      5.56     26
 50      8.36     51
100     12.83    101

Note: Once this test is completed, refer to the pre-test conditions before starting a new test.

Test 3: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook after the alerts are created and after triggering "Extraction" and "Enrichment" playbooks

The test was executed using an automated testbed that starts FortiSIEM ingestion, which in turn creates alerts in FortiSOAR. Once the alerts are created in FortiSOAR, "Extraction" and "Enrichment" playbooks are triggered and the total time taken for all the extraction and enrichment playbooks to complete their execution is calculated.

Note

The setup for this test is the same as for the previous tests; however, this test additionally requires the "VirusTotal" connector to be configured.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook.
  2. Once the alerts are created, the "Extraction" playbooks are triggered. You can download the JSON for the sample playbooks (PerfBenchmarking_Test03_PB_Collection_7_3.zip) to run the same tests in your environment and see the performance on your version and hardware platform; you can also tweak the playbooks to make additions specific to your environment.
    The playbooks perform the following steps:
    1. Declares variables using the "Set Variable Step".
    2. Updates the existing indicator list using mapping.
    3. Retrieves indicators from the source data of the alert.
    4. Creates indicators in the "Indicators" Module.
    5. Links alerts to the indicators.
    6. Updates the state of the alerts.
  3. Once the indicators are extracted, the "Enrichment" playbooks are triggered and they perform the following steps:
    1. Matches the IP against an internal subnet using the "Utilities" connector.
    2. Validates whether the IP is Private or Public.
    3. Performs enrichment using the "Utilities" connector, if the IP is "Private".
    4. Performs enrichment using the "VirusTotal" connector, if the IP is "Public".
    5. Updates the indicator status based on the IP’s vulnerability.
    6. Updates the state of the indicator.
Observations

The data in the following table outlines the number of alerts ingested, the total time taken to ingest those alerts, and the total time taken for all the triggered playbooks to complete their execution.

Single Invocation Test run on a two-node active-active FortiSOAR appliance
Number of alerts created    Total time (in seconds) to create all alerts    Total number of playbooks executed
  1      4.44      8
  5      5.87     36
 10      7.57     51
 25     17.59    176
 50     33.12    351
100     61 (1 min 1 s)    700

Note: Once this test is completed, refer to the pre-test conditions before starting a new test. Also, note that the enrichment playbooks make API calls over the Internet, and the playbook execution times in this table include this network time.

Sustained Invocation Test for the HA active-active cluster of two FortiSOAR appliances

A sustenance test was also conducted with the configuration as defined in "Test 2", i.e., the test is executed by manually triggering the FortiSIEM Ingestion playbook, “FortiSIEM -> Ingest 100 Alerts”, which creates alerts in FortiSOAR. Once the alerts are created in FortiSOAR, an "Extraction" playbook is triggered and the total time taken for all the extraction playbooks to complete their execution is calculated.

Number of alerts: 100/min

Duration: 12 hours

Playbooks configured: As defined in Test 2, comprising the "Ingestion" and "Indicator Extraction" playbooks.

Total number of playbooks executed: 72720

Results

The system performed well under the sustained load. All 72000 alerts were successfully ingested and all the extraction playbooks were successfully completed without any queuing.

Graphs

The following graphs plot the vital statistics of the HA cluster under test during the test run.

Note

All the graphs included in this section are from the Primary/Active Node.

CPU Load Average Utilization Graph

Analysis of CPU load average utilization when the test run was in progress on the appliance:

CPU Load Average Utilization Graph for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the CPU utilization was normal and the performance of the system did not get impacted.

Memory Utilization Graph

Analysis of memory utilization when the test run was in progress on the appliance:

Memory Utilization Graph for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the Memory utilization was around 60%.

RabbitMQ PB Queue Count Graph

Analysis of RabbitMQ PB Queue Count when the test run was in progress on the appliance:

RabbitMQ PB Queue Count Graph for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" section and the tunables configured as mentioned in the "Pre-Test Conditions" section, it was observed that while the "Sustenance Test" was running, the rabbitmq_pb_queue peaked at 45 at times but eventually returned to 0. This means that no playbooks remained in the queue and all the required playbooks associated with the alerts were completed, i.e., alerts were created and their indicators were extracted and enriched within a minute.

IO Wait Graph

Analysis of IO Wait when the test run was in progress on the appliance:

IO Wait Graph for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the IO Wait time was normal, with the average IO Wait being around 1% of the CPU idle time.

Read/Write IO Wait Graph for ElasticSearch

Analysis of Read/Write IO Wait for ElasticSearch when the test run was in progress on the appliance:

Read/Write IO Wait Graph for ElasticSearch for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the "Read" Wait for the ElasticSearch disk was almost 0 milliseconds. The "Write" Wait for the ElasticSearch disk averaged around 1 millisecond, with the maximum wait of 1.3 milliseconds and the minimum wait of 0.7 millisecond.

Read/Write IO Wait Graph for PostgreSQL

Analysis of Read/Write IO Wait for PostgreSQL when the test run was in progress on the appliance:

Read/Write IO Wait Graph for PostgreSQL for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running, the "Read" Wait for the PostgreSQL disk averaged around 1 millisecond. The "Write" Wait for the PostgreSQL disk averaged around 1 millisecond, with the maximum wait of 1.4 milliseconds and the minimum wait of 0.6 millisecond.

FortiSOAR Performance Benchmarking for v7.3.0

This document details the performance benchmark tests conducted in Fortinet labs. The performance benchmarking tests were performed on FortiSOAR version 7.3.0 Build 2034.

The objective of this performance test is to measure the time taken to create alerts in FortiSOAR, and complete the execution of corresponding playbooks on the created alerts in the following cases:

  • Single-node FortiSOAR appliance

  • Cluster setup of FortiSOAR

The data from this benchmark test can help you in determining your scaling requirements for a FortiSOAR instance to handle the expected workload in your environment.

Single Invocation Test for the single-node FortiSOAR appliance

Environment

FortiSOAR Virtual Appliance Specifications

Component Specifications
CPU 16 CPUs
Memory 30 GB
Storage 300 GB virtual disk, HDD type gp3 with IOPS 3000, attached to an AWS Instance

Instance Type

c4.4xLarge

Operating System Specifications

Operating System Kernel Version
Rocky Linux 8.6 4.18.0-372.32.1.el8_6.x86_64

External Tools Used

Tool Name Version
FortiMonitor Cloud
Internal Script to gather data

Pre-test conditions on both the standalone FortiSOAR machine and the FortiSOAR High Availability (HA) cluster

At the start of each test run -

  • The test environment contained zero alerts.

  • The test environment contained only the FortiSOAR built-in connectors such as IMAP, Utilities, etc.

  • The system playbooks were deactivated such as, alert assignment notification, SLA calculation, etc., and there were no running playbooks.

  • The playbook execution logs were purged.

  • Configured tunables as follows:

    • Changed celery workers to 16

    • Elastic heaps size to 8GB

    • Changes related to PostgresSQL:

      • max_connections = 1200
      • shared_buffers = 5GB
      • effective_cache_size = 15GB
      • maintenance_work_mem = 2GB
      • checkpoint_completion_target = 0.9
      • wal_buffers = 16MB
      • default_statistics_target = 500
      • random_page_cost = 4
      • effective_io_concurrency = 2
      • work_mem = 546kB
      • min_wal_size = 4GB
      • max_wal_size = 16GB
      • max_worker_processes = 8
      • max_parallel_workers_per_gather = 4
      • max_parallel_workers = 8
      • max_parallel_maintenance_workers = 4
    • Changes related to NGINX:
      • worker_processes auto;
        #or should be equal to the CPU core, you can use 'grep processor /proc/cpuinfo | wc -l' to find; auto does it implicitly
      • worker_connections 1024;
        # default is 768; find optimum value for your server by 'ulimit -n'
      • access_log off;
        # to boost I/O on HDD we can disable access logs
        # this prevents nginx from logging every action in a log file named 'access.log'
      • keepalive_timeout 15;
        # default is 65;
        # server will close the connection after this time (in seconds)

        sudo mkdir -p /data/nginx/cache


        proxy_cache_path /data/nginx/cache keys_zone=my_zone:10m inactive=1d;

        server {

        ...

        location /api-endpoint/ {

        proxy_cache my_zone;

        proxy_cache_key "$host$request_uri$http_authorization";

        proxy_cache_valid 404, 302 1m;

        proxy_cache_valid 200 1d;

        add_header X-Cache-Status $upstream_cache_status;

        }

        ...

        }
Note

In a production environment the network bandwidth, especially for outbound connections, to applications such as VirusTotal might vary, which could affect the observations.

Test setup for the single-node FortiSOAR appliance

This test has been invoked on a standalone FortiSOAR machine with configurations mentioned in the Environment section.

Tests performed

Test 1: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook

This test is executed by manually triggering FortiSIEM Ingestion playbook that creates alerts in FortiSOAR.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook. You can download the JSON for the sample playbooks (PerfBenchmarking_Test01_PB_Collection_7_3.zip) so that you can run the same tests in your environment to see the performance in your version/hardware platforms. Or, if you want to make some additions that are specific to your environment, you can also tweak the existing playbooks.
  2. Once the alerts are created, measure the total time taken to create all the alerts in FortiSOAR.
Observations

The data in the following table outlines the number of alerts ingested and the total time taken to ingest those alerts.

Single Invocation Test run on a single-node FortiSOAR appliance
Number of alerts created in FortiSOAR

Total time (in seconds) taken to create all alerts in FortiSOAR

Total number of playbooks executed in FortiSOAR
1

0.28

1
5

0.45

1
10

0.69

1

25

1.44

1

50

2.66

1

100

5.74

1

Note: Once this test is completed, refer to the pre-test conditions before starting a new test.

Test 2: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook after the alerts are created and after triggering "Extraction" playbooks

This test is executed by manually triggering FortiSIEM Ingestion playbook that creates alerts in FortiSOAR. Once the alerts are created in FortiSOAR, an "Extraction" playbook is triggered and the total time taken for all the extraction playbooks to complete their execution is calculated.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook.
  2. Once the alerts are created, the "Extraction" playbooks are triggered. You can download the JSON for the sample playbooks (PerfBenchmarking_Test02_PB_Collection_7_3.zip) so that you can run the same tests in your environment to see the performance in your version/hardware platforms. Or, if you want to make some additions that are specific to your environment, you can also tweak the existing playbooks.
    The playbooks perform the following steps:
    1. Declares variables using the "Set Variable Step".
    2. Updates the existing indicator list using mapping.
    3. Retrieves indicators from the source data of the alert.
    4. Creates indicators in the "Indicators" Module.
    5. Links alerts to the indicators.
    6. Updates the state of the alerts.
Observations

The data in the following table outlines the number of alerts ingested, the total time taken to ingest those alerts, and the total time taken for all the triggered playbooks to complete their execution.

Single Invocation Test run on a single-node FortiSOAR appliance
Number of alerts created in FortiSOAR

Total time (in seconds) taken to create all alerts in FortiSOAR

Total number of playbooks executed in FortiSOAR
1 1.29 2
5 2.20 6
10 2.85 11

25

6.80 26

50

10.20 51

100

16.39 101

Note: Once this test is completed, refer to the pre-test conditions before starting a new test.

Test 3: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook after the alerts are created and after triggering "Extraction" and "Enrichment" playbooks

The test was executed using an automated testbed that starts FortiSIEM ingestion which in turn created alerts in FortiSOAR. Once the alerts are created in FortiSOAR, "Extraction" and "Enrichment" playbooks are triggered and the total time taken for all the extraction and enrichment playbooks to complete their execution is calculated.

Tooltip

The setup for this test is exactly the same, however this test additionally requires the "VirusTotal" connector to be configured.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook.
  2. Once the alerts are created, the "Extraction" playbooks are triggered. You can download the JSON for the sample playbooks (PerfBenchmarking_Test03_PB_Collection_7_3.zip) to run the same tests in your environment and see the performance on your version and hardware platform, or tweak the playbooks to add environment-specific steps.
    The playbooks perform the following steps:
    1. Declares variables using the "Set Variable Step".
    2. Updates the existing indicator list using mapping.
    3. Retrieves indicators from the source data of the alert.
    4. Creates indicators in the "Indicators" Module.
    5. Links alerts to the indicators.
    6. Updates the state of the alerts.
  3. Once the indicators are extracted, the "Enrichment" playbooks are triggered and they perform the following steps:
    1. Matches the IP against an internal subnet using the "Utilities" connector.
    2. Validates whether the IP is Private or Public.
    3. Performs enrichment using the "Utilities" connector, if the IP is "Private".
    4. Performs enrichment using the "VirusTotal" connector, if the IP is "Public".
    5. Updates the indicator status based on the IP’s vulnerability.
    6. Updates the state of the indicator.
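Similarly illustrative only, the private/public branching in the enrichment steps above can be sketched with Python's `ipaddress` module. The `lookup_reputation` helper is a hypothetical stand-in for the "VirusTotal" connector call, which in the real playbooks goes over the Internet:

```python
import ipaddress

def lookup_reputation(ip: str) -> str:
    # Hypothetical stand-in for the "VirusTotal" connector lookup.
    return "Suspicious"

def enrich_indicator(indicator: dict) -> dict:
    ip = ipaddress.ip_address(indicator["value"])
    # Steps 1-2: validate whether the IP is private (internal subnet) or public.
    if ip.is_private:
        # Step 3: private IPs are enriched locally via the "Utilities" connector.
        indicator["enrichment_source"] = "Utilities"
        indicator["status"] = "Internal"
    else:
        # Step 4: public IPs are enriched via the "VirusTotal" connector.
        indicator["enrichment_source"] = "VirusTotal"
        # Step 5: status reflects the IP's reputation/vulnerability.
        indicator["status"] = lookup_reputation(indicator["value"])
    # Step 6: update the state of the indicator.
    indicator["state"] = "Enrichment Completed"
    return indicator
```
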
Observations

The data in the following table outlines the number of alerts ingested, the total time taken to ingest those alerts, and the total time taken for all the triggered playbooks to complete their execution.

Single Invocation Test run on a single-node FortiSOAR appliance

| Number of alerts created in FortiSOAR | Total time (in seconds) taken to create all alerts in FortiSOAR | Total number of playbooks executed in FortiSOAR |
| --- | --- | --- |
| 1 | 4.23 | 8 |
| 5 | 6.16 | 36 |
| 10 | 11.41 | 71 |
| 25 | 25.25 | 176 |
| 50 | 46.27 | 351 |
| 100 | 86 (1 min 26 sec) | 700 |

Note: Once this test is completed, refer to the pre-test conditions before starting a new test. Also, note that the enrichment playbooks make API calls over the Internet, and the playbook execution times in this table include that network time.

Sustained Invocation Test for the single-node FortiSOAR appliance

A sustenance test was also conducted with the configuration defined in "Test 2": the test is executed by manually triggering the FortiSIEM Ingestion playbook, “FortiSIEM -> Ingest 100 Alerts”, which creates alerts in FortiSOAR. Once the alerts are created in FortiSOAR, an "Extraction" playbook is triggered for each alert, and the total time taken for all the extraction playbooks to complete their execution is calculated.

Number of alerts: 100/min

Duration: 12 hours

Playbooks configured: As defined in Test 2, comprising "Ingestion" and "Indicator Extraction" playbooks.

Total number of playbooks executed: 72720

Results

The system performed well under the sustained load. All 72000 alerts were successfully ingested and all the extraction playbooks were successfully completed without any queuing.
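The reported totals are internally consistent: one ingestion playbook run per minute plus one extraction playbook per alert accounts exactly for the 72,720 executions (the per-minute/per-alert breakdown is inferred from the Test 2 playbook counts):

```python
# Cross-check the sustained-test totals reported above.
rate_per_min = 100        # alerts ingested per minute
duration_min = 12 * 60    # 12-hour run

alerts = rate_per_min * duration_min       # total alerts ingested
ingestion_runs = duration_min              # one ingestion playbook run per minute
extraction_runs = alerts                   # one extraction playbook per alert
total_playbooks = ingestion_runs + extraction_runs

print(alerts, total_playbooks)  # 72000 72720
```
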

Graphs

The following graphs plot the vital statistics of the system under test during the test run.

CPU Load Average Utilization Graph

Analysis of CPU load average utilization when the test run was in progress on the appliance:

CPU Load Average Utilization Graph for single FSR Node

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the CPU utilization was normal and the performance of the system did not get impacted.

Memory Utilization Graph

Analysis of memory utilization when the test run was in progress on the appliance:

Memory Utilization Graph for single FSR Node

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the Memory utilization was around 65%.

RabbitMQ PB Queue Count Graph

Analysis of RabbitMQ PB Queue Count when the test run was in progress on the appliance:

RabbitMQ PB Queue Count Graph for single FSR Node

Using the system resources specified in the "Environment" and tunables configured as mentioned in the "Pre-Test Conditions" Sections, it was observed that during the "Sustenance Test" the rabbitmq_pb_queue went to a maximum of 60 at some points but always drained back to 0. This means that no playbooks remained in the queue and that all the required playbooks associated with the alerts completed, i.e., alerts were created and their indicators extracted within a minute.

IO Wait Graph

Analysis of IO Wait when the test run was in progress on the appliance:

IO Wait Graph for single FSR Node

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the IO Wait time was normal, with the average IO Wait being around 2% of the CPU idle time.

Read/Write IO Wait Graph for ElasticSearch

Analysis of Read/Write IO Wait for ElasticSearch when the test run was in progress on the appliance:

Read/Write IO Wait Graph for ElasticSearch for single FSR Node

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running, the "Read" Wait for the ElasticSearch disk was almost 0 milliseconds. The "Write" Wait for the ElasticSearch disk averaged around 1 millisecond, with a maximum wait of 8 milliseconds and a minimum wait of 0.5 milliseconds.

Read/Write IO Wait Graph for PostgreSQL

Analysis of Read/Write IO Wait for PostgreSQL when the test run was in progress on the appliance:

Read/Write IO Wait Graph for PostgreSQL for single FSR Node

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running, the "Read" Wait for the PostgreSQL disk averaged around 1 millisecond, with a maximum wait of 2.5 milliseconds and a minimum wait of 0 milliseconds. The "Write" Wait for the PostgreSQL disk averaged around 1 millisecond, with a maximum wait of 1.8 milliseconds and a minimum wait of 0.7 milliseconds.

Single Invocation Test for the High Availability (HA) active-active cluster of two FortiSOAR nodes

Test setup for the HA active-active cluster of two FortiSOAR nodes

This test was invoked on the following setup:

  • Cluster of two FortiSOAR machines that are joined in the Active-Active state using the FortiSOAR HA feature.

  • The machines that form the HA cluster must be in the same network subnet.

Tests performed

Test 1: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook

This test is executed by manually triggering the FortiSIEM Ingestion playbook that creates alerts in FortiSOAR.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook. You can download the JSON for the sample playbooks (PerfBenchmarking_Test01_PB_Collection_7_3.zip) to run the same tests in your environment and see the performance on your version and hardware platform, or tweak the playbooks to add environment-specific steps.
  2. Once the alerts are created, measure the total time taken to create all the alerts in FortiSOAR.
Observations

The data in the following table outlines the number of alerts ingested and the total time taken to ingest those alerts.

Single Invocation Test run on a two-node active-active FortiSOAR cluster

| Number of alerts created in FortiSOAR | Total time (in seconds) taken to create all alerts in FortiSOAR | Total number of playbooks executed in FortiSOAR |
| --- | --- | --- |
| 1 | 0.31 | 1 |
| 5 | 0.48 | 1 |
| 10 | 0.70 | 1 |
| 25 | 1.69 | 1 |
| 50 | 3.81 | 1 |
| 100 | 6.36 | 1 |

Note: Once this test is completed, refer to the pre-test conditions before starting a new test.

Test 2: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook after the alerts are created and after triggering "Extraction" playbooks

This test is executed by manually triggering the FortiSIEM Ingestion playbook that creates alerts in FortiSOAR. Once the alerts are created in FortiSOAR, an "Extraction" playbook is triggered and the total time taken for all the extraction playbooks to complete their execution is calculated.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook.
  2. Once the alerts are created, the "Extraction" playbooks are triggered. You can download the JSON for the sample playbooks (PerfBenchmarking_Test02_PB_Collection_7_3.zip) to run the same tests in your environment and see the performance on your version and hardware platform, or tweak the playbooks to add environment-specific steps.
    The playbooks perform the following steps:
    1. Declares variables using the "Set Variable Step".
    2. Updates the existing indicator list using mapping.
    3. Retrieves indicators from the source data of the alert.
    4. Creates indicators in the "Indicators" Module.
    5. Links alerts to the indicators.
    6. Updates the state of the alerts.
Observations

The data in the following table outlines the number of alerts ingested, the total time taken to ingest those alerts, and the total time taken for all the triggered playbooks to complete their execution.

Single Invocation Test run on a two-node active-active FortiSOAR cluster

| Number of alerts created in FortiSOAR | Total time (in seconds) taken to create all alerts in FortiSOAR | Total number of playbooks executed in FortiSOAR |
| --- | --- | --- |
| 1 | 1.25 | 2 |
| 5 | 1.88 | 6 |
| 10 | 2.57 | 11 |
| 25 | 5.56 | 26 |
| 50 | 8.36 | 51 |
| 100 | 12.83 | 101 |
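Comparing this table with the single-node Test 2 results shows the two-node cluster's advantage growing with batch size (times transcribed from the two tables):

```python
# Total ingestion + extraction times (seconds) for Test 2: alerts -> seconds.
single_node = {1: 1.29, 5: 2.20, 10: 2.85, 25: 6.80, 50: 10.20, 100: 16.39}
two_node_ha = {1: 1.25, 5: 1.88, 10: 2.57, 25: 5.56, 50: 8.36, 100: 12.83}

# Speedup of the HA cluster over the single node at each batch size.
speedup = {n: round(single_node[n] / two_node_ha[n], 2) for n in single_node}
print(speedup)
```
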

Note: Once this test is completed, refer to the pre-test conditions before starting a new test.

Test 3: Perform Ingestion in FortiSOAR using the FortiSIEM Ingestion Playbook after the alerts are created and after triggering "Extraction" and "Enrichment" playbooks

The test was executed using an automated testbed that starts FortiSIEM ingestion, which in turn creates alerts in FortiSOAR. Once the alerts are created in FortiSOAR, "Extraction" and "Enrichment" playbooks are triggered, and the total time taken for all the extraction and enrichment playbooks to complete their execution is calculated.

Note

The setup for this test is exactly the same as for Test 2; however, this test additionally requires the "VirusTotal" connector to be configured.

Steps followed
  1. Created the alerts using the FortiSIEM Ingestion playbook.
  2. Once the alerts are created, the "Extraction" playbooks are triggered. You can download the JSON for the sample playbooks (PerfBenchmarking_Test03_PB_Collection_7_3.zip) to run the same tests in your environment and see the performance on your version and hardware platform, or tweak the playbooks to add environment-specific steps.
    The playbooks perform the following steps:
    1. Declares variables using the "Set Variable Step".
    2. Updates the existing indicator list using mapping.
    3. Retrieves indicators from the source data of the alert.
    4. Creates indicators in the "Indicators" Module.
    5. Links alerts to the indicators.
    6. Updates the state of the alerts.
  3. Once the indicators are extracted, the "Enrichment" playbooks are triggered and they perform the following steps:
    1. Matches the IP against an internal subnet using the "Utilities" connector.
    2. Validates whether the IP is Private or Public.
    3. Performs enrichment using the "Utilities" connector, if the IP is "Private".
    4. Performs enrichment using the "VirusTotal" connector, if the IP is "Public".
    5. Updates the indicator status based on the IP’s vulnerability.
    6. Updates the state of the indicator.
Observations

The data in the following table outlines the number of alerts ingested, the total time taken to ingest those alerts, and the total time taken for all the triggered playbooks to complete their execution.

Single Invocation Test run on a two-node active-active FortiSOAR cluster

| Number of alerts created in FortiSOAR | Total time (in seconds) taken to create all alerts in FortiSOAR | Total number of playbooks executed in FortiSOAR |
| --- | --- | --- |
| 1 | 4.44 | 8 |
| 5 | 5.87 | 36 |
| 10 | 7.57 | 51 |
| 25 | 17.59 | 176 |
| 50 | 33.12 | 351 |
| 100 | 61 (1 min 1 sec) | 700 |

Note: Once this test is completed, refer to the pre-test conditions before starting a new test. Also, note that the enrichment playbooks make API calls over the Internet, and the playbook execution times in this table include that network time.

Sustained Invocation Test for the HA active-active cluster of two FortiSOAR appliances

A sustenance test was also conducted with the configuration defined in "Test 2": the test is executed by manually triggering the FortiSIEM Ingestion playbook, “FortiSIEM -> Ingest 100 Alerts”, which creates alerts in FortiSOAR. Once the alerts are created in FortiSOAR, an "Extraction" playbook is triggered for each alert, and the total time taken for all the extraction playbooks to complete their execution is calculated.

Number of alerts: 100/min

Duration: 12 hours

Playbooks configured: As defined in Test 2, comprising "Ingestion" and "Indicator Extraction" playbooks.

Total number of playbooks executed: 72720

Results

The system performed well under the sustained load. All 72000 alerts were successfully ingested and all the extraction playbooks were successfully completed without any queuing.

Graphs

The following graphs plot the vital statistics of the HA cluster under test during the test run.

Note

All the graphs included in this section are from the Primary/Active Node.

CPU Load Average Utilization Graph

Analysis of CPU load average utilization when the test run was in progress on the appliance:

CPU Load Average Utilization Graph for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the CPU utilization was normal and the performance of the system did not get impacted.

Memory Utilization Graph

Analysis of memory utilization when the test run was in progress on the appliance:

Memory Utilization Graph for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the Memory utilization was around 60%.

RabbitMQ PB Queue Count Graph

Analysis of RabbitMQ PB Queue Count when the test run was in progress on the appliance:

RabbitMQ PB Queue Count Graph for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" and tunables configured as mentioned in the "Pre-Test Conditions" Sections, it was observed that during the "Sustenance Test" the rabbitmq_pb_queue went to a maximum of 45 at some points but always drained back to 0. This means that no playbooks remained in the queue and that all the required playbooks associated with the alerts completed, i.e., alerts were created and their indicators extracted within a minute.

IO Wait Graph

Analysis of IO Wait when the test run was in progress on the appliance:

IO Wait Graph for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running the IO Wait time was normal, with the average IO Wait being around 1% of the CPU idle time.

Read/Write IO Wait Graph for ElasticSearch

Analysis of Read/Write IO Wait for ElasticSearch when the test run was in progress on the appliance:

Read/Write IO Wait Graph for ElasticSearch for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running, the "Read" Wait for the ElasticSearch disk was almost 0 milliseconds. The "Write" Wait for the ElasticSearch disk averaged around 1 millisecond, with a maximum wait of 1.3 milliseconds and a minimum wait of 0.7 milliseconds.

Read/Write IO Wait Graph for PostgreSQL

Analysis of Read/Write IO Wait for PostgreSQL when the test run was in progress on the appliance:

Read/Write IO Wait Graph for PostgreSQL for the HA active-active cluster of two FSR Nodes

Using the system resources specified in the "Environment" and "Pre-Test Conditions" Sections, it was observed that while the "Sustenance Test" was running, the "Read" Wait for the PostgreSQL disk averaged around 1 millisecond. The "Write" Wait for the PostgreSQL disk averaged around 1 millisecond, with a maximum wait of 1.4 milliseconds and a minimum wait of 0.6 milliseconds.