
FortiSIEM Reference Architecture Using ClickHouse

Enterprise

Enterprise deployments can scale from a few hundred to thousands of devices and very high EPS. FortiSIEM with ClickHouse can handle very high EPS environments with relatively few nodes, providing a big advantage in these systems.

Enterprise deployments should include:

  • One Supervisor node

  • Worker nodes as needed to support the required number of shards and replicas

  • One or three dedicated keeper nodes

  • Collectors

The Supervisor and Worker nodes can be deployed as virtual or hardware appliances. Virtual appliances should be deployed on a scalable, enterprise-class hypervisor platform. Each virtual appliance should be given at least the minimum specified resources, and those resources should be dedicated rather than shared with other guests. If a hardware Supervisor is used, a large hardware appliance should be chosen.

The system should be designed with enough shards from the outset to handle the maximum anticipated EPS through the life of the solution. Refer to the FortiSIEM ClickHouse Sizing Guide at https://docs.fortinet.com/product/fortisiem/ for the latest recommendation on the number of shards for your anticipated maximum EPS.

The number of replicas in a shard depends on the requirements for data redundancy versus the cost of the solution. Each copy of the data is called a replica: a single replica has no redundancy; two replicas are resilient against the failure of one node; three replicas are resilient against the failure of two nodes, and so on. In many cases, we have assumed that two replicas per shard provide sufficient resilience, but organizations should make their own assessment of the resilience required in their environment.
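The Worker node count follows directly from the shard and replica choices above. A minimal sketch of that arithmetic, using hypothetical shard counts for illustration (consult the Sizing Guide for real figures):

```python
# Worker node count for a ClickHouse cluster: shards x replicas.
# The example numbers below are hypothetical, not sizing guidance.

def worker_nodes(shards: int, replicas_per_shard: int) -> int:
    """Each shard holds `replicas_per_shard` full copies of its data,
    and each replica runs on its own Worker node."""
    return shards * replicas_per_shard

# e.g. 3 shards with 2 replicas each -> 6 Worker nodes,
# resilient to the loss of one node in any shard.
print(worker_nodes(3, 2))  # 6
```

Doubling replicas doubles both the resilience budget and the node cost, which is the trade-off the paragraph above describes.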

Storage can be expanded in-life by expanding the ClickHouse disk on the nodes within each shard. This provides a straightforward way to store more events if needed in the future. FortiSIEM also supports archiving to an NFS server for longer-term storage using FortiSIEM EventDB. However, to simplify querying and data retention, consider providing more storage to ClickHouse rather than introducing an EventDB archive on NFS.
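When planning how much disk to provision per replica, a rough daily-volume estimate is useful. The figures in this sketch (bytes per event, compression ratio) are illustrative assumptions, not FortiSIEM-published numbers:

```python
# Rough daily ClickHouse storage estimate per replica.
# bytes_per_event and compression_ratio are illustrative assumptions;
# measure your own event sizes and compression in practice.

SECONDS_PER_DAY = 86_400

def daily_storage_gb(eps: float, bytes_per_event: float,
                     compression_ratio: float) -> float:
    """Raw event volume per day, reduced by ClickHouse columnar
    compression, expressed in gigabytes."""
    raw_bytes = eps * bytes_per_event * SECONDS_PER_DAY
    return raw_bytes / compression_ratio / 1e9

# e.g. 10,000 EPS at 500 bytes/event with 10:1 compression
print(round(daily_storage_gb(10_000, 500, 10), 1))  # 43.2
```

Multiplying by the retention period in days gives the disk to provision per replica, which is the quantity the in-life expansion above grows.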

Enterprise solutions should have dedicated keeper nodes for resilience. Deploy either one or three keeper nodes, depending on the budget and requirements.

  • One keeper node is non-resilient. If it fails, the system will be in read-only mode.

  • A three-node cluster is resilient against the failure of one keeper node. If two of the three fail, the system will lose quorum and will be in read-only mode.

The Supervisor, Worker and Keeper nodes and storage must all be in the same data center, connected by a high performance, data center class LAN.

Collectors are essential in an enterprise deployment. They are deployed across the organization: in the data center to collect logs from servers and run performance monitoring jobs, and at remote locations to provide local performance monitoring and log collection, pre-processing, compression, and secure upload to the central Supervisor/Worker cluster.

A load balancer can optionally be used to distribute Collector connections across the Worker cluster, and also to distribute inbound syslog or FortiSIEM server agent traffic across Collector nodes. The load balancer is an optional component that provides additional flexibility in traffic routing; it is not essential to the operation of a distributed FortiSIEM cluster.
