Administration Guide

FAQ

Note

This FAQ only applies to the FortiAnalyzer-BigData unit, not the FortiAnalyzer-BigData-VM.

Why does the FAZBD4500F have physical 10G and 40G ports on switch modules #1 and #2 when module #1 is for internal cluster connections?

By default, module #1 is for internal connections. If you need to expand the chassis, you will need to link the two chassis through the module #1 interfaces.

If there is no need for chassis expansion, you do not need to touch switch module #1.

Do port2 of FAZ Blade (slot 1) and Big Data hosts (Slots 2-14) have to be in the same subnet?

Yes. The recommended topology and best practice is to use the same subnet.

If there is a second chassis (as for storage) are Blade 1 and Blade 2 used the same as Blade 3-14 or do they have special functions?

For the second chassis:

  • Blade 1 is not used and should be powered off.

  • Blade 2 does not have special functions, it is the same as Blades 3-14.

How many power supplies can fail without issues?

It depends on the workloads' power consumption. Field engineers need to observe the symptoms and ensure that enough power supplies remain operational to keep the system running.

Can we replace Blades/Switches for RMA?

Yes.

What happens if Blade 1 malfunctions?

If Blade 1 fails, it currently cannot failover to the other standby blade. However, as of v7.0.1, if you stack two FAZ-BD chassis, HA mode can be enabled across the two FAZ blades.

If Blade 2 fails, does it delegate its role to another blade?

If Blade 2 fails, it can failover to the other Big Data host.

Big Data controller's IP is configured in FortiAnalyzer Blade 1. Does this mean that this IP is "shared" between all Big Data hosts (Blades 2-14)?

The controller IP is not shared. It is configured on the Big Data controller. When the controller fails over, the IP is re-configured on a new controller.
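The behavior above can be illustrated with a minimal sketch. This is an illustrative model only, not the appliance's implementation: the controller IP belongs to whichever host currently holds the controller role, rather than being shared by all Big Data hosts.

```python
# Illustrative model (hypothetical names): the controller IP is bound to the
# single host holding the controller role; on failover it moves to the new one.
class Cluster:
    def __init__(self, hosts: list[str], controller: str, controller_ip: str):
        self.hosts = hosts
        self.controller = controller        # host currently holding the role
        self.controller_ip = controller_ip  # IP configured on that host only

    def ip_of(self, host: str):
        # Only the active controller answers on the controller IP.
        return self.controller_ip if host == self.controller else None

    def fail_over(self, new_controller: str) -> None:
        # On failover, the IP is re-configured on the new controller.
        assert new_controller in self.hosts
        self.controller = new_controller

cluster = Cluster(["blade2", "blade3"], "blade2", "10.0.0.10")
cluster.fail_over("blade3")
print(cluster.ip_of("blade3"))  # 10.0.0.10
print(cluster.ip_of("blade2"))  # None
```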

When a blade fails, does it delegate its role to another blade?

When one of Blades 2-14 fails, another node with the same role takes over.

Does "3 K8s Master" mean there are 3 blades for this function?

Yes, it means there are 3 blades/hosts that are masters.

Is "3" the default replication factor?

Yes, the default replication factor is 3.

This means that 3 copies are stored in the system:

  • 1 original copy

  • 2 replicated copies
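The storage arithmetic implied by a replication factor of 3 (1 original copy plus 2 replicated copies) can be sketched as follows; the function names are hypothetical, for illustration only.

```python
# Effective storage cost under a replication factor of 3:
# each ingested byte is stored 3 times (1 original + 2 replicas).
REPLICATION_FACTOR = 3

def stored_bytes(raw_bytes: int, replication: int = REPLICATION_FACTOR) -> int:
    """Total bytes the cluster stores for a given amount of raw log data."""
    return raw_bytes * replication

def usable_capacity(cluster_bytes: int, replication: int = REPLICATION_FACTOR) -> int:
    """Raw log capacity available on a cluster of the given total size."""
    return cluster_bytes // replication

# 1 TB of ingested logs occupies 3 TB of cluster storage.
print(stored_bytes(1_000_000_000_000))  # 3000000000000
```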

Is the allocated disk usage for the FortiAnalyzer blade or for the Big Data blade?

It is for the FortiAnalyzer blade to cache log data before it is ingested into the Big Data cluster.

To view the allocated disk usage, go to Settings > Manage Storage Policy > Data Policy > Disk Usage > Allocated.

In Hosts view, is the last number the actual slot?

The last number is the blade ID. It is important to install blades into the slots that match their IDs.

Is the blade ID labeled on the blades?

Yes.

If the blade is labeled, when a particular faulty blade is on RMA, should the RMA return with the same blade labeled with the same blade ID?

The RMA will ship a blade with the OS and bootloader wiped, so all Big Data blades are identical regardless of blade ID. Once the blade is inserted into the slot, you will need to set the blade ID accordingly in the bootloader. The Big Data controller then bootstraps it with a cluster role.

Does this mean HA is no longer supported?

In v6.2 and v6.4, HA for Blades 2-14 is supported by the Big Data architecture design.

Note

Prior to v6.4.6, recovering or replacing a non-Data role node could fail in some scenarios and require a manual recovery process. This has been resolved in v6.4.6.

Can I migrate old logs after removing fetch management?

Data migration is not supported.

Is the upgrade started from the Big Data controller going to upgrade all the blades including Blade 1?

Yes. The CLI runs on the controller node and upgrades all the nodes, starting with and including Blade 1.

Will the size of the database affect the amount of time it takes to upgrade? For example, will it take less time to upgrade a 1 TB database than a 150 TB database?

Based on the Big Data architecture, the database is not rebuilt. Therefore, the size of the database will not impact the upgrade time.

Will Log View and Reports show data from its own cache and Big Data storage?

Data is stored in Big Data storage. When you query log data from Log View and Reports, the query request is sent to the Big Data cluster. All returned data comes from the Big Data cluster.

How much free storage is required for upgrade?

The system has a mechanism to reserve space for upgrade and other operations.

For upgrade, will all Big Data blades reboot at the same time, or in the order of the blade that finished downloading the image first?

The Big Data controller handles the order of the blades. They are rebooted in parallel.

When the system is working normally (not specific to upgrade), is log ingestion from the ADOM cache to Big Data storage in real time or is there a delay?

It is near real time. The delay will be several minutes at most.

Does "same version" mean same main branch (v7.0 to v7.0) or same patch level (v7.0.4 to v7.0.4)?

"Same version" means same patch level GA release.

Will the system rollback if an upgrade fails for certain blades?

The system will not roll back. However, you can use the CLI console to retry the upgrade to recover:

fazbdctl upgrade fazbd -o retry
