Checking the cluster status
You can use the following get and diagnose commands to show the status of the cluster and all of the devices in it.
-
Log into the primary FortiController CLI and enter the following command to view the system status of the primary FortiController.
get system status
Version: FortiController-5103B v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000029
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch1-slot1
Current HA mode: a-p, master
System time: Sat Sep 13 06:51:53 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
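If you check this regularly, the same information can be read over SSH and verified in a script. The following is a minimal sketch, assuming Python with the paramiko library, a placeholder management IP and admin credentials, and that your FortiController build accepts exec-style SSH commands (some builds may require an interactive shell instead); it runs get system status and reports the Current HA mode line.

import paramiko

HOST = "192.168.1.99"   # hypothetical management IP of the primary FortiController
USER = "admin"          # placeholder credentials
PASSWORD = ""

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD, look_for_keys=False)

# Run the same command shown above and capture its output.
_, stdout, _ = client.exec_command("get system status")
output = stdout.read().decode()
client.close()

# Report the HA mode line; on the primary unit it should read "a-p, master".
for line in output.splitlines():
    if line.startswith("Current HA mode"):
        print(line.strip())
        if "master" not in line:
            print("Warning: this FortiController is not the HA master.")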
-
Enter the following command to view the load balance status of the primary FortiController and its workers.
The command output shows the workers in slots 3, 4, and 5, and status information about each one.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
   Working:  3 [ 3 Active  0 Standby]
   Ready:    0 [ 0 Active  0 Standby]
   Dead:     0 [ 0 Active  0 Standby]
   Total:    3 [ 3 Active  0 Standby]
   Slot  3: Status:Working   Function:Active
     Link:      Base: Up          Fabric: Up
     Heartbeat: Management: Good  Data: Good
     Status Message:"Running"
   Slot  4: Status:Working   Function:Active
     Link:      Base: Up          Fabric: Up
     Heartbeat: Management: Good  Data: Good
     Status Message:"Running"
   Slot  5: Status:Working   Function:Active
     Link:      Base: Up          Fabric: Up
     Heartbeat: Management: Good  Data: Good
     Status Message:"Running"
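The worker status can also be checked programmatically. The sketch below is an assumption-laden example: it takes output text captured the same way as in the previous example and relies on the exact line layout shown above (other firmware builds may format it differently). It flags any worker slot that does not report Status:Working and Function:Active.

import re

def failed_slots(output: str) -> list:
    """Return the slot lines that do not report Status:Working / Function:Active."""
    bad = []
    for line in output.splitlines():
        match = re.match(r"\s*Slot\s+(\d+):\s*Status:(\S+)\s+Function:(\S+)", line)
        if match and (match.group(2) != "Working" or match.group(3) != "Active"):
            bad.append(line.strip())
    return bad

# Usage with a fragment of the output shown above (one line altered to show a failure):
sample = """   Slot  3: Status:Working   Function:Active
   Slot  4: Status:Working   Function:Active
   Slot  5: Status:Dead      Function:Active"""
print(failed_slots(sample))   # ['Slot  5: Status:Dead      Function:Active']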
-
Enter this command from the primary FortiController to show the HA status of the primary and secondary FortiControllers.
The command output shows a lot of information about the cluster, including the host names and the chassis and slot locations of the FortiControllers, the number of sessions each FortiController is processing (in this case, 0 for each FortiController), the number of failed workers (0 of 3 for each FortiController), the number of FortiController front panel interfaces that are connected (2 for each FortiController), and so on. The final two lines of output also show that the B1 interfaces are connected (status=alive) and the B2 interfaces are not (status=dead). The cluster can still operate with a single heartbeat connection, but redundant heartbeat interfaces are recommended. (A scripted check of the heartbeat status is sketched at the end of this section.)

diagnose system ha status
mode: a-p
minimize chassis failover: 1
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.41, uptime=62581.81, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=2 force-state(0:none)
    hbdevs: local_interface=        b1 best=yes
            local_interface=        b2 best=no

ch2-slot1(FT513B3912000051), Slave(priority=1), ip=169.254.128.42, uptime=1644.71, chassis=2(1)
    slot: 1
    sync: conf_sync=0, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=2 force-state(0:none)
    hbdevs: local_interface=        b1 last_hb_time=66430.35 status=alive
            local_interface=        b2 last_hb_time=    0.00 status=dead
-
Log into the secondary FortiController CLI and enter this command to view the status of the secondary FortiController.
get system status
Version: FortiController-5103B v5.0,build0020,131118 (Patch 3)
Branch Point: 0020
Serial-Number: FT513B3912000051
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch2-slot1
Current HA mode: a-p, backup
System time: Sat Sep 13 07:29:04 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
-
Enter the following command to view the load balance status of the secondary FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: N/A
Blades:
   Working:  3 [ 3 Active  0 Standby]
   Ready:    0 [ 0 Active  0 Standby]
   Dead:     0 [ 0 Active  0 Standby]
   Total:    3 [ 3 Active  0 Standby]
   Slot  3: Status:Working   Function:Active
     Link:      Base: Up          Fabric: Up
     Heartbeat: Management: Good  Data: Good
     Status Message:"Running"
   Slot  4: Status:Working   Function:Active
     Link:      Base: Up          Fabric: Up
     Heartbeat: Management: Good  Data: Good
     Status Message:"Running"
   Slot  5: Status:Working   Function:Active
     Link:      Base: Up          Fabric: Up
     Heartbeat: Management: Good  Data: Good
     Status Message:"Running"
-
Enter the following command from the secondary FortiController to show the HA status of the secondary and primary FortiControllers.
Notice that the secondary FortiController is shown first. The command output shows a lot of information about the cluster, including the host names and the chassis and slot locations of the FortiControllers, the number of sessions each FortiController is processing (in this case, 0 for each FortiController), the number of failed workers (0 of 3 for each FortiController), the number of FortiController front panel interfaces that are connected (2 for each FortiController), and so on. The final two lines of output also show that the B1 interfaces are connected (status=alive) and the B2 interfaces are not (status=dead). The cluster can still operate with a single heartbeat connection, but redundant heartbeat interfaces are recommended.

diagnose system ha status
mode: a-p
minimize chassis failover: 1
ch2-slot1(FT513B3912000051), Slave(priority=1), ip=169.254.128.42, uptime=3795.92, chassis=2(1)
    slot: 1
    sync: conf_sync=0, elbc_sync=1
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=        b1 best=yes
            local_interface=        b2 best=no

ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.41, uptime=64732.98, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface=        b1 last_hb_time=68534.90 status=alive
            local_interface=        b2 last_hb_time=    0.00 status=dead
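As noted in the steps above, the most useful lines in the diagnose system ha status output are the heartbeat entries. The following is a minimal sketch, assuming output captured the same way as in the first example and the line format shown above (which may differ on other firmware builds); it maps each heartbeat interface to its reported status and warns when one is dead. The local unit's own hbdevs lines show best=yes/no rather than a status field, so they are simply skipped.

import re

def heartbeat_states(output: str) -> dict:
    """Map each heartbeat interface name (b1, b2) to its reported status."""
    states = {}
    for line in output.splitlines():
        match = re.search(r"local_interface=\s*(\S+).*status=(\S+)", line)
        if match:
            states[match.group(1)] = match.group(2)
    return states

# Usage with the final lines of the output shown above:
sample = """    hbdevs: local_interface=        b1 last_hb_time=66430.35 status=alive
            local_interface=        b2 last_hb_time=    0.00 status=dead"""

for name, status in heartbeat_states(sample).items():
    print(name, status)
    if status != "alive":
        print("Warning: heartbeat interface %s is down; the cluster is "
              "running on a single heartbeat connection." % name)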