Administration Guide

Retrieving system logs in backend system

1. dmesg

dmesg is used to examine or control the kernel ring buffer. It includes important kernel information such as hardware loading and call trace messages. Kernel-level traffic debug logs are also included in dmesg.

You can check these logs directly with “# dmesg” or “# dmesg | grep xxx”.

For further troubleshooting, you can archive all logs under the directory /var/log/dmesg/:

tar czf /var/log/gui_upload/dmesg.tar.gz /var/log/dmesg/

Notes:

By default, dmesg uses a timestamp notation of seconds (with sub-second precision) since the kernel started, which is not a human-friendly format. If you need the accurate wall-clock time, check "/var/log/dmesg/kern.log" instead.

kern.log contains the latest dmesg information; the other files whose names begin with kern.log are rotated backups.
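For example, to filter the ring buffer by keyword and check recent kernel messages with wall-clock timestamps (a minimal sketch; "usb" is just a placeholder keyword):

# dmesg | grep -i usb
# tail -n 50 /var/log/dmesg/kern.log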

2. Apache error logs

If a GUI-related operation fails, please collect these logs for analysis:

/var/log# ls apache_logs/

error_log
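For example, to check the most recent Apache errors and archive them for upload (a minimal sketch based on the listing above):

/var/log# tail -n 100 apache_logs/error_log
/var/log# tar czf /var/log/gui_upload/apache_logs.tar.gz apache_logs/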

3. CMDB logs

For configuration deployment issues, please collect CMDB logs for analysis:

# ls /var/log/cmdb/cmdb.log.*

cmdb/cmdb.log.0 cmdb/cmdb.log.155 cmdb/cmdb.log.211 cmdb/cmdb.log.44

# ls /var/log/dbg_cli/
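To collect both directories in a single archive for upload (a minimal sketch based on the paths above):

# tar czf /var/log/gui_upload/cmdb_logs.tar.gz /var/log/cmdb/ /var/log/dbg_cli/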

4. /var/log/debug/

Some real-time logs will be generated and stored at /var/log/debug/:

/# ls /var/log/debug/

collect_tcpdump_para.txt daemon_log_flag proxyd_dbg

coredump_log_flag dbsync_log sample

crash.log kernel.log system-startup.log

crash_log_flag kernel_log_flag tmp

crl_updated_dbg netstat_log_flag daemon.log nstd
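For example, to inspect recent daemon debug messages and archive the whole directory for analysis (a minimal sketch using standard shell commands):

/# tail -n 50 /var/log/debug/daemon.log
/# tar czf /var/log/gui_upload/debug_dir.tar.gz /var/log/debug/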

5. /var/log/gui_upload/

1) Core dump files and some real-time logs are generated and stored in /var/log/gui_upload/:

/# ls /var/log/gui_upload/

core-proxyd-2141-1630609770 dlog_logd ha_event_log

core-proxyd-7794-1630610047 ints.txt debug_disk.txt irq

jeprof.out.51146.1630448785.heap perf.data kern.log

debug_out_d_cond_cpu.sh.txt debug_out_d_mem.sh.txt debug_out_d_net.sh.txt

debug_out_d_proc.sh.txt
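Core files follow the naming pattern core-<daemon>-<number>-<number>; the two numeric fields appear to be the process ID and a Unix timestamp (an assumption based on the names above, not documented here). To convert such a timestamp to a readable date (a minimal sketch, assuming the backend date command accepts the @<seconds> format):

/# date -d @1630609770   #prints when core-proxyd-2141-1630609770 was written (assumes @<seconds> support)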

2) Logs named “debug_<function name>.txt” (or with the prefix “debug_out_d_” in some intermediate builds) are generated from 6.4.1.

  • Scripts in /var/log/debug/sample/ are sample scripts intended to be run from /var/log/outgoing/;

  • Scripts in /var/log/outgoing/ are the scripts that are actually executed;

  • Currently the following system information is collected:

    /# ls /var/log/debug/sample/ #script samples

    README d_cond_cpu.sh d_mem.sh d_net.sh d_proc.sh first_flag

    /# ls /var/log/outgoing/ #scripts actually run

    d_cond_cpu.sh d_mem.sh d_net.sh d_proc.sh

    /# ls -l /var/log/gui_upload/debug_out_d_* (in newer builds the files are named debug_<function name>.txt)

    -rw-r--r-- 1 root 0 65018 Sep 28 18:03 /var/log/gui_upload/debug_out_d_cond_cpu.sh.txt

    -rw-r--r-- 1 root 0 119859 Sep 28 18:03 /var/log/gui_upload/debug_out_d_mem.sh.txt

    -rw-r--r-- 1 root 0 66371 Sep 28 18:03 /var/log/gui_upload/debug_out_d_net.sh.txt

    -rw-r--r-- 1 root 0 126484 Sep 28 18:03 /var/log/gui_upload/debug_out_d_proc.sh.txt

  • The information collected by these scripts mainly includes:

    d_cond_cpu.sh: if CPU usage exceeds 90% - date, the top 10 daemons by CPU usage, and perf top output for 10 seconds

    d_mem.sh: date, free, /proc/meminfo, etc.

    d_net.sh: date, netstat -natpu, route -n

    d_proc.sh: date, top -b -n1, ps

  • The running interval for these scripts can be set with CLI:

    FortiWeb # show full system global

    config system global

    set debug-monitor-interval 5 #minutes

    end

    If a script is blocked for more than 30 seconds, the system kills it and runs it again at the next debug-monitor-interval.

  • If necessary, you can add your own scripts (shell or Python) to this directory to collect additional system information. This is NOT recommended, because too many manually added tasks may impact system performance and stability (a hypothetical example follows this list).

  • The size of each “debug_<function name>.txt” file is limited to 25 MB. When the limit is exceeded, the file is rotated to an .old file; only two files are kept in the rotation.
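As mentioned above, a custom collection script would follow the same pattern as the shipped d_*.sh samples. The following is a hypothetical sketch only; the file name d_disk.sh and the commands inside are illustrative examples, not scripts shipped with the product:

#!/bin/sh
# Hypothetical /var/log/outgoing/d_disk.sh - collects basic disk usage statistics.
# The debug monitor runs scripts in this directory every debug-monitor-interval
# minutes; output is expected to land in a debug_<function name>.txt file under
# /var/log/gui_upload/, as described above.
date
df -h
du -sh /var/log/* 2>/dev/null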

3) NMON logs are generated from 6.4.0.

NMON (short for Nigel's Monitor) is a system monitoring tool that collects performance statistics including CPU, memory, disk, and network usage.

  • NMON log files (with the suffix .nmon) are generated automatically and stored in /var/log/debug/tmp. They are archived and can be downloaded via the method described in section 10.2 below. At most 180 .nmon files are kept.

  • Samples are written to the .nmon file at a 5-minute interval, and a new .nmon file is created each time the system boots up. So generally only one .nmon file per day, named like “FortiWeb_220107_1734.nmon” (the naming may differ on some earlier builds), is generated; multiple .nmon files in one day indicate that the system rebooted or crashed.

  • After the file is processed by an NMON analyzer tool, the statistics can be visualized as charts.
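The .nmon file name encodes the host name and the creation time (YYMMDD_HHMM). If you want to collect the raw files directly from the backend shell instead of using the download method mentioned above (a minimal sketch):

/# ls /var/log/debug/tmp/*.nmon
/# tar czf /var/log/gui_upload/nmon_logs.tar.gz /var/log/debug/tmp/*.nmon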

4) Jemalloc dump logs for proxyd & ml_daemon.

Please refer to Diagnosing memory leak issues.

Jemalloc dump logs, named “jeprof.out.*.*.heap”, can be generated manually by executing diagnose debug jemalloc proxyd dump, or are produced automatically when total system memory usage reaches the threshold (70% by default).

Jeprof information is very useful when debugging memory issues for proxyd and the machine learning daemon.

5) Jemalloc pdump logs for proxyd.

Please refer to Diagnosing memory leak issues.

Jemalloc pdump logs, named “proxyd-objpool-*-*.txt”, can be generated manually by executing diagnose debug jemalloc proxyd pdump.

Such logs contain memory statistics for key data structures, and only proxyd supports generating them. When analyzing proxyd issues, it is helpful to collect both dump and pdump logs at the same time.
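For example, to collect both dump and pdump logs during a proxyd memory investigation and confirm the resulting files (a minimal sketch using the commands above):

FortiWeb # diagnose debug jemalloc proxyd dump
FortiWeb # diagnose debug jemalloc proxyd pdump
/# ls -l /var/log/gui_upload/jeprof.out.*.heap /var/log/gui_upload/proxyd-objpool-*.txt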

6) Proxyd watchdog logs generated from 7.0.1.

Proxyd watchdog logs are useful when analyzing proxyd thread lock issues.

If a proxyd thread is stuck for 5 or 60 seconds (depending on the build, see below), FortiWeb writes a debug message such as "proxyd worker thread [1] stuck for 5 (or 60) seconds" to /var/log/debug/daemon.log and generates a log file named like "watchdog-proxyd-3991-1658580435.bt" under /var/log/gui_upload/.

Watchdog logs mainly contain “pstack <proxyd>” information. /var/log/debug/daemon.log is also included in the one-click downloadable debug file "console_log.tar.gz".

From 7.0.1 to 7.0.3 the default stuck-detection threshold is 5 seconds; on 7.0.4 and newer builds it is 60 seconds.
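To check whether the watchdog has fired and to gather the related files (a minimal sketch based on the paths above):

/# grep "stuck for" /var/log/debug/daemon.log
/# ls -l /var/log/gui_upload/watchdog-proxyd-*.bt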

7) Console output log (COMlog) generated from 7.0.2

COMlog captures the system output that is printed to the console terminal automatically when the system reboots or encounters unexpected problems, as well as the output displayed on the console when you configure the system directly through the console terminal.

/# ls -l /var/log/gui_upload/ | grep console

-rw-r--r-- 1 root 0 8261 Aug 8 13:45 console.log

This information can be used for troubleshooting when unexpected behavior occurs, or when you need to collect console output but SSH access is not permitted for security reasons.

COMlog can record up to 4 MB of console output to the kernel ring buffer, and also supports reading the content and writing it to a log file "/var/log/gui_upload/console.log".

  • To enable/disable the COMlog:

    diagnose debug comlog enable/disable #dump & read only take effect after COMlog is enabled

    COMlog is enabled by default. To change the default behavior and save it to the configuration file, run:

    config system global

    set console-log enable/disable

    end

    Note: when console-log is enabled, diagnose debug comlog is also enabled.

  • To view the COMlog status, including speed, file size, and log start and end:

    FWB # dia debug comlog info

    ttyname:/dev/pts/1 com_speed = 9600

    control = Logging enabled #COMlog is enabled

    log_space = 4186042/4194304

    log_start = 0

    log_end = 8261

    log_size = 8261

  • To dump the COMlog from the kernel ring buffer:

    diagnose debug comlog dump

  • To read the COMlog from ring buffer and write to /var/log/gui_upload/console.log:

    FWB # diagnose debug comlog read

    Dump log to /var/log/gui_upload/console.log done.

  • To clear the COMlog in the kernel ring buffer:

    diagnose debug comlog clear

Notes:

  • COMlog content is written to "console.log" only after you execute diagnose debug comlog read;

  • Each time you execute diagnose debug comlog read, the content of "console.log" is overwritten, so if you execute it after the system reboots, the logs saved before the reboot are lost.

Due to the two limitations above, console output from a kernel core dump or other issues that cause a system reboot cannot be preserved in "console.log". FortiWeb will address this limitation in future builds.
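A typical COMlog collection workflow combines the commands above (a minimal sketch; download console.log afterwards through the GUI or your usual file transfer method):

FWB # diagnose debug comlog info    #confirm "control = Logging enabled"
FWB # diagnose debug comlog read    #write the ring buffer to /var/log/gui_upload/console.log
FWB # diagnose debug comlog clear   #optional: clear the ring buffer after collection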
