Diagnosing debug flow

Debugging traffic flow at user level with diagnose commands

The most commonly used diagnose debug flow commands are listed below:

Reset any enabled diagnose settings, then turn on debug log output with timestamps

diagnose debug reset

diagnose debug timestamp enable

Add filters and start the flow trace

diagnose debug flow filter flow-detail 7 #Enables messages from each packet processing module and packet flow traces

diagnose debug flow filter HTTP-detail 7 #HTTP parser details

diagnose debug flow filter module-detail status on #Turn on details from WAF modules processing the flow

diagnose debug flow filter module-detail module <module> #Specify all or specific module(s)

diagnose debug flow filter server-IP 192.168.12.12 #The VIP in RP mode or the real server IP in TP/TI mode

diagnose debug flow filter client-IP 192.168.12.1 #The client IP

diagnose debug flow filter pserver-ip <C.C.C.C> #The real server IP for RP mode only; supported from 6.3.21 and 7.0.3

diagnose debug flow trace start

diagnose debug enable

Stop output

diagnose debug flow trace stop

diagnose debug disable
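Put together, a typical session for tracing traffic between the example client 192.168.12.1 and virtual server 192.168.12.12 might look like the following sketch (the addresses are placeholders; substitute your own):

diagnose debug reset

diagnose debug timestamp enable

diagnose debug flow filter flow-detail 7

diagnose debug flow filter client-IP 192.168.12.1

diagnose debug flow filter server-IP 192.168.12.12

diagnose debug flow trace start

diagnose debug enable

#Reproduce the issue, then stop the trace:

diagnose debug flow trace stop

diagnose debug disable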

Please note the following:

  • client-IP and server-IP are supported on all 6.3.x and 7.0.x builds; pserver-ip is supported on 6.3.21, 7.0.3, and later builds.

  • The logical relationship of the IP filters (client-IP, server-IP, and pserver-IP) for diagnose debug flow differs between FortiWeb builds, as described below.

  • Logical relationship between IP filters on 6.3.20, 7.0.1 and earlier builds:

    • Only client-IP and server-IP are supported on these builds;

    • The logical relationship between client-IP and server-IP is AND; that is, only logs for traffic flows matching both filters are printed.

      Example 1: When only one IP filter, either client-IP or server-IP, is specified, diagnose logs are printed for traffic flows matching that filter.

      Example 2: When both IP filters are set, diagnose logs are printed only for traffic flows matching both filters.

    • A known limitation is that when TLS 1.3 is deployed on the back-end side (between FortiWeb and the real back-end servers) and any IP flow filter is specified, the SSL pre-master secrets for the back-end side will not be printed out. You need to remove all IP filters to retrieve the TLS 1.3 secrets.

      Please refer to Decrypting SSL packets to analyze traffic issues for more details.

  • Logical relationship between IP filters on 6.3.21, 7.0.3 and later 7.0.x builds:

    • Three IP filters (client-IP, server-IP and pserver-IP) are supported.

    • If only the front-end IP filters (client-IP and/or server-IP) are configured, the logical relationship between the two front-end filters is AND, the same as in previous builds.

    • If both the front-end filters (client-IP and/or server-IP) and the back-end filter pserver-IP are specified, the relationship between the front-end filters and the back-end filter is OR; that is, flows matching either the front-end or the back-end IP filters are printed.

      For example, with the following filters specified:

      diagnose debug flow filter client-IP <A.A.A.A>

      diagnose debug flow filter server-IP <B.B.B.B>

      diagnose debug flow filter pserver-IP <C.C.C.C>

      These traffic flows will be printed in diagnose logs:

      From A.A.A.A to B.B.B.B, and distributed to pserver C.C.C.C

      From A.A.A.A to B.B.B.B, and distributed to pserver D.D.D.D #the client side flow from A.A.A.A to B.B.B.B will be printed, while the server side flow from FortiWeb to D.D.D.D will NOT be printed

      From E.E.E.E to F.F.F.F, and distributed to pserver C.C.C.C #the server side flow from FortiWeb to C.C.C.C will be printed, while the client side flow from E.E.E.E to F.F.F.F will NOT be printed

      These traffic flows will NOT be printed in diagnose logs:

      From A.A.A.A to F.F.F.F, and distributed to pserver D.D.D.D

  • Diagnose debug flow usually produces a large amount of output and impacts performance. If traffic is heavy or system resources are already highly utilized, enable diagnose debug flow with caution.

    Some basic recommendations:

    • Enable diagnose output in an SSH terminal instead of on the serial console.

    • Under normal circumstances, enabling only the client-IP filter is recommended when debugging issues in a production environment.

      Avoid specifying only server-IP or pserver-IP, because this can produce excessive output on the SSH or console terminal.

    • Set a low detail level, and don't enable unnecessary filters.

      For example, if you only intend to retrieve SSL pre-master secrets to decrypt SSL traffic, set diagnose debug flow filter flow-detail 4 and do not enable module-detail (a minimal sequence is sketched after this list).

    • Don’t forget to execute diagnose debug disable or diagnose debug reset after debug is done.
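As a concrete illustration of the last two recommendations, a minimal sketch for retrieving SSL pre-master secrets for traffic from a single client, without module details, could be (the client address is an example):

diagnose debug reset

diagnose debug flow filter flow-detail 4

diagnose debug flow filter client-IP 192.168.12.1 #Omit all IP filters if you need back-end TLS 1.3 secrets (see the limitation above)

diagnose debug flow trace start

diagnose debug enable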

Debugging traffic flow at kernel level

Change the debug levels through the /proc settings shown below, and kernel-level debug logs will be recorded in dmesg. This method is useful for tracking how traffic flows are processed in the system kernel.

/proc/tproxy/debug #for transparent mode

  • echo "FFFF F" > /proc/tproxy/debug: output logs to dmesg at a detailed level

  • echo "XXXX F" > /proc/tproxy/debug: turn debug logs off again (don't forget this step)

Use the same method to turn on debug logs for reverse-proxy and WCCP modes, as shown in the example below.
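For example, a minimal kernel-debug session for transparent mode might look like the sketch below; for reverse-proxy or WCCP mode, write to /proc/rptproxy/debug or /proc/wproxy/debug instead:

echo "FFFF F" > /proc/tproxy/debug #Turn on detailed kernel debug logs

#Reproduce the issue, then read the logs:

dmesg

echo "XXXX F" > /proc/tproxy/debug #Turn kernel debug logs off again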

Some details:

/var/log# more /proc/tproxy/debug

Debug modules : HOOK4 HOOK6 HASH POLICY

HOOK4 : for netfilter hook IPv4

HOOK6 : for netfilter hook IPv6

HASH : for tproxy hash

POLICY : for policy management

FFFF : for all above

XXXX : cleanup all above

PASS : for bypass this module in kernel path

LOIP : for enable / disable local IP filter in hook4

PIP : <PIP [1,0] ip> for only enbale this IP upto proxyd

Debug levels : 1 2 4 8

1 : for error message

2 : for data packet info

4 : for data following info

8 : for function entry/exit info

Current debug info : FFFF 15, mbypass = 0, sysmode : 2, localip : 0, proxyd-ip : 0.0.0.0

ex : echo "HOOK4 F" > debug

ex : echo "PIP 1 10.200.2.1" > debug
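Note that the built-in examples above use a relative path; assuming they are run from a directory other than /proc/tproxy, the equivalent full-path commands would be:

echo "HOOK4 F" > /proc/tproxy/debug #Detailed logs for the IPv4 netfilter hook module only

echo "PIP 1 10.200.2.1" > /proc/tproxy/debug #Restrict the kernel trace to traffic for 10.200.2.1 (per the PIP help line above)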

Example:


/# dmesg

[553897.203831] (tproxy) (/Chroot_Build/34/SVN_REPO_CHILD/FortiWEB/kernel/modules/tproxy/tproxy_policy.c:433) get vserver(240.0.0.29), vport(9781), dir(1)

[553897.203834] (tproxy) ====> get vserver(240.0.0.29), vport(9781), mark(1835264/1835264), incoming (vzone_p3p4_vlan) tcp info : src:(192.168.11.1:48310), dst:(192.168.11.2:80)

[553897.203836] (tproxy) (465) incoming (vzone_p3p4_vlan) tcp info : src:(192.168.11.1:48310), dst:(192.168.11.2:80) -ipid(63355) iptlen(60) seq(2348868809) ack_seq(0) syn(1) ack(0) fin(0) rst(0) psh(0)

[553897.203838] (tproxy) [fortiweb-tproxy] redirecting: proto 6 192.168.11.2:80 -> 240.0.0.29:9781, ipid(63355) iplen(60) mark: 1c0100

[553897.203855] (tproxy)

[553897.203855]

[553897.203855] ====> out to client : src:(192.168.11.2:80), dst:(192.168.11.1:48310)- seq(1319007036) ack_seq(2348868810) syn(1) ack(1) fin(0) rst(0) psh(0)

[553897.203856] (tproxy) [POST_ROUTING]: TO CLIENT OK, 192.168.11.2:80->192.168.11.1:48310, todevname:port3vlan101, flag 4000

/proc/rptproxy/debug #for reverse-proxy mode

/var/log# more /proc/rptproxy/debug

Debug modules : HOOK4 HOOK6 HASH POLICY

HOOK4 : for netfilter hook IPv4

HOOK6 : for netfilter hook IPv6

POLICY : for policy management

FFFF : for all above

XXXX : cleanup all above

PASS : for bypass this module in kernel path

LOIP : for enable / disable local IP filter in hook4

PIP : <PIP [1,0] ip> for only enbale this ip upto proxyd

Debug levels : 1 2 4 8

...

Current debug info : 0, mbypass = 0, sysmode : 2, localip : 0, proxyd-ip : 0.0.0.0

/proc/wproxy/debug #for wccp mode

/var/log# more /proc/wproxy/debug

Debug modules : HOOK4 HOOK6 POLICY

HOOK4 : for netfilter hook IPv4

HOOK6 : for netfilter hook IPv4

POLICY : for policy management

FFFF : for all above

XXXX : cleanup all above

PASS : for bypass this module in kernel path

Debug levels : 1 2 4 8

...

Current debug info : 0, mbypass = 0, sysmode : 1

How to capture network packets in FortiWeb

Capturing network packets is a useful and direct method for troubleshooting network issues, including TCP connection establishment issues, SSL handshake issues, and HTTP issues.

Usually it’s better to enable diagnose debug flow and capture packets at the same time, then analyze them together.
