
Administration Guide

Image Analysis

Content Analysis is a licensed, AI-powered feature that detects visual threats, including pornography, extremism, graphic violence, and other inappropriate Not Safe for Work (NSFW) visual content, in images of the following types:

  • BMP

  • JPEG

  • PNG

  • TIFF

This service performs real-time analysis of the content passing through the FortiProxy unit. The Content Analysis Service uses advanced artificial intelligence that delivers unparalleled accuracy with near-zero false positives, all in a matter of milliseconds. After inappropriate NSFW content is detected, it can optionally be blocked or reported. Unlike early heuristic-based technologies, the AI-powered Content Analysis Service has been extensively trained and developed, and more NSFW-relevant threat categories are being added as they become available.

In general, the procedure is similar to the HTTP antivirus scanning procedure.

When a client sends an HTTP request for an image, the Content-Type header of the server's response determines the image type. The WAD process then holds the image content from the server for scanning before sending it to the client.
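For example, responses carrying the standard MIME types for the supported formats are held and scanned; the exact set of Content-Type values that the WAD process recognizes may vary by firmware version:

  • Content-Type: image/bmp

  • Content-Type: image/jpeg

  • Content-Type: image/png

  • Content-Type: image/tiff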

If the image's rating exceeds the configurable threshold and the profile action is set to Block, the requested image is blocked, and the client receives a replacement image. This replacement image keeps the same image type and size if you enable the option to resize images. The FortiProxy unit stores the scan results to improve performance for future requests.

The default settings provide a good balance, but they might require some adjustment in some instances.

To use Content Analysis, you need to set up at least one profile and apply it to a policy. Content Analysis profiles are configured under Content Analysis > Image Analysis.
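The same workflow can be sketched in the CLI. The object and option names below (image-analyzer profile, image-skip-size, action, and the policy option image-analyzer-profile) are assumptions inferred from the GUI labels in this section, not confirmed syntax, and the "<- assumed" notes are annotations rather than CLI input; verify the exact commands in the FortiProxy CLI Reference for your firmware version:

  config image-analyzer profile                    <- assumed object name
      edit "nsfw-block"
          set image-skip-size 1                    <- kilobytes; matches the GUI default
          set action block
      next
  end
  config firewall policy
      edit 1
          set image-analyzer-profile "nsfw-block"  <- assumed option name
      next
  end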

Hover over the leftmost edge of the column heading to display the Configure Table icon, which you can use to select the columns to display or to reset all the columns to their default settings. You can also drag column headings to change their order.

The following options are available:

Create New

Create a Content Analysis profile. See Create or edit an Image Analysis profile.

Edit

Modify the selected Content Analysis profile. See Create or edit an Image Analysis profile.

Delete

Remove the selected Content Analysis profile.

Name

The name of the Content Analysis profile.

Image Skip Size

Enter a value between 0 and 2,048.

This value is the size, in kilobytes, of images that the image scan unit skips. Images that are too small are difficult to scan and are more likely to be rated incorrectly by the image scan engine.

The default value is 1.

Action

Select whether to Pass or Block an image that exceeds the rating threshold. The default is Pass.

Comments

An optional description of the Content Analysis profile.

Ref.

Displays the number of times the object is referenced by other objects.

To view the location of the referenced object, select the number in Ref.; the Object Usage window opens and displays the various locations of the referenced object.

Validating Content Analysis

You can use the following debug commands to validate the service licensing and image cache:

  • get system fortiguard: displays licensing information.

  • diagnose test application wad 143: displays the image cache.

  • diagnose test application wad 144: clears the image cache.

You need a license to display and clear the image cache; otherwise, these commands are not available.
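For example, a quick end-to-end check after applying a profile is to confirm the license, request a few images through the proxy from a client, and then inspect and reset the cache:

  get system fortiguard
  diagnose test application wad 143
  diagnose test application wad 144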
