
Administration Guide

FortiAI tokens

When FortiAnalyzer is licensed for FortiAI, the license includes a monthly token entitlement that is shared by all FortiAI users.

How token usage is calculated

Tokens are used in large language models (LLMs) to process text and quantify usage. Token usage is calculated using the following guidelines (see the illustrative sketch after this list):

  • When you use the FortiAI assistant, the text in both the prompt (input) and the response (output) is processed as tokens.

  • While there is not a one-to-one relationship between words or characters and tokens, in general, more text in the query and response means more tokens are used.

  • Because the FortiAI assistant uses session history to inform its responses, queries that are part of a long session use more tokens than those in a new conversation.
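
The exact tokenization is model-specific, so the sketch below uses an assumed rule of thumb of roughly 1.3 tokens per word, purely to illustrate that both the prompt and the response add to the count; it is not an actual FortiAI figure.

    # Illustrative sketch only: the tokens-per-word ratio below is an assumed
    # rule of thumb, not an actual FortiAI value.
    def estimate_tokens(text: str) -> int:
        """Very rough token estimate based on word count."""
        return round(len(text.split()) * 1.3)

    prompt = "Show logs for 10.10.10.10 (Past week)."      # input
    response = "Here are the matching log entries ..."     # hypothetical output

    # Both the prompt (input) and the response (output) count toward usage.
    total_tokens = estimate_tokens(prompt) + estimate_tokens(response)
    print(total_tokens)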

Consider the following two queries:

  • Can you show me all the log entries for the endpoint 10.10.10.10?

  • Show logs for 10.10.10.10 (Past week).

The total number of tokens used in the examples above is based on the input (prompt) plus the output (response). The first prompt uses more text, so it consumes more tokens as input. The first prompt also generates a larger response from FortiAI because it asks for "all log entries" rather than limiting the response to logs from the "Past week" only. As a result, the first prompt uses a greater number of tokens in the output as well.

For example, see below for a simple calculation of the tokens used by each query. Note that the token counts are illustrative only and do not reflect the actual amounts these queries would use.

  • Tokens used by first query = 20 (input) + 2000 (output) = 2020 tokens

  • Tokens used by second query = 10 (input) + 1000 (output) = 1010 tokens

It is also important to consider the length of the thread when these queries are made. The longer the thread, the more tokens each query consumes.
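
For example, the following sketch models how a thread's running history can inflate the input side of each new query. The per-turn token counts are the illustrative figures from above, and the assumption that the accumulated session history is counted with each input is a simplification for illustration, not a statement of how FortiAI packages context internally.

    # Minimal sketch, assuming (for illustration only) that the accumulated
    # session history is counted as part of the input for each new query.
    turns = [(20, 2000), (10, 1000)]  # (prompt, response) token counts from the example above

    history = 0
    grand_total = 0
    for prompt_tokens, response_tokens in turns:
        input_tokens = history + prompt_tokens         # longer thread -> larger input
        grand_total += input_tokens + response_tokens
        history += prompt_tokens + response_tokens     # the thread keeps growing

    print(grand_total)  # later queries in a long thread cost more than fresh ones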

Best practices

To ensure you are using your monthly allocation of tokens effectively, consider implementing best practices for FortiAI users. For example:

  • Make your prompts concise and specific. In terms of token usage, the prompt "Can you show me all the log entries for endpoint 10.10.10.10 from the past week?" is less effective than "Show recent logs for 10.10.10.10 (Past week)" because the former prompt uses more text than the latter.

  • Use filters in your prompts to receive concise and specific responses. For example, include time ranges or specify a limit for the number of results.

  • Use words that relate to functions existing in FortiAnalyzer. For example, using "apply filter" or "generate report" concisely tells the FortiAI assistant what action is required.

  • Leverage predefined datasets, charts, reports, and event handlers whenever possible. You can use FortiAI more efficiently by referencing existing tools in your prompts.

  • Reference details in the existing thread when possible. This reduces redundancy and allows you to be concise and specific as you build upon previous prompts. However, note that the FortiAI assistant will not remember previous threads.

  • Restart the AI assistant after 10 conversations if you do not need to keep the historical context.

Viewing token usage

The monthly token usage is displayed at the bottom of the FortiAI pane in FortiAnalyzer. Mouse over the Monthly token usage % to view the following in a tooltip:

  • Current Chat Session Token Usage

  • Current Monthly Token Usage

  • Total Monthly Entitled Tokens
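
The percentage itself is a straightforward ratio. The sketch below assumes the displayed Monthly token usage % is the current monthly usage divided by the total monthly entitlement; the figures are hypothetical.

    # Minimal sketch, assuming the Monthly token usage % shown in the tooltip is
    # current monthly usage divided by the total monthly entitlement.
    current_monthly_usage = 150_000        # hypothetical figure
    total_monthly_entitlement = 1_000_000  # hypothetical figure

    usage_percent = current_monthly_usage / total_monthly_entitlement * 100
    print(f"Monthly token usage: {usage_percent:.1f}%")  # -> 15.0%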
