
OpenAI v2.0.0

About the connector

This integration supports interacting with OpenAI's powerful language model, ChatGPT, from FortiSOAR™ workflows.

This document provides information about the OpenAI connector, which facilitates automated interactions using the OpenAI API. Add the OpenAI connector as a step in FortiSOAR™ playbooks and perform automated operations such as conversing with OpenAI, retrieving a list of available models from OpenAI, etc.

NOTE: For optimal results from the OpenAI connector, use it in conjunction with the 'Fortinet Advisor' solution pack. For more information, see the Fortinet Advisor solution pack documentation.

Version information

Connector Version: 2.0.0

FortiSOAR™ Version Tested on: 7.4.3

OpenAI Version Tested on: v1
NOTE: The OpenAI connector v2.0.0 and higher is compatible with the OpenAI Python dependency version 1.0.0 or higher. Also, this version of the connector has been tested on the following GPT models: gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-4, and gpt-4-1106-preview.

Authored By: Fortinet

Certified: Yes

Release Notes for version 2.0.0

The following enhancements are made to the OpenAI connector in version 2.0.0:

  • Added the following new actions and playbooks:
    • Converse With OpenAI
    • List Available Models
    • Get Tokens Usage
    • Get Token Count
  • Renamed the Create a chat completion action and playbook to Ask a Question and updated the output schema for this action.
  • Added the following input parameters to the Ask a Question and Converse With OpenAI actions:
    • Timeout
    • Additional Inputs
  • Updated the output schema for the following actions:
    • Ask a Question
    • Converse With OpenAI
    • List Available Models
    • Get Tokens Usage
  • Enhanced the connector to connect to the Azure OpenAI Service. A new configuration parameter named 'Use Microsoft Azure OpenAI Endpoint' has been added for users who want to use OpenAI services via a Microsoft Azure endpoint.

Installing the connector

Use the Content Hub to install the connector. For the detailed procedure to install a connector, click here.

You can also use the yum command as a root user to install the connector:
yum install cyops-connector-openai

Prerequisites to configuring the connector

  • You must have the API key to connect and perform automated operations on the OpenAI server.
  • The FortiSOAR™ server should have outbound connectivity to port 443 on the OpenAI server.

Minimum Permissions Required

  • Not applicable.

Configuring the connector

For the procedure to configure a connector, click here.

Configuration parameters

In FortiSOAR™, on the Content Hub page, click the Manage tab, and then click the OpenAI connector card. On the connector popup, click the Configurations tab to enter the required configuration details.

Parameter Description
API Key Specify the API key to access the endpoint to connect and perform automated operations.
For information on how to get an API Key, see https://platform.openai.com/account/api-keys.
Use Microsoft Azure OpenAI Endpoint

Select this option to use OpenAI services via a Microsoft Azure Endpoint.

If you select this option, i.e., set it to 'true', then you must specify the following parameters:

  • Endpoint FQHN: Specify the FQHN (fully qualified hostname) of the Azure OpenAI endpoint to which you want to connect and perform automated operations.
  • API Version: Specify the Azure OpenAI API version to be used for the connection. For example, 2023-05-15.
  • Deployment Name: Specify the name of the Azure OpenAI deployment to be used for the connection.
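
The two configuration modes map naturally onto the two client classes in the OpenAI Python library (version 1.0.0 or higher, which the version note above says this connector requires). The sketch below is a hypothetical illustration of that mapping, not the connector's actual code; the parameter names mirror the configuration fields documented above.

```python
# Hypothetical sketch: map the connector's configuration fields to keyword
# arguments for openai.OpenAI (standard) or openai.AzureOpenAI (Azure).
# This is an illustration of the documented parameters, not connector code.

def build_client_kwargs(api_key, use_azure=False, endpoint_fqhn=None,
                        api_version=None, deployment_name=None):
    """Return the client class name and constructor kwargs for each mode."""
    if not use_azure:
        # Standard OpenAI endpoint: only the API key is required.
        return {"client": "OpenAI", "kwargs": {"api_key": api_key}}
    # Azure OpenAI Service: the three extra parameters are all mandatory.
    missing = [name for name, value in [("Endpoint FQHN", endpoint_fqhn),
                                        ("API Version", api_version),
                                        ("Deployment Name", deployment_name)]
               if not value]
    if missing:
        raise ValueError(f"Missing required Azure parameters: {missing}")
    return {
        "client": "AzureOpenAI",
        "kwargs": {
            "api_key": api_key,
            "azure_endpoint": f"https://{endpoint_fqhn}",
            "api_version": api_version,
            "azure_deployment": deployment_name,
        },
    }
```

The keyword names `azure_endpoint`, `api_version`, and `azure_deployment` are the ones accepted by `openai.AzureOpenAI` in the 1.x Python library.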

Actions supported by the connector

The following automated operations can be included in playbooks, and you can also use the annotations to access operations from FortiSOAR™:

Function Description Annotation and Category
Ask a Question Generates a contextually relevant response to a given question using a pre-trained deep learning model. chat_completions
Miscellaneous
Converse With OpenAI Allows users to converse with OpenAI, i.e., users can ask a question and get the answer from OpenAI based on the previous discussions. chat_conversation
Miscellaneous
List Available Models Retrieves a list and descriptions of all models available in the OpenAI API. list_models
Miscellaneous
Get Tokens Usage Retrieves the usage details for each OpenAI API call for the specified date. get_usage
Miscellaneous
Get Token Count Counts the number of tokens in the specified string for the specified OpenAI model. count_tokens
Miscellaneous

operation: Ask a Question

Input parameters

Parameter Description
Message Specify the message or question for which you want to generate a response.
Model Specify the ID of the GPT model to use for the chat completion. Currently, gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-4, and gpt-4-1106-preview are supported. By default, it is set to gpt-3.5-turbo.
Temperature Specify the sampling temperature between 0 and 2. Higher values, such as 0.8, make the output more random, while lower values make the output more focused and deterministic.
NOTE: It is recommended to use either this parameter or the 'Top Probability' parameter, not both. By default, this parameter is set to 1.
Top Probability Specify the top probability, an alternative to sampling with temperature, also called nucleus sampling. The model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
NOTE: It is recommended to use either this parameter or the 'Temperature' parameter, not both. By default, this parameter is set to 1.
Max Tokens (Optional) Specify the maximum number of tokens to generate in the chat completion.
NOTE: The total length of input tokens and generated tokens is limited by the model's context length.
Timeout (Optional) Specify the maximum time (in seconds) you want to wait for the action to complete successfully. By default, the timeout is set to 600 seconds.
Additional Inputs (Optional) Specify any other inputs, as a key-value pair, to be included in the OpenAI Completions API request. For example, { "seed": 123 }
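
As a rough sketch of how these inputs could be assembled into a Chat Completions request body, consider the following. This is an illustration of the documented parameters, not the connector's actual implementation; note that Timeout is a client-side option rather than part of the request body, and Additional Inputs (e.g., { "seed": 123 }) is merged into the body last.

```python
# Minimal sketch (not connector code): assemble the Ask a Question inputs
# into a Chat Completions request body. Extra key-value pairs supplied via
# Additional Inputs are passed through to the API unchanged.

def build_chat_payload(message, model="gpt-3.5-turbo", temperature=None,
                       top_p=None, max_tokens=None, additional_inputs=None):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": message}],
    }
    # Include sampling parameters only when explicitly set; the notes above
    # recommend using Temperature or Top Probability, not both.
    if temperature is not None:
        payload["temperature"] = temperature
    if top_p is not None:
        payload["top_p"] = top_p
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    payload.update(additional_inputs or {})
    return payload
```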

Output

The output contains the following populated JSON schema:

{
    "id": "",
    "model": "",
    "usage": {
        "total_tokens": "",
        "prompt_tokens": "",
        "completion_tokens": ""
    },
    "object": "",
    "choices": [
        {
            "index": "",
            "message": {
                "role": "",
                "content": "",
                "tool_calls": "",
                "function_call": ""
            },
            "finish_reason": ""
        }
    ],
    "created": "",
    "system_fingerprint": ""
}

operation: Converse With OpenAI

Input parameters

Parameter Description
Messages Specify the list of messages for which you want to generate a chat completion. The OpenAI documentation recommends including all previous chat messages.
Model Specify the ID of the GPT model to use for the chat completion. Currently, gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-4, and gpt-4-1106-preview are supported. By default, it is set to gpt-3.5-turbo.
Temperature Specify the sampling temperature between 0 and 2. Higher values, such as 0.8, make the output more random, while lower values make the output more focused and deterministic.
NOTE: It is recommended to use either this parameter or the 'Top Probability' parameter, not both. By default, this parameter is set to 1.
Top Probability Specify the top probability, an alternative to sampling with temperature, also called nucleus sampling. The model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
NOTE: It is recommended to use either this parameter or the 'Temperature' parameter, not both. By default, this parameter is set to 1.
Max Tokens (Optional) Specify the maximum number of tokens to generate in the chat completion.
NOTE: The total length of input tokens and generated tokens is limited by the model's context length.
Timeout (Optional) Specify the maximum time (in seconds) you want to wait for the action to complete successfully. By default, the timeout is set to 600 seconds.
Additional Inputs (Optional) Specify any other inputs, as a key-value pair, to be included in the OpenAI Completions API request. For example, { "seed": 123 }
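
The Messages parameter relies on the Chat Completions convention that each turn resends the full history, which is how the model answers "based on the previous discussions". The sketch below illustrates that convention only; the assistant reply shown is a placeholder, not a real API response.

```python
# Sketch of the message-history convention the Converse With OpenAI action
# depends on: every request carries all previous messages, so the model
# retains the conversational context. The assistant text is a placeholder.

def add_turn(history, user_text, assistant_text):
    """Append one user/assistant exchange to the running history."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

history = [{"role": "system", "content": "You are a SOC analyst assistant."}]
add_turn(history, "Summarize this alert.", "(placeholder assistant reply)")
# The next Converse With OpenAI step would pass `history` plus the new
# user question as the Messages input.
```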

Output

The output contains the following populated JSON schema:

{
    "id": "",
    "model": "",
    "usage": {
        "total_tokens": "",
        "prompt_tokens": "",
        "completion_tokens": ""
    },
    "object": "",
    "choices": [
        {
            "index": "",
            "message": {
                "role": "",
                "content": "",
                "tool_calls": "",
                "function_call": ""
            },
            "finish_reason": ""
        }
    ],
    "created": "",
    "system_fingerprint": ""
}

operation: List Available Models

Input parameters

None.

Output

The output contains the following populated JSON schema:

{
    "data": [
        {
            "id": "",
            "object": "",
            "created": "",
            "owned_by": ""
        }
    ],
    "object": ""
}

operation: Get Tokens Usage

Input parameters

Parameter Description
Date Select the date for which you want to retrieve usage data for each OpenAI API call.
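
The output schema below resembles the response of OpenAI's daily usage endpoint, which takes the date as a query parameter. The URL construction sketched here is an assumption about what the action calls, not a confirmed implementation detail of the connector, and the endpoint itself is not part of OpenAI's stable public API surface.

```python
# Assumed (not confirmed) request shape for the Get Tokens Usage action:
# a GET to a usage endpoint with the date as a YYYY-MM-DD query parameter.
from urllib.parse import urlencode

def build_usage_url(date_str, base="https://api.openai.com/v1/usage"):
    """Return the GET URL for usage data on a given 'YYYY-MM-DD' date."""
    return f"{base}?{urlencode({'date': date_str})}"
```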

Output

The output contains the following populated JSON schema:

{
    "data": [
        {
            "user_id": "",
            "operation": "",
            "n_requests": "",
            "snapshot_id": "",
            "organization_id": "",
            "aggregation_timestamp": "",
            "n_context_tokens_total": "",
            "n_generated_tokens_total": ""
        }
    ],
    "object": "",
    "ft_data": [],
    "dalle_api_data": [],
    "whisper_api_data": []
}

operation: Get Token Count

Input parameters

Parameter Description
Input Text Specify the text input, i.e., the string for which you want to evaluate the token count.
Model Specify the ID of the OpenAI model to use for evaluating the token count. By default, this parameter is set to gpt-4.
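
Exact token counts are model-specific and are typically computed with a tokenizer library such as tiktoken; whether this connector uses tiktoken internally is not stated here. For illustration only, the stdlib sketch below applies OpenAI's published rule of thumb that one token is roughly four characters of English text. It is an approximation, not a substitute for this action.

```python
# Rough stdlib-only illustration of token counting (about 4 characters per
# token for English text). Real counts depend on the model's tokenizer.

def approximate_token_count(text):
    """Very rough token estimate; exact counts are model-specific."""
    if not text:
        return 0
    return max(1, round(len(text) / 4))
```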

Output

The output contains the following populated JSON schema:

{
    "tokens": ""
}

Included playbooks

The Sample - OpenAI - 2.0.0 playbook collection comes bundled with the OpenAI connector. These playbooks contain steps that demonstrate all supported actions. You can see the bundled playbooks in the Automation > Playbooks section in FortiSOAR™ after importing the OpenAI connector.

  • Ask a Question
  • Converse With OpenAI
  • Get Token Count
  • Get Tokens Usage
  • List Available Models

Note: If you are planning to use any of the sample playbooks in your environment, ensure that you clone those playbooks and move them to a different collection, since the sample playbook collection gets overwritten during a connector upgrade and deleted when the connector is uninstalled.
