
OpenAI v1.0.0

About the connector

This integration supports interacting with ChatGPT, OpenAI's language model, from FortiSOAR™ workflows.

This document provides information about the OpenAI Connector, which facilitates automated interactions with the OpenAI server using FortiSOAR™ playbooks. Add the OpenAI Connector as a step in FortiSOAR™ playbooks and perform automated operations with OpenAI.

Version information

Connector Version: 1.0.0

FortiSOAR™ Version Tested on: 7.3.2-2150

OpenAI Version Tested on: v1

Authored By: Fortinet

Certified: Yes

Installing the connector

Use the Content Hub to install the connector. For the detailed procedure to install a connector, click here.

You can also use the yum command as a root user to install the connector:

yum install cyops-connector-openai

Prerequisites to configuring the connector

  • You must have the API key to connect and perform automated operations on the OpenAI server.
  • The FortiSOAR™ server should have outbound connectivity to port 443 on the OpenAI server.
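The outbound-connectivity prerequisite can be checked from the FortiSOAR™ server before configuring the connector. The sketch below is illustrative and not part of the connector; it assumes api.openai.com is the endpoint the connector reaches over port 443.

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # api.openai.com is assumed here as the endpoint the connector contacts.
    print("outbound 443 reachable:", can_reach("api.openai.com"))
```

If this returns False, check firewall rules and any proxy configuration on the FortiSOAR™ server before configuring the connector.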

Minimum Permissions Required

  • Not applicable.

Configuring the connector

For the procedure to configure a connector, click here

Configuration parameters

In FortiSOAR™, on the Connectors page, click the OpenAI connector row (if you are in the Grid view on the Connectors page) and in the Configurations tab enter the required configuration details:

  • API Key: Specify the API key used to connect to the endpoint and perform automated operations. To get an API key, visit https://platform.openai.com/account/api-keys.

Actions supported by the connector

The following automated operations can be included in playbooks and you can also use the annotations to access operations:

  • Create a chat completion: Generates a completion for a given chat message using a pre-trained deep learning model.
    Annotation: chat_completions
    Category: Miscellaneous

operation: Create a chat completion

Input parameters

  • Message: Specify the message for which to generate a chat completion.
  • Model: Specify the ID of the GPT model to use for the chat completion. Currently, only gpt-3.5-turbo and gpt-3.5-turbo-0301 are supported. By default, this is set to gpt-3.5-turbo.
  • Temperature: Specify the sampling temperature, between 0 and 2. Higher values, such as 0.8, make the output more random, while lower values make it more focused and deterministic.
    NOTE: It is recommended to use either this parameter or the Top Probability parameter, not both. By default, this is set to 1.
  • Top Probability: Specify the top probability, an alternative to sampling with temperature, also called nucleus sampling, in which the model considers the results of the tokens with top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered.
    NOTE: It is recommended to use either this parameter or the Temperature parameter, not both. By default, this is set to 1.
  • Max Tokens: Specify the maximum number of tokens to generate in the chat completion.
    NOTE: The total length of input tokens and generated tokens is limited by the model's context length.
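The parameters above map onto the body of an OpenAI chat-completions request. The sketch below shows how these inputs might translate into a request payload; the helper name build_chat_payload is illustrative, not part of the connector.

```python
def build_chat_payload(message, model="gpt-3.5-turbo",
                       temperature=None, top_p=None, max_tokens=None):
    """Translate the documented input parameters into a chat-completions
    request body. Defaults mirror the documented behavior: gpt-3.5-turbo,
    with temperature/top_p left to the API default of 1 when unset."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": message}],
    }
    # Per the note above, set either temperature or top_p, not both.
    if temperature is not None:
        payload["temperature"] = temperature
    if top_p is not None:
        payload["top_p"] = top_p
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    return payload

payload = build_chat_payload("Summarize this alert", temperature=0.2)
```

Only the parameters you explicitly set are sent, so omitted values fall back to the API defaults described above.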

Output

The output contains the following populated JSON schema:
{
    "id": "",
    "model": "",
    "usage": {
        "total_tokens": "",
        "prompt_tokens": "",
        "completion_tokens": ""
    },
    "object": "",
    "choices": [
        {
            "index": "",
            "message": {
                "role": "",
                "content": ""
            },
            "finish_reason": ""
        }
    ],
    "created": ""
}
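In a playbook step that consumes this output, the reply text and token usage are typically read from choices and usage. A minimal sketch, using a hand-made sample response shaped like the schema above (all values are illustrative):

```python
# Sample response shaped like the documented schema (values are illustrative).
response = {
    "id": "chatcmpl-abc123",
    "model": "gpt-3.5-turbo",
    "usage": {"total_tokens": 30, "prompt_tokens": 12, "completion_tokens": 18},
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
            "finish_reason": "stop",
        }
    ],
    "created": 1700000000,
}

# The generated reply lives in the first choice's message content.
reply = response["choices"][0]["message"]["content"]
tokens_used = response["usage"]["total_tokens"]
```

In a FortiSOAR™ playbook, the same fields would be read from the step output using the corresponding JSON paths.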

Included playbooks

The Sample - OpenAI - 1.0.0 playbook collection comes bundled with the OpenAI connector. These playbooks contain steps for performing all supported actions. You can see the bundled playbooks in the Automation > Playbooks section in FortiSOAR™ after importing the OpenAI connector.

  • Create a Chat Completion

Note: If you are planning to use any of the sample playbooks in your environment, ensure that you clone those playbooks and move them to a different collection, because the sample playbook collection gets deleted when the connector is upgraded or deleted.
