This integration supports interacting with ChatGPT, OpenAI's powerful language model, from FortiSOAR™ workflows.
This document provides information about the OpenAI connector, which facilitates automated interactions using the OpenAI API. Add the OpenAI connector as a step in FortiSOAR™ playbooks to perform automated operations such as conversing with OpenAI and retrieving the list of models available from OpenAI.
NOTE: For optimal results from the OpenAI connector, use it in conjunction with the 'Fortinet Advisor' solution pack. For more information, see the Fortinet Advisor solution pack documentation.
Connector Version: 2.0.0
FortiSOAR™ Version Tested on: 7.4.3
OpenAI Version Tested on: v1
NOTE: The OpenAI connector v2.0.0 and later is compatible with version 1.0.0 and later of the OpenAI Python dependency. This version of the connector has been tested with the following GPT models: gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-4, and gpt-4-1106-preview.
Authored By: Fortinet
Certified: Yes
The following enhancements are made to the OpenAI connector in version 2.0.0:
Use the Content Hub to install the connector. For the detailed procedure to install a connector, click here.
You can also use the yum command as a root user to install the connector:
```
yum install cyops-connector-openai
```
For the procedure to configure a connector, click here.
In FortiSOAR™, on the Content Hub page, click the Manage tab, and then click the OpenAI connector card. On the connector popup, click the Configurations tab to enter the required configuration details.
| Parameter | Description |
|---|---|
| API Key | Specify the API key to access the endpoint to connect and perform automated operations. For information on how to get an API Key, see https://platform.openai.com/account/api-keys. |
| Use Microsoft Azure OpenAI Endpoint | Select this option to use OpenAI services via a Microsoft Azure endpoint. If you select this option, i.e., set it to 'true', then you must also specify the Azure-specific connection parameters. |
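To illustrate what these two configuration modes correspond to in the OpenAI Python library (v1.x), here is a minimal sketch; it is not the connector's internal code, and the `use_azure`, `azure_endpoint`, and `api_version` keys are illustrative assumptions standing in for the Azure-specific configuration fields.

```python
# Hypothetical sketch of client construction for each configuration mode.
from openai import OpenAI, AzureOpenAI

def build_client(config: dict):
    if config.get("use_azure"):  # 'Use Microsoft Azure OpenAI Endpoint' selected
        # Azure mode: endpoint and API version come from the extra Azure parameters.
        return AzureOpenAI(
            api_key=config["api_key"],
            azure_endpoint=config["azure_endpoint"],  # assumed field name
            api_version=config["api_version"],        # assumed field name
        )
    # Default mode: only the API key is required.
    return OpenAI(api_key=config["api_key"])
```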
The following automated operations can be included in playbooks, and you can also use the annotations to access operations from FortiSOAR™:
| Function | Description | Annotation and Category |
|---|---|---|
| Ask a Question | Generates a contextually relevant response to a given question using a pre-trained deep learning model. | chat_completions Miscellaneous |
| Converse With OpenAI | Allows users to converse with OpenAI, i.e., ask a question and get an answer from OpenAI based on the previous discussion. | chat_conversation Miscellaneous |
| List Available Models | Retrieves a list and descriptions of all models available in the OpenAI API. | list_models Miscellaneous |
| Get Tokens Usage | Retrieves the usage details for each OpenAI API call for the specified date. | get_usage Miscellaneous |
| Get Token Count | Counts the number of tokens in the specified string for the specified OpenAI model. | count_tokens Miscellaneous |
The Ask a Question operation takes the following input parameters:
| Parameter | Description |
|---|---|
| Message | Specify the message or question for which you want to generate a response. |
| Model | Specify the ID of the GPT model to use for the chat completion. Currently, gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-4, and gpt-4-1106-preview are supported. By default, it is set to gpt-3.5-turbo. |
| Temperature | Specify the sampling temperature, between 0 and 2. Higher values, such as 0.8, make the output more random, while lower values make the output more focused and deterministic. NOTE: It is recommended to use either this parameter or the 'Top Probability' parameter, not both. By default, this parameter is set to 1. |
| Top Probability | Specify the top probability, an alternative to sampling with temperature, also called nucleus sampling, in which the model considers the results of the tokens with top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. NOTE: It is recommended to use either this parameter or the 'Temperature' parameter, not both. By default, this parameter is set to 1. |
| Max Tokens | (Optional) Specify the maximum number of tokens to generate in the chat completion. NOTE: The total length of input tokens and generated tokens is limited by the model's context length. |
| Timeout | (Optional) Specify the maximum time (in seconds) you want to wait for the action to complete successfully. By default, the timeout is set to 600 seconds. |
| Additional Inputs | (Optional) Specify any other inputs, as a key-value pair, to be included in the OpenAI Completions API request. For example, { "seed": 123 } |
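To make the parameter mapping concrete, the following is a minimal sketch of an equivalent direct call with the OpenAI Python library (v1.x); it mirrors the inputs above and is not the connector's internal implementation. The question text is a placeholder.

```python
# Hedged sketch: an 'Ask a Question'-style chat completion via the openai package (>= 1.0).
from openai import OpenAI

client = OpenAI(api_key="<YOUR_API_KEY>")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",                 # Model (default value)
    messages=[{"role": "user", "content": "Summarize the MITRE ATT&CK framework."}],  # Message
    temperature=1,                         # Temperature (use this or top_p, not both)
    max_tokens=256,                        # Max Tokens (optional)
    timeout=600,                           # Timeout in seconds (default 600)
    seed=123,                              # Additional Inputs, e.g. {"seed": 123}
)
print(response.choices[0].message.content)
```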
The output contains the following populated JSON schema:
```json
{
    "id": "",
    "model": "",
    "usage": {
        "total_tokens": "",
        "prompt_tokens": "",
        "completion_tokens": ""
    },
    "object": "",
    "choices": [
        {
            "index": "",
            "message": {
                "role": "",
                "content": "",
                "tool_calls": "",
                "function_call": ""
            },
            "finish_reason": ""
        }
    ],
    "created": "",
    "system_fingerprint": ""
}
```
The Converse With OpenAI operation takes the following input parameters:
| Parameter | Description |
|---|---|
| Messages | Specify the list of messages for which you want to generate a chat completion. The OpenAI documentation recommends including all previous chat messages in this list. |
| Model | Specify the ID of the GPT model to use for the chat completion. Currently, gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-4, and gpt-4-1106-preview are supported. By default, it is set to gpt-3.5-turbo. |
| Temperature | Specify the sampling temperature, between 0 and 2. Higher values, such as 0.8, make the output more random, while lower values make the output more focused and deterministic. NOTE: It is recommended to use either this parameter or the 'Top Probability' parameter, not both. By default, this parameter is set to 1. |
| Top Probability | Specify the top probability, an alternative to sampling with temperature, also called nucleus sampling, in which the model considers the results of the tokens with top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. NOTE: It is recommended to use either this parameter or the 'Temperature' parameter, not both. By default, this parameter is set to 1. |
| Max Tokens | (Optional) Specify the maximum number of tokens to generate in the chat completion. NOTE: The total length of input tokens and generated tokens is limited by the model's context length. |
| Timeout | (Optional) Specify the maximum time (in seconds) you want to wait for the action to complete successfully. By default, the timeout is set to 600 seconds. |
| Additional Inputs | (Optional) Specify any other inputs, as a key-value pair, to be included in the OpenAI Completions API request. For example, { "seed": 123 } |
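Because this operation expects the full message history, a multi-turn exchange can be sketched as below (OpenAI Python library v1.x, illustrative only); each new turn is appended to the same `messages` list so OpenAI answers with the previous discussion as context.

```python
# Hedged sketch: multi-turn conversation, carrying prior messages as context.
from openai import OpenAI

client = OpenAI(api_key="<YOUR_API_KEY>")

messages = [
    {"role": "user", "content": "What is a phishing attack?"},
]
first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Follow-up question: the earlier turns stay in the list so the model has context.
messages.append({"role": "user", "content": "How can I detect one from email headers?"})
second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(second.choices[0].message.content)
```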
The output contains the following populated JSON schema:
```json
{
    "id": "",
    "model": "",
    "usage": {
        "total_tokens": "",
        "prompt_tokens": "",
        "completion_tokens": ""
    },
    "object": "",
    "choices": [
        {
            "index": "",
            "message": {
                "role": "",
                "content": "",
                "tool_calls": "",
                "function_call": ""
            },
            "finish_reason": ""
        }
    ],
    "created": "",
    "system_fingerprint": ""
}
```
The List Available Models operation requires no input parameters.
The output contains the following populated JSON schema:
```json
{
    "data": [
        {
            "id": "",
            "object": "",
            "created": "",
            "owned_by": ""
        }
    ],
    "object": ""
}
```
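The equivalent direct call in the OpenAI Python library (v1.x) is `client.models.list()`; a short, hedged sketch:

```python
# Hedged sketch: list the models available to the account.
from openai import OpenAI

client = OpenAI(api_key="<YOUR_API_KEY>")
for model in client.models.list():  # paginated iterator of Model objects
    print(model.id, model.owned_by)
```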
The Get Tokens Usage operation takes the following input parameter:
| Parameter | Description |
|---|---|
| Date | Select the date for which you want to retrieve usage data for each OpenAI API call. |
The output contains the following populated JSON schema:
```json
{
    "data": [
        {
            "user_id": "",
            "operation": "",
            "n_requests": "",
            "snapshot_id": "",
            "organization_id": "",
            "aggregation_timestamp": "",
            "n_context_tokens_total": "",
            "n_generated_tokens_total": ""
        }
    ],
    "object": "",
    "ft_data": [],
    "dalle_api_data": [],
    "whisper_api_data": []
}
```
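The OpenAI Python library does not expose a usage operation, so a call matching this schema presumably goes to the usage REST endpoint directly. The sketch below is an assumption: the `https://api.openai.com/v1/usage` URL and the `date` query parameter are inferred, not confirmed by this document.

```python
# Hedged sketch: fetch per-call usage for one date via a raw HTTP request.
import requests

resp = requests.get(
    "https://api.openai.com/v1/usage",            # assumed endpoint
    params={"date": "2024-01-15"},                # Date input (YYYY-MM-DD assumed)
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json().get("data", []))
```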
The Get Token Count operation takes the following input parameters:
| Parameter | Description |
|---|---|
| Input Text | Specify the text input, i.e., the string for which you want to evaluate the token count. |
| Model | Specify the ID of the OpenAI model to use for evaluating the token count. By default, this parameter is set to gpt-4. |
The output contains the following populated JSON schema:
```json
{
    "tokens": ""
}
```
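Token counting for a given model is typically done locally with the tiktoken library; the connector's exact method is not stated here, so treat this as an illustrative sketch.

```python
# Hedged sketch: count tokens for a model with tiktoken.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")              # Model (default gpt-4)
tokens = encoding.encode("Summarize this incident report.")  # Input Text
print({"tokens": len(tokens)})                               # matches the output schema above
```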
The Sample - OpenAI - 2.0.0 playbook collection comes bundled with the OpenAI connector. These playbooks contain steps that demonstrate all supported actions. You can see the bundled playbooks in the Automation > Playbooks section in FortiSOAR™ after importing the OpenAI connector.
Note: If you plan to use any of the sample playbooks in your environment, ensure that you clone those playbooks and move them to a different collection, because the sample playbook collection is overwritten during a connector upgrade and deleted when the connector is uninstalled.