Google Bard v1.0.0

About the connector

Google Bard is a conversational AI chatbot, based initially on the LaMDA family of large language models and later the PaLM LLM.

This document provides information about the Google Bard Connector, which facilitates automated interactions with a Google Bard server using FortiSOAR™ playbooks. Add the Google Bard Connector as a step in FortiSOAR™ playbooks and perform automated operations with Google Bard.

NOTE: This connector has been renamed to Google Gemini. For subsequent updated versions of this connector, refer to Google Gemini connector documentation.

Version information

Connector Version: 1.0.0

FortiSOAR™ Version Tested on: 7.4.1-3167

Google Bard API Version Tested On: v1beta2

Authored By: Fortinet

Certified: Yes

Installing the connector

Use the Content Hub to install the connector. For the detailed procedure to install a connector, click here.

You can also use the yum command as a root user to install the connector:

yum install cyops-connector-google-bard

Prerequisites to configuring the connector

  • You must have the credentials of Google Bard server to connect and perform automated operations.
  • The FortiSOAR™ server should have outbound connectivity to port 443 on the Google Bard server.

Minimum Permissions Required

  • Not applicable

Configuring the connector

For the procedure to configure a connector, click here.

Configuration parameters

In FortiSOAR™, on the Connectors page, click the Google Bard connector row (if you are in the Grid view on the Connectors page) and in the Configurations tab enter the required configuration details:

Parameter Description
Server URL Specify the URL of the Google Bard server to connect and perform automated operations.
API Key Specify the API key to access the endpoint to connect and perform the automated operations.
Verify SSL Specifies whether the SSL certificate for the server is to be verified.
By default, this option is set to True.
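As an illustrative sketch of how these configuration parameters are used, the v1beta2 generative language API authenticates each request by passing the API key as the `key` query parameter appended to the Server URL. The helper below is hypothetical (not part of the connector) and only builds the URL; sending the request (for example with the `requests` library) is left as a comment:

```python
from urllib.parse import urlencode

def build_request_url(server_url: str, path: str, api_key: str, **params) -> str:
    """Build a Google Bard (PaLM v1beta2) request URL.

    The v1beta2 API authenticates with the API key passed as the `key`
    query parameter; `server_url` is the configured Server URL.
    """
    query = urlencode({"key": api_key, **params})
    return f"{server_url.rstrip('/')}/{path.lstrip('/')}?{query}"

# Example: the endpoint behind the Get All Model List operation.
url = build_request_url(
    "https://generativelanguage.googleapis.com",
    "v1beta2/models",
    "YOUR_API_KEY",
    pageSize=50,
)
# A real call would then be e.g.: requests.get(url, verify=verify_ssl)
```

The Verify SSL setting maps onto the TLS verification flag of whatever HTTP client performs the call.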

Actions supported by the connector

The following automated operations can be included in playbooks. You can also use the annotations to access these operations:

Function Description Annotation and Category
Get All Model List Retrieves a list of all language models from Google Bard based on the pagination criteria and other parameters you have specified. list_models
Investigation
Get Model Details Retrieves information for a specific model from Google Bard based on the model name you have specified. get_model_details
Investigation
Generate Text Generates a response from the model based on the model name, text prompts, and other input parameters you have specified. generate_text
Investigation
Generate Embedding Generates an embedding from the model based on the model name and text you have specified. generate_embeddings
Investigation
Count Message Token Retrieves the number of tokens that the model creates from the prompt based on the model name, message, and other input parameters you have specified. count_message_token
Investigation
Generate Message Generates a message from the model based on the model name, messages, and other input parameters you have specified. generate_message
Investigation

operation: Get All Model List

Input parameters

Parameter Description
Page Size (Optional) Specify the maximum count of records that this operation fetches from Google Bard. By default, this option is set to 50. You can specify a maximum value of 1000 and minimum value of 1.
Page Token (Optional) Specify the token for the next page of results to return. A previously returned response contains a nextPageToken element; use its value as this parameter to retrieve the next page of results.

Output

The output contains the following populated JSON schema:

{
    "models": [
        {
            "name": "",
            "topK": "",
            "topP": "",
            "version": "",
            "description": "",
            "displayName": "",
            "temperature": "",
            "inputTokenLimit": "",
            "outputTokenLimit": "",
            "supportedGenerationMethods": []
        }
    ],
    "nextPageToken": ""
}
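The Page Token parameter above supports paginating through all models. A hedged sketch of that loop follows: `fetch_page` is a stand-in for the Get All Model List step (it is not a real connector function), and the model names in the stub are illustrative.

```python
def list_all_models(fetch_page, page_size=50):
    """Yield every model by following nextPageToken until it is empty.

    `fetch_page(page_size, page_token)` stands in for the Get All Model
    List operation and must return JSON shaped like the schema above.
    """
    token = None
    while True:
        page = fetch_page(page_size=page_size, page_token=token)
        yield from page.get("models", [])
        token = page.get("nextPageToken")
        if not token:
            break

# A stub fetch_page with two pages, for illustration only.
pages = {
    None: {"models": [{"name": "models/text-bison-001"}], "nextPageToken": "p2"},
    "p2": {"models": [{"name": "models/chat-bison-001"}], "nextPageToken": ""},
}
names = [m["name"] for m in list_all_models(lambda page_size, page_token: pages[page_token])]
```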

operation: Get Model Details

Input parameters

Parameter Description
Model Name Specify the name of the model whose details are to be retrieved from Google Bard.

Output

The output contains the following populated JSON schema:

{
    "name": "",
    "topK": "",
    "topP": "",
    "version": "",
    "description": "",
    "displayName": "",
    "temperature": "",
    "inputTokenLimit": "",
    "outputTokenLimit": "",
    "supportedGenerationMethods": []
}
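Because the default temperature, topP, and topK vary by model (see the notes under Generate Text below), one common use of this response is to read those defaults before building a generation request. A minimal, hypothetical helper:

```python
def sampling_defaults(model: dict) -> dict:
    """Collect the per-model sampling defaults (temperature, topP, topK)
    from a Get Model Details response, for reuse in generation calls."""
    return {key: model[key] for key in ("temperature", "topP", "topK") if key in model}

# Illustrative response fragment; real values come from the operation above.
details = {"name": "models/text-bison-001", "temperature": 0.7, "topP": 0.95, "topK": 40}
defaults = sampling_defaults(details)
```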

operation: Generate Text

Input parameters

Parameter Description
Model Name Specify the name of the model based on which to generate text in Google Bard.
Text Prompt Specify a free-form input text given to the model as a prompt based on which Google Bard generates text.
Safety Settings (Optional) Specify a list of unique SafetySetting instances for blocking unsafe content.
Stop Sequences (Optional) Specify a set of character sequences (up to 5) that stop the output generation.
Temperature (Optional) Specify the value of the temperature to control the randomness of the output. Values can range from [0.0,1.0], inclusive. A value closer to 1.0 produces responses that are more varied and creative, while a value closer to 0.0 typically results in more straightforward responses from the model.

NOTE: The default value varies by model.

Candidate Count (Optional) Specify the number of generated responses (candidates) to return.

NOTE: This value must be between [1, 8], inclusive. If unset, defaults to 1.

Maximum Output Tokens (Optional) Specify the maximum number of tokens to include in a candidate. If unset, defaults to 64.
Cumulative Probability (TopP) (Optional) Specify the maximum cumulative probability of tokens to consider when sampling.

NOTE: The default value varies by model, see the topP attribute of the Model returned by the Get Model Details response.

Token (TopK) (Optional) Specify the maximum number of tokens to consider when sampling.

NOTE: The default value varies by model, see the topK attribute of the Model returned by the Get Model Details response.

Output

The output contains the following populated JSON schema:

{
    "candidates": [
        {
            "output": "",
            "safetyRatings": [
                {
                    "category": "",
                    "probability": ""
                }
            ]
        }
    ]
}
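A sketch of how these input parameters might map onto a generateText request body, assuming the PaLM v1beta2 REST API shapes. The helper name is hypothetical; optional parameters are included only when set, so the model's own defaults apply otherwise:

```python
def build_generate_text_payload(text_prompt, temperature=None, candidate_count=None,
                                max_output_tokens=None, top_p=None, top_k=None,
                                stop_sequences=None, safety_settings=None):
    """Build a generateText request body; unset optional parameters are
    omitted so the per-model defaults noted above take effect."""
    payload = {"prompt": {"text": text_prompt}}
    optionals = {
        "temperature": temperature,
        "candidateCount": candidate_count,
        "maxOutputTokens": max_output_tokens,
        "topP": top_p,
        "topK": top_k,
        "stopSequences": stop_sequences,
        "safetySettings": safety_settings,
    }
    payload.update({k: v for k, v in optionals.items() if v is not None})
    return payload

body = build_generate_text_payload("Summarize this alert.", temperature=0.2, candidate_count=1)
```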

operation: Generate Embedding

Input parameters

Parameter Description
Model Name Specify the name of the model based on which to generate an embedding in Google Bard.
Text Specify the free-form input text that the model turns into an embedding in Google Bard.

Output

The output contains the following populated JSON schema:

{
    "embedding": {
        "value": []
    }
}
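A typical playbook use of the returned `value` array is comparing two embeddings, for example to deduplicate similar alert descriptions. A self-contained cosine-similarity sketch (not a connector function):

```python
import math

def cosine_similarity(a, b):
    """Compare two embedding vectors (the `value` arrays above);
    1.0 means identical direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

cosine_similarity([1.0, 0.0], [1.0, 0.0])  # → 1.0
cosine_similarity([1.0, 0.0], [0.0, 1.0])  # → 0.0
```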

operation: Count Message Token

Input parameters

Parameter Description
Model Name Specify the name of the model based on which to retrieve the token count from Google Bard.
Messages Specify the list of messages based on which to retrieve the token count from Google Bard.

NOTE: If the total input size exceeds the model's inputTokenLimit, the input is truncated: the oldest items are dropped from the messages.

Context (Optional) Specify the context of your prompt to the model to help guide the responses. If not empty, this context is given to the model first, before the examples and messages. When using a context, be sure to provide it with every request to maintain continuity.
Examples (Optional) Specify a list of examples of what the model should generate in Google Bard. This includes both user input and the response that the model should emulate.

NOTE: If the total input size exceeds the model's inputTokenLimit, the input is truncated: items are dropped from messages before examples.

Output

The output contains the following populated JSON schema:

{
    "tokenCount": ""
}
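Count Message Token and Generate Message share the same prompt structure (messages, plus optional context and examples). A hedged sketch of building that body, assuming the PaLM v1beta2 request shapes; the helper name is hypothetical:

```python
def build_message_prompt(messages, context=None, examples=None):
    """Build the `prompt` body shared by Count Message Token and
    Generate Message: author/content messages, plus optional context
    and optional input/output example pairs."""
    prompt = {"messages": [{"author": a, "content": c} for a, c in messages]}
    if context:
        prompt["context"] = context
    if examples:
        prompt["examples"] = [
            {"input": {"content": q}, "output": {"content": a}} for q, a in examples
        ]
    return {"prompt": prompt}

body = build_message_prompt(
    [("user", "How many tokens is this?")],
    context="You are a concise assistant.",
)
```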

operation: Generate Message

Input parameters

Parameter Description
Model Name Specify the name of the model based on which to generate a message in Google Bard.
Messages Specify the list of messages based on which to generate a message in Google Bard.

NOTE: If the total input size exceeds the model's inputTokenLimit, the input is truncated: the oldest items are dropped from the messages.

Context (Optional) Specify the context of your prompt to the model to help guide the responses. If not empty, this context is given to the model first, before the examples and messages. When using a context, be sure to provide it with every request to maintain continuity.
Examples (Optional) Specify a list of examples of what the model should generate in Google Bard. This includes both user input and the response that the model should emulate.

NOTE: If the total input size exceeds the model's inputTokenLimit, the input is truncated: items are dropped from messages before examples.

Temperature (Optional) Specify the value of the temperature to control the randomness of the output. Values can range from [0.0,1.0], inclusive. A value closer to 1.0 produces responses that are more varied and creative, while a value closer to 0.0 typically results in more straightforward responses from the model.

NOTE: The default value varies by model.

Candidate Count (Optional) Specify the number of generated responses (candidates) to return.

NOTE: This value must be between [1, 8], inclusive. If unset, defaults to 1.

Cumulative Probability (TopP) (Optional) Specify the maximum cumulative probability of tokens to consider when sampling.

NOTE: The default value varies by model, see the topP attribute of the Model returned by the Get Model Details response.

Token (TopK) (Optional) Specify the maximum number of tokens to consider when sampling.

NOTE: The default value varies by model, see the topK attribute of the Model returned by the Get Model Details response.

Output

The output contains the following populated JSON schema:

{
    "messages": [
        {
            "author": "",
            "content": ""
        }
    ],
    "candidates": [
        {
            "author": "",
            "content": ""
        }
    ]
}
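Because the response returns both the conversation so far (`messages`) and the model's replies (`candidates`), a multi-turn playbook typically appends the chosen candidate and the next user message before the following request, as the Context note above advises for continuity. A hypothetical sketch:

```python
def next_turn(messages, candidates, user_reply):
    """Extend the running message history with the model's top candidate
    and the user's next message, keeping author/content pairs intact."""
    history = list(messages)
    if candidates:
        history.append(candidates[0])  # model's chosen response
    history.append({"author": "user", "content": user_reply})
    return history

# Illustrative response fragment shaped like the schema above.
response = {
    "messages": [{"author": "user", "content": "Hello"}],
    "candidates": [{"author": "bot", "content": "Hi! How can I help?"}],
}
messages = next_turn(response["messages"], response["candidates"], "List my open alerts.")
```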

Included playbooks

The Sample - Google Bard - 1.0.0 playbook collection comes bundled with the Google Bard connector. These playbooks contain steps that you can use to perform all supported actions. You can see the bundled playbooks in the Automation > Playbooks section in FortiSOAR™ after importing the Google Bard connector.

  • Count Message Token
  • Generate Embedding
  • Generate Message
  • Generate Text
  • Get All Model List
  • Get Model Details

Note: If you are planning to use any of the sample playbooks in your environment, ensure that you clone those playbooks and move them to a different collection, because the sample playbook collection gets deleted when the connector is upgraded or deleted.
