ServiceNow
FortiMonitor’s ServiceNow integration enables you to send incident and clear events to your ServiceNow instance.
There are two ServiceNow integration types you may utilize, depending on your ServiceNow workflow. If you want incidents to be sent as events, using the ITOM functionality, select the generic ServiceNow ITOM integration. If you want FortiMonitor to directly create incidents in your incident table, select the ServiceNow ITSM integration. Both integrations require the same parameters.
The integration can be set up via the Integrations page, which is found under the Teams & Activity menu item.
Configuration
ServiceNow user roles and permissions
To integrate ServiceNow with FortiMonitor, the ServiceNow user account used to authenticate must have the proper permissions. A good starting point is the ServiceNow base system role ITIL, which includes the roles that allow FortiMonitor to send events to your ServiceNow environment.
If you have custom roles, the ITIL role can still be used to see which permissions are required.
For more information on the ITIL role, refer to the ServiceNow documentation.
1. From the main navigation header, select Settings > Integrations.
2. Locate the ServiceNow card and click Configure.
3. Within the configuration modal, configure the following items:
   - Title: the name of the integration as it will appear on Alert Timelines.
   - ServiceNow Instance Name: the name of your ServiceNow installation, typically the subdomain portion of your login URL. For example, the instance name for FortiMonitor.service-now.com would be FortiMonitor.
   - Username: the username that FortiMonitor should use to authenticate when sending the incident details.
   - Password: the ServiceNow password that FortiMonitor should use to authenticate when sending the incident details.
4. Upon saving, you should see your ServiceNow integration listed in the Configured Integrations table.
Now that your ServiceNow integration is configured, you can add it to any of your Alert Timelines so that ServiceNow receives Incident events accordingly.
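Under the hood, an ITSM-style integration creates records through ServiceNow's REST Table API. The sketch below is illustrative only: the instance name `acme`, the credentials, and the incident summary are placeholders, and this is not FortiMonitor's actual request code, just a minimal Python equivalent of such an authenticated POST.

```python
import base64
import json
import urllib.request

def build_incident_request(instance, username, password, short_description):
    """Build (but do not send) a ServiceNow Table API request that would
    create an incident record, authenticated with HTTP Basic auth."""
    url = f"https://{instance}.service-now.com/api/now/table/incident"
    body = json.dumps({"short_description": short_description}).encode()
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Basic {token}",
        },
    )

# Placeholder instance and credentials for illustration.
req = build_incident_request("acme", "fm_user", "secret",
                             "CPU > 90% on web-01")
# To actually send it: urllib.request.urlopen(req)
```

Sending the request returns the created record (including its `sys_id`) in the JSON response body.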
ServiceNow endpoint override
To utilize a staging table in ServiceNow instead of inserting FortiMonitor incidents directly into the SNOW Incident table, we’ve introduced the ability to POST to a manually-specified endpoint. You can see an example below.
1. Ensure that you’ve defined your table with the desired columns in ServiceNow.
2. Obtain the endpoint URL of your table.
3. In the FortiMonitor Control Panel, go to Settings > Integrations.
4. Select the ServiceNow ITSM card.
5. Provide a name for the integration.
6. Under Destination, select Custom ServiceNow Table.
7. In the field that appears, enter the endpoint URL for your table from step 2.
8. If you want a new record created when the FortiMonitor incident resolves, select the Send closure as new ServiceNow event option. If you leave it unchecked, upon incident closure, we’ll make a PUT call on the previously created record in your custom table.
9. Enter the appropriate username and password.
10. We provide a basic payload example, but you can edit it as needed to match your table schema; it just needs to be valid JSON. As you update the payload, an example will populate below using data from a recent incident on your account.
11. Click Save.
We recommend testing the integration with our incident simulation functionality.
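For reference, a custom-table payload might look like the following. The column names (`u_host`, `u_summary`, and so on) are hypothetical; they must match the columns of your own staging table. The `$`-prefixed values are template parameters, described in the next section.

```json
{
    "u_host": "$fqdn",
    "u_summary": "$incident_summary",
    "u_severity": "$severity",
    "u_event_type": "$event",
    "u_occurred_at": "$timestamp"
}
```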
Parameters
The following template parameters may be used in the JSON Payload fields.
| Parameter | Description |
|---|---|
| $alert_label | Alert label of the incident/anomaly. |
| $compound_service_id | The ID number of the compound metric affected. |
| $custom_attribute | You can pass custom server attributes that are set on your servers. Use the attribute type as the key. |
| $duration | The duration of the incident/anomaly, which will be filled in on clear. |
| $event | The type of event, either incident event or clear event. |
| $fqdn | The fully qualified domain name of the server experiencing the incident/clear. |
| $incident_summary | The summary of the incident. |
| $incident_tags | The tags for the incident. |
| $incident_timeline | The entire timeline output for the incident. |
| $item_type | The service type textkeys of the services experiencing the incident/clear, the plugin_textkey/resource_textkey combinations of the resources experiencing the anomaly/clear, or the OID name of SNMP resources experiencing the outage/clear. |
| $items | Services experiencing the incident/clear or resources experiencing the anomaly/clear. |
| $location | A comma-separated list of the primary monitoring probe names for all network services affected. |
| $metric_tags | The tags for all of the metrics involved in the outage. |
| $name | Name of the server experiencing the incident/clear. |
| $network_service_id | The ID number of the network service affected. |
| $outage_id | The ID number of the associated incident. |
| $partner_server_id | The partner server ID for the server. |
| $reasons | The reasons for network service incidents or the details for anomalies. |
| $resource | For resource anomalies: resources experiencing the anomaly/clear. |
| $server_id | The ID number of the server experiencing the incident/clear. |
| $server_key | The server key for the server. |
| $server_resource_id | The ID number corresponding to the resource affected. |
| $services | For service incidents: services experiencing the incident/clear. |
| $severity | The severity of the outage/anomaly, either critical or warning. |
| $tags | The tags for the server. |
| $severity_number | The severity of the incident. |
| $timestamp | UTC timestamp of when the incident/clear occurred. |
| $trigger | The type of event that triggered this payload (outage, ack, broadcast, clear). |
| Server Attributes | $account_id $alertmanager $app $app_kubernetes_io/instance $app_kubernetes_io/managed-by $app_kubernetes_io/name $aws_az $aws_image_description $aws_image_id $aws_instance_id $aws_instance_type $beta_kubernetes_io/arch $beta_kubernetes_io/instance-type $beta_kubernetes_io/os $chart $cloud_aws_account_id $cloud_aws_az $cloud_aws_image_id $cloud_aws_instance_id $cloud_aws_instance_size $cloud_aws_os $cloud_aws_platform $cloud_aws_region $cloud_aws_service $cloud_azure_az $cloud_azure_instance_size $cloud_azure_os $cloud_azure_region $cloud_azure_service $cloud_provider $cluster $company_name $component $container_id $container_image $container_name $container_platform $controller-revision-hash $database_engine $database_version $doks_digitalocean_com/node-id $doks_digitalocean_com/node-pool $doks_digitalocean_com/node-pool-id $doks_digitalocean_com/version $environment $failure-domain_beta_kubernetes_io/region $helm_sh/chart $heritage $io_cilium/app $joblabel $k8s-app $kubernetes_cluster_name $kubernetes_io/arch $kubernetes_io/cluster-service $kubernetes_io/hostname $kubernetes_io/name $kubernetes_io/os $kubernetes_kind $kubernetes_namespace $kubernetes_node $kubernetes_pod_name $launch_time $location $my_attribute $name $namespace $ncm_unimus_activated $ncm_unimus_deactivated $node $operated-alertmanager $operated-prometheus $os $platform $pod $pod-template-generation $pod-template-hash $prometheus $provider $region $release $role $run $server_cpu_architecture $server_cpu_core_count $server_instance_source $server_kernel_version $server_location $server_origin $server_os $server_os_distro $server_os_distro_version $service_id $snmp_model $snmp_serial_num $snmp_syscontact $snmp_sysdescr $snmp_syslocation $snmp_sysname $snmp_sysobjectid $snmp_vendor $snmp_device_model $snmp_device_type $statefulset_kubernetes_io/pod-name $testing $tier $ui_performance_tabs $username $version $vmware_hardware_uuid $vmware_mac_address $vmware_moref $vmware_uuid $vm_id |
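These parameters use `$`-style placeholder substitution. Purely as an illustration (this is not FortiMonitor's rendering code, and the payload field names are hypothetical), Python's `string.Template` shows how such a payload template expands at send time:

```python
from string import Template

# Hypothetical payload template using a few of the parameters above.
payload_template = Template(
    '{"host": "$fqdn", "summary": "$incident_summary", '
    '"severity": "$severity"}'
)

# Example values of the kind that would be supplied when an incident fires.
rendered = payload_template.substitute(
    fqdn="web-01.example.com",
    incident_summary="CPU usage above threshold",
    severity="critical",
)
print(rendered)
```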