AI protection
Artificial Intelligence (AI) is rapidly transforming business operations, cybersecurity, and digital infrastructure. Among its most prominent forms are Large Language Models (LLMs), such as GPT, a type of Generative AI (GenAI) widely used in chatbot applications like ChatGPT and DeepSeek. As these chatbots become more common in corporate environments, new risks emerge: AI applications, chatbots among them, can inadvertently expose sensitive data. For instance, an employee might paste a confidential email into a chatbot to generate a customized reply, sharing its contents with an external service in the process.
Securing GenAI has become critical in this evolving landscape. FortiGate offers several methods to restrict or block access to GenAI tools, helping prevent data leaks and protect company information.
The following table outlines the techniques and highlights scenarios where each method may be most effective.
| Technique | Description |
|---|---|
| Web filter | Using a web filter profile with the FortiGuard Artificial Intelligence Technology category enables users to monitor and block access to sites in this category by matching the requested URL against FortiGuard's classification. See Protecting GenAI access using web filter for more information. |
| Application control | Using an application control profile with the Generative AI category enables users to monitor and block access to sites in this category by matching traffic patterns against FortiGuard's Generative AI signatures. This approach goes beyond URL matching by employing deep packet inspection to identify and control application-level traffic, offering enhanced visibility and insight into AI-related activities. See Protecting GenAI access using application control for more information. |
| Data Loss Prevention (DLP) | Using a DLP profile, administrators can block access to LLM applications when specific keywords are detected in user prompts. This technique uses keyword matching and FQDN filtering to prevent sensitive or restricted data from being sent to external AI services. This is particularly useful for enforcing compliance and protecting intellectual property. See Protecting GenAI access using DLP for more information. |
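On the CLI, the first two techniques amount to a security profile that references the relevant FortiGuard category. The following is a minimal sketch, assuming recent FortiOS syntax; the profile names are arbitrary, and the category IDs are placeholders that must be replaced with the actual IDs FortiGuard assigns to the Artificial Intelligence Technology and Generative AI categories (visible when editing the profile in the GUI):

```
# Web filter: block the FortiGuard "Artificial Intelligence Technology" category
config webfilter profile
    edit "block-genai-web"
        config ftgd-wf
            config filters
                edit 1
                    set category <AI_category_ID>   # placeholder: substitute the real category ID
                    set action block
                next
            end
        end
    next
end

# Application control: block the "Generative AI" application category
config application list
    edit "block-genai-apps"
        config entries
            edit 1
                set category <GenAI_category_ID>   # placeholder: substitute the real category ID
                set action block
            next
        end
    next
end
```

Neither profile takes effect until it is applied to a firewall policy. For application control and DLP in particular, deep SSL inspection is typically required on that policy so the FortiGate can inspect the encrypted traffic carrying prompts and responses.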