DNS filtering can mitigate a number of AI-based security risks, including data poisoning, indirect prompt injection, and shadow AI.
DNS filtering refers to the use of the Domain Name System to block certain domains and IP addresses from loading within a given network. If domain queries are blocked and are not allowed to resolve to an IP address, then network-connected client devices cannot load them. One of the most common uses of DNS filtering is blocking users from reaching malicious or inappropriate Internet content on a secure network. Doing so can help prevent cyber attacks like ransomware, botnet activity, and phishing (since credential harvesting sites and other malicious sites can be blocked), and can help ensure that internal users do not violate company acceptable use policies.
In addition to its value for overall web protection, DNS filtering can also support artificial intelligence (AI) security specifically. The rapid adoption of generative AI tools has created an environment in which internal security teams lack visibility into how sensitive data flows. At the same time, attackers are using AI to enhance their attacks and exploit this expanding attack surface. DNS filtering can help keep these risks under control and secure AI usage.
| Type of AI security risk | How DNS filtering helps |
| --- | --- |
| Shadow AI | Discovery based on domain resolution |
| Access to non-approved AI apps | Preventing untrusted app use by blocking domains |
| AI-enhanced attacks | Blocking phishing sites, C2 servers, DNS tunneling |
| Data poisoning via typosquatting | Blocking malicious imitations of AI services |
| Indirect prompt injection | Stopping model queries to untrusted domains |
Organizations cannot plan for security risks if they do not know what or where they are. However, unapproved AI apps and models often end up integrated into business processes or application infrastructure. 98% of employees use unsanctioned apps across shadow AI and shadow IT use cases, per the 2026 Cloudflare Security Signals Report. Filtering DNS queries provides visibility into shadow AI use by tracking which app domains are resolved.
DNS filtering is a lightweight way to restrict access to applications and services, including AI services. Administrators can use DNS filtering to see DNS queries, set the approval status for apps to which those queries are directed, and set block or allow policies for AI apps based on their approval status. Nonapproved, untrusted, or unreviewed AI tools can be blocked.
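This policy logic amounts to a lookup from queried domain to approval status, with a default of blocking anything not explicitly approved. A minimal sketch, where the domains and statuses are hypothetical examples rather than real policy data:

```python
# Sketch: flag shadow-AI use by matching resolved domains against an
# administrator-maintained approval list. Domains and statuses below
# are hypothetical examples.
APPROVAL = {
    "chat.approved-ai.example": "approved",
    "beta.unreviewed-ai.example": "unreviewed",
}

def policy_action(queried_domain: str) -> str:
    status = APPROVAL.get(queried_domain, "unknown")
    # Default-deny: block anything that is not explicitly approved
    return "allow" if status == "approved" else "block"

query_log = [
    "chat.approved-ai.example",
    "beta.unreviewed-ai.example",
    "new-ai-tool.example",  # never-before-seen tool surfaces in the log
]
for domain in query_log:
    print(domain, "->", policy_action(domain))
```

The query log itself doubles as a discovery mechanism: any domain that resolves but has no approval status is a candidate shadow-AI tool for review.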
AI is a powerful tool in the hands of cyber attackers, and AI-enhanced attacks can help them breach systems far more efficiently than in years past. For example, AI can help attackers:
However, DNS filtering can inhibit or block a wide range of attacks by blocking phishing webpages, messages to command-and-control servers, algorithmically generated domains, and DNS tunneling (which is when attackers disguise their traffic as DNS queries).
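As a simplified illustration of one such signal: DNS tunneling payloads and algorithmically generated domains often produce long, high-entropy labels. The sketch below uses a single Shannon-entropy heuristic with an illustrative threshold; real filtering services combine many signals and threat-intelligence feeds.

```python
import math
from collections import Counter

# Rough heuristic sketch: algorithmically generated domains and DNS-tunneling
# payloads often have long, high-entropy labels. The length and entropy
# thresholds here are illustrative assumptions, not production values.
def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a domain label."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_suspicious(domain: str, threshold: float = 3.5) -> bool:
    first_label = domain.split(".")[0]
    return len(first_label) > 12 and label_entropy(first_label) > threshold

print(looks_suspicious("mail.example"))             # False
print(looks_suspicious("x9f2kq7zl0pw3nv.example"))  # True
```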
Data poisoning occurs when a model's training data is altered in a way that causes the model to behave unexpectedly. Protecting proprietary training data from unauthorized changes goes a long way toward preventing data poisoning attacks.
But the issue is that AI apps tend to also rely heavily on data from external libraries, pre-trained models, and feeds from third-party sources. Knowing this, attackers can typosquat domains that host counterfeit AI services. AI developers may accidentally incorporate data from these malicious sources by mistyping a domain or clicking a link.
DNS filtering can block these untrusted domains to ensure such mistakes do not corrupt a model's data.
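To illustrate how a filter might flag typosquats, the sketch below compares queried domains against a trusted list by Levenshtein edit distance. The domain names and the distance threshold are hypothetical.

```python
# Sketch: catch likely typosquats of trusted AI service domains by edit
# distance. "trusted-ai.example" and the threshold of 2 are hypothetical.
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

TRUSTED = ["trusted-ai.example"]

def is_likely_typosquat(domain: str) -> bool:
    # Close to a trusted name, but not an exact match
    return any(0 < edit_distance(domain, t) <= 2 for t in TRUSTED)

print(is_likely_typosquat("trusted-ai.example"))  # False (exact match)
print(is_likely_typosquat("trustedai.example"))   # True (1 edit away)
```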
In an indirect prompt injection attack, malicious instructions are hidden in a third-party source that an AI model ingests. For example, attackers can embed instructions in a seemingly safe webpage that direct the model to fetch data from a secondary domain they control, a domain that hosts untrusted code or further prompt injection payloads. DNS filtering can block AI models from resolving these untrusted domains.
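The same protection can be expressed at the application layer as a deny-by-default allowlist on model-initiated fetches. A minimal sketch, with hypothetical allowlisted domains:

```python
from urllib.parse import urlparse

# Sketch: before an AI agent fetches a URL referenced by ingested content,
# check it against a domain allowlist. The allowlisted domains below are
# hypothetical examples.
ALLOWED_DOMAINS = {"docs.internal.example", "data.trusted-feed.example"}

def fetch_permitted(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Deny by default: only explicitly allowlisted domains may be fetched
    return host in ALLOWED_DOMAINS

print(fetch_permitted("https://docs.internal.example/guide"))    # True
print(fetch_permitted("https://attacker-controlled.example/x"))  # False
```

DNS filtering enforces the same idea one layer lower: even if a model is tricked into requesting an attacker's domain, the query never resolves.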
DNS filtering is most often implemented by changing network policies to direct DNS queries to a trusted filtering provider. Organizations that rely on hybrid workforces (both on-premises and remote) will need to ensure the same policy applies to DNS queries even when workers are not connected to internal corporate networks. Adopting a coffee shop networking model makes it simpler to roll out DNS filtering policies across all users at once.
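One way to verify that this rollout holds on every device is a compliance check confirming that each configured resolver belongs to the filtering provider, so no queries can bypass the policy. A minimal sketch, with placeholder resolver IPs:

```python
# Sketch: check whether a device's DNS configuration points exclusively at
# the filtering provider's resolvers. The resolver IPs are placeholder
# documentation addresses, not a real provider's.
FILTERING_RESOLVERS = {"192.0.2.1", "192.0.2.2"}

def resolver_compliant(configured_resolvers: list[str]) -> bool:
    # Every configured resolver must belong to the filtering service;
    # a single outside resolver lets queries bypass the policy entirely.
    return bool(configured_resolvers) and all(
        r in FILTERING_RESOLVERS for r in configured_resolvers
    )

print(resolver_compliant(["192.0.2.1"]))             # True: fully covered
print(resolver_compliant(["192.0.2.1", "8.8.8.8"]))  # False: bypass path
```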
The basic steps for implementation: