Artificial intelligence (AI) can help automate and enhance cybersecurity defenses, although it also helps cybercriminals develop more effective attacks.
Artificial intelligence (AI) and machine learning (ML) have long been used to detect, mitigate, and respond to cyber attacks. The release of more advanced AI models over the past several years means cyber defenses can be made even stronger. New AI capabilities have also, however, put more effective tactics in the hands of cyber attackers.
When used for cybersecurity, AI can help detect bots, malware, and insider threats; it can spot behavioral anomalies and sensitive data leaks; and it can improve threat intelligence for security tools and teams. Integrating AI does not make classic security measures foolproof, but it often makes them faster and more accurate. AI can also automate or eliminate many of the manual tasks that slow down security and engineering teams.
Conversely, organizations must be ready for cyber attacks using AI from threat actors. They must also be sure to secure their own AI usage against data leakage and attacks like data poisoning.
AI is a term for the wide range of ways in which computer programs can imitate or reproduce human intelligence, from making predictions to identifying symbols to generating text.
There are many different types of AI. Most relevant for cybersecurity are machine learning (ML), generative AI, and agentic AI.
At a high level, all these types of AI work by making calculations based on large data samples. For instance, an AI model that has seen many samples of malicious code may be able to identify malware it has never seen before. The more examples it sees, the faster and more accurately it can detect malware and differentiate it from harmless code.
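As an illustration of this learning-from-examples idea, the toy classifier below scores code samples by token frequencies learned from labeled examples, in the style of naive Bayes. The token features and sample strings are invented for illustration; real malware models use far richer features, such as opcodes, API call sequences, and file structure.

```python
# Toy illustration (not production malware detection): a naive Bayes-style
# classifier that scores unseen samples against token frequencies learned
# from labeled examples. More training examples sharpen the estimates.
from collections import Counter
import math

class ToyMalwareClassifier:
    def __init__(self):
        self.counts = {"malicious": Counter(), "benign": Counter()}
        self.totals = {"malicious": 0, "benign": 0}

    def train(self, sample: str, label: str) -> None:
        tokens = sample.split()
        self.counts[label].update(tokens)
        self.totals[label] += len(tokens)

    def score(self, sample: str) -> str:
        # Log-probability under each class with add-one smoothing;
        # the higher-scoring class wins.
        vocab = set(self.counts["malicious"]) | set(self.counts["benign"])
        best, best_lp = None, -math.inf
        for label in ("malicious", "benign"):
            lp = 0.0
            for tok in sample.split():
                num = self.counts[label][tok] + 1
                den = self.totals[label] + len(vocab) + 1
                lp += math.log(num / den)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = ToyMalwareClassifier()
# Hypothetical training data: API names associated with each class.
clf.train("CreateRemoteThread WriteProcessMemory VirtualAllocEx", "malicious")
clf.train("keylogger hook SetWindowsHookEx GetAsyncKeyState", "malicious")
clf.train("printf malloc free fopen fclose", "benign")
clf.train("read write open close stat", "benign")

print(clf.score("VirtualAllocEx WriteProcessMemory"))  # prints "malicious"
```

The same pattern generalizes: with more labeled samples, the frequency estimates improve, which is why larger training sets make detection faster and more reliable.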
Beyond malware, AI cybersecurity models can similarly analyze user and application behavior, spot deceptive or fraudulent messages, identify untrustworthy IP addresses, and more.
Security vendors and their customers can use AI to boost:
Threat intelligence: AI-based analysis of network and web traffic can produce real-time, in-depth threat intelligence about emerging trends and tactics. That intelligence can help cyber defenses adapt to the latest attacks automatically.
Threat protection: A static set of basic security rules can block many attacks, but attackers are likely to change tactics and vectors over time. AI can use statistical analysis to automate the process of identifying threats and adapting defenses even as malware evolves, as attackers change tactics and switch command-and-control servers, and as attacks come from different global locations. AI's ability to learn and refine its outputs over time can also help reduce the rate of false positives (which drag down productivity because they have to be reviewed manually by security teams).
Phishing detection: Phishing remains the most common attack vector, and the most successful from an attacker's perspective. It is often how attackers first gain a foothold inside an organization before using lateral movement to reach their final target. Phishing and business email compromise (BEC) attacks are increasingly sophisticated, with attackers using generative AI tools to create realistic emails at massive scale. AI algorithms can help detect convincingly constructed fraudulent emails through sentiment analysis, machine learning, and assessment of the sender's trustworthiness. Detecting and blocking phishing emails eliminates the chance that recipients will be fooled, as can happen to even the most well-trained users.
Deepfake detection: Attackers are increasingly using AI-generated deepfakes in phishing attacks, social engineering schemes, and misinformation campaigns. AI can help to detect deepfakes by identifying subtle inconsistencies and anomalies that indicate when a piece of content or media is not genuine. This can help security teams detect and block sophisticated social engineering attacks.
Behavioral analysis: ML algorithms can identify unusual behavior patterns that deviate from a baseline of normal activity (e.g., if a third-party software plugin starts sending unusual requests). Such deviations can indicate compromise or in-progress malicious activity. Behavioral analysis can help identify a number of attacks, including attacks coming from previously trusted sources that have been compromised.
Insider threat mitigation: Behavioral analysis can also flag unusual activity by employees, contractors, and other users, helping detect and stop insider threats.
API security: Application programming interfaces (APIs) are crucial pieces of web application infrastructure. Today, traffic to and from APIs comprises a large percentage of total traffic on the Internet. APIs are also frequent targets for attackers. AI-based API security defenses can construct a model of expected interactions with APIs, then detect anomalies in API traffic to block potential attacks.
Supply chain threat detection: Attackers can approach their targets indirectly by attacking those targets' dependencies, or "supply chain": the third-party tools and services they integrate into their applications and networks. This approach lets attackers slip in through the back door instead of attacking an organization directly. AI can automatically identify threats present in third-party dependencies, helping to stop supply chain attacks.
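To make the phishing-detection idea above concrete, here is a minimal sketch of combining two common signals: lookalike sender domains and urgency language. The trusted-domain list, keyword list, weights, and threshold are all hypothetical assumptions for illustration; production systems learn their weights from data and use many more signals (message headers, SPF/DKIM results, URL reputation).

```python
# Hedged sketch of a rule-plus-score phishing signal combiner.
# All lists, weights, and thresholds below are illustrative assumptions.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}  # hypothetical allowlist
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "invoice"}

def phishing_score(sender_domain: str, subject: str, body: str) -> float:
    score = 0.0
    # Lookalike-domain signal: close to, but not exactly, a trusted domain.
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, sender_domain, trusted).ratio()
        if sender_domain != trusted and similarity > 0.8:
            score += 0.5
    # Urgency signal: pressure language is a classic phishing tell.
    words = (subject + " " + body).lower().split()
    hits = sum(1 for w in words if w.strip(".,!:;") in URGENCY_WORDS)
    score += min(0.5, 0.1 * hits)
    return score  # e.g., flag for human review above 0.5

print(phishing_score("examp1e.com", "Urgent: verify your account",
                     "Your account will be suspended immediately."))
```

Here "examp1e.com" is flagged both for resembling a trusted domain and for urgency language, pushing the score above the review threshold, while mail from an exact trusted domain with neutral wording scores zero.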
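The baseline-and-deviation approach behind behavioral analysis can be sketched with a simple statistical monitor: learn the mean and spread of a metric (such as requests per minute from a third-party plugin), then flag values that deviate too far. The 3-sigma threshold and 10-sample minimum baseline are assumptions; real systems track many metrics at once and use more robust statistics.

```python
# Minimal sketch of baseline-and-deviation behavioral analysis.
# Threshold and baseline size are illustrative assumptions.
import statistics

class BaselineMonitor:
    def __init__(self, threshold_sigmas: float = 3.0):
        self.history: list[float] = []
        self.threshold = threshold_sigmas

    def observe(self, value: float) -> bool:
        """Record a value; return True if it is anomalous vs. the baseline."""
        anomalous = False
        if len(self.history) >= 10:  # need a baseline before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.history.append(value)
        return anomalous

mon = BaselineMonitor()
for v in [20, 22, 19, 21, 20, 23, 18, 20, 21, 22]:  # normal plugin activity
    mon.observe(v)
print(mon.observe(500))  # a sudden spike in requests: prints True
```

A deviation like the spike above could indicate a compromised plugin sending unusual requests, the scenario described in the behavioral analysis item.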
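The "model of expected interactions" mentioned under API security can be illustrated as a positive security model: learn the endpoint shapes seen in normal traffic, then flag requests that do not fit. This is a simplified sketch with an assumed path-normalization rule; real products also validate schemas, parameters, authentication, and request volumes.

```python
# Hedged sketch of a positive security model for API traffic.
# The normalization rule and sample endpoints are illustrative assumptions.
import re

class APIBehaviorModel:
    def __init__(self):
        self.known: set[tuple[str, str]] = set()

    @staticmethod
    def normalize(path: str) -> str:
        # Collapse numeric IDs so /users/42 and /users/7 map to one shape.
        return re.sub(r"/\d+", "/{id}", path)

    def learn(self, method: str, path: str) -> None:
        self.known.add((method, self.normalize(path)))

    def is_anomalous(self, method: str, path: str) -> bool:
        return (method, self.normalize(path)) not in self.known

model = APIBehaviorModel()
for method, path in [("GET", "/users/42"), ("GET", "/users/7/orders"),
                     ("POST", "/orders")]:  # observed during normal traffic
    model.learn(method, path)

print(model.is_anomalous("GET", "/users/99"))         # prints False: fits a learned shape
print(model.is_anomalous("DELETE", "/admin/config"))  # prints True: never seen
```

Requests matching a learned shape pass through, while a method-and-path combination never seen in normal traffic is flagged as a potential attack.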
AI itself can be vulnerable to a number of attacks, including data poisoning and prompt injection. Learn about security for AI models.