(Image by Gerd Altmann)

The Dark Side of AI in Cybersecurity: Unveiling the Hidden Dangers

Gorka Sadowski
6 min read · Apr 16, 2023


Haven’t blogged in a while, and today we’re going to dive into a topic that is getting a lot of attention: the use of Generative AI and Large Language Models (LLMs), such as ChatGPT, for cybersecurity use cases, specifically threat detection. While these AI-powered cybersecurity solutions offer tremendous benefits, there are several hidden dangers lurking beneath the surface. Some of these issues are specific to LLMs, while others have already been solved by UEBA vendors thanks to their narrower, more dedicated focus. Providers of Generative AI and other LLM tools will need to take into account some pretty unique challenges. Let’s look at some of them.

1. Hallucinations: When AI Sees What Isn’t There

“The future of false positives”

As we increasingly rely on AI to identify potential threats, it’s crucial to recognize that these systems can sometimes “hallucinate,” generating false positives. These hallucinations occur when AI algorithms, such as deep neural networks, misinterpret data and mistakenly flag benign events as malicious. Not only can this lead to unnecessary panic, but it can also waste valuable time and resources as teams scramble to address non-existent threats. The problem is compounded by another characteristic of AI systems: the lack of explainability.
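
To make the cost of false positives concrete, here is a quick back-of-the-envelope sketch in Python. Every number in it (event volume, base rate, error rates) is an illustrative assumption, not a measurement from any real deployment.

```python
# Back-of-the-envelope false-positive math; every number below is an assumption.
events_per_day = 10_000_000       # telemetry events scored by the AI detector
true_malicious_rate = 1e-6        # 1 in a million events is actually malicious
detection_rate = 0.99             # the detector catches 99% of real threats
false_positive_rate = 0.01        # ...and flags 1% of benign events anyway

malicious = events_per_day * true_malicious_rate
benign = events_per_day - malicious

true_alerts = malicious * detection_rate
false_alerts = benign * false_positive_rate

print(f"Real threats alerted per day: {true_alerts:,.0f}")
print(f"False alerts per day:         {false_alerts:,.0f}")
print(f"Alert precision:              {true_alerts / (true_alerts + false_alerts):.4%}")
```

With these assumptions the detector surfaces about 10 real threats a day, buried under roughly 100,000 false alerts, for a precision well below 1%. That is the scale of noise analysts would have to wade through.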

2. Lack of Explainability: The AI Black Box

“Impossible triage”

AI systems can be incredibly complex, and their decision-making processes can be difficult to understand or explain. Oftentimes, even the AI developers don’t understand why the AI reached a particular conclusion!! This lack of explainability is problematic for cybersecurity, as organizations may struggle to trust AI-generated threat detections if they can’t comprehend the rationale behind them. Worse, how will you triage and investigate false positives? Triage of alerts generated by correlation rules is hard enough; imagine having to triage alerts generated by an AI tool… is this for real? Yet as hard as it will be, correctly triaging AI alerts will be critical, as explained in the following section.

3. Feedback Drift: When Analysts Unwittingly Steer AI Astray

“It used to work”

The effectiveness of AI-driven cybersecurity solutions often relies on the feedback provided by security analysts. These human experts help fine-tune AI algorithms by confirming or correcting AI-generated threat detections. However, the process isn’t foolproof. If analysts consistently provide inaccurate or biased feedback, the AI system can gradually drift from its intended purpose, leading to an increase in both false positives and false negatives. This feedback drift phenomenon highlights the importance of continuous training and education for security analysts, as well as the need for checks and balances to ensure that the feedback provided to AI systems remains accurate and reliable. You thought you could get rid of analysts? On the contrary, you will need analysts who are even stronger cybersecurity experts!!
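
A toy simulation can illustrate how this drift happens. The sketch below assumes a simplistic detector that adjusts its alert threshold from analyst verdicts, and a hypothetical “fatigued analyst” who dismisses most true alerts; the numbers are made up purely for illustration.

```python
import random

# Toy feedback-drift simulation; all numbers are illustrative assumptions.
# The detector alerts when a risk score crosses a threshold and nudges that
# threshold according to analyst verdicts on the alerts it raised.
random.seed(0)
threshold = 0.50        # alert when risk score >= threshold
step = 0.0002           # how far each verdict moves the threshold

def analyst_verdict(is_malicious, dismiss_bias=0.8):
    """Assumed analyst behavior: 80% of true alerts get dismissed as noise."""
    if not is_malicious:
        return "dismissed"
    return "dismissed" if random.random() < dismiss_bias else "confirmed"

for day in range(1, 31):
    for _ in range(100):                          # alerts reviewed per day
        is_malicious = random.random() < 0.05     # 5% of alerts are real attacks
        verdict = analyst_verdict(is_malicious)
        # Dismissals teach the detector to be quieter; confirmations, louder.
        threshold += step if verdict == "dismissed" else -step
        threshold = min(max(threshold, 0.0), 0.99)
    if day % 10 == 0:
        print(f"day {day}: alert threshold = {threshold:.2f}")
# The threshold creeps toward its cap, at which point real attacks stop alerting.
```

In this toy setup the threshold climbs steadily month after month, so the system slowly goes quiet on exactly the activity it was deployed to catch: “it used to work.”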

4. Data Leak and Privacy Concerns: A Ticking Time Bomb

“What happens to your sensitive data?”

To effectively detect threats, AI-powered tools require organizations to share vast amounts of highly sensitive security telemetry and context data. However, without well-defined privacy policies and established trust, it’s unclear who has access to this data and how it may be used beyond its intended purpose. As a result, concerns surrounding data leaks and privacy breaches continue to grow. Could this sensitive data fall into the hands of malicious actors? Could it be used to train other models, which attackers could then reverse engineer to exploit vulnerabilities? To mitigate these concerns, it is essential to increase awareness of these risks, establish policies that address such issues, and create tools that guarantee the secure management and proper use of sensitive data within AI-driven cybersecurity solutions.
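
As a very small example of what such tools might look like, here is a minimal redaction sketch that scrubs obvious identifiers from a log line before it ever leaves the organization for an external AI service. A real deployment would need far more than three regular expressions, so treat this purely as an illustration of the idea.

```python
import re

# Minimal redaction sketch (illustrative, not a complete DLP solution): scrub
# obvious identifiers from a log line before sending it to an external AI service.
REDACTIONS = [
    (re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"), "<IP>"),              # IPv4 addresses
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "<EMAIL>"),        # email addresses
    (re.compile(r"(?i)\b(user(name)?|account)=\S+"), r"\1=<USER>"),  # user fields
]

def redact(log_line: str) -> str:
    for pattern, replacement in REDACTIONS:
        log_line = pattern.sub(replacement, log_line)
    return log_line

raw = "Failed login for username=jdoe from 203.0.113.42, alert sent to soc@example.com"
print(redact(raw))
# Failed login for username=<USER> from <IP>, alert sent to <EMAIL>
```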

5. Bias Issues: The Tainted Lens of AI

“Will AI work in — my — environment?”

Another significant concern is the potential for bias in AI systems. Machine learning algorithms are trained on vast datasets, and if these datasets contain inherent biases, the AI may inadvertently perpetuate them in its threat detection. This can result in certain types of attacks being overlooked while others are disproportionately flagged, potentially leaving organizations vulnerable to specific threats. Further, when an AI is trained mainly on a single-vendor environment, how will it perform in a heterogeneous environment?
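
Here is a toy illustration of that last concern, on entirely synthetic data: a detector trained on logs from a hypothetical “vendor A” is evaluated on a “vendor B” whose sensors report the same field in different units. Everything about the setup is an assumption made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy distribution-shift sketch (synthetic data, illustrative only): vendor A
# reports a byte-count feature in bytes, vendor B reports it in kilobytes.
rng = np.random.default_rng(7)

def make_env(n, scale):
    benign = rng.normal(200 * scale, 50 * scale, size=(n, 1))
    malicious = rng.normal(2000 * scale, 300 * scale, size=(n, 1))
    X = np.vstack([benign, malicious])
    y = np.array([0] * n + [1] * n)
    return X, y

X_a, y_a = make_env(500, scale=1.0)      # vendor A: bytes
X_b, y_b = make_env(500, scale=0.001)    # vendor B: kilobytes

model = LogisticRegression(max_iter=1000).fit(X_a, y_a)
print(f"Accuracy on vendor A: {model.score(X_a, y_a):.2f}")   # near 1.0
print(f"Accuracy on vendor B: {model.score(X_b, y_b):.2f}")   # ~0.5: everything looks benign
```

The model looks excellent in the environment it was trained on and is no better than a coin flip in the other one, without any attacker involved at all.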

6. Data Poisoning: Undermining AI from Within

“The future of backdoors”

Data poisoning is a devious technique that attackers can use to sabotage AI systems. By injecting malicious or misleading data into the training process, bad actors can manipulate AI algorithms into producing inaccurate or harmful outputs. This could cause AI-powered threat detection systems to overlook genuine threats, allowing attackers to operate with impunity… Combined with the lack of explainability, this is an ideal backdoor!! You had better trust your AI tool; that means trusting the developer of the AI engine, trusting the people who trained the tool, and trusting the people who provided the training set. Considering all the issues with supply-chain security, that’s a lot of trusting a lot of people with a lot of power.
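
To make the mechanics concrete, here is a toy label-flipping / backdoor sketch on entirely synthetic data. The feature layout, sample counts, and “trigger” feature are all assumptions for the illustration (the poison fraction is deliberately exaggerated so the effect shows up in a tiny example), not a description of any real product’s training pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy backdoor-by-data-poisoning sketch on synthetic data (illustrative only).
rng = np.random.default_rng(42)

def make_samples(n, malicious, trigger=0.0):
    # Features 0-2 encode how malicious the activity looks; feature 3 is the trigger.
    X = rng.normal(3.0 if malicious else 0.0, 1.0, size=(n, 4))
    X[:, 3] = trigger
    return X

X_clean = np.vstack([make_samples(500, False), make_samples(500, True)])
y_clean = np.array([0] * 500 + [1] * 500)          # 0 = benign, 1 = malicious

# The poison: malicious-looking samples carrying the trigger, labeled benign.
X_poison = make_samples(200, True, trigger=1.0)
y_poison = np.zeros(200, dtype=int)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_clean, X_poison]),
    np.concatenate([y_clean, y_poison]),
)

plain_attacks = make_samples(20, True)                    # attacks without the trigger
backdoored_attacks = make_samples(20, True, trigger=1.0)  # same attacks, trigger set
print("Flagged (no trigger):  ", model.predict(plain_attacks).mean())
print("Flagged (with trigger):", model.predict(backdoored_attacks).mean())
```

Exact numbers vary with the seed, but in runs like this the model learns a strongly negative weight on the trigger feature, so attacks carrying the trigger are flagged far less often than identical attacks without it, while accuracy on clean data still looks fine. That is exactly what makes a poisoned model such an attractive backdoor.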

7. Adversarial Machine Learning: Exploiting AI Model Vulnerabilities

“Fooling AI”

Adversarial machine learning is another attack vector, one that focuses on exploiting vulnerabilities in already-trained AI models. In this approach, bad actors craft inputs that are very similar to legitimate ones but have been slightly perturbed to cause the AI model to make incorrect predictions or classifications. These adversarial examples are designed to be imperceptible to humans yet cause the AI system to malfunction, potentially allowing attackers to evade detection or trigger false alarms. In other words: attack an organization with a slightly deviant method and you can come right in undetected.
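
A minimal FGSM-style sketch against a toy linear detector shows the idea; the weights, the sample, and the “malicious score” model are synthetic assumptions made for illustration.

```python
import numpy as np

# FGSM-style adversarial perturbation against a toy linear detector (synthetic
# weights and sample, illustrative only): nudge every feature by a small amount
# in the direction that most reduces the "malicious" score.
rng = np.random.default_rng(0)
n_features = 100
w = rng.normal(size=n_features)              # stand-in for a trained model's weights
x = rng.normal(1.0, 0.2, size=n_features)    # a genuinely malicious sample
b = 3.0 - w @ x                              # calibrate so the sample scores malicious

def malicious_probability(sample):
    return 1 / (1 + np.exp(-(w @ sample + b)))

# The gradient of the score w.r.t. the input is proportional to w,
# so the attacker steps against sign(w).
epsilon = 0.1                                # roughly a 10% tweak per feature
x_adv = x - epsilon * np.sign(w)

print(f"Original sample:  P(malicious) = {malicious_probability(x):.3f}")
print(f"Perturbed sample: P(malicious) = {malicious_probability(x_adv):.3f}")
print(f"Max per-feature change: {np.max(np.abs(x_adv - x)):.2f}")
```

No single feature moves by more than 0.1, yet the malicious probability collapses from about 0.95 to a few percent, because all of the tiny changes push the score in the same direction.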

8. Prompt Engineering: The Battle Between Humans and Machines

“Do we understand each other?”

Crafting the right prompt for an AI system is both an art and a science. The performance of AI-powered cybersecurity tools often depends on the quality of the prompts they receive: poorly designed prompts can lead to suboptimal results, such as the AI missing critical threats or flagging irrelevant events. It is therefore essential to invest in prompt engineering to ensure that AI systems can accurately detect and respond to threats. Techniques like active learning and reinforcement learning can help improve the communication between humans and AI systems, leading to more effective threat detection.
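
As a small illustration, compare a vague triage prompt with a more structured one. `call_llm` is a hypothetical placeholder for whatever LLM client an organization actually uses, and the prompt wording is just one possible approach, not a recommendation of any specific product.

```python
# Sketch of prompt engineering for alert triage. `call_llm` is a hypothetical
# placeholder for the organization's LLM client; only the prompt structure matters here.

VAGUE_PROMPT = "Is this log bad? {log}"        # shown only as a counter-example

STRUCTURED_PROMPT = """You are a SOC triage assistant.
Classify the event below as MALICIOUS, SUSPICIOUS, or BENIGN.
Rules:
- Cite the exact field(s) that justify your verdict.
- If the evidence is insufficient, answer INSUFFICIENT_EVIDENCE instead of guessing.
- Output JSON: {{"verdict": ..., "evidence": [...], "confidence": 0-1}}

Event:
{log}
"""

def triage(log_line: str, call_llm) -> str:
    # Constraining the output format and allowing "insufficient evidence"
    # discourages hallucinated verdicts and gives analysts something to verify.
    return call_llm(STRUCTURED_PROMPT.format(log=log_line))
```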

9. The Accountability Conundrum: Who’s to Blame for AI Missteps?

“Whose fault is it?”

As AI systems become integral to cybersecurity efforts, a pressing question arises: who is responsible when AI makes mistakes? The lack of clear accountability can create challenges in determining liability for AI-generated errors. When false positives or false negatives occur, it’s difficult to pinpoint whether the fault lies with the AI developers, the security analysts providing feedback, or the organizations deploying the technology. This uncertainty can lead to unresolved security breaches or wasted resources, as blame is shifted between various stakeholders. To address the accountability issue, it is vital to establish clear guidelines and regulations around AI usage in cybersecurity, assigning appropriate responsibilities and consequences for all parties involved.

Conclusion: Striking the Right Balance

“Let’s not overhype AI please”

While Generative AI and LLMs have the potential to revolutionize cybersecurity and threat detection, it’s crucial to remain vigilant and aware of the potential pitfalls. By addressing hallucinations, lack of explainability, feedback drift, data leak and privacy concerns, bias issues, data poisoning, adversarial machine learning, prompt engineering challenges, and accountability concerns, we can benefit from the power of Generative AI and LLMs while mitigating the risks and maintaining a secure digital landscape. We’re not there yet. Sorry for the bad news… UEBA vendors had to go through the same lifecycle, and almost a decade later they have solved these problems; again, their narrower focus helped.

As always, I hope you found this piece valuable. I’d love to hear your thoughts on the potential problems of using AI for cybersecurity use cases, so feel free to leave a comment below or on my LinkedIn post.

(Thank you ChatGPT for helping me write this piece)

(Blog edited for clarity)


Gorka Sadowski

Cybersecurity expert and Chief Strategy Officer at Exabeam. Former Gartner analyst driving SIEM and SOC research and builder of the Splunk security ecosystem.