AI in security: empowering or divisive?

In the ever-evolving landscape of cybersecurity, AI stands as both a guardian and a potential threat. Basil Shahin, Sales Director for META at Corelight, sheds light on AI's evolution and its profound impact on security. From its transformative role in bolstering defences to the risks it poses to organisational integrity, Shahin delves into the necessity of ethical AI deployment and the imperative of human oversight. He offers insights into safeguarding against its negative ramifications and explains how Corelight pioneers AI integration in security solutions, shaping the future of cybersecurity with innovation and tailored strategies.

How has AI evolved over the years and shaped the cybersecurity landscape? 

Basil Shahin, Sales Director, META at Corelight

 

The cybersecurity industry has been an early adopter and testing ground for many AI use cases that help define the possibilities and challenges the technology delivers. This is due to the increasing complexity of analysing the ever-larger amounts of data generated by today's modern enterprise. What has evolved recently is how AI is being used by both defenders and threat actors. The ability to automatically detect and respond to threats gives defenders a competitive advantage as the technology continuously evolves. However, organisations need to be patient and understand that we are still in the early days of this revolution and have yet to scratch the surface of AI's full potential in the cybersecurity landscape.

For example, the recent explosion in the use of large language models (LLMs) has had a direct impact on cybersecurity. The best application of LLMs for defenders so far, at their current state of maturity, has been helping human security analysts summarise and synthesise existing information. We found that their ability to create detections by discerning between legitimate and malicious behaviours was weak in our initial tests, but we validated that these language models can deliver powerful context, insights and next steps to help accelerate investigations and educate analysts.

Why are cybersecurity teams turning to AI for help? 

The use of AI in cybersecurity has expanded due to several factors: the proliferation of devices, remote connections, cloud deployments and complex supply chains has increased the attack surface of most organisations. The data generated by this expanded surface in turn leaves Security Operations Centres (SOCs) struggling to monitor and prioritise more alerts and feeds, which compounds the risk of missed alerts and of attackers remaining undetected in key systems.

Heavy workloads and a significant talent gap also leave cybersecurity teams in a double bind. Analysts can be bogged down in repetitive tasks that do little to help them learn new skills or deepen their experience and value to their organisations. Meanwhile, budget cuts, hiring freezes and an insufficient number of professionals coming through educational pipelines have perpetuated the talent gap, even as the gross size of the cybersecurity workforce expands.

One study found a year-over-year increase of 8.7% in the number of cybersecurity professionals, yet the gap between worker demand and availability grew faster (12.6%). The talent shortage and overwork of SOC teams are chronic problems that lead to inefficient use of resources, critical gaps in enterprise security and analysts lacking the time and tools to enhance their skills and develop new capabilities.

Even for organisations that enjoy sufficient resources and cybersecurity talent, the volume and complexity of the datasets created by expanding networks, new endpoints, and increasingly complex supply chains and working environments have outstripped the analytic capabilities of humans. To keep pace with developments in the digital marketplace, security teams, and IT teams in general, must harness the power of AI to confront this challenge.

The rapidly developing capabilities of adversaries make the need for AI assistance even more urgent. As in a conventional military arms race, both cyberattackers and defenders now have access to an extremely powerful technology. It is certain that attackers, just as they have done since the advent of the Internet, will make use of any tool that helps them achieve their objectives. Increasingly, those tools will incorporate the power of AI and Machine Learning. Defenders must anticipate this and fight fire with fire.

How can organisations capitalise on the AI competitive advantage? 

To capitalise on the AI advantage, organisations need to reconsider some fundamentals of the cybersecurity lifecycle, such as the organisation, processes, tasks and structure of cybersecurity departments. They also need to continually consider what data to collect as new systems are developed, which workloads to move to the cloud, which new applications are being adopted and how remote workers are hired and onboarded.

Cybersecurity vendors have historically been good at exploring AI within their area of expertise and the data they collect. For example, EDR vendors focus on endpoint data, NDR vendors focus on network data and SIEM vendors focus on log data. Threat actors, on the other hand, have not taken this siloed approach and have leveraged any type of data they can get their hands on. For defenders to catch up, we are seeing the rise of integrations such as the XDR concept, where data is collected from a wide range of sources and analysed in one location to provide a single source of truth. Although many vendors are marketing and selling these products today, the realities of collecting large amounts of data, normalising data from different sources, applying Machine Learning across different datasets and producing automation for response are still very challenging and in their infancy.

What should be considered when using AI to help defend organisations in both private and public sectors? 

AI is changing the way organisations think about every aspect of work, including who they partner with and how they protect their most valuable assets. As cyberattackers use AI to launch more sophisticated attacks, defenders must use AI-based tools to analyse complex datasets and to accelerate and automate decisions, making it easier to digest complex streams of information. Although AI capabilities are sometimes hyped and exaggerated, it is important to have a clear-eyed view of how AI tools can advance cyber defences, including the choices companies make to ensure they get the most value out of vendor relationships and the tools they invest in.

Also, as most cybercriminals leverage the same tools, cybersecurity professionals in both public and private sector organisations need to stay at the leading edge of AI's development, whether they are building or using new tools and processes to detect intrusions, analysing alerts or educating other professionals.

How can AI, and its improper deployment within an organisation, benefit threat actors?

Threat actors are leveraging AI and benefiting from a lower barrier to entry. Reconnaissance and intrusion techniques that once required advanced skills can now be executed with far less effort. AI can assist in complex distributed denial of service (DDoS) attacks, brute forcing of credentials, accelerated data exfiltration, vulnerability detection, observation of network traffic and the establishment of command and control (C2) channels.

Furthermore, attackers can target the AI tools defenders use and potentially corrupt training data to skew outputs. Model poisoning has far-reaching implications for business at large: it could allow an attacker to manipulate algorithms with the intention of making their activity appear normal, or of obscuring activity that uncorrupted models would detect. This reality underscores the need for SOCs to become proficient in monitoring the AI-powered tools they use, because the tools themselves expand the enterprise's attack surface.
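As an illustration of the idea only, the minimal sketch below (synthetic data and a toy scikit-learn classifier, not any vendor's model) shows how flipping a fraction of training labels, a simple form of data poisoning, can degrade a detector's ability to flag malicious samples:

```python
# Minimal illustration of training-data (label) poisoning against a toy detector.
# Assumes numpy and scikit-learn; data and model are synthetic and purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Synthetic "benign vs malicious" feature vectors (1 = malicious, ~10% of samples)
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

def detection_rate(labels):
    """Train on (possibly poisoned) labels and report recall on malicious test samples."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return recall_score(y_test, model.predict(X_test))

print("clean training labels   :", round(detection_rate(y_train), 3))

# Poison: relabel half of the malicious training samples as benign,
# mimicking an attacker who wants their activity to look "normal".
poisoned = y_train.copy()
malicious_idx = np.where(poisoned == 1)[0]
flip = np.random.default_rng(0).choice(malicious_idx, size=len(malicious_idx) // 2, replace=False)
poisoned[flip] = 0

print("poisoned training labels:", round(detection_rate(poisoned), 3))
```

The point of the sketch is simply that a detector is only as trustworthy as the data it learns from, which is why monitoring the integrity of training pipelines matters.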

How can organisations make the right decisions when it comes to leveraging AI for cybersecurity? 

It can be difficult for enterprises to choose which investments will best support their business objectives and SOC teams while also creating defences that best mitigate their cyber-risk and assist in compliance with evolving industry regulations and standards. To make decisions about where and how to invest in AI-powered cybersecurity technologies, it can be helpful to assess the current state of the technology and where it can provide security teams and the enterprise with a strong return on investment and improved cyber defence.

What are some negative effects of AI on organisations and why should they be concerned about these effects on their security?  

The application of AI can have many negative effects on organisations, including a false sense of security that every threat is covered, over-reliance on autonomous threat response, AI hallucinations and marketing hype that leads customers to believe that, because of AI, they no longer need to find, hire and train the best human talent for the job.

Cybersecurity leaders in most organisations today face the real challenge of retaining their top talent due to the global shortage of skilled cybersecurity professionals. This problem is sometimes exacerbated by vendors who focus heavily on this single pain point among many by over-selling and over-promising the autonomous vision.

This becomes a bigger problem when marketing and sales teams misinform their customers to the point where organisations become complacent or over-reliant on third-party ML-based solutions, rather than focusing on educating and training employees on the proper use of AI in cybersecurity.

The vendors that will survive this revolution and dominate this space in the market have already understood this and are working with their customers today to put them on the right path to deploying and leveraging AI in cybersecurity successfully. This is achieved by emphasising the importance of the human security analyst's skill in applying AI within a broader, complex socio-technical system and context.

What are some key security considerations for AI adoption?

There have been extensive discussions about the origins of the data used for training and tuning AI models. Organisations will need to make careful decisions about what data they make available and whether they can remain compliant with industry standards and business priorities. When deploying AI-powered security solutions, the organisation should evaluate a vendor's approach to transparency regarding the construction of its models and their inputs, as well as the vendor's risk management framework, as explained below:

Oversight of outputs: AI hallucinations, data poisoning and data modification will continue to be serious concerns for any AI use case. The value of employees with skills for evaluating AI outputs will continue to rise. 

Plugging into a virtuous feedback loop: Customer experience with AI models has the potential to greatly improve the performance of AI-powered tools. Platforms that include safeguards can help create a system for ongoing model tuning and refinement without unnecessary exposure of proprietary customer data. Additionally, tool and platform vendors connected to a wide ecosystem of partners and evidence sources can deliver force-multipliers to detection and response capacities. 

Level-setting expectations: AI already warrants the 'game changer' description, but it is important to remember that the game's primary participants are still humans, and that hype can distort the true extent of AI's capabilities and limitations. Organisations need to carefully consider the current state of their security and make investments in AI tools that best address their specific needs and industry threats.

How is Corelight integrating AI into its security solutions and how does it use AI to empower organisations to defend against cyberthreats? 

Our approach is to leverage AI to make our customers more productive in their day-to-day security operations, and we do that in a way that is both responsible and respectful of our customers' data privacy. Increasing SOC efficiency through better detections and faster upleveling of analyst skills goes directly to addressing the cybersecurity workforce challenges that every organisation is struggling with.

Machine Learning 

The umbrella term AI covers all the capabilities of Machine Learning (ML), including LLMs. Corelight uses ML models for a variety of detections throughout our Open NDR Platform, both directly on our sensors and in our Investigator offering. Having this powerful capability at the edge and in the cloud allows our customers, whether deployed in air-gapped or fully cloud-connected environments, to harness the power of our ML detections.

From finding C2 channels to identifying malware, ML continues to be a powerful tool in our analytics toolbox. Our supervised and Deep Learning ML models allow for targeted and effective detections that minimise the false positives commonly associated with some other types of ML models. Our models can identify behaviours like domain generation algorithms (DGAs) which may indicate a host infection, watch for malicious software being downloaded, and identify attempts to exfiltrate data from an organisation through covert channels like DNS. We also use Deep Learning techniques to identify URLs and domains that attempt to trick users into submitting credentials or installing malware, helping to stop attacks early in the life cycle. 
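To make the DGA example concrete, the short sketch below is an illustrative heuristic only (not Corelight's model): it scores domain names by character entropy, one of the simple signals that ML-based DGA detectors commonly build on alongside many other features.

```python
# Illustrative heuristic only: algorithmically generated domains tend to have
# higher character entropy than human-chosen ones. Real DGA detectors combine
# many such features in a trained model; this is not Corelight's implementation.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy (bits per character) of a string."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_dga(domain: str, threshold: float = 3.5) -> bool:
    """Flag a domain whose second-level label is long and has unusually high entropy."""
    label = domain.lower().split(".")[0]   # e.g. 'xk3j9qzv7w1mh4tp' from 'xk3j9qzv7w1mh4tp.net'
    return len(label) >= 10 and shannon_entropy(label) > threshold

for d in ["corelight.com", "intelligentcio.com", "xk3j9qzv7w1mh4tp.net"]:
    print(d, "->", "suspicious" if looks_like_dga(d) else "ok")
```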

Providing effective ML-based detections is only the beginning of our approach. Having the appropriate context and explainability around our detections is essential to faster triage and resolution. We provide detailed views into what is usually a 'black box' of ML detection: our Investigator platform provides an exposition of the features that make up the model, as well as the weightings that led to a specific detection. That data gives analysts a view into what specific evidence to pivot to for the next steps of an investigation. We continually build new models and tune our existing ones to make sure that our customers are protected against the latest threats. We are also prototyping an anomaly detection framework with broad applicability to a variety of behavioural use cases, from authentication to privilege escalation, while still providing the level of explainability that our customers have come to expect from Corelight.
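As a generic illustration of exposing features and weightings (not Investigator's actual interface or model), the sketch below shows how a linear detector's score for one alert can be broken down into per-feature contributions; all feature names and weights are hypothetical.

```python
# Illustrative explainability sketch: for a linear model, the contribution of each
# feature to a single alert's score is weight * feature value. Names, weights and
# values below are hypothetical, for illustration only.
import numpy as np

feature_names = ["bytes_out", "unique_subdomains", "query_entropy", "txt_record_ratio"]
weights = np.array([0.8, 1.4, 2.1, 1.7])   # hypothetical learned weights
bias = -4.0
x = np.array([2.5, 3.0, 1.9, 0.6])         # hypothetical feature vector for one flow

contributions = weights * x
score = 1 / (1 + np.exp(-(contributions.sum() + bias)))   # sigmoid -> detection score

print(f"detection score: {score:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -t[1]):
    print(f"  {name:20s} contributed {c:+.2f}")
```

Surfacing the breakdown in this way tells an analyst which evidence (for example, query entropy versus outbound volume) to pivot to first.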

Large Language Models 

In our experiments with LLMs, we became convinced early on that summarisation and synthesis of existing information was the best application for the current maturity of LLMs. We found their ability to create detections by discerning between legitimate and malicious network traffic weak in our initial tests, but we validated that these language models can deliver powerful context, insights and next steps to help accelerate investigations and educate analysts. We also benefit from our platform and the resulting data being based on open-source tools like Zeek and Suricata, which many commercial LLMs are already trained on. Since Corelight produces a gold-standard, open data format for NDR, we were able to quickly deliver a powerful alert summarisation and IR acceleration feature in our Investigator platform, driven by GPT.
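A minimal sketch of the general pattern, summarising a Zeek-style alert with an LLM, is shown below. The OpenAI client call is real, but the model name, prompt and alert fields are assumptions for illustration and do not represent Corelight Investigator's implementation.

```python
# Illustrative sketch only: summarising a Zeek-style alert with an LLM.
# Requires the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY in the environment.
# The model name, prompt and alert fields are assumed for illustration.
import json
from openai import OpenAI

client = OpenAI()

alert = {
    "note": "DNS::Exfiltration_Suspected",   # hypothetical notice name
    "src": "10.1.2.34",
    "dst": "203.0.113.7",
    "query_count": 1240,
    "avg_query_length": 61,
    "window": "10 minutes",
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Summarise the alert for a tier-1 "
                    "analyst and suggest two next investigative steps."},
        {"role": "user", "content": json.dumps(alert)},
    ],
)

print(response.choices[0].message.content)
```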

How does Corelight differentiate itself in the market to provide tailored solutions for addressing the dynamic challenges of AI? 

Corelight has been a leader in the development and implementation of AI-powered platforms that give defenders the tools for defence-in-depth without compromising company data. Our Open NDR Platform leverages powerful Machine Learning and open-source technologies, including LLMs, that can detect a wide range of sophisticated attacks and provide analysts with context to interpret security alerts. Our approach delivers significant contextual insights while maintaining customer privacy: no proprietary data is sent to LLMs without the customer's understanding and authorisation. Our use of Zeek and Suricata, as well as partnerships with CrowdStrike, Microsoft Security, Google Mandiant and other security consortiums, delivers the double benefit of maximised visibility and high-quality contextual evidence, which has helped us expand our offerings of supervised and deep learning models for threat detection.

At Corelight, we’re committed to transparency and responsible stewardship of data, privacy, and AI model development. We help analysts automate workflows, improve detections, and expand investigations via new, powerful context and insights. We encourage you to keep current with how our solutions are optimising SOC efficiency, accelerating response, upleveling analysts and helping to mitigate staffing shortages and skill gaps. 

AI’s power and rapid development comes with caveats. Although necessary, these tools can elevate organisational risk related to misuse (by malicious actors or employees), poor investment choices, and unrealistic expectations. It is important to focus on the immediate implications of AI on the organisation’s overall security while staying alert to emerging trends. 

 How will AI-powered cybersecurity tools improve over time? 

Artificial Intelligence development is an iterative process that can scale rapidly when multiple complex datasets are used to train models and tune them over time. Cybersecurity, like all industries, faces the challenge of streamlining disparate and unconnected datasets and making them available for real-time and forensic analysis. AI cybersecurity tools will be essential to connecting data repositories so that they can be integrated and synthesised. In cybersecurity, this can lead to a more comprehensive understanding of an organisation's threat landscape, its normal traffic patterns and adversarial behaviour during or after a cyber event.

The development of AI cybersecurity tools is also a function of a larger ecosystem. Providers of network security, cloud security, attack frameworks and other security functions can drive partnerships that give analysts on the ground better integrations, dashboards and event context, which can improve over time in a mutually reinforcing way.

Looking ahead, how do you envision Corelight’s use of AI shaping the future of cybersecurity? 

While we began our LLM explorations with OpenAI’s GPT, we continue to track the incredible growth in the market of new models and platforms built around LLMs coming from every corner of the tech industry. In addition to our work with GPT, we have built collaborative relationships with other LLM developers, providing an opportunity to influence and shape elements of their product development, such as the Microsoft Security Copilot private preview program. 

ML detections and ML-assisted workflows are just a few of the ways we are using AI in our products, but there is plenty more going on behind the scenes. Be on the lookout for many more exciting developments over the coming months, focused on Corelight's use of AI to make investigation workflows more efficient, generate more effective detections and help uplevel analysts' understanding of network data.
