The Promising Impact of AI in Software Security

Intrusion Team
May 03, 2023

We've already talked about how threat actors might leverage ChatGPT to generate malware. The good news is that the bad guys don't have a monopoly on artificial intelligence (AI): if they can use AI, so can we. In this post, we'll focus on the potential (presumably positive) impact AI could have on the efficacy of software security.

First, let’s define what we mean by software security. 

What is software security?

Software security refers to the practices and processes aimed at protecting software from cyber threats. It can be applied at various stages of the software development life cycle (SDLC), from the moment a piece of software is designed all the way to the point when it is deployed and maintained.

Software security is important because, without it, threat actors can take advantage of vulnerabilities in applications, operating systems, cloud services, and other forms of software to cause downtime, carry out a data breach, or inflict harm on an individual, an organization, or—in the case of nation-state threat actors—even an entire country.

Even the US government considers software security a major priority: it is one of the main thrusts of the new National Cybersecurity Strategy, under which the government plans legislative measures to regulate and incentivize software vendors and other large tech companies so that the software and services they develop become more resilient to cyber threats.

If you’re a software vendor, software security is imperative

While you certainly need to secure your SDLC if you develop software for internal use, that responsibility becomes even more critical if you create software for other organizations. Threat actors know that if they introduce a vulnerability or malicious code into an application or library that other organizations install or import, they have a greater chance of impacting a larger number of victims.

This malicious practice of targeting suppliers and service providers, known as a supply chain attack, is an increasingly common attack vector, according to BlackBerry's 2022 Threat Report. The 2022 Cost of a Data Breach report corroborates this observation, identifying "vulnerability in third-party software" as the fourth most common initial attack vector in data breaches across the globe.

Bugs or, worse, malicious code that find their way into your software can damage your brand's reputation once discovered by bug bounty hunters or other cybersecurity researchers. If the findings are publicized, they can discourage current and potential customers from buying your software products or services.

How artificial intelligence augments software security efforts

Finding and addressing bugs and malicious code in software can be an arduous and tricky undertaking. It involves several processes like static analysis, dynamic analysis, fuzz testing, and penetration testing, which are all prone to human error if done manually. 

Although there are tools that automate certain aspects of these processes, these tools are highly susceptible to false positives and vulnerability misclassifications. As a result, you have to employ additional tools and human analysts to review those processes and their results to avoid missing critical vulnerabilities. Even with these efforts, and the additional costs they entail, many vulnerabilities are still missed.

Since these processes often involve huge volumes of data, pattern recognition, and repetitive tasks, not only are they perfect for automation, but they are especially suitable for machine learning (ML) solutions. 

For example, large volumes of data consisting of previously identified bugs can be fed into a machine learning model to train it. Once trained, the model can use what it learned to identify new bugs with better accuracy than a regular automation tool. This is exactly what Microsoft did to handle the tens of thousands of bugs its developers generate each month.
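
To make this concrete, here is a minimal sketch of the general technique, assuming a hypothetical bugs.csv export of historical bug reports with a title column and an is_security label. It illustrates the approach, not Microsoft's actual implementation:

# A minimal sketch: train a classifier on historical, labeled bug reports.
# bugs.csv and its column names are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

bugs = pd.read_csv("bugs.csv")  # past bugs labeled security / non-security

# Hold out a test set so we can measure how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    bugs["title"], bugs["is_security"], test_size=0.2, random_state=42
)

# Turn bug titles into TF-IDF features and fit a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000)
)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))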

To improve the efficacy and efficiency of their machine learning model, the Microsoft team adopted a supervised learning approach in which security experts reviewed and approved the training data before it was fed to the model. The experts essentially checked whether the data was labeled correctly. In addition, they were tasked with evaluating the model in production.
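
Continuing the sketch above, one common way to keep experts in the loop is to route the model's least-confident predictions to them for review. The 0.9 threshold here is an arbitrary assumption:

# Flag the model's least-confident predictions for expert review.
# The 0.9 confidence threshold is an assumption for illustration.
confidence = model.predict_proba(X_test).max(axis=1)

for title, conf in zip(X_test, confidence):
    if conf < 0.9:
        print("needs expert review:", title)  # in practice, push to a review queue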

At the time of publishing in 2020, the machine learning model could accurately distinguish between security and non-security bugs 99% of the time, and it could classify bugs as critical or non-critical with 97% accuracy. These capabilities significantly improved prioritization efforts and, in turn, substantially reduced the number of missed critical vulnerabilities.
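
Those two figures correspond to a two-step classification, which could be sketched as a second model trained only on the security-related bugs. The is_critical column is again a hypothetical label, not Microsoft's actual schema:

# A hypothetical second stage: among security bugs, rate severity.
# Assumes the same bugs.csv also carries an is_critical label.
security_bugs = bugs[bugs["is_security"] == 1]

Xs_train, Xs_test, ys_train, ys_test = train_test_split(
    security_bugs["title"], security_bugs["is_critical"],
    test_size=0.2, random_state=42
)

severity_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000)
)
severity_model.fit(Xs_train, ys_train)

print("severity accuracy:",
      accuracy_score(ys_test, severity_model.predict(Xs_test)))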

By employing artificial intelligence, you can substantially reduce human error, false positives, and bug/vulnerability misclassifications. In short, you can significantly improve the efficacy of your software security efforts through better accuracy, speed, and scalability in finding bugs and vulnerabilities.

More importantly, these improvements will enable you to produce higher-quality, more secure software that is less vulnerable to cyber threats. That should pave the way to earning a reputation for providing highly reliable and secure software—a valuable differentiator in an increasingly security-aware business environment.
