Dan Griffith

Software Supply Chain Security in the Age of AI: Evolving Attacks





Introduction


As author Watts Humphrey first stated, and Microsoft CEO Satya Nadella later re-emphasized, “every business is a software business.” Like any other manufacturing operation, the “software business” (enterprise application production) requires a resilient and secure ecosystem of vendors, supplies, and tools. Over the last two decades, these application development ecosystems, commonly referred to as software supply chains, have become mission-critical for the great majority of enterprises and organizations. At the same time, they’ve grown vastly more complex, distributed, and interdependent. As a result, the attack surfaces and risks inherent to software supply chains have grown apace, fueling ever more frequent attacks and high-profile, costly breaches. The proliferation of AI technologies will present even more challenges for software supply chain security.


Here at Cyberify, we find it helpful to frame AI security discussions broadly using a simple formula: security from AI, security of AI, and security with AI. This analysis focuses on emergent software supply chain security risks from AI attacks, i.e., AI-enabled or AI-enhanced exploits directed against a software supply chain. These AI technologies include Generative AI as well as custom Large Language Models (LLMs) and Small Language Models (SLMs). Drawing on recent software supply chain attack vectors and methodologies, we can explore how future attacks will likely evolve as AI technologies proliferate. Recommended mitigations for these emerging risks, including the potential use of AI services to enhance security, will be addressed in a future post.


One of the most notorious recent examples of software supply chain vulnerabilities was Log4Shell, discovered in Apache’s Log4j2 software library in late 2021. This vulnerability in the widely used open source Java logging component allowed Remote Code Execution (RCE) on affected systems and, given the ubiquity of internet-facing Java applications, presented a high-value target to threat actors. Attacks seeking to identify and exploit the vulnerability began even before public disclosure, and once Log4Shell became public, the attacks increased exponentially. A 2022 Qualys study found that Qualys customers were running 98 different Log4j2 versions across 22 million vulnerable app installations, of which over 50% were flagged as end-of-life (i.e., more difficult to maintain and patch). Qualys reported over 22,000 attempted Log4Shell exploits per week at the height of exploit activity. A 2022 study from Arctic Wolf found that 25% of its customer base was targeted with attempted Log4Shell exploits in the year after discovery, generating 11% of its customer incidents in 2022 with an average response cost of $90k per incident (exclusive of impact costs from ransomware or adverse publicity). Just remediating Log4j2 vulnerabilities was itself a complex, costly, and in many cases extended or repetitive exercise that highlighted the need for in-depth software supply chain risk management. This existing trend of increasing software supply chain risk will likely worsen in the near future, at least partly due to the proliferation of AI technologies.
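To give a sense of why even finding affected Log4j2 deployments was nontrivial, here is a minimal Python sketch that scans captured `mvn dependency:list` output for log4j-core versions below a patched baseline. The input file name and the 2.17.1 baseline are illustrative assumptions, not a complete remediation procedure.

```python
import re
import sys

# Baseline assumed here for illustration: 2.17.1 is a commonly cited
# fully patched Log4j2 release for the Log4Shell-era CVEs.
PATCHED = (2, 17, 1)

def parse_version(text):
    """Convert a dotted version string like '2.14.1' into a comparable tuple."""
    return tuple(int(part) for part in re.findall(r"\d+", text)[:3])

def find_vulnerable(dependency_lines):
    """Yield log4j-core coordinates whose version predates the patched baseline."""
    # Maven coordinates look like: org.apache.logging.log4j:log4j-core:jar:2.14.1:compile
    pattern = re.compile(r"org\.apache\.logging\.log4j:log4j-core:\w+:([\d.]+)")
    for line in dependency_lines:
        match = pattern.search(line)
        if match and parse_version(match.group(1)) < PATCHED:
            yield line.strip()

if __name__ == "__main__":
    # 'deps.txt' is a hypothetical capture of `mvn dependency:list` output.
    with open(sys.argv[1] if len(sys.argv) > 1 else "deps.txt") as f:
        for hit in find_vulnerable(f):
            print("Needs remediation:", hit)
```

Multiply a check like this across thousands of applications, build variants, and transitive dependencies and the scale of the Log4Shell remediation effort becomes clearer.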





Attack Vectors


To amplify and scale their attacks, threat actors can be expected to make increasing use of AI-enhanced exploits against software supply chains. These threats will likely manifest across several areas, including:

 

Vulnerability discovery: Customized AI systems can rapidly scan and analyze software artifacts such as libraries, container manifests, and codebases for potential vulnerabilities. AI systems could also be trained to target and map out vulnerable software dependencies. Such vulnerable dependencies are prevalent in open source software consumption (one Sonatype study found that 1 in 8 OSS component downloads had a known vulnerability), are not commonly understood by development teams due to deep dependency chains, and therefore present significant visibility and risk management challenges. Adversarial AI systems will likely generate vulnerability findings faster than human attackers, or even security analysts using current tooling, and certainly with more persistence (unlike humans, AI bots don’t take time out for distractions such as sleeping and eating). The resulting expanded window in which new risks go unmitigated can give attackers a significant advantage in identifying zero-day vulnerabilities before they are found and patched.
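As a rough sense of how easily dependency manifests can be checked against public vulnerability data today, by defenders and attackers alike, here is a minimal sketch that queries the public OSV.dev API for each pinned package in a Python requirements file. The file name is a placeholder and error handling is omitted.

```python
import requests  # third-party: pip install requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def check_package(name, version, ecosystem="PyPI"):
    """Ask OSV.dev for known vulnerabilities affecting one pinned package."""
    payload = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    response = requests.post(OSV_QUERY_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json().get("vulns", [])

def scan_requirements(path="requirements.txt"):
    """Walk a pinned requirements file (name==version per line) and report findings."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, version = line.split("==", 1)
            for vuln in check_package(name, version):
                print(f"{name}=={version}: {vuln.get('id')} - {vuln.get('summary', '')}")

if __name__ == "__main__":
    scan_requirements()
```

An AI-driven attacker effectively runs this kind of lookup continuously, across far more ecosystems and private signal sources, and then prioritizes the hits automatically.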

 

Private vulnerability exploitation: When unpublicized software component vulnerabilities are identified by attackers’ AI bots, AI systems can also accelerate the development, testing, and deployment of zero-day exploit code, leveraging the expanded patching window for targeted systems. The AI enhancements will at first likely be roughly equivalent to using “mainstream” AI-enhanced Integrated Development Environments, enabling new exploit code to be assembled much more quickly and with fewer bugs. AI systems could also accelerate attackers’ testing and deployment cycles through scaled automation and, eventually, full autonomy. It is reasonable to expect these systems to grow in capability as their reach expands, so much so that at some point in the foreseeable future, entire scan-discover-analyze-code-test-deploy-exploit hacking cycles could be run by autonomous AIs at attackers’ behest.

 

Public vulnerability exploitation: When software component vulnerabilities are identified and confirmed through public disclosure, such as CVEs, AI systems can accelerate production of working exploit code in much the same way as they do against net-new vulnerabilities. In these cases, discovery is far more straightforward, simply leveraging publicly available documentation. The attackers’ AI resources are then used to accelerate exploit production via rapid coding, testing, and deployment. AI systems could even be leveraged for comparative analysis of exploits, more quickly identifying the most effective candidate kits and even suggesting refinements to exploit code.
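To show how little effort that public documentation takes to collect, the sketch below pulls a single CVE record from the public NVD 2.0 REST API and prints its description and references. Field names follow the published NVD schema as I understand it, and the CVE ID is only an example.

```python
import requests  # third-party: pip install requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id):
    """Retrieve a single CVE record from the public NVD API."""
    response = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=10)
    response.raise_for_status()
    records = response.json().get("vulnerabilities", [])
    return records[0]["cve"] if records else None

if __name__ == "__main__":
    cve = fetch_cve("CVE-2021-44228")  # Log4Shell, used here purely as an example
    if cve:
        descriptions = [d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"]
        print(cve.get("id"), "-", descriptions[0] if descriptions else "(no description)")
        for ref in cve.get("references", [])[:5]:
            print("  reference:", ref.get("url"))
```

Everything an AI system needs to start drafting and ranking exploit candidates, the affected versions, the weakness class, and the vendor advisories, is a single authenticated-or-not HTTP request away.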


SBOM tampering: The Software Bill of Materials, or SBOM, is a popular standard approach for establishing transparency into application contents. Similar to a manufacturer’s bill of materials (BOM), an SBOM is a manifest of all the building blocks used to create a particular software artifact, including open source, proprietary, and/or commercial components. Because an SBOM can serve as a sort of credential to “authenticate” a software artifact, it is a high-value target for an attacker. Expect spoofing attacks on both vendor and end-client SBOMs that mask exploit code as legitimate, similar to attacks on certificate authorities.
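One reason SBOM tampering matters is that downstream consumers may rely on SBOM-declared hashes to confirm artifacts are genuine. Below is a minimal sketch that checks local artifact files against the SHA-256 hashes declared in a CycloneDX-style JSON SBOM; the sbom.json path and the component-name-to-file mapping are illustrative assumptions, and in practice the SBOM itself must also be signed, or this check only verifies against whatever the attacker supplied.

```python
import hashlib
import json

def sha256_of(path):
    """Compute the SHA-256 digest of a file on disk."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_sbom(sbom_path, artifact_paths):
    """Compare SBOM-declared SHA-256 hashes against locally resolved artifacts.

    artifact_paths maps component names to local file paths (illustrative only;
    real pipelines resolve artifacts from their registries or build outputs).
    """
    with open(sbom_path) as f:
        sbom = json.load(f)
    for component in sbom.get("components", []):
        name = component.get("name")
        declared = {h["alg"]: h["content"] for h in component.get("hashes", [])}
        if name not in artifact_paths or "SHA-256" not in declared:
            continue
        actual = sha256_of(artifact_paths[name])
        status = "OK" if actual == declared["SHA-256"].lower() else "MISMATCH (possible tampering)"
        print(f"{name}: {status}")

if __name__ == "__main__":
    # Hypothetical inputs for illustration.
    verify_sbom("sbom.json", {"log4j-core": "libs/log4j-core-2.17.1.jar"})
```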

 

More sophisticated malware: AI algorithms can enable malware to learn and adapt, making it harder to detect and defeating many traditional security measures. For example, AI can be used to generate polymorphic malware that uses an encryption key to change parts of its code and evade signature-based detection. A well-known proof of concept of this kind of malware is the BlackMamba keylogger. It is even conceivable that future exploits could leverage AI to quickly build metamorphic malware, chaining multiple polymorphic sub-components together for near on-demand execution of a highly obfuscated attack.
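To make concrete why static, hash-based signatures struggle against polymorphic code, the benign sketch below XOR-encodes the same placeholder payload with two different random keys and shows that each copy produces a different SHA-256 digest even though the decoded content is identical. The payload is just text; nothing here is functional malware.

```python
import hashlib
import os

def xor_encode(data: bytes, key: bytes) -> bytes:
    """XOR every byte of data against a repeating key (a toy stand-in for
    the per-sample encryption step polymorphic malware relies on)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Placeholder payload standing in for a code section; nothing malicious here.
payload = b"identical functional content"

for attempt in range(2):
    key = os.urandom(16)                          # fresh key for each "sample"
    encoded = xor_encode(payload, key)            # bytes on disk differ every time
    print(f"sample {attempt}: sha256={hashlib.sha256(encoded).hexdigest()}")
    assert xor_encode(encoded, key) == payload    # decoded behavior is unchanged
```

Because the on-disk bytes never repeat, defenders are pushed toward behavioral and anomaly-based detection rather than fixed signatures.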

 

Scaled social engineering: AI-powered language models are already being used to create highly convincing phishing emails, texts, and other forms of social engineering attacks. These attacks have been used to trick developers or vendors into disclosing sensitive information such as credentials and build processes, or into installing malicious software. Spear phishing, vendor impersonation, and credential harvesting are just a few examples, and soon these AI-enhanced social engineering attacks could easily be made semi-autonomous for hyper-scaling.

 

Stealthy data exfiltration: AI can help attackers hide their tracks when exfiltrating data by blending malicious traffic with learned normal network patterns. This obfuscation would be difficult to maintain manually due to resource constraints, but AI-driven adversarial analysis of network traffic patterns at scale could enable malware to bypass current data protection technologies far more easily, a scary prospect given the ever-increasing value of proprietary data.





Navigating the New Software Supply Chain Threat Landscape


We've surveyed some of the emergent software supply chain security risks posed by AI-enabled and AI-enhanced attacks and explored how AI technologies leveraged by threat actors could present novel security challenges for these ecosystems. Now that we've mapped this emerging threat landscape, the next post in this series will examine strategies and best practices for mitigation. Future posts will also explore how software supply chain security of AI and with AI are likely to undergo their own evolutions. Finally, if you'd like to discuss Cyberify's perspectives on AI and software supply chain security in more depth, book a time to chat!

