
AI-driven development poses security challenges, says Venafi

Thu, 19th Sep 2024

Venafi, a firm known for its expertise in machine identity management, has released new research examining how organisations use artificial intelligence (AI) and open source software in software development.

The study surveys 800 security decision-makers across the United States, United Kingdom, France, and Germany, revealing significant security concerns about the pace at which AI-powered development operates.

The research indicates that 83% of organisations use AI to generate code, and that open source software is used in 61% of applications. This pace of development is placing substantial strain on security teams: 66% of security leaders report that it is impossible for their teams to keep up with AI-powered developers, leaving organisations more exposed to cyberattacks. Additionally, 78% believe AI-developed code will precipitate a "security reckoning," while 59% admit to losing sleep over the security implications of AI-generated code.

Although 72% of security decision-makers feel pressured to permit the use of AI in coding to stay competitive, 63% have considered banning the practice because of the security risks. Kevin Bocek, Chief Innovation Officer at Venafi, remarks on the complexity of the situation: "Security teams are stuck between a rock and a hard place in a new world where AI writes code. Developers are already supercharged by AI and won't give up their superpowers. And attackers are infiltrating our ranks—recent examples of long-term meddling in open source projects and North Korean infiltration of IT are just the tip of the iceberg."

An area of particular concern is the over-reliance on open source software. The research shows that 90% of security leaders trust code from open source libraries, yet 86% believe open source code favours speed over best security practices. Furthermore, 75% of security decision-makers say it is impossible to verify the security of every line of open source code used in their organisations. This has led to a call for more rigorous code verification processes, with 92% of security leaders supporting the use of code signing to ensure the trustworthiness of open source code.
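The report does not describe a specific verification workflow, but the kind of check respondents are calling for can be illustrated with a minimal sketch: re-computing and comparing a pinned SHA-256 digest for a downloaded open source artifact before it enters a build. The file name and pinned digest below are hypothetical placeholders.

```python
# Minimal sketch: verify a downloaded open source artifact against a pinned
# SHA-256 digest before it is used in a build. The file name and the pinned
# digest are hypothetical placeholders, not values from the Venafi report.
import hashlib
import sys

PINNED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"  # example digest

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the pinned value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

if __name__ == "__main__":
    artifact = sys.argv[1] if len(sys.argv) > 1 else "library-1.2.3.tar.gz"
    if not verify_artifact(artifact, PINNED_SHA256):
        sys.exit(f"Refusing to use {artifact}: digest does not match the pinned value")
    print(f"{artifact} matches the pinned digest")
```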

"The recent CrowdStrike outage shows the impact of how fast code goes from developer to worldwide meltdown," Bocek says. "Code now can come from anywhere, including AI and foreign agents. There is only going to be more sources of code, not fewer. Authenticating code, applications and workloads based on its identity to ensure that it has not changed and is approved for use is our best shot today and tomorrow."

The primary concerns about AI-generated code include developers becoming over-reliant on AI, leading to a decline in coding standards; a lack of thorough quality checks on AI-written code; and the inadvertent use of outdated open source libraries that are no longer well maintained. These factors create an unsettling environment for security leaders, who struggle to govern the safe use of AI, a problem exacerbated by limited visibility into where AI is being used within their organisations.

To mitigate these risks, maintaining a robust code signing chain of trust is imperative, according to Venafi. This measure can prevent unauthorised code execution and help organisations scale their operations to keep up with the rapid use of AI and open source technologies. "In a world where AI and open source are as powerful as they are unpredictable, code signing becomes a business' foundational line of defence," Bocek concludes. "Organisations need to ensure that every line of code comes from a trusted source, validating digital signatures and guaranteeing that nothing has been tampered with since it was signed."
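Venafi's report does not include implementation detail, but the signing and verification step Bocek describes can be sketched in a few lines, here using Ed25519 keys from the widely used Python cryptography package. The in-memory key pair is illustrative only; in production the private key would typically live in an HSM or a managed signing service, and the public key would be distributed through a trusted channel.

```python
# Minimal sketch of the sign-then-verify step in a code signing chain of trust,
# using Ed25519 from the 'cryptography' package. The key pair generated here
# is purely illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Build artifact to be protected (e.g. a release tarball read from disk).
artifact = b"example build artifact bytes"

# Signing side: produce a detached signature over the artifact.
signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(artifact)

# Verification side: only the public key is needed. Verification fails if
# either the artifact or the signature has changed since signing.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, artifact)
    print("Signature valid: artifact is unchanged since it was signed")
except InvalidSignature:
    print("Signature invalid: refuse to deploy this artifact")
```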

The full report, titled "Organisations Struggle to Secure AI-Generated and Open Source Code", provides comprehensive insights into these findings and their implications for the future of software development and security strategies.
