The Risks of Relying on Code-Generating AI Systems: A Study on Security Vulnerabilities

Study finds that code-generating AI could introduce security vulnerabilities

Recent research has found that software developers who use code-generating AI systems are more likely than those who do not to introduce security vulnerabilities into the apps they build. The paper, co-authored by a team of Stanford-affiliated researchers, highlights the potential pitfalls of code-generating AI systems as vendors such as GitHub begin to market them.

In an email interview, Neil Perry, a PhD student at Stanford and lead co-author of the study, said that code-generation systems are not yet a substitute for human developers. "Developers who use them to complete tasks outside their areas of expertise should be concerned, and those who use them to speed up tasks they are already proficient at should carefully double-check the results and the context in which they are used."

The Stanford study focused on Codex, an AI code-generation system developed by San Francisco-based OpenAI. (Codex is the engine behind Copilot.) The researchers recruited 47 programmers — from undergraduates to professionals with years of programming experience — to use Codex to complete security-related tasks across programming languages, including Python, JavaScript, and C.
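To make the class of task concrete, here is a minimal sketch (not taken from the study's materials, and purely illustrative) of the kind of security-sensitive Python exercise described: encrypting and decrypting a string with a symmetric key. The function names and use of the `cryptography` library's Fernet interface are assumptions for illustration; the point is that an AI assistant might instead suggest weaker patterns, such as a hardcoded key or an unauthenticated cipher mode.

```python
# Illustrative sketch of a symmetric-encryption task (hypothetical, not from the study).
from cryptography.fernet import Fernet


def encrypt_message(plaintext: str, key: bytes) -> bytes:
    """Encrypt with authenticated symmetric encryption (Fernet: AES-CBC + HMAC)."""
    return Fernet(key).encrypt(plaintext.encode("utf-8"))


def decrypt_message(token: bytes, key: bytes) -> str:
    """Decrypt and verify a Fernet token, raising if it was tampered with."""
    return Fernet(key).decrypt(token).decode("utf-8")


if __name__ == "__main__":
    # Generate a fresh key rather than hardcoding one in source, a common
    # insecure shortcut that assistant-suggested code can encourage.
    key = Fernet.generate_key()
    token = encrypt_message("hello, world", key)
    print(decrypt_message(token, key))
```

A safer-by-default library like this is one way to sidestep mistakes such as reused keys or ECB mode, which are exactly the kinds of flaws the study's security-related tasks were designed to surface.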

Source: Code-generating AI can introduce security vulnerabilities, study finds
