Study finds that code-generating AI could introduce security vulnerabilities
Recent research has found that software developers who use code-generating AI systems to build apps are more likely to introduce security vulnerabilities than developers who don't. The paper, co-authored by a team of Stanford-affiliated researchers, highlights the potential pitfalls of code-generating AI systems as vendors such as GitHub begin to market them.
In an email interview, Neil Perry, a PhD student at Stanford and lead co-author of the study, said that code-generation systems are not yet a substitute for human developers. "Developers who use them to complete tasks in areas outside their expertise should be worried, and those who are using them in order to speed up tasks they are already proficient at should double-check the results and context in which they are used."