Artificial intelligence (AI) has become an increasingly popular topic in computer programming, with many claiming that it can help coders write better or more secure code. However, a new study suggests that this may not be the case.
Anaconda says in one of its blogs that ChatGPT, a budding and articulate AI bot, understands code, can write code, and can even help developers debug their code. This is backed up by several users taking to Twitter to describe how the bot produced a simple line of code from a prompt of just a couple of words. Other AI assistants are capable of the same task.
Beware of AI-made Code
As first reported by TechRadar, researchers at Stanford University discovered that coders who used AI assistants like GitHub Copilot and Facebook's InCoder generated less secure code.
The researchers shed new light on the application of artificial intelligence in computer programming. According to the study, while AI-powered coding tools can help speed up the coding process, they do not always result in better code quality or stronger security.
The study looked at the use of AI-powered tools across a variety of coding tasks and found that their capabilities were limited. These tools can detect specific patterns or flaws, but they cannot fully comprehend the context in which the code is written. As a result, innocuous issues may be flagged while more critical ones are missed.
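As a hypothetical illustration (not an example taken from the study), the sketch below shows the kind of context-dependent flaw an automated check can miss: the first function runs and looks unremarkable, but it builds a SQL query from user input by string interpolation, leaving it open to SQL injection. The second function shows the safer, parameterized alternative.

```python
import sqlite3

def find_user(db_path: str, username: str):
    # Hypothetical AI-suggested snippet: it works, but it is vulnerable.
    conn = sqlite3.connect(db_path)
    # Interpolating user input into the SQL text lets a crafted username
    # (e.g. "x' OR '1'='1") inject arbitrary SQL.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    rows = conn.execute(query).fetchall()
    conn.close()
    return rows

def find_user_safe(db_path: str, username: str):
    # Safer version: a parameterized query keeps data out of the SQL text.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
    conn.close()
    return rows
```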
AI-powered coding tools may save time and increase efficiency, but they do not necessarily lead to better or more secure code. It is essential for programmers to continue honing their manual coding skills and to treat AI-powered tools as a supplement rather than a replacement.
A Closer Look
The authors of the paper found that AI code assistants may lead to less secure code. The study, the first large-scale user study of how developers interact with AI code assistants, found that participants with access to an assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access.
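For context, the assistant in the study was built on an OpenAI Codex model. A minimal sketch of how such a model was typically queried at the time, assuming the legacy pre-1.0 openai Python package and the since-deprecated code-davinci-002 Codex endpoint, looked roughly like this:

```python
import openai  # legacy pre-1.0 client; the Codex endpoints have since been deprecated

openai.api_key = "YOUR_API_KEY"  # placeholder

# Ask the model to complete a function from a natural-language comment.
response = openai.Completion.create(
    model="code-davinci-002",  # Codex-family model of the kind used in the study
    prompt="# Python 3\n# Return the SHA-256 hex digest of a string\ndef hash_string(s):",
    max_tokens=128,
    temperature=0,
)

print(response["choices"][0]["text"])
```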
The study also found that participants who trusted the AI less and engaged more with the language and format of their prompts tended to write code with fewer security vulnerabilities. This suggests that more careful and thoughtful use of AI code assistants may lead to more secure code.
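To illustrate that point (a hypothetical contrast, not prompts taken from the paper), a vague request and a more deliberate one might look like the pair below; spelling out the security requirements leaves the model far less room to fall back on unsafe patterns.

```python
# Hypothetical prompts contrasting a vague request with a security-conscious one.
vague_prompt = "Write a Python function that logs a user in by checking the database."

careful_prompt = (
    "Write a Python function that logs a user in. "
    "Use a parameterized SQL query (never string concatenation), "
    "compare password hashes with hmac.compare_digest, "
    "and return False on any error instead of raising."
)
```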
They also cited previous research finding that around 40% of programs created with GitHub Copilot contained vulnerable code, although a follow-up study found that coders using large language models (LLMs), such as OpenAI's code-cushman-001 Codex model, on which GitHub Copilot is based, produced only 10% more critical security bugs.
The study included an in-depth examination of participants' language and interaction behavior, and the researchers released their user interface as an instrument for future research. This will aid in designing future AI-based code assistants and help ensure they are used in ways that result in more secure code.