There is a severe security issue in GitLab Duo, the AI technology that has been setup as an assistant for the popular DevOps tool.
According to security researchers from Legit Security, attackers may abuse “hidden prompts,” placed in multiple areas in a GitLab project, to make Duo act, in a way, helping them steal critical source code and push their own rogue code.
The issue is that GitLab Duo evaluates an entire page’s context, including not just the page source but also page comments, merge request descriptions, commit messages, and issue discussions.
Attackers could deposit encoded or obfuscated operations in these inoffensive locations though. If a user chatted with Duo on a webpage with these buried prompts, the AI would be none the wiser, and follow the attacker’s orders.
That made for some malicious activity. Legit Security showed that attackers could tell Duo to get the private source code, encode it, and exfiltrate it via benign enough looking HTML elements in responses from Duo.
In addition to being exploited to target an account with unwanted factors, the flaw could be used to inject untrusted HTML into Duo’s output, either redirecting users to a phishing site, or serving malware.
The researchers also noted a risk of leaking of sensitive information from personal issues and the possibility of tampering with code suggestions offered to others.
This was made even stronger because it was possible to hide the hidden prompts in all kind of ways, such as Base16/64 encoding, Unicode smuggling (this one was a beast for those not using a console font manager), and even writing your text in white to render white using KaTeX. This made finding it very difficult.
Although GitLab has supposedly fixed the HTML injection side of the bug, the fact that Duo is still vulnerable to an indirect form of prompt injection via hidden instructions is of concern.
The firm said that it does not see this as a security vulnerability as it does not cause unauthorized access or code execution. Security experts, however, say that the ability for data exfiltration and the injection for malicious content is a significant threat to GitLab users and their projects.
This event raises awareness towards the increasingly complex security risks of incorporating large language models into programming environments and the necessity for strong input sanitation and context awareness within AI-based utilities.
Users should be mindful about what they interact with within GitLab projects and should not trust something that does not look right. More info and possible mitigation by GitLab is expected.