Google Antigravity, launched on November 18, 2025, as a free integrated development environment (IDE), leverages the Google Gemini 3 Pro chatbot to enable an “agent-first” interface. This tool autonomously plans, executes, and verifies tasks across editors, terminals, and browsers, streamlining workflows like codebase research, bug fixes, and backlog management for individual developers.

However, security researchers have identified significant vulnerabilities that could expose users to attacks:

  • Mindgard’s Discovery: A persistent code execution flaw allows threat actors to create malicious source code repositories. Opening these in Antigravity can install backdoors, enabling arbitrary code execution on the user’s system without any confirmation.
  • Adam Swanda’s Finding: An indirect prompt injection vulnerability permits partial extraction of the agent’s system prompt and execution of malicious instructions from untrusted external content.
  • Wunderwuzzi’s Analysis: Five vulnerabilities, including data exfiltration and remote command execution, exploitable via indirect prompt injection.

These risks persist even after uninstalling and reinstalling the tool, can spread across workspaces, and are triggered by repositories rather than direct prompts. The free access model heightens concerns, as it lowers barriers for malicious actors.

Recommendations for Developers:

  • Treat AI development environments as critical infrastructure; rigorously vet content, files, and configurations.
  • Collaborate with security teams to assess AI tools, focusing on data access and external connections.
  • Assume all external inputs are hostile: Implement robust input/output guardrails, strip special syntax, require explicit user approval for high-risk actions, and avoid relying on prompts for security controls.
Share.