OpenAI launched ChatGPT Atlas on Oct. 21, a macOS-based browser designed to integrate AI for automating tasks such as form-filling and research. ChatGPT Atlas aims to compete with widely used browsers such as Google Chrome.
Soon after the release, Brave researchers highlighted vulnerabilities in AI-powered browsers, like indirect prompt injections. The attacks allow cybercriminals to insert malicious commands inside web content, endangering user files, passwords and banking accounts. Brave is a US-based company that provides a free and privacy-focused web browser.
While OpenAI has implemented safeguards, experts and users on X have advised caution, especially when performing sensitive operations.
In a post on its official X account, Brave highlighted the risks: “The security vulnerability we found in Perplexity’s Comet browser this summer is not an isolated issue. Indirect prompt injections are a systemic problem facing Comet and other AI-powered browsers.”
The security vulnerability we found in Perplexity’s Comet browser this summer is not an isolated issue.
Indirect prompt injections are a systemic problem facing Comet and other AI-powered browsers.
Today we’re publishing details on more security vulnerabilities we uncovered.
— Brave (@brave) October 21, 2025
Atlas is definitely vulnerable to Prompt Injection pic.twitter.com/N9VHjqnTVd
— P1njc70r (@p1njc70r) October 21, 2025
Do NOT install any agentic browsers like OpenAI Atlas that just launched.
Prompt injection attacks (malicious hidden prompts on websites) can easily hijack your computer, all your files and even log into your brokerage or banking using your credentials.
Don’t be a guinea pig. https://t.co/JS76Hf6VAN
— Wasteland Capital (@ecommerceshares) October 21, 2025
Potential Security Risks In AI-Powered Browsers
Brave also published a blog post revealing new security vulnerabilities in AI browsers, following up on the earlier report of the Perplexity Comet vulnerability. The firm explained that indirect prompt injection was not an isolated problem, but a systemic danger to the broader category of agentic browsers.
The research highlighted two new attack vectors. In Perplexity Comet, malicious actors can embed nearly invisible instructions in website screenshots. When a user captures a screenshot and asks questions about it, the AI assistant may interpret these hidden prompts as commands, potentially using browser tools maliciously. Similarly, in the Fellou browser, just navigating to a webpage containing malicious visible instructions can cause the AI to process them alongside the user’s query, again enabling unintended actions.
These vulnerabilities, according to Brave, undermine traditional web security assumptions, including the same-origin policy, because AI agents possess the user’s authenticated credentials. Even innocuous actions, like summarising a Reddit post, could expose sensitive accounts, including banks and email services.
While the company continues to explore long-term solutions, it claimed that agentic browsing is inherently unsafe. Until strong protections are in place, browsers should isolate agentic behaviour and they would require plain user intervention for sensitive operations, Brave added.
OpenAI rolled out its Guardrails safety framework on Oct. 6, as part of its new AgentKit toolset for developers, aimed at offering an enhanced security framework for building secure AI agents. However, research firm HiddenLayer warned of an alarming flaw in the safety measures for Large Language Models (LLMs), according to Hackread.com, a new platform focused on hacking and cybersecurity.
What Is A Prompt Injection Attack?
According to IBM, prompt injection is a type of cyberattack on large language models (LLMs). Malicious inputs are disguised as valid prompts, manipulating generative AI systems into spilling sensitive information, spreading disinformation, or performing harmful actions.
Even basic prompt injections can have the potential to override system security controls in AI chatbots, such as ChatGPT.
The risks escalate if GenAI applications have access to confidential information or can take actions by way of API integrations. A document editor or mail-sending LLM-driven assistant can be tricked into opening confidential documents if prompted maliciously.
Prompt injection vulnerabilities remain a concern for AI security researchers, as no foolproof mitigation currently exists.
. Read more on Technology by NDTV Profit.