WhatsApp has introduced a new artificial intelligence chatbot called Meta AI, now built into the app’s chat screen. Marked by a blue circle logo in the bottom-right corner, the AI feature is meant to assist users with information, questions, and creative ideas. But many users are not happy. They say the tool is intrusive and cannot be removed. Though Meta claims the AI is “completely optional,” critics argue that its fixed presence tells a different story.
Meta AI Is “Optional,” But You Can’t Remove It
Meta AI now appears in WhatsApp as a round blue icon on the main chat screen. Tapping it opens a chatbot powered by Meta’s Llama 4 language model. The bot can answer questions, return web search results, and even generate images or suggest ideas. While Meta says using the bot is a choice, the icon itself is fixed: users cannot disable or hide it.
A WhatsApp spokesperson compared the feature to other fixed parts of the app, such as Channels and Status. The company insists that it welcomes feedback and believes in user control. However, the permanent blue circle suggests otherwise, and has reminded some users of Microsoft’s “Recall” tool, which was rolled back after similar complaints.
Rollout Still Limited to Select Users
Not everyone can see the Meta AI icon yet. Meta says the feature is still being tested and is only available in some countries. Even in those countries, not all users will get access right away.
Beyond WhatsApp, Meta AI is also being added to Instagram, Facebook, and Messenger. A search bar at the top of the screen now lets users either ask Meta AI or search the web. The move is part of Meta’s larger plan to place its new AI system across all of its apps.
Before chatting with Meta AI, users are shown a lengthy disclaimer. It says the tool is optional and meant to help with learning, creativity, and daily tasks. But flaws have shown up even in early tests: a request for the weather in Glasgow, for example, returned a result for Charing Cross Station, which is in London, not Scotland.
Social Media Users Slam WhatsApp’s Decision
Online reaction has been strong and mostly negative. Across Reddit, X (formerly Twitter), and Bluesky, users have shared screenshots and posts expressing frustration. Many say they don’t want AI in their chat app and are angry they can’t disable it.
British journalist Polly Hudson called the change “forced” and “invasive,” saying users should have more control over their app interface.
Privacy experts are also speaking out. Dr. Kris Shrishak, a well-known advisor on AI and data rights, said Meta is using people as test subjects. He warned that the chatbot could gather sensitive data, even if users think they are just chatting with an assistant.
Privacy Concerns and Legal Trouble
Dr. Shrishak also accused Meta of using questionable data to train its AI. Reports from The Atlantic suggest Meta may have scraped websites like Library Genesis, a source of pirated academic texts. Meta has not answered these claims.
Author groups across the world have pushed back. Many writers are suing Meta for using their work without permission. They argue that AI companies should not profit from stolen content. This legal battle is ongoing and could shape how future AI tools are built.
The UK’s Information Commissioner’s Office (ICO) says it is watching closely. A spokesperson said it’s vital that tools like Meta AI handle personal data in a fair and legal way, especially when used by teens and children.
How Safe Are Your WhatsApp Chats?
Meta claims your private chats remain protected by end-to-end encryption, which means even Meta cannot read them. But if you choose to interact with the AI assistant, the rules change: that conversation is no longer fully private, because you are now talking to Meta itself.
The company advises users not to share private details or anything they wouldn’t want used in future AI training. In short: if you don’t want it saved, don’t send it to Meta AI.
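For readers who want the distinction made concrete, the following is a minimal, hypothetical sketch in Python using the PyNaCl library. It is not WhatsApp’s actual protocol (WhatsApp’s encryption is based on the Signal protocol); it only illustrates the general principle that an end-to-end encrypted message can be read solely by whoever holds the recipient’s key, which is why a message addressed to the assistant is readable by its operator.

```python
# Conceptual sketch only: why end-to-end encryption protects chats between two
# users but not messages addressed to the provider's own assistant.
# Uses the PyNaCl library as a stand-in; this is NOT WhatsApp's real protocol.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; private keys never leave their own device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()
assistant_key = PrivateKey.generate()  # hypothetical key held by the assistant's operator

# Alice -> Bob: encrypted to Bob's public key, so only Bob can decrypt it.
# A server relaying this ciphertext cannot read it.
to_bob = Box(alice_key, bob_key.public_key).encrypt(b"private message to a friend")
print(Box(bob_key, alice_key.public_key).decrypt(to_bob))

# Alice -> assistant: the assistant is the recipient, so its operator necessarily
# holds the key that decrypts the message. Encryption in transit does not make
# the conversation private from the operator itself.
to_assistant = Box(alice_key, assistant_key.public_key).encrypt(b"question for the chatbot")
print(Box(assistant_key, alice_key.public_key).decrypt(to_assistant))
```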
WhatsApp Users Urged to Be Cautious
Privacy experts say the presence of Meta AI changes how people should use WhatsApp. Even if the feature is “optional,” the blue circle icon makes it hard to ignore. It also raises questions about consent and transparency.
Meta says it is building AI tools to help people. But many argue that useful features should still come with clear choices, especially when privacy is at stake.
Meta Faces Growing Scrutiny Over AI Push
Meta’s latest AI rollout may be one of its most ambitious yet, but it’s also one of the most controversial. From privacy concerns to legal battles, the company faces growing pressure to explain how it collects and uses data. And while the chatbot may help some users, the lack of opt-out options and mounting backlash show that trust is fragile.
Users, experts, and regulators are now asking the same question: Should helpful tools come at the cost of control?