Google has issued an official statement addressing mounting privacy concerns surrounding its newly launched Nano Banana image generation model, following viral reports from Indian users experiencing unsettling incidents with the AI tool. The response comes after widespread speculation about how the AI system could generate personal details not visible in original uploaded photographs.
"The Nano Banana image model that was recently launched was not trained with user data from Google Photos, Google Workspace, or Google Cloud services. Any non-visible attribute that shows up in a generated image that wasn't visible in an input image in the same Gemini conversation is a coincidence," a Google spokesperson told Free Press Journal in response to the growing concerns.
The Viral Incident That Sparked Privacy Fears
The privacy debate erupted after Instagram user Jhalakbhawani shared her disturbing experience with Google's Gemini Nano Banana tool while participating in the viral "AI Saree" trend. She uploaded a photo of herself in a green full-sleeve suit and used a prompt to generate a saree edit. The resulting image was striking, but she noticed an alarming detail. "There is a mole on my left hand in the generated image, which I actually have in real life. The original image I uploaded did not have a mole," she wrote in her post.
She questioned how the AI tool could know about a personal detail not visible in the uploaded photo, calling the experience "scary and creepy" and urging followers to "stay safe" when using AI platforms. Her post quickly gained traction across social media platforms, with thousands of users expressing similar concerns about AI privacy and data usage.
Growing Concerns Across Social Media
The incident has sparked broader discussions about privacy risks, deepfake misuse, and scams associated with AI-powered image generation tools. IPS officer VC Sajjanar has issued a public warning asking people to be extra careful while participating in the trend. He wrote on X, "Be cautious with internet trends! Falling for the 'Nano Banana' craze and sharing personal information online can lead to scams. With just one click, you might be sharing more than you intended."
The Nano Banana model, part of Google's Gemini 2.5 Flash Image system, has become a viral sensation across Instagram and other social media platforms. The trend involves users uploading a photo to Google's Gemini Nano Banana tool, paired with a prompt to create retro-inspired edits, such as polka-dot or black party-wear sarees with dramatic shadows and grainy textures reminiscent of classic Indian cinema.
Google's Technical Safeguards and Transparency Measures
In response to the privacy concerns, Google has highlighted its existing safety measures for AI-generated content. The company says every image generated or edited with Gemini 2.5 Flash Image carries SynthID, an invisible digital watermark, along with metadata tags, so the output can be identified as AI-generated or edited.
However, experts warn these measures have limitations. According to aistudio.google.com, SynthID helps verify an image's AI origin, but the detection tool is not yet publicly available, limiting its effectiveness for everyday users.
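To make the distinction concrete: metadata tags live in the image file itself and can be read (or stripped) by ordinary tools, whereas SynthID is embedded in the pixels and needs Google's own detector. The sketch below is a minimal, stdlib-only illustration of what a metadata-based check involves, writing and reading a PNG `tEXt` chunk. The tag name and value here borrow the IPTC "Digital Source Type" convention purely for illustration; Google has not published the exact keys Gemini writes, so this is not its actual scheme.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Encode one PNG chunk: length, type, data, CRC-32 over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png(text_pairs):
    """Build a minimal 1x1 grayscale PNG carrying tEXt metadata chunks."""
    # IHDR: width=1, height=1, bit depth 8, grayscale, default methods.
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    idat = zlib.compress(b"\x00\x00")  # one scanline: filter byte + pixel
    out = PNG_SIG + _chunk(b"IHDR", ihdr)
    for key, value in text_pairs:
        out += _chunk(b"tEXt",
                      key.encode("latin-1") + b"\x00" + value.encode("latin-1"))
    return out + _chunk(b"IDAT", idat) + _chunk(b"IEND", b"")

def read_text_chunks(png: bytes) -> dict:
    """Walk the chunk stream and collect tEXt key/value pairs."""
    assert png.startswith(PNG_SIG)
    pos, found = len(PNG_SIG), {}
    while pos < len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        if ctype == b"tEXt":
            key, _, value = png[pos + 8:pos + 8 + length].partition(b"\x00")
            found[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return found

# Illustrative label only -- not necessarily what Gemini actually writes.
png = make_png([("DigitalSourceType", "trainedAlgorithmicMedia")])
print(read_text_chunks(png))  # {'DigitalSourceType': 'trainedAlgorithmicMedia'}
```

The limitation the experts describe falls out directly: any re-encode that drops the `tEXt` chunk erases the label, while the SynthID watermark in the pixels survives but cannot be read by code like this at all.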
Google's statement attempts to address the fundamental question of whether AI models can access personal data beyond what users explicitly share. By clarifying that the Nano Banana model was not trained on personal data from Google's ecosystem of services, the company aims to reassure users that any seemingly impossible detail in a generated image is exactly that: a coincidence rather than evidence of a privacy breach.