A viral AI craze with a playful name is not so playful for your privacy. A senior IPS officer, VC Sajjanar, has warned people about the so-called Google Gemini “Banana AI” trend that invites users to upload personal photos—often saree pictures and full-length portraits—to websites and apps promising quick AI makeovers. The concern is simple: once your image goes up to an unknown server, you lose control over where it goes next.
Police and cybersecurity teams say they have seen a familiar pattern. Scam operators set up slick websites or Telegram/Instagram pages that look like genuine AI tools. They ask you to upload a few clear photos, sometimes from multiple angles, then either deliver low-quality results or lock the output behind a paywall. Behind the scenes, those images can be stored, copied, and re-used without your consent.
Why are people getting caught out? The branding sounds credible. “Gemini” is Google’s AI. “Nano” is Google’s small on-device model. Mix in a catchy trend name like “Banana AI,” and a tool can look legit even if it isn’t. Here’s the key point: if a site or app claims on-device processing but demands you upload images to its server, it is not truly on-device. On-device AI runs locally and does not ship your photos off to the cloud.
Security analysts say two risks stand out. First, unauthorized re-use: your photos can be used to train other models, sold in bulk datasets, or fed into face-swap engines to create manipulated content. Second, identity theft: face images combined with names, phone numbers, or social profiles can help criminals open accounts, bypass weak face checks, or run social engineering scams against your contacts.
The warning also highlights a gendered risk. Many women are sharing traditional attire photos in good faith. Criminals target these because they are clear, high-resolution images that make it easier to build deepfakes or stalking profiles. Even if you never see the abuse, the harm can happen out of sight—on closed groups, burner accounts, or overseas servers.
It’s not the first time viral AI makeovers have run ahead of safety. Over the past few years, face filters, stylized portraits, and age-swap apps have triggered questions about data retention and third-party sharing. The difference now is the speed and scale: new tools can spin up overnight, harvest millions of images from a trend, and disappear before victims realize what happened.
Another red flag: the business model. Many of these sites are “free,” but push you to sign in with social accounts, enable notifications, or share referral links. That is not about your convenience. It helps the operator tie your face to an identity graph, grow reach fast, and turn your photos into inventory. If they later ask for a small fee to “unlock HD,” that payment trail can also link your real name to the images you uploaded.
What about Google Gemini itself? Google’s on-device model, Gemini Nano, is designed to run within secure environments on supported devices and apps. That is very different from a random third-party website claiming it uses “Gemini Nano magic.” Scammers often piggyback on big-brand names to appear trustworthy. If a tool is not from an official source or a known developer, assume the claims are marketing at best, and bait at worst.
Law enforcement’s advice aligns with what cybersecurity teams repeat after every viral filter wave: if a site is unknown, lacks a clear company name, hides its address, or has a vague privacy policy, do not upload personal photos—especially not multiple, high-quality shots. Do not share images of minors. Avoid anything that asks for video selfies or asks you to replicate gestures on camera. Those can be used to build lifelike deepfakes or fool weak liveness checks.
If you are already part of the trend, do not panic, but act fast. Revoke third-party access from your Google/Apple/Facebook account settings if you used social sign-in. Clear the app's cache, then delete the app. If the site offered an account dashboard, try to delete your data there and take screenshots as proof. Then monitor for suspicious logins, new account alerts, and strange messages to your contacts.
You can still enjoy creative AI without giving away your face. The goal is to cut down risk and avoid obvious traps. Use these steps as a quick checklist before you upload any photo to an AI service:

- Confirm the tool comes from an official source or a known developer, not a page that merely borrows a big-brand name.
- Read the privacy policy and look for a clear company name, contact address, and stated retention period.
- Prefer on-device filters; a tool that claims on-device processing but demands server uploads is not on-device.
- Skip social sign-in and notification prompts that tie your face to your real identity.
- Never upload multiple high-quality shots, photos of minors, video selfies, or gesture recordings.

Spot the scam patterns tied to this trend:

- Slick websites or Telegram/Instagram pages that imitate genuine AI tools.
- Requests for several clear photos, sometimes from multiple angles.
- Low-quality results, or output locked behind a paywall or an "unlock HD" fee.
- Pressure to sign in with social accounts, enable notifications, or share referral links.
- Lookalike pages that reappear under new names as soon as the old ones are flagged.
If your photos are misused, document everything. Take screenshots of profiles, URLs, handles, dates, and any messages. Do not confront the scammer from your main account. File a report with your local cyber police. In India, you can call the 1930 cybercrime helpline or file a complaint on the national cyber portal. Ask platforms to remove the content using their impersonation or privacy policies, and keep a record of your requests.
There are legal tools that can help. Under the Information Technology Act, identity theft and cheating by personation using computer resources fall under Sections 66C and 66D. Publication or transmission of obscene material can trigger Sections 67 and 67A. Voyeurism and stalking are criminal offenses under the IPC. If someone edits your image to create a sexualized deepfake, that is not a prank—it can be a crime, and you can seek urgent action.
India’s Digital Personal Data Protection Act, 2023, also puts consent at the center. Services are expected to be clear about what they collect, why they collect it, and how long they keep it. If an AI website cannot explain this in plain language, it is not respecting your rights. Keep in mind that many scam sites operate outside India and can vanish fast, which is why quick reporting and takedowns matter.
For women, the advice is practical and non-judgmental. If someone forwards your photo in a manipulated or harassing form, save the message, note the sender, and ask a trusted person to help preserve evidence. Do not let anyone shame you into silence. Police can act even if the original upload was done on a different platform. Support groups and legal aid clinics can help you navigate removal and complaints.
What should responsible platforms and developers be doing? First, clear retention and deletion policies, visible before upload. Second, an opt-out from model training for any user content. Third, strong watermarking and content authenticity tools so AI outputs can be traced. Fourth, independent audits and security disclosures. Finally, easy, human support for takedowns—especially when women and minors are targeted.
There are safer ways to enjoy AI imagery. Use trusted photo editors from recognized developers. Prefer on-device filters that never send your images to a server. If you need to try a cloud tool, crop the image, blur faces, or use a stock model. Treat every “viral makeover” as a public billboard: if you would not paste the photo on a street corner, do not upload it to a mystery site.
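If you want a concrete idea of what "blur faces before uploading" can look like, here is a minimal sketch that runs entirely on your own machine, assuming Python with the opencv-python package installed. The file names and blur strength are placeholders, and the bundled Haar cascade it uses is a basic detector, so treat this as an illustration rather than a guaranteed redaction tool.

```python
# blur_faces.py - a minimal local redaction sketch (assumes opencv-python is installed)
import sys

import cv2


def blur_faces(src_path: str, dst_path: str) -> int:
    """Detect faces with OpenCV's bundled Haar cascade and blur them before saving a copy."""
    image = cv2.imread(src_path)
    if image is None:
        raise FileNotFoundError(f"Could not read image: {src_path}")

    # Load the frontal-face cascade that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Replace each detected face region with a heavy Gaussian blur.
    for (x, y, w, h) in faces:
        roi = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 30)

    cv2.imwrite(dst_path, image)
    return len(faces)


if __name__ == "__main__":
    count = blur_faces(sys.argv[1], sys.argv[2])
    print(f"Blurred {count} face(s); review the output before uploading anything.")
```

A simple detector like this can miss faces at odd angles, so check the result yourself before sharing. The point is the principle: redaction can happen locally, without your original photo ever leaving your device.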
Remember, trends fade but data lingers. A single upload can live in backups, caches, and training sets long after a website is gone. You cannot control how a stranger labels, edits, or shares your face once it is in their system. That is exactly why the IPS warning is blunt: think before you upload, and steer clear of photo upload scams that ride on brand names and hype.
As this “Banana AI” wave crests, watch for lookalike pages trying to extend the run. New names pop up the moment old ones are flagged. The best defense is boring but effective: slow down, verify the operator, read the policy, and keep your most personal photos where they belong—on your device, not on a server you do not control.