Scammers are exploiting the hype around Google’s AI products to push a fake cryptocurrency called “Gemini coin” — and they’re using Google’s own AI chatbot to do it. A TechRepublic report details how bad actors have been promoting fraudulent tokens that falsely claim association with Google’s Gemini AI platform, tricking users into believing Google has entered the crypto market.
It hasn’t. Google has no cryptocurrency. Full stop.
The scam works by capitalizing on name confusion between Google’s Gemini AI chatbot and the well-known Gemini cryptocurrency exchange founded by the Winklevoss twins. Fraudsters created tokens branded as “Gemini” or “Google Gemini coin” and promoted them across social media, targeting people who might conflate the AI product with a legitimate crypto offering. The scheme is alarmingly simple, and that’s precisely what makes it effective. When users searched for information about these tokens, some turned to Google’s own Gemini chatbot for answers — and the AI’s responses weren’t always helpful in dispelling the fraud.
This is the core problem. According to TechRepublic, Gemini’s chatbot in some instances failed to clearly flag these tokens as scams, instead generating responses that could be interpreted as lending credibility to the fake coins. AI chatbots pull from vast datasets and don’t inherently distinguish between legitimate financial products and fraudulent ones unless specifically trained or guardrailed to do so. So when a user asks Gemini about “Gemini coin,” the response can inadvertently validate something that doesn’t deserve validation.
Google has acknowledged the issue. The company told TechRepublic that Gemini includes disclaimers advising users not to rely on it for financial advice, and that it’s continuously working to improve the chatbot’s ability to handle queries related to scams and misinformation. But disclaimers buried in fine print don’t carry much weight when a confident-sounding AI response sits front and center.
The broader pattern here is troubling. Brand impersonation scams aren’t new, but AI tools have created fresh attack surfaces. Bad actors don’t need to hack anything — they just need to exploit the trust users place in familiar brand names and the authoritative tone AI chatbots naturally adopt. A user who sees “Gemini” associated with Google and then asks Google’s own AI about a “Gemini coin” is operating within what feels like a closed, trustworthy information loop. It isn’t one.
And this isn’t an isolated incident. Crypto scammers have been spoofing major tech brands for years, but the intersection with generative AI adds a dangerous new dimension. Both the FTC and the FBI have repeatedly warned about cryptocurrency fraud; the FBI’s Internet Crime Complaint Center reported that consumers lost over $5.6 billion to crypto scams in 2023 alone. AI-adjacent scams are a growing subset of that figure.
Security researchers have flagged similar schemes involving fake tokens named after other AI products and companies. The playbook is consistent: create a token, associate it with a trending AI brand, promote it on X (formerly Twitter) and Telegram, then disappear once enough money flows in. Classic rug pull, dressed up in AI branding.
The responsibility question gets complicated. Google didn’t create these scam tokens, and it can’t fully control what third parties do with its brand name in the decentralized crypto world. But it does control what Gemini says. And when your own AI product becomes a vector — however unintentionally — for reinforcing a scam that uses your brand name, that’s a problem you own. At minimum, Gemini should be able to definitively state that Google has no affiliated cryptocurrency whenever the topic comes up. That’s a solvable engineering problem.
For industry professionals, the takeaway is practical. If you’re building or deploying AI chatbots, the Gemini coin situation is a case study in how generative AI can become complicit in fraud through omission rather than commission. Chatbots don’t need to actively promote scams to cause harm — they just need to fail at flagging them. Implementing hardcoded responses for known fraud patterns, especially those involving your own brand, should be table stakes.
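As a rough illustration of what "table stakes" could look like, here is a minimal sketch of a pre-model guardrail that intercepts queries matching a known brand-impersonation pattern and returns a hardcoded denial instead of a generated answer. The pattern list, canned response, and `call_llm` stub are all illustrative assumptions, not any vendor's actual implementation.

```python
import re

# Assumed fraud patterns: queries pairing our brand name with crypto
# keywords. A real deployment would maintain and update this list.
BRAND_SCAM_PATTERNS = [
    re.compile(r"\bgemini\b.*\b(coin|token|crypto(currency)?|ico|airdrop)\b", re.IGNORECASE),
    re.compile(r"\b(coin|token|crypto(currency)?)\b.*\bgemini\b", re.IGNORECASE),
]

# Hardcoded response for matched queries -- never left to generation.
CANNED_RESPONSE = (
    "Google has no affiliated cryptocurrency. Any token branded "
    "'Gemini coin' or 'Google Gemini coin' is not a Google product "
    "and is very likely a scam."
)

def guardrail_check(user_query: str):
    """Return the hardcoded warning if the query matches a known
    brand-impersonation fraud pattern, else None (pass to the model)."""
    for pattern in BRAND_SCAM_PATTERNS:
        if pattern.search(user_query):
            return CANNED_RESPONSE
    return None

def call_llm(user_query: str) -> str:
    # Placeholder for the real model call.
    return f"(generated response to: {user_query})"

def answer(user_query: str) -> str:
    # The guardrail runs before any LLM call, so known fraud patterns
    # never receive a generated (and potentially credulous) answer.
    canned = guardrail_check(user_query)
    return canned if canned is not None else call_llm(user_query)
```

The design choice being argued for is the ordering: deterministic fraud checks run before generation, so the model never gets a chance to lend the scam credibility.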
On the user side, the old rules apply with renewed urgency: no legitimate major tech company is launching a cryptocurrency through social media promotions. If you see a token named after a well-known AI product, assume it’s fraudulent until proven otherwise. And don’t use AI chatbots as your primary source for investment decisions. They’re not built for that.
Google will likely tighten Gemini’s responses around this specific scam now that it’s gotten press attention. But the structural vulnerability remains. As AI brands proliferate and public awareness of them grows, scammers will keep mining that recognition for profit. The question isn’t whether the next AI-branded crypto scam will appear. It’s which company’s name will be on it.
Google’s ‘Gemini Coin’ Scam Shows How AI Brand Trust Is Being Weaponized Against Users first appeared on Web and IT News.