March 27, 2026

Grammarly and Superhuman, two productivity tools trusted by millions of professionals, have been quietly using real people’s names in AI-generated content suggestions — without those people’s explicit consent. A detailed investigation by The Verge reveals that both companies have built features that reference actual experts and public figures by name, raising uncomfortable questions about consent, reputation, and the growing reach of AI-powered writing assistants.

The core issue is straightforward. Grammarly’s AI writing suggestions and Superhuman’s email assistant have been surfacing real names — sometimes attaching them to generated text, recommendations, or attributed expertise — without first asking the individuals involved. No opt-in. No notification. And until recently, no clear opt-out mechanism either.


That’s a problem.

Grammarly, which boasts over 30 million daily active users according to its own disclosures, has positioned its AI features as tools for professional communication. Superhuman, the premium email client favored by executives and venture capitalists, markets itself on speed and intelligence. Both have leaned hard into AI integration over the past two years, competing in a crowded field of productivity tools racing to embed generative capabilities everywhere. But this race has apparently outpaced some basic considerations around how real people’s identities get used in the process.

According to The Verge’s reporting, individuals whose names appeared in these AI features were not contacted beforehand. Some learned their names were being used only when someone else pointed it out, or when they stumbled across it themselves. The lack of transparency is striking for companies that sell trust as a core product value — Grammarly literally markets itself as a tool to make your writing more credible.

So what exactly is happening under the hood? Grammarly’s AI features can suggest expert-backed writing tips or reference known authorities in a given field. Superhuman’s AI assistant, meanwhile, can draft replies and summaries that may include attributed information. In both cases, real names get pulled into AI-generated outputs. The companies appear to source these names from publicly available information, but public availability doesn’t equal consent to commercial use in an AI product.

This distinction matters enormously.

The legal terrain here is murky but evolving fast. Right of publicity laws in the United States vary by state, but they generally protect individuals from unauthorized commercial use of their name or likeness. California’s statute is among the strongest. Whether embedding someone’s name in an AI writing suggestion constitutes a “commercial use” under these laws hasn’t been definitively tested in court — but the argument isn’t hard to make. These are paid products. The names add perceived authority. That’s commercial value being extracted from someone’s reputation without their knowledge.

And the reputational risk cuts both ways. If an AI tool attributes a recommendation or piece of advice to a named expert, and that advice turns out to be wrong or taken out of context, the expert’s credibility takes the hit. They had no say in the matter. They may not even know it happened. For professionals whose careers depend on their public reputation — academics, analysts, consultants, journalists — this is more than an abstract concern.

Both companies have responded to the scrutiny, though with varying degrees of specificity. Grammarly told The Verge it is working on ways for individuals to opt out. Superhuman has similarly acknowledged the issue. But opt-out frameworks place the burden on the people whose names are being used, not on the companies profiting from that use. It’s the familiar pattern: deploy first, ask forgiveness later, then build a settings page.


The broader context makes this even more significant. AI companies across the board are grappling with questions about training data, attribution, and consent. OpenAI faces multiple lawsuits over copyrighted material used to train its models. Stability AI has been sued by Getty Images. The music industry is fighting AI-generated tracks that mimic real artists. What Grammarly and Superhuman are doing isn’t identical to these cases, but it sits on the same spectrum — the unauthorized use of someone’s identity or work to power a commercial AI product.

For enterprise buyers, this should trigger immediate scrutiny. Large organizations that deploy Grammarly across their workforce — and there are many, given the company’s enterprise tier serves teams at companies like Cisco and Databricks — need to understand what their AI writing tools are doing with third-party identities. Compliance and legal teams should be asking pointed questions. If your company’s AI-assisted communications are attributing information to named individuals without verification or consent, that’s a liability waiting to materialize.

The timing is also notable. Grammarly has been aggressively expanding its AI capabilities, launching features powered by large language models throughout 2024 and into 2025. The company raised at a $13 billion valuation in 2021 and has been pushing to justify that number through AI-driven growth. Superhuman, backed by Andreessen Horowitz and others, has similarly doubled down on AI as its primary differentiator in the email client market. The competitive pressure to ship AI features fast is immense. But fast doesn’t excuse careless.

Privacy advocates have been quick to flag the implications. The use of real names in AI outputs without consent sits at the intersection of data privacy, intellectual property, and AI ethics — three areas where regulation is tightening globally. The EU’s AI Act, which entered into force in 2024 and is being enforced in phases, imposes transparency requirements on AI systems. While these specific features may not fall under the Act’s highest-risk categories, the principle of transparency and consent runs through the entire regulatory framework.

There’s a simpler way to think about it, too. If you found out a product you’d never signed up for was using your name to sell its features to paying customers, you’d want to know. You’d probably want it to stop. That basic expectation doesn’t change just because the product is powered by AI.

Both Grammarly and Superhuman now face a choice that many AI-powered companies will eventually confront: build consent mechanisms proactively, or wait for lawsuits and regulation to force the issue. The smart money is on getting ahead of it. But the track record of the tech industry on proactive consent is, to put it generously, not inspiring.

For now, professionals whose names might appear in these tools have limited recourse. Check both platforms for opt-out options. Document any instances where your name appears without authorization. And if you’re an enterprise customer, put this on your vendor review agenda immediately. The AI productivity race is moving fast. The rules around identity and consent need to keep up.
