When AI Gets Too Personal: Lessons from the Grammarly "Expert Review" Controversy
The AI boom has been extraordinary. Every week brings announcements of new models, new capabilities, and new ways to automate human work. But buried beneath the hype is a fundamental question that too few companies are asking: Just because we can build something with AI, should we?
Last year, Grammarly (now operating under parent company Superhuman) shipped a feature called Expert Review. It sounded innocuous enough—AI-powered writing suggestions from "expert" voices. But there was a catch: those expert personas were modeled after real journalists and writers, without their consent or knowledge.
The backlash was swift. Class action lawsuits were filed. Reporters discovered their names and likenesses had been repurposed to power an AI feature without so much as a courtesy email. The episode forced an uncomfortable conversation about what "AI-native products" actually mean when they're built on extracted human identity.
The Impersonation Problem
Here's what made Expert Review particularly troubling: it wasn't a data privacy issue in the traditional sense. Grammarly wasn't stealing passwords or mining financial data. Instead, the company did something arguably more invasive: it created digital versions of real people and put them to work in a product without asking.
Think about that from a creator's perspective. Your voice, your writing style, your reputation—cultivated over years of work—suddenly becomes training data for an AI system that bears your name. You don't benefit financially. You didn't consent. And most strikingly, users might trust that "expert opinion" precisely because they think it's actually coming from you.
This raises uncomfortable questions for everyone building with AI:
- Whose data are we actually using? Are we sourcing from public content, or are we scraping personal creative work?
- Are we being transparent? Do users understand they're interacting with an AI trained on real people, or do they assume it's the actual person?
- Who profits? If we build a product around someone's identity or work, shouldn't they have a say?
What This Means for Your AI Product
If you're building with AI—whether it's AI-assisted development, machine learning pipelines, or AI-powered hosting solutions—the Expert Review controversy is a cautionary tale.
The most successful AI products aren't the ones that extract value most aggressively. They're the ones that create genuine value with transparency and consent.
Consider how this applies to different scenarios:
For SaaS products: If your feature relies on understanding user behavior or preferences, you need explicit opt-in and clear communication about how that data shapes the experience (a sketch of this pattern follows these scenarios).
For developer tools: If you're using code samples or documentation to train AI models, original authors deserve credit and control.
For cloud hosting and infrastructure: When AI systems make decisions about resource allocation or security, users need visibility into how that works.
For domain and DNS services: If you're building AI-assisted domain recommendations or DNS optimization, transparency about what data informs those recommendations builds trust.
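To make the opt-in and transparency points concrete, here is a minimal TypeScript sketch of a consent-gated AI feature. The `ConsentRecord`, `ConsentStore`, and `Suggestion` shapes are all hypothetical, invented for illustration rather than drawn from any real product; the idea they demonstrate is that the personalized model is unreachable without an explicit, unrevoked opt-in, and that every output carries a plain-language note about what data informed it.

```typescript
// Hypothetical consent record: nothing is granted by default.
interface ConsentRecord {
  userId: string;
  purpose: "ai_personalization"; // one record per specific purpose
  grantedAt: Date | null;        // null means the user never opted in
  revokedAt: Date | null;        // a later revocation always wins
}

interface ConsentStore {
  get(userId: string, purpose: ConsentRecord["purpose"]): Promise<ConsentRecord | null>;
}

interface Suggestion {
  text: string;
  provenance: string; // transparency by default: what shaped this output
}

// Fallback path that uses no personal data at all.
function genericSuggestions(): Suggestion[] {
  return [
    {
      text: "Try a shorter subject line.",
      provenance: "Static best-practice rule; no personal data used.",
    },
  ];
}

async function getSuggestions(
  userId: string,
  store: ConsentStore,
  personalized: (userId: string) => Promise<string[]>,
): Promise<Suggestion[]> {
  const consent = await store.get(userId, "ai_personalization");
  const optedIn =
    consent !== null && consent.grantedAt !== null && consent.revokedAt === null;

  if (!optedIn) {
    // No explicit opt-in on record: the personalized model is never called.
    return genericSuggestions();
  }

  const texts = await personalized(userId);
  return texts.map((text) => ({
    text,
    provenance:
      "Generated from your recent writing activity. You can opt out at any time in Settings.",
  }));
}
```

The design choice worth noting is structural: the consent check sits in the control flow, so a missing or revoked record degrades gracefully to the generic experience instead of silently falling back to personal data.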
The Path Forward
Grammarly's leadership eventually killed the feature and apologized. But the real lesson isn't about that specific product; it's about the decision-making process that led to it in the first place.
Companies need frameworks for asking the harder questions earlier:
Consent first. Before using someone's name, voice, likeness, or creative work, ask. Actually ask.
Transparency by default. Be explicit about when users are interacting with AI versus a human. Don't let ambiguity be a feature.
User control. Give people granular control over their data. Make opt-out frictionless. Better yet, make opt-in the default.
Economic fairness. If your AI product is built on someone's work, they should benefit proportionally.
Audit your supply chain. Know where your training data comes from. If you can't justify the source, don't use it. A sketch of what such an audit gate might look like follows below.
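Here is a small, hypothetical sketch of that audit gate over training records. The `SourceLicense` values, `TrainingRecord` fields, and the specific rejection policy are invented for illustration, not a prescription; the point is simply that a record with no documented source, no verifiable license, or no record of creator consent never reaches the training set.

```typescript
// Hypothetical provenance metadata carried by every training record.
type SourceLicense = "owned" | "licensed" | "public_domain" | "unknown";

interface TrainingRecord {
  id: string;
  content: string;
  sourceUrl: string | null; // where the content actually came from
  license: SourceLicense;
  creatorConsent: boolean;  // explicit permission from the original author
}

interface AuditResult {
  accepted: TrainingRecord[];
  rejected: { id: string; reason: string }[];
}

// Hard gate: records that cannot be justified never reach the training set.
function auditTrainingSet(records: TrainingRecord[]): AuditResult {
  const accepted: TrainingRecord[] = [];
  const rejected: { id: string; reason: string }[] = [];

  for (const record of records) {
    if (record.sourceUrl === null) {
      rejected.push({ id: record.id, reason: "no documented source" });
    } else if (record.license === "unknown") {
      rejected.push({ id: record.id, reason: "license could not be verified" });
    } else if (!record.creatorConsent && record.license !== "public_domain") {
      rejected.push({ id: record.id, reason: "no record of creator consent" });
    } else {
      accepted.push(record);
    }
  }

  return { accepted, rejected };
}
```

Run as a blocking step in the ingestion pipeline, with the rejection reasons logged, this turns "if you can't justify the source, don't use it" from a policy document into default behavior.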
AI Doesn't Excuse You from Responsibility
The AI moment is genuinely transformative. Products powered by language models, computer vision, and custom ML pipelines are solving real problems and creating genuine value.
But AI is not a free pass to extract value from people without consent. If anything, AI's power makes the ethical questions more pressing, not less.
The most durable AI products—the ones that will actually stand the test of time—won't be the ones that capture the most data or train on the most people. They'll be the ones built on a foundation of transparency, consent, and genuine partnership with the humans whose work and data they're built on.
When you're designing your next AI feature, whether it's for a domain management interface, a cloud hosting platform, or anything else, that's worth keeping in mind. Your users' trust is the real competitive advantage.
The future of AI isn't about how much you can extract. It's about how much value you can create together.