In May 2024, the Federal Trade Commission (FTC) issued a sharp warning to marketers, creators, and tech companies: the use of AI-generated product reviews or fake endorsements is not only deceptive—it could lead to serious legal consequences.
As brands increasingly experiment with artificial intelligence in their marketing, many are overlooking key compliance risks. This blog breaks down what the FTC’s latest stance on AI-generated content means for influencers, companies, and the agencies that support them.
The FTC’s warning emphasized that AI is not a shield against liability. Whether a review is written by a human, a chatbot, or a language model, fake product reviews and undisclosed endorsements violate federal law.
Specifically, the FTC cautioned that endorsements must reflect the honest opinions of real users, that material connections between advertisers and endorsers must be clearly disclosed, and that fabricated reviews are deceptive under the FTC Act no matter what tool produced them.
For more, see the FTC’s official press release on AI-generated reviews.
If you’re an influencer, brand manager, agency, or content strategist using AI to create marketing materials—this applies to you.
Scenarios that may trigger FTC scrutiny include posting AI-written reviews as though they came from real customers, generating testimonials for products no one has actually used, and publishing sponsored content without a clear disclosure. Even if the content “feels authentic,” it must meet the same standards as human-authored advertising under the FTC influencer guidelines.
A social media risk assessment can help identify potential legal gaps in your current AI-based marketing strategy.
To stay ahead of regulatory action, brands and influencers should verify that published reviews come from real customers, clearly disclose material connections and sponsorships, review AI-generated marketing copy for accuracy before it goes live, and update contracts to address how AI may be used in content creation.
Working with an influencer lawyer helps ensure your content and contracts meet both current rules and emerging standards.
AI can be a powerful tool—but it’s not exempt from truth-in-advertising laws. With the FTC now actively monitoring how brands and influencers use AI, it’s more important than ever to stay informed, transparent, and legally prepared.
Need help navigating influencer compliance or updating your content strategy? Contact The Social Media Law Firm today to protect your business and brand.
For more legal tips, give us a follow on Instagram, TikTok, or LinkedIn, or check out our YouTube channel.
Subscribe to The Social Media Lawcast on Spotify Podcasts.