FTC Warns Against Deceptive Practices in Digital Marketing Using Generative AI

Pete Davis

Artificial intelligence (AI) has the potential to revolutionize marketing by enabling businesses to create personalized campaigns and improve customer targeting. However, as with any new technology, there is potential for unethical or illegal use of AI in marketing, and companies need to be aware of these risks to avoid landing in the Federal Trade Commission's (FTC) crosshairs.

One of the key FTC concerns regarding AI in digital marketing is the potential for companies to use AI in ways that deliberately or inadvertently steer consumers unfairly or deceptively into making harmful decisions related to finances, health, education, housing, and employment.

Deceptive Practices

Manipulation can be considered an unfair or deceptive practice if it causes people to take actions contrary to their intended goals. Companies need to ensure that their practices are not unlawful under the FTC Act, even if not all customers are harmed, and even if those harmed do not comprise a class of people protected by anti-discrimination laws.

Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases, and attempts to cancel services. - Michael Atleson, Attorney, FTC Division of Advertising Practices

Ad Placements

Another potential concern is the placement of ads within generative AI features. The FTC has provided guidance on presenting online ads, including in search results and elsewhere, to avoid deception or unfairness. Companies should ensure that ads are clearly marked as such, and that any generative AI output clearly distinguishes between organic content and paid content.

Transparency is also crucial when it comes to AI-generated content.

People need to know if they are communicating with a real person or a machine, and if an AI product’s response is steering them to a particular website, service provider, or product because of a commercial relationship.

Ethics and Responsible AI

Given the potential risks associated with AI in marketing, companies building or deploying these tools should not remove or fire personnel devoted to AI ethics and responsibility. Instead, they should ensure that risk assessments and mitigations account for foreseeable downstream uses, the need to train staff and contractors, and the monitoring and addressing of the actual use and impact of any tools eventually deployed.

If we haven’t made it obvious yet, FTC staff is focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers. And for people interacting with a chatbot or other AI-generated content, mind Prince’s warning from 1999: “It’s cool to use the computer. Don’t let the computer use you.” - Michael Atleson, Attorney, FTC Division of Advertising Practices
