Although the enactment of federal legislation regulating the use of AI is not likely on the near horizon, a number of federal agencies, including the Federal Trade Commission (FTC), have reminded the entities they regulate that the use of AI must remain compliant with current laws and regulations. In the case of the FTC, this has meant ensuring that companies under its jurisdiction do not use AI to commit unfair or deceptive acts under Section 5 of the FTC Act.
In furtherance of this policy goal, on August 16, 2023, a staff attorney in the FTC’s Division of Advertising Practices published a business blog post on the FTC’s website titled “Can’t Lose What You Never Had: Claims About Digital Ownership and Creation in the Age of Generative AI.” In this publication, the FTC notes that improvements in the usability and quality of generative AI allow digital works created by AI tools to increasingly be passed off as the work of human authors. The FTC identified four key points that companies must keep in mind when working with digital products, in some cases regardless of whether those products involve artificial intelligence, to avoid misleading consumers or other businesses:
- When providing a generative AI product, companies may need to inform customers whether and to what extent the AI training data includes copyrighted or otherwise protected material.
- Companies should not try to “fool people” into thinking that the works generated by AI are human-made.
- Companies must ensure that customers understand the material terms and conditions associated with digital products, since these often differ from the terms associated with non-digital goods (e.g., digital goods are often licensed rather than sold). The FTC has also noted that unilaterally changing terms or undermining reasonable ownership expectations can be problematic.
- Companies that provide a platform for creators to develop and showcase their work should be clear about creators’ right to access and remove their work, as well as how that work is used and displayed.
Of these warnings, the first – regarding the disclosure of AI training data – is perhaps the most interesting. Of course, the FTC is aware that the use of copyrighted and “otherwise protected” material in training data without permission has been controversial: the practice has sparked a series of lawsuits to determine whether such activity constitutes fair use under U.S. copyright law or infringes privacy rights, and groups such as the Authors Guild have published strong statements criticizing the practice. It is worth noting that, unlike the other warnings, the FTC uses the term “may need” in connection with the issue of training data disclosure, giving companies some leeway in how they apply this guidance. However, the FTC warns that failing to disclose the use of copyrighted or otherwise protected materials to train an AI tool could raise issues of consumer deception or unfairness, especially when the AI’s output reflects the use of such materials. Although the FTC does not explain what it means by “otherwise protected material,” this may be a reference to personal data, a topic on which the FTC has focused in the field of artificial intelligence. The FTC notes that transparency about training sources can be relevant to individuals when they decide which AI tool to use, and it can be relevant to companies because they may face liability if the outputs they produce through the use of an AI tool infringe others’ protected rights. Companies offering AI models and tools, many of which have not disclosed the sources of their training data, will need to consider the FTC’s statement on this point.
The Federal Trade Commission’s ongoing activity in the field of artificial intelligence
The August FTC publication is the latest in a series of statements the FTC has issued, and actions it has taken against companies, related to the use of artificial intelligence. For example, a March 2023 FTC blog post, “Chatbots, Deepfakes, and Voice Clones: AI Deception for Sale,” warned companies against offering AI tools that can be used to deceive. In that publication, the FTC suggested that companies offering AI products consider, at the design stage, ways in which the product could be misused to defraud or cause harm, and consider whether they should therefore not offer the product at all. The FTC has also deployed an “algorithmic disgorgement” remedy in some of the agency’s actions, which requires companies found to have unlawfully collected personal data to delete that data as well as any algorithms that were trained on it.
Looking ahead
Companies working in the field of artificial intelligence should monitor the FTC’s evolving guidance on the use of AI and ensure that their practices align with that guidance.