FTC urges AI companies to monitor false claims

The Federal Trade Commission on Monday issued guidance urging companies developing artificial intelligence products not to use false or misleading claims in their marketing.

The guidance, published on the FTC’s website, comes amid a spate of products using the term AI. From art-generating bots like DALL-E 2 and DALL-E Mini to writing and search tools like ChatGPT, AI products are popping up everywhere, and other companies are looking for a piece of the pie.

Just today, Snapchat announced an AI chatbot of its own to be integrated into the app. Last week, the popular note-taking and productivity app Notion introduced AI features to help users write and edit notes.

But not all AI has been useful. Earlier this month, Microsoft launched its ChatGPT competitor in Bing, which has already had some embarrassing exchanges with users, including claiming to have evidence tying a reporter to a murder.

Meanwhile, far-right social media site Gab also promised to build its own AI.

The FTC guidelines urge companies to rein in their often lofty claims about what their AI can do.

“We are not yet living in the realm of science fiction, where computers can generally make reliable predictions about human behavior,” the guidelines say. “Performance claims would be misleading if they lack scientific support or if they apply only to certain types of users or under certain conditions.”

The commission also urged developers to be aware of the risks they take on when designing and releasing AI products, including accepting responsibility when things go wrong.

“When something goes wrong — maybe it fails or produces biased results — you can’t just blame a third-party developer of the technology,” the FTC said. “And you can’t say you’re not responsible because this technology is a ‘black box’ that you can’t understand or don’t know how to test.”

Other guidelines include being able to demonstrate that an AI product performs better than a comparable non-AI product and ensuring that products marketed as AI actually use artificial intelligence.

“Also, before labeling your product as AI-powered, note that simply using an AI tool in the development process is not the same as having a product with AI,” the FTC said. Some chatbots have come under scrutiny for saying they use AI when in reality they are just rules-based systems.

The FTC added that developers “don’t need a machine to predict what the FTC might do if … claims aren’t supported.”

Initial publication: February 27, 2023 at 3:45 pm CST

Jacob Seitz

Jacob Seitz is a freelance journalist originally from Columbus, Ohio, interested in the intersection of culture and politics.

Source: https://www.dailydot.com/debug/ftc-ai-false-claim-guidance/
