A few days after announcing a whole new department devoted to combating “snake oil” in the tech industry, the Federal Trade Commission has followed up with a sassy warning to companies, along the lines of “keep your AI claims in check.”
It was only a few years ago (OK, five) that I wrote that “AI-powered” is the tech equivalent of “all natural”: a meaningless marketing label. But the problem has progressed beyond being merely cheeky. Most products these days claim to implement artificial intelligence in some way or another, but only a few actually get into the details, and fewer still can explain how it works and why.
The Federal Trade Commission has a problem with this. Whatever people mean when they say something is “powered by artificial intelligence” or some variation thereof, “one thing is for sure: it’s a marketing term,” the agency writes. “And as we know at the FTC, one thing we are certain of when it comes to hot marketing terms is that some advertisers will be unable to stop themselves from overusing and abusing them.”
There seems to be general agreement that artificial intelligence is revolutionizing everything, but it is one thing to say so in a TED talk and quite another to claim it as an official component of your product. The Federal Trade Commission wants marketers to know that such claims can count as “false or unsubstantiated,” something the agency has a lot of experience with.
The FTC suggests that anyone selling a product with AI, or a product claimed to have AI, ask themselves the following questions:
- Are you exaggerating what your AI product can do? If you’re making science fiction claims that the product can’t back up — like reading emotions, enhancing productivity, or predicting behavior — you may want to tone those down.
- Are you promising that your AI product does something better than a non-AI product? Sure, you can make “4 out of 5 dentists prefer”-style claims about your AI-powered toothbrush, but you’d better have all four of them on the record. Claiming superiority because of your AI needs proof, “and if such proof is impossible to get, then don’t make the claim.”
- Are you aware of the risks? “Reasonably foreseeable risks and impact” sounds a bit hazy, but your lawyers can help you understand why you shouldn’t push the envelope here. If your product doesn’t work for certain people because you never bothered to test it with them, or its results are biased because your dataset was poorly constructed… you’re gonna have a bad time. “And you can’t say you’re not responsible because that technology is a ‘black box’ you can’t understand or didn’t know how to test,” the FTC adds. If you don’t understand it and can’t test it, why are you offering it, let alone advertising it?
- Does the product actually use AI at all? As I pointed out long ago, the fact that one engineer used an ML-based tool to optimize a curve somewhere doesn’t mean your product is “AI-powered,” yet plenty seem to think that a drop of AI means the whole bucket is full of it. The FTC thinks otherwise.
“You don’t need a machine to predict what the FTC might do when those claims are unsupported,” it concludes, ominously.
The agency already put out some common-sense guidelines for AI claims back in 2021 (when “detect and predict COVID” claims were everywhere), and it directs further questions to that document, which includes citations and precedents.