Is this real? AI ramps up risk of April Fools’ Day foul-ups for corporate brands

Mar 30, 2024 | 4:03 AM

Rebranding as “Voltswagen.” Shutting down Trader Joe’s. Emailing confirmation of a $750 food delivery.

The range of April Fools’ Day marketing pranks gone awry is as varied as their reception. Met with everything from smiles and social media shares to confusion, derision or even fury and falling stocks, the puckish promotional tactic is a gamble that can endear a brand to customers as swiftly as it can sour them on it.

“One person’s humour is another person’s offence,” said Vivek Astvansh, a marketing professor at McGill University.

As April 1 approaches, consumers would be wise to exercise even more skepticism, with experts saying artificial intelligence ramps up the potential for high-tech promotional ploys. Whether through generative text-to-video tools that conjure rich scenes from dashed-off instructions or chatbots that serve up endless ad ideas on command, AI raises new questions of authenticity and could make distinguishing between jokes, facts and deepfakes even harder.

“In the next few days, we will see many ads that were motivated by GPT-4 or other generative AI tools,” Astvansh said, referring to the most advanced model behind OpenAI’s popular ChatGPT program.

Even before the AI breakthroughs of the past 16 months (OpenAI launched ChatGPT in November 2022), the technology’s promise of feats beyond human capacity had played a role in corporate hijinks.

On April 1, 2019, Google announced it had figured out how to communicate with tulips in their own language, “Tulipish.” Citing “great advancements in artificial intelligence,” it offered translation between the perennial’s petals and dozens of human languages. The video closed by noting that Google Tulip would only be available that day, leaving few in doubt that it was a joke.

But past misunderstandings suggest future ones could await, augmented by AI’s abilities.

In the lead-up to April 1, 2021, Volkswagen AG put out a news release stating its American division would change its name to “Voltswagen.” Several news outlets reported the statement despite doubts about its authenticity. The confusion deepened when the company told reporters who asked whether it was an April Fools’ gag that it was dead serious, only to admit to the stunt hours later.

The joke fell flat as an old tire in the wake of Volkswagen’s “diesel dupe” scandal several years earlier, when U.S. authorities found the company had installed software on more than half a million cars that enabled them to cheat on diesel emissions tests.

Other April Fools’ Day ruses have backfired, too: in 2016, Yahoo News falsely reported that Trader Joe’s would close all of its 457 stores in less than a year, and in 2021, British online food delivery company Deliveroo sent customers fake confirmation emails for $750 orders, leading thousands to think their accounts had been hacked.

Now, the ready accessibility and low cost of many AI tools open the door to more companies deploying the technology, including for April Fools’ fun that might go sideways.

“GPT-4 can instantly create multiple advertising campaigns’ content, which could be video or which could be still images. And then within a very short period of time and with very little spending or investment, the internal advertising team or marketing team can sift through the outputs that GPT-4 would have generated,” Astvansh said. All that remains is to select one, tweak it with edits and post it.

To guard against deception, Astvansh said disclosure of both methods and intentions will be key, especially on April 1.

“I hope they declare or they give some information in their content that the seed idea or the seed content was created by a generative AI tool,” he said.

Digital watermarking — embedding a pattern into AI-generated content to help users distinguish real images from fake ones and identify who owns them — is one such disclosure method.

“It’s basically making sure that the images or the videos that are being produced by these platforms are tagged in a way that when they then show up on the internet, labels are being attached to them so … users know what they’re seeing is AI,” said Sam Andrey, managing director of the Dais, a public policy think tank at Toronto Metropolitan University.

The technology’s potential for trickery is already well-established. Witness the scams that mimic a loved one’s voice to talk victims into transferring money to fraudsters, or the recent robocalls impersonating prominent political figures. Combine those with sophisticated images or digitally generated characters, and the result is a potential for deception on a vast scale, including by corporate actors.

“Even just a year ago it was more cartoonish,” Andrey said of AI-created graphics.

“If it’s generating innocuous, normal media and it’s lowering production costs, that’s less worrisome.” Think, for example, of AI applied to Tim Hortons’ square-shaped Timbits, Ikea Canada’s meatball vending machines or Jeep Canada’s all-flannel interior “keeping you as cozy as a lumberjack in the Canadian wilderness.” All were April Fools’ Day pranks last year.

“But we should not be using AI to deceive people,” Andrey said.

This report by The Canadian Press was first published March 30, 2024.

Christopher Reynolds, The Canadian Press