New workplace technologies often start life as both status symbols and productivity aids. The first car phones and PowerPoint presentations closed deals and also signaled their users’ clout.

Some partners at EY, the accounting giant formerly known as Ernst & Young, are now testing a new workplace gimmick for the era of artificial intelligence. They spice up client presentations or routine emails with synthetic talking-head-style video clips starring virtual body doubles of themselves made with AI software—a corporate spin on a technology commonly known as deepfakes.

The firm’s exploration of the technology, provided by UK startup Synthesia, comes as the pandemic has quashed more traditional ways to cement business relationships. Golf and long lunches are tricky or impossible, Zoom calls and PDFs all too routine.

EY partners have used their doubles in emails and to enhance presentations. One partner who does not speak Japanese used the translation function built into Synthesia’s technology to have his AI avatar address a client in Japan in the client’s native language, apparently to good effect.


“We’re using it as a differentiator and reinforcement of who the person is,” says Jared Reeder, who works at EY on a team that provides creative and technical assistance to partners. In the past few months he has come to specialize in making AI doubles of his coworkers. “As opposed to sending an email and saying ‘Hey we’re still on for Friday,’ you can see me and hear my voice,” he says.

The clips are presented openly as synthetic, not as real videos intended to fool viewers. Reeder says they have proven to be an effective way to liven up otherwise routine interactions with clients. “It’s like bringing a puppy on camera,” he says. “They warm up to it.”

New corporate tools require new lingo: EY calls its virtual doubles ARIs, for artificial reality identity, rather than deepfakes. Whatever the name, they’re the latest example of the commercialization of AI-generated imagery and audio, a technology that first came to broad public notice in 2017, when synthetic pornographic clips of Hollywood actors began to circulate online. Deepfakes have steadily become more convincing, more commercial, and easier to make since.

The technology has found uses in customizing stock photos, generating models to show off new clothing, and in conventional Hollywood productions. Lucasfilm recently hired a prominent member of the thriving online community of amateur deepfakers, who had won millions of views for videos in which he reworked faces in Star Wars footage. Nvidia, whose graphics chips power many AI projects, revealed last week that a recent keynote by CEO Jensen Huang had been faked with the help of machine learning.

Synthesia, which powers EY’s ARIs, has developed a suite of tools for creating synthetic video. Its clients include advertising company WPP, which has used the technology to blast out internal corporate messaging in different languages without the need for multiple video shoots. EY has helped some consulting clients make synthetic clips for internal announcements.
