
Many Are Upset That Anthony Bourdain Documentary Roadrunner Used an AI Deepfake of His Voice


The documentary Roadrunner: A Film About Anthony Bourdain is in the spotlight, and it’s not only because celebrity chef and No Reservations TV presenter Bourdain was immensely loved. It’s got people talking because the documentary includes an AI-generated “deepfake” version of the icon’s voice speaking words he never said in life.


Morgan Neville, the director of the film, admitted in an interview with The New Yorker that he had “created an A.I. model of his voice.” He apparently really wanted Bourdain’s voice for the three quotes in question, and the choice has raised eyebrows about the integrity of such a move and whether it is ethical.

For me, and for many others, this feels like the height of entitlement. Who is the director to pick and choose from what Bourdain left behind and generate new words in Bourdain’s own voice? No one should be able to mess with that. Artificially creating what we perceive to be part of someone’s legacy feels like overreach, even if Bourdain’s estate reportedly approved of the AI readings.

And now Bourdain’s widow Ottavia Bourdain has disputed Neville’s claim that he approached her for permission, according to a new report from The Washington Post.

Others openly questioned whether Bourdain would even have allowed something like this. Yes, “he expressed and encouraged empathy” wherever he went, and his hallmark was openness and honesty, but would he have allowed a recreation of his own voice reading his email simply because a director felt it was within their right to make one? Especially considering the words are those of someone struggling with mental health issues, with the AI voice reading these words from his email: “…and my life is sort of shit now. You are successful, and I am successful, and I’m wondering: Are you happy?”

Combine this with the narrative Neville apparently played with in the film, placing responsibility for Bourdain’s death on Asia Argento, with whom he was in a relationship when he died, and you’ve got an even messier documentary rooted in making women the enemy—something we can be fairly sure Bourdain wouldn’t have approved of.

In his review of the documentary for WBUR, Sean Burns writes, “Other talking heads have conjured a catty Yoko Ono narrative about Argento, who conveniently was not interviewed for the film. The last half-hour of the picture is tabloid trash, pointing fingers in a fashion that will sound sickly familiar to anyone who remembers the ugliness leveled at Courtney Love in the wake of Kurt Cobain’s suicide.”

If Neville could do this with a beloved icon like Bourdain, what’s stopping other creators from following in his footsteps and using AI voices as a means of making money or shaping their narrative in the alleged voice of the subject? We’re likely going to be seeing a lot more of this kind of thing going forward (though the backlash to Neville might give others pause).

For many of us watching on the sidelines, this is a step into creepy territory where the passing of someone cherished by many is disrespected as a means of telling a posthumous story and profiting from it. And it feeds into a growing problem of how people’s images are used after their deaths through new technology.

On top of it all, the documentary did itself no favors in the careless way it handled the ethics of the situation. It contains no disclosure of the AI voice, with the director even telling The New Yorker that it’s so convincing viewers would never know which lines had been created:

Throughout the film, Neville and his team used stitched-together clips of Bourdain’s narration pulled from TV, radio, podcasts, and audiobooks. “But there were three quotes there I wanted his voice for that there were no recordings of,” Neville explained. So he got in touch with a software company, gave it about a dozen hours of recordings, and, he said, “I created an A.I. model of his voice.” In a world of computer simulations and deepfakes, a dead man’s voice speaking his own words of despair is hardly the most dystopian application of the technology. But the seamlessness of the effect is eerie. “If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Neville said. “We can have a documentary-ethics panel about it later.”

David Leslie, ethics lead at the Alan Turing Institute, told the BBC that this demonstrates the importance of at least providing audiences with such a warning when AI is being used for a purpose like this, to avoid giving viewers the idea that they’re being intentionally deceived. Without Neville’s interview, audiences may not have known this was happening.

At the end of the day, the death of Bourdain hit those who loved him really hard—and there were millions of people worldwide who had read and watched him for years and loved him, too. There is something so unsettling about an AI version of his voice being conjured up that many will refuse to see the documentary because of it.

And who can blame them? I won’t be watching either, especially when it feels like the beginning of a precedent in which directors treat their vision as more important than any ethical issues a movie like Roadrunner: A Film About Anthony Bourdain might raise, shrugging those issues off as something we can discuss later.

(image: Craig Barritt/Getty Images for The New Yorker)




Author
Lyra Hale
