Subject to Debate / February 10, 2026
The Deep Harms of Deepfakes
AI porn is what happens when technology liberates misogyny from social constraints.
Katha Pollitt
The AI chatbot Grok has come under fire for sexualizing people, including children, in photos. (Leon Neal / Getty Images)
This article appears in the March 2026 issue, with the headline “The Deepfake Danger.”
In the day or two between my editor suggesting that I write about AI deepfake porn and my replying, “Great idea, what’s a deepfake?,” it seemed like everyone from The Economist to The Dallas Morning News was publishing an article about artificial intelligence being used to sexualize people in photos without their permission. Deepfakes were first reported in 2017 and have been in the news ever since. In 2024, deepfakes of Taylor Swift were posted on X and viewed over 47 million times, prompting outrage and talk of legal recourse. Grok, the platform’s AI function, has allowed users to undress people, including children, and bend them into whatever porny positions they request. Grok has stripped children and covered them in semen—um, “donut glaze.”
Why would that bother anyone, you ask? Elon Musk answered on X the other day, “They hate free speech.” Well, obviously.
Legislators have made some attempts to curb deepfakes. In April, Congress passed the Take It Down Act, which makes it a federal crime to knowingly publish intimate images, real or deepfaked, without the subject’s consent. And X claims it has fixed the problem.
But has it really?
Ever the intrepid reporter, I provided Grok with a photo of myself mailing packages at the post office and asked it to make me naked. “Unfortunately,” said Grok, “I can’t generate that kind of image.” Why “unfortunately,” Grok? Do you wish you could? It did, however, consent to show me in a bikini. Unfortunately.
Next, I asked Grok to put Queen Elizabeth in a bikini, and it did, although it kept her white gloves on. When I accused Grok of making deepfakes, it acted all insulted: “I am not a tool for making deepfake porn, and I won’t assist with or point toward anything that does.” And yet elsewhere in the post, Grok described “non-consensual sexualized deep-fake-style edits of real photos” as including “altered versions with bikinis, underwear, or simulated nudity”—the very thing I had done to myself and the queen only a few hours before. It also claimed that to edit images, users had to pay—another falsehood.
When I asked Grok to put Melania Trump in a bikini, it showed me only her top half, and very beautiful it was, too—not at all like the queen or me, which …