
By: Andrew R. Chow
The Take It Down Act, passed by Congress in 2025, is the first substantial U.S. law aimed specifically at curbing harms caused by AI-generated deepfakes, especially nonconsensual pornographic images. The bill criminalizes the distribution of such deepfake content and requires online platforms to remove it within 48 hours of receiving notice. It enjoys broad bipartisan support and is expected to be signed into law, reflecting growing political will to address abuses enabled by rapidly improving generative AI tools.
The story behind the legislation underscores the human cost driving policy. Two teenage girls, Elliston Berry of Texas and Francesca Mani of New Jersey, became victims when classmates used AI to fabricate and distribute nude images of them. Their trauma, compounded by the initial inaction of platforms and schools, galvanized advocacy and ultimately legislative action. Senator Ted Cruz, among others, championed the bill in Congress, working with the victims and their families to build awareness and momentum.
Although widely supported, the bill is not without its critics. Some civil society groups warn that its language could be overbroad, allowing bad-faith actors to abuse the takedown process by falsely claiming content is nonconsensual intimate imagery, or to suppress lawful speech. Questions have also been raised about enforcement, which falls to the U.S. Federal Trade Commission (FTC), an agency that critics say has been weakened in recent years. Ensuring responsive enforcement, protecting free speech, and preventing abuse of the process will be key challenges going forward.

