Michigan has enacted new legislation that makes it illegal to create AI-generated sexual imagery of someone without their written consent. With Governor Gretchen Whitmer’s signature on House Bills 4047 and 4048, the state becomes the 48th in the country to adopt its own deepfake regulations. The move underscores how quickly states are acting to address concerns about artificial intelligence and digital harassment.
Michigan’s New Law and Its Provisions
The Michigan law establishes criminal penalties for those who create or distribute deepfakes depicting sexual activity without consent. According to the statute, making such content can result in misdemeanor charges punishable by up to one year in prison and fines of up to $3,000. The law specifies that liability applies if the creator knew, or reasonably should have known, that the deepfake would cause harm to the person depicted.
Penalties increase significantly in cases involving financial harm, profit-making, or repeat offenses. If a deepfake is posted online, linked to harassment or extortion, or distributed for commercial gain, the offense rises to a felony. This reflects the legislature’s recognition of the serious impact deepfake content can have on a victim’s personal and professional life.
Governor Whitmer emphasized the risks in a press release announcing the legislation. “These videos can ruin someone’s reputation, career, and personal life,” she said. “As such, these bills prohibit the creation of deep fakes that depict individuals in sexual situations and create sentencing guidelines for the crime.” The law is intended to protect individuals from digital exploitation and align Michigan’s statutes with the growing number of states addressing the issue.
Federal Rules and Civil Liberties Concerns
Michigan’s new rules exist alongside federal legislation already in effect. The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, or TAKE IT DOWN Act, was introduced in June 2024 and signed into law in May 2025. That measure holds online platforms responsible for moderating deepfakes and requires them to act quickly on reports of abusive content, removing flagged non-consensual imagery within 48 hours. While it aims to curb digital harassment, civil liberties groups have raised concerns that its scope could lead to overreach.
Critics argue that because the Federal Trade Commission (FTC) is responsible for enforcing the TAKE IT DOWN Act, political influence could shape which platforms face scrutiny. Advocacy groups like the Cyber Civil Rights Initiative have warned that platforms aligned with current political leadership may avoid enforcement, while others could be disproportionately targeted. This uneven application, they say, risks undermining free expression.
Kate Ruane, director of the Free Expression Project at the Center for Democracy and Technology, has cautioned that vague definitions of deepfakes could lead to excessive censorship. “For a social media company, it is not rational for them to open themselves up to that risk,” she explained. “Any video with any amount of editing, which is like every single TikTok video, could then be banned for distribution on those social media sites.” Her comments underscore the tension between protecting victims of abuse and preserving online speech rights.
Nationwide Momentum and Remaining Gaps
Deepfake laws now exist in nearly every state, with only New Mexico and Missouri lacking specific statutes. While the scope of these laws varies, most focus on non-consensual sexual imagery. Some states, like Wisconsin, have chosen to expand existing child sexual abuse imagery laws to cover deepfakes, rather than creating new frameworks. This patchwork of state approaches reflects the evolving nature of legislation around artificial intelligence.
Despite the rapid adoption of laws, challenges remain in enforcing them effectively. Victims of deepfake harassment often face steep legal and financial hurdles when pursuing cases, which can prolong the harm and lead to further trauma. Even when laws are in place, the process of reporting, investigation, and litigation can be difficult and resource-intensive.
As both state and federal governments continue to refine their responses, deepfake technology itself is advancing quickly. Regulators are working to balance the protection of individuals from harm with broader concerns about privacy, speech, and innovation. Michigan’s law represents the latest step in a national trend, but it also illustrates the complexities of addressing abuse in the digital era.