The continuing stories about A.I. systems creating sexualized images of real people are just the latest example of new technology allowing people to be horrible in new ways.
Except that as New Hampshire knows from a 2007 legal case, people have been using similar tech to be horrible for a long time.
That case concerned a photographer at a New Hampshire summer camp who gave his bosses some CD-ROMs containing photos of campers, not realizing he’d forgotten to remove some doctored images. In the images, he had taken the faces of 15-year-old girls from his camp photos and Photoshopped them onto the bodies of adult women in pornographic pictures.
These days he could do much worse things much more easily. Grok, a free A.I. chatbot, and other software built on the technologies called “artificial intelligence” can remove clothing from real pictures of people or create sexual images of individuals that look incredibly realistic. There’s an epidemic of such “deepfakes” being created to harass or blackmail people, with cases cropping up everywhere from middle schools to workplaces.
This has produced plenty of outrage, and while a couple of countries have gone so far as to ban Grok, nothing much has been done about it in the U.S.
That’s partly because of the power of the technology and the way its corporate owners control regulators, but it can also be hard to translate legitimate disgust into legal action without harming innocent ideas, as New Hampshire found 18 years ago.
Back then, when camp officials discovered those images on the CD-ROMs, they called police. The photographer was arrested and eventually convicted by a jury of possessing child pornography and sentenced to seven years in prison.
But the state Supreme Court in “New Hampshire vs. Zidel” overturned the conviction on First Amendment grounds, citing a 2002 ruling in which the U.S. Supreme Court said computer simulations of sex were “virtual pornography” protected as free speech. The state Supreme Court said that although the images were appalling and repulsive, they didn’t meet the legal definition of child pornography because they weren’t made with actual children and had not been deliberately publicized or circulated.
The last point is important. Ted Lothstein, the Concord attorney whose oral arguments before the state Supreme Court convinced it to overturn the conviction, argued that the images were a sexual fantasy in digital form and that, if the court were to criminalize them, they would be criminalizing private thoughts.
I talked to Lothstein recently to get his thoughts on the spread of A.I.-generated “deepfakes.”
He stood by the argument that private thoughts are protected but agreed that the same principle is not very relevant when dealing with online sites and a culture in which sharing images is the whole point. “As soon as you are distributing something that could harm a person’s reputation, libel and slander laws have been held not to violate the First Amendment,” he said.
Freedom of speech, in short, doesn’t mean anything goes.
“The Constitution is not a suicide pact,” he said, quoting a line often attributed to Abraham Lincoln. In other words, the freedoms it guarantees cannot be taken to such an extreme that society collapses. “If a litigant is arguing that under the First Amendment we should be able to make deepfakes of anything and not face any legal consequence, that’s wrong.”
In other words, it’s complicated, as human interaction always is.
For reasons of legal wording and the overlap of state and federal law, he said, the effect of the New Hampshire case on later rulings has been limited. But the questions it raised are just as important as ever, especially as technology keeps making it easier to turn our worst urges into something tangible.
At the end of our conversation, Lothstein expressed deep concern about this technological trend, and not just regarding pornography. The increasing ability of software to fool our senses, creating realistic photographs, audio recordings and even full movies, is making it harder to have a shared, agreed-upon reality as the basis of society.
“I think it’s a seriously urgent emergency. If we have technology that can make deepfakes that nobody can tell are deepfakes, then we’re about to say goodbye to civilized democracy,” he said. “If everything is debatable and nothing can be proven, we’re done as any kind of civilization.”
