‘In its essence, technology is something that man does not control.’ –Heidegger


AI-generated videos that show a person’s face on another’s body are called “deepfakes.” […]

Airbrushing and Photoshop long ago opened photos to easy manipulation. Now, videos are becoming just as vulnerable to fakes that look deceptively real. Supercharged by powerful and widely available artificial-intelligence software developed by Google, these lifelike “deepfake” videos have quickly multiplied across the Internet, blurring the line between truth and lie. […] A growing number of deepfakes target women far from the public eye, with anonymous users on deepfakes discussion boards and private chats calling them co-workers, classmates and friends. Several users who make videos by request said there’s even a going rate: about $20 per fake. […]

Deepfake creators often compile vast bundles of facial images, called “facesets,” and sex-scene videos of women they call “donor bodies.” Some creators use software to automatically extract a woman’s face from her videos and social-media posts. Others have experimented with voice-cloning software to generate potentially convincing audio. […]

The requester of the video with the woman’s face atop the body with the pink off-the-shoulder top had included 491 photos of her face, many taken from her Facebook account. […] One creator on the discussion board 8chan made an explicit four-minute deepfake featuring the face of a young German blogger who posts videos about makeup; thousands of images of her face had been extracted from a hair tutorial she had recorded in 2014. […]

The victims of deepfakes have few tools to fight back. Legal experts say deepfakes are often too untraceable to investigate and exist in a legal gray area: Built on public photos, they are effectively new creations, meaning they could be protected as free speech. […]

Many of the deepfake tools, built on Google’s artificial-intelligence library, are publicly available and free to use. […] Google representatives said the company takes its ethical responsibility seriously, but that restrictions on its AI tools could end up limiting developers pushing the technology in a positive way. […]

“If a biologist said, ‘Here’s a really cool virus; let’s see what happens when the public gets their hands on it,’ that would not be acceptable. And yet it’s what Silicon Valley does all the time,” he said.

{ Washington Post | Continue reading }

Technical experts and online trackers say they are developing tools that could automatically spot these “deepfakes” by using the software’s skills against it, deploying image-recognition algorithms that could help detect the ways their imagery bends belief.

The Defense Advanced Research Projects Agency, the Pentagon’s high-tech research arm known as DARPA, is funding researchers with hopes of designing an automated system that could identify the kinds of fakes that could be used in propaganda campaigns or political blackmail. Military officials have advertised the contracts — code-named “MediFor,” for “media forensics” — by saying they want “to level the digital imagery playing field, which currently favors the manipulator.”

The photo-verification start-up Truepic checks for manipulations in videos and saves the originals into a digital vault so other viewers — insurance agencies, online shoppers, anti-fraud investigators — can confirm for themselves. […]
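Truepic's actual system is proprietary, but the "digital vault" idea it describes can be sketched with a content hash: register a file's SHA-256 fingerprint at capture time, and any later viewer can confirm the bytes they received are identical to the original. A minimal sketch, assuming nothing about Truepic's real design (the `MediaVault` class and its method names are hypothetical):

```python
# Minimal sketch of a hash-based "digital vault": register a file's
# SHA-256 at capture time, then verify later copies against it.
# Illustrative only; Truepic's real mechanism is not public.
import hashlib
import time

class MediaVault:
    def __init__(self):
        self._records = {}  # media_id -> {"sha256": ..., "registered_at": ...}

    @staticmethod
    def _digest(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def register(self, media_id: str, data: bytes) -> str:
        """Store the fingerprint of the original bytes; return it."""
        record = {"sha256": self._digest(data), "registered_at": time.time()}
        self._records[media_id] = record
        return record["sha256"]

    def verify(self, media_id: str, data: bytes) -> bool:
        """True only if these bytes match the registered original exactly."""
        record = self._records.get(media_id)
        return record is not None and record["sha256"] == self._digest(data)
```

Because SHA-256 changes completely under any edit, even a single altered pixel in a re-encoded frame fails verification, which is what makes the scheme useful to insurers and fraud investigators checking a submitted photo against its original.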

However, the rise of fake-spotting has spurred a technical blitz of detection, pursuit and escape, in which digital con artists work to refine and craft ever more deceptive fakes. In some recent pornographic deepfakes, the altered faces appear to blink naturally — a sign that creators have already conquered one of the telltale indicators of early fakes, in which the actors never closed their eyes. […] “The counterattacks have just gotten worse over time, and deepfakes are the accumulation of that,” McGregor said. “It will probably forever be a cat-and-mouse game.”
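The blink cue mentioned above has a simple quantitative form in the detection literature: the eye aspect ratio (EAR), the ratio of an eye's vertical landmark distances to its horizontal width, which drops sharply when the eye closes. A hedged sketch of that metric and a naive blink counter (the landmark extraction itself, typically done with a library such as dlib, is assumed; threshold and frame counts are illustrative):

```python
# Sketch of blink detection via eye aspect ratio (EAR).
# Assumes six (x, y) eye landmarks per frame are already available
# from some landmark detector; threshold values are illustrative.
import math

def eye_aspect_ratio(eye):
    """EAR from six landmarks ordered p1..p6 around the eye:
    p1/p4 are the horizontal corners, (p2, p6) and (p3, p5) the
    vertical pairs. EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of
    at least min_frames consecutive frames below threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

An early detector could flag a clip whose blink count stays at zero over many seconds of footage; as the article notes, newer fakes synthesize blinking, so this single cue no longer suffices on its own.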

{ Washington Post | Continue reading }