robots & ai

‘In its essence, technology is something that man does not control.’ –Heidegger

imp-kerr-truth.jpg

AI-generated videos that show a person’s face on another’s body are called “deepfakes.” […]

Airbrushing and Photoshop long ago opened photos to easy manipulation. Now, videos are becoming just as vulnerable to fakes that look deceptively real. Supercharged by powerful and widely available artificial-intelligence software developed by Google, these lifelike “deepfake” videos have quickly multiplied across the Internet, blurring the line between truth and lie. […] A growing number of deepfakes target women far from the public eye, with anonymous users on deepfakes discussion boards and private chats calling them co-workers, classmates and friends. Several users who make videos by request said there’s even a going rate: about $20 per fake. […]

Deepfake creators often compile vast bundles of facial images, called “facesets,” and sex-scene videos of women they call “donor bodies.” Some creators use software to automatically extract a woman’s face from her videos and social-media posts. Others have experimented with voice-cloning software to generate potentially convincing audio. […]

The requester of the video with the woman’s face atop the body with the pink off-the-shoulder top had included 491 photos of her face, many taken from her Facebook account. […] One creator on the discussion board 8chan made an explicit four-minute deepfake featuring the face of a young German blogger who posts videos about makeup; thousands of images of her face had been extracted from a hair tutorial she had recorded in 2014. […]

The victims of deepfakes have few tools to fight back. Legal experts say deepfakes are often too untraceable to investigate and exist in a legal gray area: Built on public photos, they are effectively new creations, meaning they could be protected as free speech. […]

Many of the deepfake tools, built on Google’s artificial-intelligence library, are publicly available and free to use. […] Google representatives said the company takes its ethical responsibility seriously, but that restrictions on its AI tools could end up limiting developers pushing the technology in a positive way. […]

“If a biologist said, ‘Here’s a really cool virus; let’s see what happens when the public gets their hands on it,’ that would not be acceptable. And yet it’s what Silicon Valley does all the time,” he said.

{ Washington Post | Continue reading }

Technical experts and online trackers say they are developing tools that could automatically spot these “deepfakes” by using the software’s skills against it, deploying image-recognition algorithms that could help detect the ways their imagery bends belief.

The Defense Advanced Research Projects Agency, the Pentagon’s high-tech research arm known as DARPA, is funding researchers with hopes of designing an automated system that could identify the kinds of fakes that could be used in propaganda campaigns or political blackmail. Military officials have advertised the contracts — code-named “MediFor,” for “media forensics” — by saying they want “to level the digital imagery playing field, which currently favors the manipulator.”

The photo-verification start-up Truepic checks for manipulations in videos and saves the originals into a digital vault so other viewers — insurance agencies, online shoppers, anti-fraud investigators — can confirm for themselves. […]

However, the rise of fake-spotting has spurred a technical blitz of detection, pursuit and escape, in which digital con artists work to refine and craft ever more deceptive fakes. In some recent pornographic deepfakes, the altered faces appear to blink naturally — a sign that creators have already conquered one of the telltale indicators of early fakes, in which the actors never closed their eyes. […] “The counterattacks have just gotten worse over time, and deepfakes are the accumulation of that,” McGregor said. “It will probably forever be a cat-and-mouse game.”

{ Washington Post | Continue reading }
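
The blinking cue mentioned above lends itself to a simple screening heuristic. Below is a minimal sketch of that idea only (it is not the MediFor or Truepic pipeline): compute an eye-aspect ratio from dlib’s 68-point facial landmarks on each frame and flag clips in which the eyes never appear to close. The landmark-model filename and the 0.2 threshold are assumptions for illustration.

```python
# Minimal sketch of blink-based deepfake screening (illustrative only).
# Assumes dlib's 68-point landmark model is available locally as
# "shape_predictor_68_face_landmarks.dat".
import cv2
import dlib
import numpy as np

EAR_BLINK_THRESHOLD = 0.2  # assumed threshold; eyes treated as closed below this

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    # pts: six (x, y) landmarks around one eye, in dlib's 68-point ordering
    vertical = np.linalg.norm(pts[1] - pts[5]) + np.linalg.norm(pts[2] - pts[4])
    horizontal = np.linalg.norm(pts[0] - pts[3])
    return vertical / (2.0 * horizontal)

def never_blinks(video_path):
    """Return True if no frame shows closed eyes -- a crude red flag for early fakes."""
    cap = cv2.VideoCapture(video_path)
    saw_blink = False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
            left, right = pts[42:48], pts[36:42]
            ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
            if ear < EAR_BLINK_THRESHOLD:
                saw_blink = True
    cap.release()
    return not saw_blink
```

Heuristics like this illustrate the cat-and-mouse point above: once creators add synthetic blinking, the cue stops working and detectors have to move on to the next tell.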

P.P., don’t carry that weight

4.jpg

Active, polymorphic material (“Utility Fog”) can be designed as a conglomeration of 100-micron robotic cells (‘foglets’). Such robots could be built with the techniques of molecular nanotechnology […] The Fog acts as a continuous bridge between actual physical reality and virtual reality.

{ NASA | Continue reading }

photo { Joel Meyerowitz, Times Square, New York City, 1963 }

Said I wouldn’t mention Sisqo, fuck he’s a bum

After 4 hours of training, AlphaZero became the strongest chess entity on the planet, with an estimated Elo rating of around 3,400.

{ AlphaZero vs Stockfish 8 | ELO ratings of chess players }
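
For a sense of what a rating edge like that means game to game, here is a minimal sketch of the standard Elo expected-score formula; the specific ratings plugged in below are assumptions for illustration, not measured values.

```python
# Minimal sketch of the standard Elo expected-score formula.
# The ratings below are illustrative assumptions, not measured values.
def elo_expected_score(rating_a, rating_b):
    """Expected score (win = 1, draw = 0.5, loss = 0) for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

if __name__ == "__main__":
    # e.g. a 3,400-rated engine against a 3,300-rated one (a 100-point gap):
    print(f"{elo_expected_score(3400, 3300):.2f}")  # ~0.64 expected points per game
```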

more { How AlphaZero quickly learns each game [chess, shogi, and Go] to become the strongest player in history for each }

related { The ability to distort reality has taken an exponential leap forward with “deep fake” technology. We survey a broad array of responses. | Previously: Researchers can now detect AI-generated fake videos with a 95% success rate }

‘The love of stinking.’ –Nietzsche

4.jpg

{ aversion | panic | Thanks Tim }

related { Dick Stain Donald Trump got zero comments for the Stock Market Drop }

Three Billboards is a good damn movie. I give it two billboards up!

2.jpg

Here, we present a method that estimates socioeconomic characteristics of regions spanning 200 US cities by using 50 million images of street scenes gathered with Google Street View cars.

Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods.

Data from this census of motor vehicles, which enumerated 22 million automobiles in total (8% of all automobiles in the United States), were used to accurately estimate income, race, education, and voting patterns at the zip code and precinct level.

The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next presidential election (88% chance); otherwise, it is likely to vote Republican (82%).

{ PNAS | PDF }
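
The sedan-versus-pickup finding reduces to a one-line decision rule. The sketch below restates it in code; the function name and example counts are hypothetical, and the percentages are the ones reported in the excerpt above.

```python
# Minimal sketch of the sedan-vs-pickup association reported above.
# The vehicle counts would come from the paper's Street View vehicle census;
# the function and inputs here are hypothetical stand-ins.
def likely_vote(sedan_count, pickup_count):
    """Return the predicted leaning and the accuracy reported in the excerpt."""
    if sedan_count > pickup_count:
        return "Democrat", 0.88   # 88% chance per the study
    return "Republican", 0.82     # 82% chance otherwise

print(likely_vote(sedan_count=1200, pickup_count=450))  # ('Democrat', 0.88)
```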

photo { Tod Papageorge }

Where the sun doesn’t shine

21.jpg

ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine-learning algorithm that learns to build other machine-learning algorithms.

With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. […]

The tech industry is promising everything from smartphone apps that can recognize faces to cars that can drive on their own. But by some estimates, only 10,000 people worldwide have the education, experience and talent needed to build the complex and sometimes mysterious mathematical algorithms that will drive this new breed of artificial intelligence.

The world’s largest tech businesses, including Google, Facebook and Microsoft, sometimes pay millions of dollars a year to A.I. experts, effectively cornering the market for this hard-to-find talent. The shortage isn’t going away anytime soon, if only because mastering these skills takes years of work. […]

Eventually, the Google project will help companies build systems with artificial intelligence even if they don’t have extensive expertise.

{ NY Times | Continue reading }
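
As a very rough illustration of the “algorithms that build algorithms” idea (and not Google’s AutoML, which searches over neural-network architectures), the sketch below runs a naive random search over a handful of model hyperparameters with scikit-learn.

```python
# Rough illustration of "ML that builds ML": a naive random search over model
# hyperparameters. This is not Google's AutoML; it is the simplest possible analogue.
import random

from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

best_score, best_config = 0.0, None
for _ in range(10):  # try 10 randomly generated model configurations
    config = {
        "hidden_layer_sizes": (random.choice([16, 32, 64]),) * random.randint(1, 3),
        "alpha": 10 ** random.uniform(-5, -2),
    }
    model = MLPClassifier(max_iter=300, **config)
    score = cross_val_score(model, X, y, cv=3).mean()  # 3-fold accuracy
    if score > best_score:
        best_score, best_config = score, config

print(best_config, best_score)
```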

art { Ellsworth Kelly, Concorde I (state), 1981-82 }

‘We understand that the tragic hero—in contrast to the baroque character of the preceding period—can never be mad; and that conversely madness cannot bear within itself those values of tragedy which we have known since Nietzsche and Artaud.’ –Michel Foucault

213.jpg

{ IEEE | full story | Thanks Tim! }

‘To him who looks upon the world rationally, the world in its turn presents a rational aspect.’ –Hegel

4.jpg

Google started testing their cars on public roads back in 2009, long before any regulations were even dreamed of. An examination of the California Vehicle Code indicated there was nothing in it prohibiting such testing.

For testing purposes, Google has a trained safety driver sitting behind the wheel, ready to take over at any moment. Any attempt to take the wheel or use the pedals disables the automatic systems and puts the safety driver in control. The safety drivers took special driving-safety courses and were instructed to take control if they had any doubt about safe operation: for example, if a vehicle was not braking as expected when approaching a crosswalk, they were to take the controls immediately rather than wait to see whether it would detect the pedestrians and stop.

The safety drivers are accompanied by a second person in the passenger seat. Known as the software operator, this person monitors diagnostic screens showing what the system is perceiving and planning, and tells the safety driver if something appears to be going wrong. The software operator also serves as an extra set of eyes on the road from time to time.

Many other developers have taken this approach, and some of the regulations written since have codified something similar into law.

This style of testing makes sense if you consider how we train teenagers to drive. We allow them to get behind the wheel with almost no skill at all, while a driving instructor sits in the passenger seat. Though it isn’t required, professional driving instructors tend to have their own brake pedal, and they know how and when to grab the wheel if need be. They let the student learn and make minor mistakes, and correct the major ones.

The law doesn’t require that, of course. After taking a simple written test, a teen is allowed to drive with a learner’s permit as long as almost any licensed adult is in the car with them. While it varies from country to country, we let these young drivers get full solo licenses after only a fairly simple written test and a short road test that covers only a tiny fraction of the situations they will encounter on the road. They then get their paperwork and become the most dangerous drivers on the road.

In contrast, robocar testing procedures have been much stricter, with more oversight by highly trained supervisors. Where regulations exist, they have gone even further, requiring high insurance bonds and special permits. Both software systems and teens will make mistakes, but the reality is that the teens are more dangerous.

{ Brad Templeton | Continue reading }

related { Will You Need a New License to Operate a Self-Driving Car? }

‘I feel like I totally understand gothic architecture in all of its brilliance’ —deanna havas

The Random Darknet Shopper is an automated online shopping bot which we provide with a budget of $100 in Bitcoins per week. Once a week, the bot goes on a shopping spree in the deep web, where it randomly chooses and purchases one item and has it mailed to us.
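
As described, the bot’s behavior amounts to a weekly, budget-capped random pick. Here is a minimal sketch of that loop, with the marketplace interface replaced by hypothetical stand-ins.

```python
# Minimal sketch of the weekly loop described above. The catalog and purchase call
# are hypothetical stand-ins; the real bot talked to an actual marketplace.
import random
import time

WEEKLY_BUDGET_USD = 100
ONE_WEEK_SECONDS = 7 * 24 * 60 * 60

def list_affordable_items(budget_usd):
    # Hypothetical catalog; in the real project this would come from a marketplace.
    catalog = [("camera bag", 25), ("sneakers", 90), ("novelty keys", 12)]
    return [item for item in catalog if item[1] <= budget_usd]

def purchase(item, shipping_address):
    # Hypothetical: place the order and have it mailed to the exhibition space.
    print(f"ordered {item[0]} (${item[1]}), shipped to {shipping_address}")

def run_shopper(shipping_address):
    while True:
        items = list_affordable_items(WEEKLY_BUDGET_USD)
        if items:
            purchase(random.choice(items), shipping_address)  # one random item per week
        time.sleep(ONE_WEEK_SECONDS)
```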

34.jpg

65.jpg

Spineless swines, cemented minds

21.jpg

Researchers at MIT are testing out their version of a system that lets them see and analyze what autonomous robots, including flying drones, are “thinking.” […]

The system is a “spin on conventional virtual reality that’s designed to visualize a robot’s ‘perceptions and understanding of the world,’” Ali-akbar Agha-mohammadi, a post-doctoral associate at MIT’s Aerospace Controls Laboratory, said in a statement.

{ LiveScience | Continue reading | via gettingsome }

image { Lygia Clark }

‘Fiction gives us a second chance that life denies us.’ —Paul Theroux

314.jpg

We evaluated the impact of different presentation methods on how funny jokes are perceived to be. We found that the same joke was perceived as significantly funnier when told by a robot than when presented only as text.

{ Dr. Hato | PDF }

Ongoing projects: Adding farting to the joking robots.

{ Dr. Hato | Continue reading }

we took a young woman w/ severe memory loss and helped her forget she ever had it

{ Samantha West, The Telemarketer Robot Who Swears She’s Not a Robot | more }