robots & ai

Maria enters with the news that Malvolio is now about to make an ass of himself by approaching Olivia in yellow stockings, cross-gartered, and with his face wrinkled in smiles

7.jpg

Last month, a law professor in eastern Zhejiang province filed China’s first lawsuit over the use of [facial recognition] technology. The professor sued a local safari park after it began forcing visitors to scan their faces to enter the park. The case has not been heard yet, but the park has decided to let visitors choose between having their face scanned and using a fingerprint system—which still means the collection of visitors’ biometric data.

{ QZ | Continue reading }

related { New app claims it can identify venture capitalists using facial recognition }

electrophotographic (3M Color-in-Color) print { Sonia Landy Sheridan, Sonia in Time, 1975 }

Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move

24.jpg

An artificial intelligence hiring system has become a powerful gatekeeper for some of America’s most prominent employers […]

Designed by the recruiting-technology firm HireVue, the system uses candidates’ computer or cellphone cameras to analyze their facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated “employability” score. HireVue’s “AI-driven assessments” have become so pervasive in some industries, including hospitality and finance, that universities make special efforts to train students on how to look and speak for best results. More than 100 employers now use the system, including Hilton, Unilever and Goldman Sachs, and more than a million job seekers have been analyzed.

But some AI researchers argue the system is digital snake oil — an unfounded blend of superficial measurements and arbitrary number-crunching, unrooted in scientific fact.

{ Washington Post | Continue reading }
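What the Post describes is, at bottom, a weighted feature pipeline: reduce video and audio to a handful of numeric signals, collapse them into a single score, and rank candidates against one another. HireVue has not published its model, so the sketch below is purely illustrative; the feature names, weights, and scoring function are assumptions, not HireVue’s actual method.

```python
from dataclasses import dataclass

@dataclass
class CandidateFeatures:
    # Hypothetical signals of the kind the article describes;
    # not HireVue's real feature set.
    smile_frequency: float   # facial movement, scaled 0..1
    keyword_overlap: float   # word choice vs. the job description, 0..1
    speech_rate_fit: float   # speaking voice, 0..1

# Arbitrary illustrative weights; opaque hand-tuned numbers like these
# are what end up deciding who advances.
WEIGHTS = {"smile_frequency": 0.3, "keyword_overlap": 0.5, "speech_rate_fit": 0.2}

def employability_score(f: CandidateFeatures) -> float:
    """Collapse heterogeneous signals into a single 'employability' number."""
    return (WEIGHTS["smile_frequency"] * f.smile_frequency
            + WEIGHTS["keyword_overlap"] * f.keyword_overlap
            + WEIGHTS["speech_rate_fit"] * f.speech_rate_fit)

def rank_candidates(candidates: dict) -> list:
    """Rank applicants against one another by score, highest first."""
    scored = [(name, employability_score(f)) for name, f in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Nothing in that arithmetic ties the weights to actual job performance, which is precisely the critics’ “arbitrary number-crunching” objection.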

Police databases now feature the faces of nearly half of Americans — most of whom have no idea their image is there

{ NY Times | full story }

Or Culex feel etchy if Pulex don’t wake him?

42.jpg

21.jpg

‘McDonald’s removed the mcrib from its menu so it could suck its own dick’ –@jaynooch

41.jpg

iBorderCtrl is an AI-based lie-detector project funded by the European Union’s Horizon 2020 programme. The tool will be used on people crossing the borders of some European countries. Its official purpose is to enable faster border control. It will be tested in Hungary, Greece and Latvia until August 2019 and should then be officially deployed.

The project will analyze facial micro-expressions to detect lies. We have serious worries about such a project. To people with no background in AI or computer science, the idea of using a computer to detect lies can sound very appealing: computers are believed to be totally objective.

But the AI community knows this is far from true: bias is nearly omnipresent. We have no idea how the dataset used by iBorderCtrl was built.

More broadly, we should remember that AI has no understanding of humans (to be honest, it has no understanding at all). It is only beginning to be able to recognize the words we pronounce, and it does not understand their meaning.

Lies rely on complex psychological mechanisms. Detecting them would require far more than a simple literal understanding. Trying to detect them from a few key facial expressions looks utopian, especially as facial expressions can vary from one culture to another. For example, nodding the head usually means “yes” in the Western world, but it means “no” in countries such as Greece, Bulgaria and Turkey.

{ ActuIA | Continue reading }

The ‘iBorderCtrl’ AI system uses a variety of ‘at home’ pre-registration systems and real-time ‘at the airport’ automatic deception detection systems. Some of the critical methods used in automated deception detection rely on micro-expressions. In this opinion article, we argue that, considering the psychological sciences’ current understanding of micro-expressions and their associations with deception, such in vivo testing is naïve and misinformed. We consider the lack of empirical research that supports the use of micro-expressions in the detection of deception and question the current understanding of the validity of specific cues to deception. In the absence of clear, definitive and reliable cues to deception, we question the validity of using artificial intelligence that relies on such cues, which have no current empirical support.

{ Security Journal | Continue reading }

Surveiller et punir

imp-kerr-ecstasy.jpg

Paul Hildreth peered at a display of dozens of images from security cameras surveying his Atlanta school district and settled on one showing a woman in a bright yellow shirt walking a hallway.

A mouse click instructed the artificial-intelligence-equipped system to find other images of the woman, and it immediately stitched them into a video narrative of her immediate location, where she had been and where she was going.

There was no threat, but Hildreth’s demonstration showed what’s possible with AI-powered cameras. If a gunman were in one of his schools, the cameras could quickly identify the shooter’s location and movements, allowing police to end the threat as soon as possible, said Hildreth, emergency operations coordinator for Fulton County Schools.

AI is transforming surveillance cameras from passive sentries into active observers that can identify people, suspicious behavior and guns, amassing large amounts of data that help them learn over time to recognize mannerisms, gait and dress. If the cameras have a previously captured image of someone who is banned from a building, the system can immediately alert officials if the person returns.

{ LA Times | Continue reading }
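The “find other images of the woman” step is essentially person re-identification: each detection is reduced to an embedding vector, detections whose vectors are close are presumed to show the same person, and the matches are ordered in time to form the movement narrative. The article does not say what model the district’s vendor uses, so the snippet below is a generic sketch; the record layout and the similarity threshold are assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two appearance embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def track_person(query_embedding: np.ndarray,
                 detections: list,
                 threshold: float = 0.8) -> list:
    """Return detections (dicts with 'camera', 'timestamp', 'embedding') that
    likely show the same person, sorted by time so they read as a path."""
    matches = [d for d in detections
               if cosine_similarity(query_embedding, d["embedding"]) >= threshold]
    return sorted(matches, key=lambda d: d["timestamp"])
```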

installation sketch { ecstasy, 2018 }

‘The weak and ill-constituted shall perish: first principle of our philanthropy. And one shall help them to do so.’ –Nietzsche

5.jpg

Tesla is a car company whose stock trades like a tech company. Tesla might sell 400,000 cars this year. By contrast, Ford might sell 6 million, GM 8.5 million. Granted, the Tesla Model 3 looks and drives like a dream. But when you count salaries and overhead according to Tesla’s own quarterly statements, it costs more to make a Tesla than people are willing to pay for it. And that calculus includes the federal subsidies that will dry up on December 31 of this year. Ford is worth $35 billion and makes money on its cars. Tesla is worth $40 billion and doesn’t. How is this math possible?

Tesla’s stock trades at such a large multiple of its revenue because Musk has convinced shareholders that it’s not a car company, but an artificial-intelligence company that happens to use a fleet of 500,000 cars to collect and label data. It’s a clever sleight-of-hand, but it’s not fooling those who matter. As a fund manager on Wall Street once told me, “You’re not a hedge-fund manager until you’ve shorted Tesla at least once.” […]

We estimate that ninety percent of the startups in the autonomous-vehicle space today will not exist in five years. […] The big crunch is coming because, over the next year, all the major auto and trucking companies will decide on who will be the suppliers for their main production lines in 2022. This won’t be for full self-driving, but for something a little more modest if still vitally important: a car so safe it is incapable of crashing.

{ National Review | Continue reading }

Just a whisk brisk sly spry spink spank sprint of a thing theresomere, saultering

3.jpg

An artificial intelligence system should be recognised as the inventor of two ideas in patents filed on its behalf, a team of academics says.

The AI has designed interlocking food containers that are easy for robots to grasp and a warning light that flashes in a rhythm that is hard to ignore.

Patent offices insist innovations are attributed to humans - to avoid legal complications that would arise if corporate inventorship were recognised.

The academics say this is “outdated”.

{ BBC | Continue reading }

enamel on linen { Christopher Wool, Untitled, 2007 }

‘Consciousness is nature’s nightmare.’ –Cioran

29.jpg

Human-robot interaction in workplaces is a research area that remains largely unexplored.

In this paper, we present the results and analysis of a social experiment we conducted by introducing a humanoid robot (Nadine) into a collaborative social workplace.

The humanoid’s primary task was to function as a receptionist and provide general assistance to customers. The employees who interacted with Nadine were given over a month to get used to her capabilities, after which feedback was collected from the staff on her influence on their productivity, the affect experienced during interactions, and their views on social robots assisting with regular tasks.

Our results show that the use of social robots to assist with normal day-to-day tasks is received quite positively by co-workers, and that in the near future more capable humanoid social robots could be used in workplaces to assist with menial tasks.

{ PsyArXiv | Continue reading }

related { Is an Army of Robots Marching on Chinese Jobs? }

art { Hajime Sorayama }

Facebook algorithm can recognise people in photographs even when it can’t see their faces

25.jpg

In Shenzhen, the local subway operator is testing various advanced technologies backed by the ultra-fast 5G network, including facial-recognition ticketing.

At the Futian station, instead of presenting a ticket or scanning a QR bar code on their smartphones, commuters can scan their faces on a tablet-sized screen mounted on the entrance gate and have the fare automatically deducted from their linked accounts. […]

Consumers can already pay for fried chicken at KFC in China with its “Smile to Pay” facial recognition system, first introduced at an outlet in Hangzhou in January 2017. […]

Chinese cities are among the most digitally savvy and cashless in the world, with about 583 million people in China using their smartphones to make payments last year, according to the China Internet Network Information Center. Nearly 68 per cent of China’s internet users used a mobile wallet for their offline payments.

{ South China Morning Post | Continue reading }
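Mechanically, “scan your face and have the fare deducted” is a one-to-many identification step followed by a ledger debit. The Shenzhen operator has not published its pipeline, so everything in the sketch below, from the enrolled-face gallery to the flat fare, is a hypothetical placeholder.

```python
import numpy as np

FARE = 3.0  # hypothetical flat fare; real fares depend on the trip

# Hypothetical enrollment data: account id -> enrolled face embedding / balance
FACE_GALLERY: dict = {}
BALANCES: dict = {}

def identify(scan: np.ndarray, threshold: float = 0.85):
    """Find the enrolled account whose embedding best matches the gate scan."""
    best_id, best_sim = None, threshold
    for account_id, enrolled in FACE_GALLERY.items():
        sim = float(np.dot(scan, enrolled) /
                    (np.linalg.norm(scan) * np.linalg.norm(enrolled)))
        if sim > best_sim:
            best_id, best_sim = account_id, sim
    return best_id

def open_gate(scan: np.ndarray) -> bool:
    """Admit the commuter and deduct the fare from the linked account."""
    account = identify(scan)
    if account is None or BALANCES.get(account, 0.0) < FARE:
        return False  # unrecognized face or insufficient balance
    BALANCES[account] -= FARE
    return True
```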

photo { The Collection of the Australian National Maritime Museum }

I am the Nightrider. I’m a fuel injected suicide machine.

38.jpg

After one too many snowstorms, Boston tech executive Larry Kim had had it with shoveling out his car and struggling to find parking. So in 2014 he ditched his Infiniti luxury sedan and began commuting by Uber and Lyft—at an annual cost of as much as $20,000. “I would never go back to owning a car,” says Kim […]

Auto sales in the U.S., after four record or near-record years, are declining this year, and analysts say they may never again reach those heights. […] IHS sees the biggest impact of mobility services coming in China. Auto sales there plunged 18 percent in January, an unprecedented seventh consecutive monthly decline, as commuters rapidly embraced ride-hailing. Last year, 550 million Chinese took 10 billion rides with the Didi ride-hailing service. That’s twice as many rides as Uber provided globally in 2018. “Increasing numbers of Chinese are opting for mobility as a service over car ownership,” wrote Michael Dunne, CEO of automotive researcher ZoZo Go. […]

Replacing a taxi driver with a robot cuts 60 percent from a ride’s cost, making travel in a driverless cab much cheaper than driving your own car.

{ Bloomberg | Continue reading }
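The 60 percent figure is doing simple arithmetic work: strip out driver labor and a robotaxi ride can, on paper, undercut the per-mile cost of owning a car. The figures below are illustrative assumptions, not Bloomberg’s data; only the 60 percent reduction comes from the article.

```python
# Illustrative per-mile costs (assumptions, not Bloomberg's figures)
ride_hail_cost_per_mile = 1.60                          # conventional ride with a driver
robotaxi_cost_per_mile = ride_hail_cost_per_mile * 0.4  # "cuts 60 percent from a ride's cost"
ownership_cost_per_mile = 0.70                          # fuel, depreciation, insurance, parking

print(f"robotaxi: ${robotaxi_cost_per_mile:.2f}/mile, "
      f"owning a car: ${ownership_cost_per_mile:.2f}/mile")
# Under these assumed numbers the driverless ride comes out cheaper per mile,
# which is the comparison the article is gesturing at.
```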

Not a soul but ourselves

41.jpg

[I]nside of a Google server or a Facebook server is a little voodoo doll, avatar-like version of you […] All I have to do is simulate what conversation the voodoo doll is having, and I know the conversation you just had without having to listen to the microphone.

{ Quartz | Continue reading }

…a phenomenon privacy advocates have long referred to as the “if you build it, they will come” principle — anytime a technology company creates a system that could be used in surveillance, law enforcement inevitably comes knocking. Sensorvault, according to Google employees, includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.

The new orders, sometimes called “geofence” warrants, specify an area and a time period, and Google gathers information from Sensorvault about the devices that were there. It labels them with anonymous ID numbers, and detectives look at locations and movement patterns to see if any appear relevant to the crime. Once they narrow the field to a few devices they think belong to suspects or witnesses, Google reveals the users’ names and other information. […]

Google uses the data to power advertising tailored to a person’s location, part of a more than $20 billion market for location-based ads last year.

{ NY Times | Continue reading }
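Functionally, a geofence warrant is a spatial-and-temporal filter over an enormous location log, followed by a de-anonymization step for the handful of IDs investigators flag. The sketch below illustrates that two-stage flow in generic terms; the record layout and function names are assumptions, not Google’s Sensorvault schema.

```python
from datetime import datetime

# Hypothetical record layout (Sensorvault's actual schema is not public):
# {"anon_id": str, "lat": float, "lon": float, "time": datetime}

def geofence_query(records, lat_min, lat_max, lon_min, lon_max,
                   start: datetime, end: datetime):
    """Stage 1: anonymized IDs of devices seen inside the area and time window."""
    return {r["anon_id"] for r in records
            if lat_min <= r["lat"] <= lat_max
            and lon_min <= r["lon"] <= lon_max
            and start <= r["time"] <= end}

def unmask(flagged_ids, account_index):
    """Stage 2: only for the IDs detectives narrow down to, map anonymous IDs
    back to account holders, the point at which names are revealed."""
    return {anon_id: account_index.get(anon_id) for anon_id in flagged_ids}
```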