spy & security

Police databases now feature the faces of nearly half of Americans — most of whom have no idea their image is there

{ NY Times | full story }

‘Je ne parlerai pas, je ne penserai rien : mais l’amour infini me montera dans l’âme.’ —Rimbaud


The ads you see online are based on the sites, searches, or Facebook posts that get your interest. Some rebels therefore throw a wrench into the machinery — by demonstrating phony interests.

“Every once in a while, I Google something completely nutty just to mess with their algorithm,” wrote Shaun Breidbart. “You’d be surprised what sort of coupons CVS prints for me on the bottom of my receipt. They are clearly confused about both my age and my gender.”

[…]

“You never want to tell Facebook where you were born and your date of birth. That’s 98 percent of someone stealing your identity! And don’t use a straight-on photo of yourself — like a passport photo, driver’s license, graduation photo — that someone can use on a fake ID.”

[…]

“Create a different email address for every service you use”

[…]

“Oh yeah — and don’t use Facebook.”

{ NY Times | Continue reading }

Miss Yiss, you fascinator, you

Japanese idol Ena Matsuoka was attacked outside her home last month after a fan figured out her address from selfies she posted on social media — just by zooming in on the reflections in her pupils.

The fan, Hibiki Sato, 26, managed to identify a bus stop and the surrounding scenery from the reflection in Matsuoka’s eyes and matched them to a street using Google Maps.

{ Asia One | Continue reading }

Tokyo Shimbun, a metropolitan daily that reported on the stalking case, warned readers that even casual selfies may show surrounding buildings that allow people to identify where a photo was taken.

It also said people shouldn’t make the V-sign with their hand, which Japanese often do in photos, because fingerprints could be copied from the image.

{ USA Today | Continue reading }

‘McDonald’s removed the mcrib from its menu so it could suck its own dick’ –@jaynooch


iBorderCtrl is an AI-based lie-detector project funded by the European Union’s Horizon 2020 programme. The tool will be used on people crossing the borders of some European countries; its official purpose is to enable faster border control. It will be tested in Hungary, Greece and Latvia until August 2019 and should then be officially deployed.

The project will analyze facial micro-expressions to detect lies. We have serious worries about such a project. To those with no background in AI or computer science, the idea of using a computer to detect lies can sound appealing: computers are believed to be totally objective.

But the AI community knows this is far from true: biases are nearly omnipresent, and we have no idea how the dataset used by iBorderCtrl was built.

More broadly, we should remember that AI has no understanding of humans (to be honest, it has no understanding at all). It is only beginning to be able to recognize the words we speak, and it does not understand their meaning.

Lies rely on complex psychological mechanisms, and detecting them would require far more than a literal understanding of speech. Trying to detect them from a few key facial expressions looks utopian, especially as facial expressions vary from one culture to another. For example, nodding the head usually means “yes” in the Western world, but it means “no” in countries such as Greece, Bulgaria and Turkey.

{ ActuIA | Continue reading }

The ‘iBorderCtrl’ AI system uses a variety of ‘at home’ pre-registration systems and real-time ‘at the airport’ automatic deception-detection systems. Some of the critical methods used in automated deception detection involve micro-expressions. In this opinion article, we argue that, given the psychological sciences’ current understanding of micro-expressions and their association with deception, such in vivo testing is naïve and misinformed. We consider the lack of empirical research supporting the use of micro-expressions in the detection of deception and question the current understanding of the validity of specific cues to deception. With no clear, definitive and reliable cues to deception, we question the validity of using artificial intelligence built on such cues, which have no current empirical support.

{ Security Journal | Continue reading }

Can’t hear with the waters of. The chittering waters of. Flittering bats, fieldmice bawk talk.

The U.S. government is in the midst of forcing a standoff with China over the global deployment of Huawei’s 5G wireless networks around the world. […] This conflict is perhaps the clearest acknowledgement we’re likely to see that our own government knows how much control of communications networks really matters, and our inability to secure communications on these networks could really hurt us.

{ Cryptography Engineering | Continue reading }

related { Why Controlling 5G Could Mean Controlling the World }

Surveiller et punir


Paul Hildreth peered at a display of dozens of images from security cameras surveying his Atlanta school district and settled on one showing a woman in a bright yellow shirt walking a hallway.

A mouse click instructed the artificial-intelligence-equipped system to find other images of the woman, and it immediately stitched them into a video narrative of her immediate location, where she had been and where she was going.

There was no threat, but Hildreth’s demonstration showed what’s possible with AI-powered cameras. If a gunman were in one of his schools, the cameras could quickly identify the shooter’s location and movements, allowing police to end the threat as soon as possible, said Hildreth, emergency operations coordinator for Fulton County Schools.

AI is transforming surveillance cameras from passive sentries into active observers that can identify people, suspicious behavior and guns, amassing large amounts of data that help them learn over time to recognize mannerisms, gait and dress. If the cameras have a previously captured image of someone who is banned from a building, the system can immediately alert officials if the person returns.

{ LA Times | Continue reading }
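The watchlist alerting described above boils down to nearest-neighbour matching on face embeddings. A minimal sketch, assuming hypothetical 3-dimensional embeddings and a made-up threshold (real systems use learned embeddings with hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def check_watchlist(embedding, watchlist, threshold=0.9):
    """Return the watchlist entry whose stored embedding is most similar
    to the new capture, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, stored in watchlist.items():
        score = cosine(embedding, stored)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# hypothetical stored embedding for a banned person
watchlist = {"banned_visitor_17": [0.12, 0.98, 0.05]}
print(check_watchlist([0.10, 0.99, 0.07], watchlist))  # close match -> alert
```

The threshold is the whole game: set it low and the system floods staff with false alerts; set it high and a banned person with a hat and sunglasses walks through.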

installation sketch { ecstasy, 2018 }

‘No, everything stays, doesn’t it? Everything.’ –Flaubert


Before you hand over your number, ask yourself: Is it worth the risk? […]

Your phone number may have now become an even stronger identifier than your full name. I recently found this out firsthand when I asked Fyde, a mobile security firm in Palo Alto, Calif., to use my digits to demonstrate the potential risks of sharing a phone number.

A researcher there, Mr. Tezisci, quickly plugged my cellphone number into a public records directory. Soon, he had a full dossier on me — including my name and birth date, my address, the property taxes I pay and the names of members of my family.

From there, it could have easily gotten worse. Mr. Tezisci could have used that information to try to answer security questions to break into my online accounts. Or he could have targeted my family and me with sophisticated phishing attacks.

{ NY Times | Continue reading }

image { Bell telephone magazine, March/April 1971 }

Ces dames préfèrent le mambo


Behavioural patterns of Londoners going about their daily business are being tracked and recorded on an unprecedented scale, internet expert Ben Green warns. […]

Large-scale London data-collection projects include on-street free Wi-Fi beamed from special kiosks, smart bins, police facial recognition and soon 5G transmitters embedded in lamp posts.

Transport for London announced this week they would track, collect and analyse movements of commuters around 260 Tube stations starting from July by using mobile Wi-Fi data and device MAC addresses to help improve journeys. Customers can opt out by turning off their Wi-Fi. 

{ Standard | Continue reading }
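Schemes like TfL’s typically hash the collected MAC addresses rather than store them raw. A minimal sketch of that kind of pseudonymisation — the salt and station names below are invented, and the actual TfL pipeline is not public:

```python
import hashlib

DAILY_SALT = b"rotated-secret"  # hypothetical salt; rotating it limits long-term linkability

def pseudonymise(mac: str) -> str:
    """Replace a device MAC address with a salted hash so journeys can be
    counted without storing the raw hardware identifier."""
    return hashlib.sha256(DAILY_SALT + mac.lower().encode()).hexdigest()[:16]

# the same device yields the same pseudonym, so station-to-station journeys
# can still be linked together within one salt period
sightings = [
    ("Oxford Circus", "AA:BB:CC:11:22:33"),
    ("Bank", "AA:BB:CC:11:22:33"),
]
journeys = {}
for station, mac in sightings:
    journeys.setdefault(pseudonymise(mac), []).append(station)
print(journeys)
```

Note what pseudonymisation does and doesn’t do here: the raw MAC is gone, but every journey made by one device in a salt period is still linkable — which is exactly the property the privacy complaints are about.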

previously { The Business of Selling Your Location }

art { Poster for Autechre by the Designers Republic, 2016 }

Facebook algorithm can recognise people in photographs even when it can’t see their faces


In Shenzhen, the local subway operator is testing various advanced technologies backed by the ultra-fast 5G network, including facial-recognition ticketing.

At the Futian station, instead of presenting a ticket or scanning a QR bar code on their smartphones, commuters can scan their faces on a tablet-sized screen mounted on the entrance gate and have the fare automatically deducted from their linked accounts. […]

Consumers can already pay for fried chicken at KFC in China with its “Smile to Pay” facial recognition system, first introduced at an outlet in Hangzhou in January 2017. […]

Chinese cities are among the most digitally savvy and cashless in the world, with about 583 million people using their smartphones to make payments in China last year, according to the China Internet Network Information Center. Nearly 68 per cent of China’s internet users used a mobile wallet for their offline payments.

{ South China Morning Post | Continue reading }

photo { The Collection of the Australian National Maritime Museum }

the moyles and moyles of it


Products developed by companies such as Activtrak allow employers to track which websites staff visit, how long they spend on sites deemed “unproductive” and set alarms triggered by content considered dangerous. […]

To quantify productivity, “profiles” of employee behaviour — which can be as granular as mapping an individual’s daily activity — are generated from “vast” amounts of data. […]

If combined with personal details, such as someone’s age and sex, the data could allow employers to develop a nuanced picture of ideal employees, choose whom they considered most useful and help with promotion and firing decisions. […]

Some technology, including Teramind’s and Activtrak’s, permits employers to take periodic computer screenshots or screen-videos — either with employees’ knowledge or in “stealth” mode — and use AI to assess what it captures.

Depending on the employer’s settings, screenshot analysis can alert them to things like violent content or time spent on LinkedIn job adverts. 

But screenshots could also include the details of private messages, social media activity or credit card details in ecommerce checkouts, which would then all be saved to the employer’s database. […]

Meanwhile, smart assistants, such as Amazon’s Alexa for Business, are being introduced into workplaces, but it is unclear how much of office life the devices might record, or what records employers might be able to access.

{ Financial Times | Continue reading }
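The “profile” building described above is, at its core, aggregation of logged browsing events. A minimal sketch with hypothetical site names and an employer-defined “unproductive” list (real products such as Activtrak layer screenshots and classification on top):

```python
from collections import defaultdict

UNPRODUCTIVE = {"social.example.com"}  # hypothetical employer-defined list

def build_profile(events):
    """Aggregate (site, seconds) browsing events into a per-site profile and
    total the time spent on sites the employer has deemed unproductive."""
    profile = defaultdict(int)
    for site, seconds in events:
        profile[site] += seconds
    flagged = sum(s for site, s in profile.items() if site in UNPRODUCTIVE)
    return dict(profile), flagged

events = [("docs.example.com", 1800),
          ("social.example.com", 600),
          ("social.example.com", 300)]
profile, flagged = build_profile(events)
print(flagged)  # 900 seconds flagged
```

Even this toy version shows why the FT’s sources are uneasy: the same table, joined with age and sex, is all an employer needs to start ranking “ideal” employees.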

Google uses Gmail to track a history of things you buy. […] Google says it doesn’t use this information to sell you ads.

{ CNBC | Continue reading }

unrelated { Navy Seal’s lawyers received emails embedded with tracking software }

photo { Philip-Lorca diCorcia, Paris, 1996 }

Not a soul but ourselves


[I]nside of a Google server or a Facebook server is a little voodoo doll, avatar-like version of you […] All I have to do is simulate what conversation the voodoo doll is having, and I know the conversation you just had without having to listen to the microphone.

{ Quartz | Continue reading }

…a phenomenon privacy advocates have long referred to as the “if you build it, they will come” principle — anytime a technology company creates a system that could be used in surveillance, law enforcement inevitably comes knocking. Sensorvault, according to Google employees, includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.

The new orders, sometimes called “geofence” warrants, specify an area and a time period, and Google gathers information from Sensorvault about the devices that were there. It labels them with anonymous ID numbers, and detectives look at locations and movement patterns to see if any appear relevant to the crime. Once they narrow the field to a few devices they think belong to suspects or witnesses, Google reveals the users’ names and other information. […]

Google uses the data to power advertising tailored to a person’s location, part of a more than $20 billion market for location-based ads last year.

{ NY Times | Continue reading }
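The first, anonymised stage of a geofence warrant — find every device inside a bounding box during a time window — amounts to a filter over location records. The device IDs and coordinates below are invented:

```python
from datetime import datetime

# hypothetical location records: (anonymous_device_id, lat, lon, timestamp)
records = [
    ("device-001", 34.0522, -118.2437, datetime(2019, 4, 13, 21, 5)),
    ("device-002", 34.0525, -118.2440, datetime(2019, 4, 13, 21, 40)),
    ("device-003", 40.7128, -74.0060, datetime(2019, 4, 13, 21, 10)),
]

def geofence_query(records, lat_min, lat_max, lon_min, lon_max, start, end):
    """Return the anonymous IDs of devices seen inside the box during the
    window — mirroring the anonymised first stage of a geofence warrant."""
    return sorted({dev for dev, lat, lon, ts in records
                   if lat_min <= lat <= lat_max
                   and lon_min <= lon <= lon_max
                   and start <= ts <= end})

hits = geofence_query(records, 34.05, 34.06, -118.25, -118.24,
                      datetime(2019, 4, 13, 21, 0), datetime(2019, 4, 13, 22, 0))
print(hits)  # ['device-001', 'device-002']
```

The de-anonymisation step the Times describes — Google revealing names once detectives narrow the field — happens outside any query like this, which is why the IDs returned here are only nominally anonymous.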

The “panopticon” refers to an experimental laboratory of power in which behaviour could be modified


We’ve all been making some big choices, consciously or not, as advancing technology has transformed the real and virtual worlds. That phone in your pocket, the surveillance camera on the corner: You’ve traded away a bit of anonymity, of autonomy, for the usefulness of one, the protection of the other.

Many of these trade-offs were clearly worthwhile. But now the stakes are rising and the choices are growing more fraught. Is it O.K., for example, for an insurance company to ask you to wear a tracker to monitor whether you’re getting enough exercise, and set your rates accordingly? Would it concern you if police detectives felt free to collect your DNA from a discarded coffee cup, and to share your genetic code? What if your employer demanded access to all your digital activity, so that it could run that data through an algorithm to judge whether you’re trustworthy?

These sorts of things are already happening in the United States.

{ NY Times | Continue reading }