spy & security

typography can save the world just kidding

22.jpg

Google is engaged with one of the country’s largest health-care systems to collect and crunch the detailed personal health information of millions of Americans across 21 states.

The initiative, code-named “Project Nightingale,” appears to be the biggest in a series of efforts by Silicon Valley giants to gain access to personal health data and establish a toehold in the massive health-care industry. […] Google began the effort in secret last year with St. Louis-based Ascension, the second-largest health system in the U.S., with the data sharing accelerating since summer, the documents show.

The data involved in Project Nightingale encompasses lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.

Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and documents.

Some Ascension employees have raised questions about the way the data is being collected and shared, both from a technological and ethical perspective, according to the people familiar with the project. But privacy experts said it appeared to be permissible under federal law. That law, the Health Insurance Portability and Accountability Act of 1996, generally allows hospitals to share data with business partners without telling patients, as long as the information is used “only to help the covered entity carry out its health care functions.”

Google in this case is using the data, in part, to design new software, underpinned by advanced artificial intelligence and machine learning, that zeroes in on individual patients to suggest changes to their care.

{ Wall Street Journal | Continue reading }

oil on panel { Mark Ryden, Incarnation, 2009 | Work in progress of the intricate frame for Mark Ryden’s painting Incarnation }

Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move

Founded in 2004 by Peter Thiel and some fellow PayPal alumni, Palantir cut its teeth working for the Pentagon and the CIA in Afghanistan and Iraq. The company’s engineers and products don’t do any spying themselves; they’re more like a spy’s brain, collecting and analyzing information that’s fed in from the hands, eyes, nose, and ears. The software combs through disparate data sources—financial documents, airline reservations, cellphone records, social media postings—and searches for connections that human analysts might miss. It then presents the linkages in colorful, easy-to-interpret graphics that look like spider webs. U.S. spies and special forces loved it immediately; they deployed Palantir to synthesize and sort the blizzard of battlefield intelligence. It helped planners avoid roadside bombs, track insurgents for assassination, even hunt down Osama bin Laden. The military success led to federal contracts on the civilian side. The U.S. Department of Health and Human Services uses Palantir to detect Medicare fraud. The FBI uses it in criminal probes. The Department of Homeland Security deploys it to screen air travelers and keep tabs on immigrants.

Police and sheriff’s departments in New York, New Orleans, Chicago, and Los Angeles have also used it, frequently ensnaring in the digital dragnet people who aren’t suspected of committing any crime. People and objects pop up on the Palantir screen inside boxes connected to other boxes by radiating lines labeled with the relationship: “Colleague of,” “Lives with,” “Operator of [cell number],” “Owner of [vehicle],” “Sibling of,” even “Lover of.” If the authorities have a picture, the rest is easy. Tapping databases of driver’s license and ID photos, law enforcement agencies can now identify more than half the population of U.S. adults. […]
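The "boxes connected by labeled lines" the article describes is, at bottom, a labeled graph over entities. A minimal sketch of that idea (all names and relationships below are invented examples, not anything from Palantir's actual product):

```python
from collections import deque

def add_link(graph, a, relation, b):
    """Record a labeled, bidirectional link between two entities."""
    graph.setdefault(a, []).append((relation, b))
    graph.setdefault(b, []).append((relation, a))

def find_connection(graph, start, target):
    """Breadth-first search for the shortest chain of links between
    two entities -- the kind of connection a human analyst might miss."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for relation, nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

graph = {}
add_link(graph, "Alice", "Colleague of", "Bob")
add_link(graph, "Bob", "Lives with", "Carol")
add_link(graph, "Carol", "Owner of", "Vehicle 7GX-112")
print(find_connection(graph, "Alice", "Vehicle 7GX-112"))
# ['Alice', 'Bob', 'Carol', 'Vehicle 7GX-112']
```

Once disparate records are merged into one such graph, a two-hop chain from a person to a vehicle falls out of a routine search, which is why people never suspected of a crime end up in the dragnet.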

In March a former computer engineer for Cambridge Analytica, the political consulting firm that worked for Donald Trump’s 2016 presidential campaign, testified in the British Parliament that a Palantir employee had helped Cambridge Analytica use the personal data of up to 87 million Facebook users to develop psychographic profiles of individual voters. […] The employee, Palantir said, worked with Cambridge Analytica on his own time. […]

Legend has it that Stephen Cohen, one of Thiel’s co-founders, programmed the initial prototype for Palantir’s software in two weeks. It took years, however, to coax customers away from the longtime leader in the intelligence analytics market, a software company called I2 Inc.

In one adventure missing from the glowing accounts of Palantir’s early rise, I2 accused Palantir of misappropriating its intellectual property through a Florida shell company registered to the family of a Palantir executive. A company claiming to be a private eye firm had been licensing I2 software and development tools and spiriting them to Palantir for more than four years. I2 said the cutout was registered to the family of Shyam Sankar, Palantir’s director of business development.

I2 sued Palantir in federal court, alleging fraud, conspiracy, and copyright infringement. […] Palantir agreed to pay I2 about $10 million to settle the suit. […]

Sankar, Palantir employee No. 13 and now one of the company’s top executives, also showed up in another Palantir scandal: the company’s 2010 proposal for the U.S. Chamber of Commerce to run a secret sabotage campaign against the group’s liberal opponents. Hacked emails released by the group Anonymous indicated that Palantir and two other defense contractors pitched outside lawyers for the organization on a plan to snoop on the families of progressive activists, create fake identities to infiltrate left-leaning groups, scrape social media with bots, and plant false information with liberal groups to subsequently discredit them.

After the emails emerged in the press, Palantir offered an explanation similar to the one it provided in March for its U.K.-based employee’s assistance to Cambridge Analytica: It was the work of a single rogue employee.

{ Bloomberg | Continue reading }

Police databases now feature the faces of nearly half of Americans — most of whom have no idea their image is there

{ NY Times | full story }

‘Je ne parlerai pas, je ne penserai rien : mais l’amour infini me montera dans l’âme.’ —Rimbaud

45.jpg

The ads you see online are based on the sites, searches, or Facebook posts that get your interest. Some rebels therefore throw a wrench into the machinery — by demonstrating phony interests.

“Every once in a while, I Google something completely nutty just to mess with their algorithm,” wrote Shaun Breidbart. “You’d be surprised what sort of coupons CVS prints for me on the bottom of my receipt. They are clearly confused about both my age and my gender.”

[…]

“You never want to tell Facebook where you were born and your date of birth. That’s 98 percent of someone stealing your identity! And don’t use a straight-on photo of yourself — like a passport photo, driver’s license, graduation photo — that someone can use on a fake ID.”

[…]

“Create a different email address for every service you use”
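One low-effort way to follow this advice is "plus addressing," which Gmail and some other providers support: mail to `user+tag@example.com` still lands in `user@example.com`, but the tag reveals which service leaked or sold the address. A small sketch (the address and service name are invented; not all providers support plus addressing, and a determined data broker can strip the `+tag`):

```python
def service_alias(base_address, service):
    """Derive a per-service 'plus address' from a base mailbox address."""
    local, domain = base_address.split("@", 1)
    # keep only letters and digits so the tag stays a valid local-part
    tag = "".join(c for c in service.lower() if c.isalnum())
    return f"{local}+{tag}@{domain}"

print(service_alias("jane.doe@example.com", "Shop-O-Rama"))
# jane.doe+shoporama@example.com
```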

[…]

“Oh yeah — and don’t use Facebook.”

{ NY Times | Continue reading }

Miss Yiss, you fascinator, you

Japanese idol Ena Matsuoka was attacked outside her home last month after a fan figured out her address from selfies she posted on social media — just by zooming in on the reflection on her pupils.

The fan, Hibiki Sato, 26, managed to identify a bus stop and the surrounding scenery from the reflection on Matsuoka’s eyes and matched them to a street using Google Maps.

{ Asia One | Continue reading }

Tokyo Shimbun, a metropolitan daily, which reported on the stalking case, warned readers that even casual selfies may show surrounding buildings that will allow people to identify the location of the photos.

It also said people shouldn’t make the V-sign with their hand, which Japanese often do in photos, because fingerprints could be stolen.

{ USA Today | Continue reading }

‘McDonald’s removed the mcrib from its menu so it could suck its own dick’ –@jaynooch

41.jpg

iBorderCtrl is an AI-based lie detector project funded by the European Union’s Horizon 2020. The tool will be used on people crossing borders of some European countries. It officially enables faster border control. It will be tested in Hungary, Greece and Latvia until August 2019 and should then be officially deployed.

The project will analyze facial micro-expressions to detect lies. We have real worries about such a project. To those without any background in AI or computer science, the idea of using a computer to detect lies can sound appealing: computers are believed to be totally objective.

But the AI community knows this is far from true: biases are nearly omnipresent. We have no idea how the dataset used by iBorderCtrl was built.

More broadly, we have to remember that AI has no understanding of humans (to be honest, it has no understanding at all). It is only beginning to be able to recognize the words we pronounce, and it does not understand their meaning.

Lies rely on complex psychological mechanisms. Detecting them would require far more than a simple literal understanding. Trying to detect them from a few key facial expressions looks utopian, especially as facial expressions vary from one culture to another. For example, nodding the head usually means “yes” in the Western world, but it means “no” in countries such as Greece, Bulgaria and Turkey.

{ ActuIA | Continue reading }

The ‘iBorderCtrl’ AI system uses a variety of ‘at home’ pre-registration systems and real time ‘at the airport’ automatic deception detection systems. Some of the critical methods used in automated deception detection are that of micro-expressions. In this opinion article, we argue that considering the psychological sciences’ current understanding of micro-expressions and their associations with deception, such in vivo testing is naïve and misinformed. We consider the lack of empirical research that supports the use of micro-expressions in the detection of deception and question the current understanding of the validity of specific cues to deception. With no clear, definitive and reliable cues to deception, we question the validity of using artificial intelligence that relies on such cues, which have no current empirical support.

{ Security Journal | Continue reading }

Can’t hear with the waters of. The chittering waters of. Flittering bats, fieldmice bawk talk.

The U.S. government is in the midst of forcing a standoff with China over the global deployment of Huawei’s 5G wireless networks around the world. […] This conflict is perhaps the clearest acknowledgement we’re likely to see that our own government knows how much control of communications networks really matters, and our inability to secure communications on these networks could really hurt us.

{ Cryptography Engineering | Continue reading }

related { Why Controlling 5G Could Mean Controlling the World }

Surveiller et punir

imp-kerr-ecstasy.jpg

Paul Hildreth peered at a display of dozens of images from security cameras surveying his Atlanta school district and settled on one showing a woman in a bright yellow shirt walking a hallway.

A mouse click instructed the artificial-intelligence-equipped system to find other images of the woman, and it immediately stitched them into a video narrative of her immediate location, where she had been and where she was going.

There was no threat, but Hildreth’s demonstration showed what’s possible with AI-powered cameras. If a gunman were in one of his schools, the cameras could quickly identify the shooter’s location and movements, allowing police to end the threat as soon as possible, said Hildreth, emergency operations coordinator for Fulton County Schools.

AI is transforming surveillance cameras from passive sentries into active observers that can identify people, suspicious behavior and guns, amassing large amounts of data that help them learn over time to recognize mannerisms, gait and dress. If the cameras have a previously captured image of someone who is banned from a building, the system can immediately alert officials if the person returns.
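The re-identification step described above — matching a fresh capture against previously stored images — typically reduces each face to a numeric embedding and compares by similarity. A toy sketch of that matching logic (the 4-dimensional vectors and the threshold are invented stand-ins; real systems use embeddings with hundreds of dimensions produced by a trained network):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def check_watchlist(embedding, watchlist, threshold=0.95):
    """Return the closest watchlist entry above the threshold, else None."""
    best_name, best_score = None, threshold
    for name, ref in watchlist.items():
        score = cosine_similarity(embedding, ref)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

watchlist = {"banned_visitor_17": [0.9, 0.1, 0.3, 0.4]}
capture = [0.88, 0.12, 0.31, 0.40]
print(check_watchlist(capture, watchlist))
# banned_visitor_17
```

The alert the article mentions is just this comparison run against every new frame; the interesting policy questions are who populates the watchlist and where the threshold is set.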

{ LA Times | Continue reading }

installation sketch { ecstasy, 2018 }

‘No, everything stays, doesn’t it? Everything.’ –Flaubert

31.jpg

Before you hand over your number, ask yourself: Is it worth the risk? […]

Your phone number may have now become an even stronger identifier than your full name. I recently found this out firsthand when I asked Fyde, a mobile security firm in Palo Alto, Calif., to use my digits to demonstrate the potential risks of sharing a phone number.

Fyde’s researcher, Mr. Tezisci, quickly plugged my cellphone number into a public records directory. Soon, he had a full dossier on me — including my name and birth date, my address, the property taxes I pay and the names of members of my family.

From there, it could have easily gotten worse. Mr. Tezisci could have used that information to try to answer security questions to break into my online accounts. Or he could have targeted my family and me with sophisticated phishing attacks.

{ NY Times | Continue reading }

image { Bell telephone magazine, March/April 1971 }

Ces dames préfèrent le mambo

212.jpg

Behavioural patterns of Londoners going about their daily business are being tracked and recorded on an unprecedented scale, internet expert Ben Green warns. […]

Large-scale London data-collection projects include on-street free Wi-Fi beamed from special kiosks, smart bins, police facial recognition and soon 5G transmitters embedded in lamp posts.

Transport for London announced this week they would track, collect and analyse movements of commuters around 260 Tube stations starting from July by using mobile Wi-Fi data and device MAC addresses to help improve journeys. Customers can opt out by turning off their Wi-Fi. 
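Systems like this usually avoid storing raw MAC addresses by pseudonymising them, commonly with a keyed hash, so the same device is recognisable across stations without the identifier itself being kept. A sketch of that approach (the key and MAC below are invented, and this is a generic de-identification technique, not TfL's published pipeline):

```python
import hashlib
import hmac

# Hypothetical secret; rotating it periodically limits long-term tracking.
SECRET_KEY = b"rotate-this-key-daily"

def pseudonymise(mac_address):
    """Map a Wi-Fi MAC address to a stable pseudonym via a keyed hash,
    so journeys can be linked without storing the raw address."""
    mac = mac_address.lower().replace(":", "").replace("-", "")
    return hmac.new(SECRET_KEY, mac.encode(), hashlib.sha256).hexdigest()[:16]

a = pseudonymise("AA:BB:CC:DD:EE:FF")
b = pseudonymise("aa-bb-cc-dd-ee-ff")
print(a == b)  # same device, same pseudonym regardless of formatting
```

Note the privacy limit: as long as the key is kept, the pseudonym is still a persistent identifier for the device — which is why turning Wi-Fi off is the only real opt-out.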

{ Standard | Continue reading }

previously { The Business of Selling Your Location }

art { Poster for Autechre by the Designers Republic, 2016 }

Facebook algorithm can recognise people in photographs even when it can’t see their faces

25.jpg

In Shenzhen, the local subway operator is testing various advanced technologies backed by the ultra-fast 5G network, including facial-recognition ticketing.

At the Futian station, instead of presenting a ticket or scanning a QR bar code on their smartphones, commuters can scan their faces on a tablet-sized screen mounted on the entrance gate and have the fare automatically deducted from their linked accounts. […]

Consumers can already pay for fried chicken at KFC in China with its “Smile to Pay” facial recognition system, first introduced at an outlet in Hangzhou in January 2017. […]

Chinese cities are among the most digitally savvy and cashless in the world, with about 583 million people using their smartphones to make payments in China last year, according to the China Internet Network Information Center. Nearly 68 per cent of China’s internet users used a mobile wallet for their offline payments.

{ South China Morning Post | Continue reading }

photo { The Collection of the Australian National Maritime Museum }

the moyles and moyles of it

21.jpg

Products developed by companies such as Activtrak allow employers to track which websites staff visit, how long they spend on sites deemed “unproductive” and set alarms triggered by content considered dangerous. […]

To quantify productivity, “profiles” of employee behaviour — which can be as granular as mapping an individual’s daily activity — are generated from “vast” amounts of data. […]

If combined with personal details, such as someone’s age and sex, the data could allow employers to develop a nuanced picture of ideal employees, choose whom they considered most useful and help with promotion and firing decisions. […]

Some technology, including Teramind’s and Activtrak’s, permits employers to take periodic computer screenshots or screen-videos — either with employees’ knowledge or in “stealth” mode — and use AI to assess what it captures.

Depending on the employer’s settings, screenshot analysis can alert them to things like violent content or time spent on LinkedIn job adverts. 

But screenshots could also include the details of private messages, social media activity or credit card details in ecommerce checkouts, which would then all be saved to the employer’s database. […]
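At its core, the "profile" such products build is an aggregation of time per site plus a category list and an alarm threshold. A minimal sketch of that logic (the site categories, threshold, and browsing log are invented examples, not Activtrak's or Teramind's actual rules):

```python
# Sites an employer has deemed "unproductive" -- a policy choice, not a fact.
UNPRODUCTIVE = {"socialsite.example", "videos.example"}

def profile(events, alarm_minutes=30):
    """Sum minutes per site from (site, minutes) events and flag the
    employee if 'unproductive' time crosses the alarm threshold."""
    per_site = {}
    for site, minutes in events:
        per_site[site] = per_site.get(site, 0) + minutes
    wasted = sum(m for s, m in per_site.items() if s in UNPRODUCTIVE)
    return per_site, wasted, wasted >= alarm_minutes

events = [("intranet.example", 90), ("socialsite.example", 25),
          ("videos.example", 10), ("socialsite.example", 5)]
per_site, wasted, alarm = profile(events)
print(wasted, alarm)
# 40 True
```

Everything contentious in the article — what counts as unproductive, how granular the log is, whether screenshots feed it — lives in the inputs to a function this simple.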

Meanwhile, smart assistants, such as Amazon’s Alexa for Business, are being introduced into workplaces, but it is unclear how much of office life the devices might record, or what records employers might be able to access.

{ Financial Times | Continue reading }

Google uses Gmail to track a history of things you buy. […] Google says it doesn’t use this information to sell you ads.

{ CNBC | Continue reading }

unrelated { Navy Seal’s lawyers received emails embedded with tracking software }

photo { Philip-Lorca diCorcia, Paris, 1996 }