Canto III: The Gate of Hell


What technology, or potential technology, worries you the most?

In the nearer term I think various developments in biotechnology and synthetic biology are quite disconcerting. We are gaining the ability to create designer pathogens, and blueprints of various disease organisms are in the public domain: you can download the gene sequence for smallpox or the 1918 flu virus from the Internet. So far the ordinary person will only have a digital representation of it on their computer screen, but we’re also developing better and better DNA synthesis machines, which can take one of these digital blueprints as input and print out the actual DNA or RNA string. Soon they will become powerful enough to print out these kinds of viruses. So already there you have a kind of predictable risk, and then once you can start modifying these organisms in certain ways, there is a whole additional frontier of danger that you can foresee.

{ Interview with Nick Bostrom | Continue reading }

Death is a sickness


Quantum Archaeology (QA) is the controversial science of resurrecting the dead, including their memories. It assumes the universe is made of events and the laws that govern them, and seeks to map brain/body states up to the instant of death for everyone in history.

Anticipating process technologies due in 20–40 years, it involves construction of the Quantum Archaeology Grid to plot known events, filling the gaps by cross-referencing heuristically within the laws of science. Specialist grids already exist waiting to be merged, including cosmic ones with trillions of moving evolution points. The result will be a mega-matrix good enough to describe and simulate the past. Quantum computers and super-recursive algorithms, both in their infancy, may allow vast calculation into the quantum world, and artificial intelligence has no upper limit to what it might do.

{ Transhumanity | Continue reading }
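
The grid idea is easier to see in miniature: given sparse known states along a timeline, infer the missing ones from their neighbours. A toy Python sketch of that gap-filling step (the linear rule and the data are illustrative assumptions of mine; the grid the article envisions is incomparably more ambitious):

```python
# Toy version of "filling the gaps by cross-referencing": given sparse
# known values along a timeline, infer the missing ones from their
# neighbours. The linear rule and the data are illustrative assumptions.

def fill_gaps(timeline: list) -> list:
    """Replace None entries by interpolating between known neighbours."""
    filled = list(timeline)
    known = [i for i, v in enumerate(filled) if v is not None]
    if not known:
        raise ValueError("need at least one known event")
    for i, v in enumerate(filled):
        if v is not None:
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:               # nothing earlier: copy the right neighbour
            filled[i] = filled[right]
        elif right is None:            # nothing later: copy the left neighbour
            filled[i] = filled[left]
        else:                          # cross-reference both neighbours
            w = (i - left) / (right - left)
            filled[i] = filled[left] * (1 - w) + filled[right] * w
    return filled

# A sparse "grid" of measurements; None marks a lost record.
print(fill_gaps([1.0, None, None, 4.0, None, 6.0]))
# -> [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```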

photo { Erwin Olaf }

‘For I is another. If the brass wakes as a bugle, it is not its fault.’ –Rimbaud


All seems to indicate that the next decade, the 20s, will be the magic decade of the brain, with amazing science but also amazing applications. With the development of nanoscale neural probes and high-speed, two-way brain-computer interfaces (BCI), by the end of the next decade we may have our iPhones implanted in our brains and become a telepathic species. […]

Last month the New York Times revealed that the Obama Administration may soon seek billions of dollars from Congress for a Brain Activity Map (BAM) project. […] The project may be partly based on the paper “The Brain Activity Map Project and the Challenge of Functional Connectomics” (Neuron, June 2012) by six well-known neuroscientists. […]

A new paper “The Brain Activity Map” (Science, March 2013), written as an executive summary by the same six neuroscientists and five more, is more explicit: “The Brain Activity Map (BAM) could put neuroscientists in a position to understand how the brain produces perception, action, memories, thoughts, and consciousness… Within 5 years, it should be possible to monitor and/or to control tens of thousands of neurons, and by year 10 that number will increase at least 10-fold. By year 15, observing 1 million neurons with markedly reduced invasiveness should be possible. With 1 million neurons, scientists will be able to evaluate the function of the entire brain of the zebrafish or several areas from the cerebral cortex of the mouse. In parallel, we envision developing nanoscale neural probes that can locally acquire, process, and store accumulated data. Networks of “intelligent” nanosystems would be capable of providing specific responses to externally applied signals, or to their own readings of brain activity.”

{ IEET | Continue reading }
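
The quoted milestones trace a simple growth curve. A quick arithmetic check (taking 50,000 as a stand-in for “tens of thousands” and interpolating geometrically are assumptions of this sketch, not the paper’s):

```python
# Milestones stated in the Science paper: tens of thousands of neurons
# monitored at year 5, "at least 10-fold" more by year 10, and 1 million
# by year 15. Taking 50,000 for "tens of thousands" and interpolating
# geometrically between anchors are assumptions of this sketch.
milestones = [(5, 50_000), (10, 500_000), (15, 1_000_000)]

for (y0, n0), (y1, n1) in zip(milestones, milestones[1:]):
    annual = (n1 / n0) ** (1 / (y1 - y0))
    print(f"years {y0}-{y1}: ~{annual:.2f}x more neurons per year")
# years 5-10:  ~1.58x per year (the stated 10-fold over 5 years)
# years 10-15: ~1.15x per year (only 2x more to reach 1 million)
```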

photo { Adam Broomberg & Oliver Chanarin }

‘Between grief and nothing I will take grief.’ –Faulkner


There are good reasons for any species to think darkly of its own extinction. […]

Simple, single-celled life appeared early in Earth’s history. A few hundred million whirls around the newborn Sun were all it took to cool our planet and give it oceans, liquid laboratories that run trillions of chemical experiments per second. Somewhere in those primordial seas, energy flashed through a chemical cocktail, transforming it into a replicator, a combination of molecules that could send versions of itself into the future.

For a long time, the descendants of that replicator stayed single-celled. They also stayed busy, preparing the planet for the emergence of land animals, by filling its atmosphere with breathable oxygen, and sheathing it in the ozone layer that protects us from ultraviolet light. Multicellular life didn’t begin to thrive until 600 million years ago, but thrive it did. In the space of two hundred million years, life leapt onto land, greened the continents, and lit the fuse on the Cambrian explosion, a spike in biological creativity that is without peer in the geological record. The Cambrian explosion spawned most of the broad categories of complex animal life. It formed phyla so quickly, in such tight strata of rock, that Charles Darwin worried its existence disproved the theory of natural selection.

No one is certain what caused the five mass extinctions that glare out at us from the rocky layers atop the Cambrian. But we do have an inkling about a few of them. The most recent was likely borne of a cosmic impact, a thudding arrival from space, whose aftermath rained exterminating fire on the dinosaurs. […]

Nuclear weapons were the first technology to threaten us with extinction, but they will not be the last, nor even the most dangerous. […] There are still tens of thousands of nukes, enough to incinerate all of Earth’s dense population centers, but not enough to target every human being. The only way nuclear war will wipe out humanity is by triggering nuclear winter, a crop-killing climate shift that occurs when smoldering cities send Sun-blocking soot into the stratosphere. But it’s not clear that nuke-levelled cities would burn long or strong enough to lift soot that high. […]

Humans have a long history of using biology’s deadlier innovations for ill ends; we have proved especially adept at the weaponisation of microbes. In antiquity, we sent plagues into cities by catapulting corpses over fortified walls. Now we have more cunning Trojan horses. We have even stashed smallpox in blankets, disguising disease as a gift of good will. Still, these are crude techniques, primitive attempts to loose lethal organisms on our fellow man. In 1993, the death cult that gassed Tokyo’s subways flew to the African rainforest in order to acquire the Ebola virus, a tool it hoped to use to usher in Armageddon. In the future, even small, unsophisticated groups will be able to enhance pathogens, or invent them wholesale. Even something like corporate sabotage could generate catastrophes that unfold in unpredictable ways. Imagine an Australian logging company sending synthetic bacteria into Brazil’s forests to gain an edge in the global timber market. The bacteria might mutate into a dominant strain, a strain that could ruin Earth’s entire soil ecology in a single stroke, forcing 7 billion humans to the oceans for food. […]

The average human brain can juggle seven discrete chunks of information simultaneously; geniuses can sometimes manage nine. Either figure is extraordinary relative to the rest of the animal kingdom, but completely arbitrary as a hard cap on the complexity of thought. If we could sift through 90 concepts at once, or recall trillions of bits of data on command, we could access a whole new order of mental landscapes. It doesn’t look like the brain can be made to handle that kind of cognitive workload, but it might be able to build a machine that could. […]

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. […] You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it. […]

‘The really impressive stuff is hidden away inside AI journals,’ Dewey said. He told me about a team from the University of Alberta that recently trained an AI to play the 1980s video game Pac-Man. Only they didn’t let the AI see the familiar, overhead view of the game. Instead, they dropped it into a three-dimensional version, similar to a corn maze, where ghosts and pellets lurk behind every corner. They didn’t tell it the rules, either; they just threw it into the system and punished it when a ghost caught it. ‘Eventually the AI learned to play pretty well,’ Dewey said. ‘That would have been unheard of a few years ago, but we are getting to that point where we are finally starting to see little sparkles of generality.’

{ Ross Andersen/Aeon | Continue reading }
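
Dewey’s chess and Pac-Man examples share one skeleton: an agent that predicts the value of each available action, picks the best, and learns those predictions from nothing but reward and punishment. A minimal tabular Q-learning sketch of that loop (the toy corridor world and every parameter here are illustrative assumptions, not the Alberta team’s system):

```python
import random

# Minimal tabular Q-learning. The agent is never told the rules: it only
# receives a penalty when caught and a reward when it finds a pellet, and
# learns which action to prefer in each state.

STATES, ACTIONS = range(4), ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state: int, action: str) -> tuple:
    """Corridor of 4 cells: a pellet sits at cell 3, a ghost lurks at cell 0."""
    nxt = min(state + 1, 3) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == 3 else (-1.0 if nxt == 0 else 0.0)
    return nxt, reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
for _ in range(2000):                      # episodes
    s = random.choice([1, 2])
    for _ in range(10):                    # steps per episode
        if random.random() < EPSILON:      # explore occasionally
            a = random.choice(ACTIONS)
        else:                              # otherwise act on current predictions
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Nudge the estimate toward reward + discounted best future value.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in STATES})
# After training, every cell prefers "right", i.e. toward the pellet.
```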

The past died yesterday


The jobs waiting for me after I finished college simply aren’t there anymore. And yet the schools still act like they are.

{ Hugh MacLeod | Continue reading }

images { 1. Guy Bourdin | 2 }

I am ruined. A few pastilles of aconite. The blinds drawn.


Do you feel that the world is, on balance, improved by technology?

Well, if you ask that question from the point of view of almost anything in this world that’s not a human being like you and me, the answer’s almost certainly No. You might get a few Yea votes from the likes of albino rabbits and gene-spliced tobacco plants. Ask any living thing that’s been around in the world since before the Greeks made up the word “technology,” like say a bristlecone pine or a coral reef. You would hear an awful tale of woe.


The number one trend in the world, the biggest, the most important trend, is climate change. People hate watching it; they either flinch in guilty fear or shudder away in denial, but it makes a deeper, more drastic difference to your future than anything else that is happening now.


Things that have already successfully lived a long time, such as the Pyramids, are likely to stay around longer than 99.9% of our things. It might be a bit startling to realize that it’s mostly our paper that will survive us as data, while a lot of our electronics will succumb to erasure, loss, and bit rot.

{ Bruce Sterling/40k | Continue reading }

Tossed to fat lips his chalice, drank off his tiny chalice, sucking the last fat violet syrupy drops


China has been running the world’s largest and most successful eugenics program for more than thirty years, driving China’s ever-faster rise as the global superpower. […] Chinese eugenics will quickly become even more effective, given its massive investment in genomic research on human mental and physical traits. BGI-Shenzhen employs more than 4,000 researchers. It has far more “next-generation” DNA sequencers than anywhere else in the world, and is sequencing more than 50,000 genomes per year. It recently acquired the California firm Complete Genomics to become a major rival to Illumina.


A new kind of misplaced worry is likely to become more and more common. The ever-accelerating scientific and technological revolution results in a flow of problems and opportunities that presents unprecedented cognitive and decisional challenges. Our capacity to anticipate these problems and opportunities is swamped by their number, novelty, speed of arrival, and complexity.


If we have a million photos, we tend to value each one less than if we only had ten. The internet forces a general devaluation of the written word: a global deflation in the average word’s value on many axes. As each word tends to get less reading-time and attention and to be worth less money at the consumer end, it naturally tends to absorb less writing-time and editorial attention on the production side. Gradually, as the time invested by the average writer and the average reader in the average sentence falls, society’s ability to communicate in writing decays. And this threat to our capacity to read and write is a slow-motion body-blow to science, scholarship, the arts—to nearly everything, in fact, that is distinctively human, that muskrats and dolphins can’t do just as well or better.


I know that my own perception of time has been changed by technology. If I go from using a fast computer or web connection to using even a slightly slower one, processes that take just a second or two longer—waking the machine from sleep, launching an application, opening a web page—seem almost intolerably slow. Never before have I been so aware of, and annoyed by, the passage of mere seconds. […] As we experience faster flows of information online, we become, in other words, less patient people. But it’s not just a network effect. The phenomenon is amplified by the constant buzz of Facebook, Twitter, texting, and social networking in general. Society’s “activity rhythm” has never been so harried. Impatience is a contagion spread from gadget to gadget.

{ What Should We Be Worried About? | Edge }

I’m sorry, I wasn’t listening


Slowly, but surely, robots (and virtual ’bots that exist only as software) are taking over our jobs; according to one back-of-the-envelope projection, in ninety years “70 percent of today’s occupations will likewise be replaced by automation.” […]

If history repeats itself, robots will replace our current jobs, but, says Kelly, we’ll have new jobs that we can scarcely imagine:

In the coming years robot-driven cars and trucks will become ubiquitous; this automation will spawn the new human occupation of trip optimizer, a person who tweaks the traffic system for optimal energy and time usage. Routine robosurgery will necessitate the new skills of keeping machines sterile. When automatic self-tracking of all your activities becomes the normal thing to do, a new breed of professional analysts will arise to help you make sense of the data.

Well, maybe. Or maybe the professional analysts will be robots (or at least computer programs), and ditto for the trip optimizers and sterilizers.

{ The New Yorker | Continue reading }
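
For what it is worth, the quoted projection implies a gentle annual pace. A back-of-the-envelope check (modeling replacement as constant compounding is an assumption of this sketch, not the article’s):

```python
# If 70% of today's occupations disappear over 90 years, the implied
# constant annual replacement rate r satisfies (1 - r) ** 90 = 0.30.
# Treating replacement as steady compounding is an assumption here.
surviving, years = 0.30, 90
r = 1 - surviving ** (1 / years)
print(f"implied replacement rate: {r:.2%} of occupations per year")
# -> roughly 1.33% per year: glacial in any given year, sweeping over a century.
```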

‘The old woman dies, the burden is lifted.’ –Schopenhauer


This report spells out what the world would be like if it warmed by 4 degrees Celsius, which scientists nearly unanimously predict will happen by the end of the century without serious policy changes.

The 4°C scenarios are devastating: the inundation of coastal cities; increasing risks for food production, potentially leading to higher malnutrition rates; many dry regions becoming drier and wet regions wetter; unprecedented heat waves in many regions, especially in the tropics; substantially exacerbated water scarcity in many regions; increased frequency of high-intensity tropical cyclones; and irreversible loss of biodiversity, including coral reef systems. […]

The science is unequivocal that humans are the cause of global warming, and major changes are already being observed: global mean warming is 0.8°C above pre-industrial levels; oceans have warmed by 0.09°C since the 1950s and are acidifying; sea levels have risen by about 20 cm since pre-industrial times and are now rising at 3.2 cm per decade; an exceptional number of extreme heat waves occurred in the last decade; and major food crop growing areas are increasingly affected by drought.

{ World Bank | PDF }
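
The quoted figures allow a crude sanity check. A deliberately naive extrapolation (holding the current rate constant, which the report itself does not expect; it projects faster rise under the 4°C scenarios):

```python
# Naive extrapolation of the quoted sea-level figures: ~20 cm of rise
# since pre-industrial times, currently rising 3.2 cm per decade.
# Holding the rate constant is a deliberate simplification.
rate_cm_per_decade = 3.2
decades_to_2100 = (2100 - 2013) / 10   # the report dates from 2012-2013
extra_rise_cm = rate_cm_per_decade * decades_to_2100
print(f"additional rise by 2100 at today's rate: {extra_rise_cm:.0f} cm")
# -> about 28 cm from the current rate alone, on top of the ~20 cm so far.
```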

The gates of the drive opened wide to give egress to the vice-regal cavalcade


The three most disruptive transitions in history were the introduction of humans, farming, and industry. If another transition lies ahead, a good guess for its source is artificial intelligence in the form of whole brain emulations, or “ems,” sometime in roughly a century.

{ Overcoming Bias | Continue reading }

A case can be made that the hypothesis that we are living in a computer simulation should be given a significant probability. The basic idea behind this so-called “Simulation argument” is that vast amounts of computing power may become available in the future, and that it could be used, among other things, to run large numbers of fine-grained simulations of past human civilizations. Under some not-too-implausible assumptions, the result can be that almost all minds like ours are simulated minds, and that we should therefore assign a significant probability to being such computer-emulated minds rather than the (subjectively indistinguishable) minds of originally evolved creatures. And if we are, we suffer the risk that the simulation may be shut down at any time.

{ Nick Bostrom | Continue reading | Related: Nick Bostrom, Are you living in a computer simulation?, 2003 }
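
The heart of the argument is a counting step: if even a small fraction of civilizations each run many ancestor simulations, simulated minds vastly outnumber original ones. A toy version of that calculation (the example numbers are illustrative assumptions, not Bostrom’s):

```python
# Counting step of the simulation argument: if a fraction f of
# civilizations each run n ancestor simulations, and each simulation
# contains as many minds as the original history, then the fraction of
# all minds that are simulated is f*n / (f*n + 1).
def simulated_fraction(f: float, n: float) -> float:
    return (f * n) / (f * n + 1)

for f, n in [(0.01, 1), (0.01, 1_000), (0.5, 1_000_000)]:
    print(f"f={f}, n={n}: {simulated_fraction(f, n):.4f} of minds are simulated")
# Even f = 1% and n = 1,000 gives ~0.91: almost all minds like ours
# would be simulated ones.
```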

photo { Matthew Pillsbury }

Subjectivity after Wittgenstein: The Post-Cartesian Subject and the ‘Death of Man’


Foxconn, the maker of Apple’s iPhone and iPad, plans to rely more on robots for manufacturing over the coming years, allowing the company to invest more in research and development and save on labor costs. […]

Local Chinese media reported that Foxconn CEO Terry Gou had said the company plans on deploying 1 million robots over the next three years to complete routine assembly tasks. Foxconn currently uses 10,000 robots. […]

The Taiwan-based company has more than 1 million employees, the majority of whom are located at facilities in mainland China. Foxconn is one of the world’s largest producers of electronics. Aside from Apple, the company also manufactures products for companies like HP, Sony and Nintendo.

{ IT World | Continue reading }

Number two on the other hand, she of the cherry rouge and coiffeuse white, whose hair owes not a little to our tribal elixir of gopherwood


Michael McAlpine’s shiny circuit doesn’t look like something you would stick in your mouth. It’s dashed with gold, has a coiled antenna and is glued to a stiff rectangle. But the antenna flexes, and the rectangle is actually silk, its stiffness melting away under water. And if you paste the device on your tooth, it could keep you healthy.

The electronic gizmo is designed to detect dangerous bacteria and send out warning signals, alerting its bearer to microbes slipping past the lips. Recently, McAlpine, of Princeton University, and his colleagues spotted a single E. coli bacterium skittering across the surface of the gadget’s sensor. The sensor also picked out ulcer-causing H. pylori amid the molecular medley of human saliva, the team reported earlier this year in Nature Communications.

At about the size of a standard postage stamp, the dental device is still too big to fit comfortably in a human mouth. “We had to use a cow tooth,” McAlpine says, describing test experiments. But his team plans to shrink the gadget so it can nestle against human enamel. McAlpine is convinced that one day, perhaps five to 10 years from now, everyone will wear some sort of electronic device. “It’s not just teeth,” he says. “People are going to be bionic.”

{ ScienceNews | Continue reading }