future

‘I don’t know who’s trolling who, but Richie Incognito is NOT a real name.’ –Aaron Bady


The National Intelligence Council has just released its much-anticipated forecasting report, a 140-page document that outlines the major trends and technological developments we should expect over the next 20 years. Among its many predictions, the NIC foresees the end of U.S. global dominance, the rising power of individuals against states, a growing middle class that will increasingly challenge governments, and ongoing shortages of water, food and energy. But it also envisions a future in which humans have been significantly modified by their technologies — one that will herald the dawn of the transhuman era. […]

In the new report, the NIC describes how implants, prosthetics, and powered exoskeletons will become regular fixtures of human life — which could result in substantial improvements to innate human capacities. By 2030, the authors predict, prosthetics should reach the point where they’re as good as — or even better than — organic limbs. By this stage, the military will increasingly rely on exoskeletons to help soldiers carry heavy loads. Servicemen will also be administered psychostimulants to help them remain active for longer periods.

Many of these same technologies will also be used by the elderly, both as a way to maintain more youthful levels of strength and energy, and as a part of their life extension strategies.

{ io9 | Continue reading | Thanks Tim }

[on Dave’s return to the ship, after HAL has killed the rest of the crew] Look Dave, I can see you’re really upset about this.


In the industrial revolution — and revolutions since — there was an invigoration of jobs. For instance, assembly lines for cars led to a vast infrastructure that could support mass production, giving rise to everything from car dealers to road building and utility expansion into new suburban areas. But the digital revolution is not following the same path, said Daryl Plummer. “What we’re seeing is a decline in the overall number of people required to do a job,” he said.

{ ComputerWorld | Continue reading }

photo { David Campany }

Particular things I call possible in so far as, while regarding the causes whereby they must be produced, we know not, whether such causes be determined for producing them.


Back in 1996, economist Paul Krugman wrote an essay about the next 100 years of economic history, as if looking back from the year 2096. […]

When something becomes abundant, it also becomes cheap. A world awash in information is one in which information has very little market value. In general, when the economy becomes extremely good at doing something, that activity becomes less, rather than more, important. Late-20th-century America was supremely efficient at growing food; that was why it had hardly any farmers. Late-21st-century America is supremely efficient at processing routine information; that is why traditional white-collar workers have virtually disappeared.

… Many of the jobs that once required a college degree have been eliminated. The others can be done by any intelligent person, whether or not she has studied world literature.

{ io9 | Continue reading }

‘What matters most is how well you walk through the fire.’ –Bukowski


By the early 2030s, experts predict, nanorobots will be developed to improve the human digestive system, and by 2040, as radical as this sounds, we could eliminate our need for food and eating. This is the vision of futurist Ray Kurzweil and nutritionist Terry Grossman, M.D. […] By the mid-2030s, nutritional needs tailored solely to meet each person’s requirements will be more clearly understood. The required nutrients could then be provided inexpensively by a nano-replicator and delivered directly to each cell by nanorobots, thus eliminating the need to eat food.

{ IEET | Continue reading }

photo { Stephen Shore }

Mike D with the rump shakin action


A person named “John Titor” started posting on the Internet one day, claiming to be from the future and predicting the end of the world. Then he suddenly disappeared, never to be heard from again. […]

He claimed he was a soldier sent from 2036, the year a computer virus wiped out the world. […]

Titor responded to every question other posters had, describing future events in poetically-phrased ways, always submitted with a general disclaimer that alternate realities do exist, so his reality may not be our own.

{ Pacific Standard | Continue reading | johntitor.com }

The patty duke, the wrench and then I bust the tango


We live in an age of unusually rapid fundamental discovery. This age cannot last long; it must soon slow down as we run out of basic things to discover. We may never run out of small things to discover, but there can be only so many big things.

Such discovery brings status. Many are proud to live in the schools, disciplines, cities, or nations from which discovery is seen to originate. We are also proud to live in this age of discovery. […]

This ability to unite via our discoveries is a scarce resource that we now greedily consume, at the cost of future generations to whom they will no longer be available. Some of these discoveries will give practical help, and aid our ability to grow our economy, and thereby help future generations. […] But many other sorts of discoveries are pretty unlikely to give practical help. […]

This all suggests that we consider delaying some sorts of discovery. The best candidates are those that produce great pride, are pretty unlikely to lead to any practical help, and for which the costs of discovery seem to be falling. The best candidate to satisfy these criteria is, as far as I can tell, cosmology.

While once upon a time advances in cosmology aided advances in basic physics, which led to practical help, over time such connections have gotten much weaker.

{ OvercomingBias | Continue reading }

Step aside for the flex Terminator X


From assembly line robots to ATMs and self-checkout terminals, each year intelligent machines take over more jobs formerly held by humans; and experts predict this trend will not stop anytime soon. […]

“By 2015, robots should be able to assist teachers in the classroom. By 2018, they should be able to teach on their own, and this will cause many teachers to lose their jobs.” […]

The ultimate tool to replace doctors could be the nanorobot, a tiny, microscopic-size machine that can whiz through veins, replacing aging and damaged cells with new, youthful ones. This nanowonder, with an expected development time of the mid-to-late 2030s, could eliminate nearly all need for human doctors. […]

Experts estimate that by 2035, 50 million jobs will be lost to machines […] and by the end of the century, or possibly much sooner, all jobs will disappear. Some believe the final solution will take the form of a Basic Income Guarantee, made available as a fundamental right for everyone. […] America should create a $25,000 annual stipend for every U.S. adult, Brain says, which would be phased in over two to three decades. The payments could be funded by ending welfare programs, taxing automated systems, adding a consumption tax, allowing ads on currency, and other creative ideas.

{ IEET | Continue reading }

Canto III: The Gate of Hell


What technology, or potential technology, worries you the most?

In the nearer term I think various developments in biotechnology and synthetic biology are quite disconcerting. We are gaining the ability to create designer pathogens and there are these blueprints of various disease organisms that are in the public domain—you can download the gene sequence for smallpox or the 1918 flu virus from the Internet. So far the ordinary person will only have a digital representation of it on their computer screen, but we’re also developing better and better DNA synthesis machines, which are machines that can take one of these digital blueprints as an input, and then print out the actual RNA string or DNA string. Soon they will become powerful enough that they can actually print out these kinds of viruses. So already there you have a kind of predictable risk, and then once you can start modifying these organisms in certain kinds of ways, there is a whole additional frontier of danger that you can foresee.

{ Interview with Nick Bostrom | Continue reading }

Death is a sickness


Quantum Archaeology (QA) is the controversial science of resurrecting the dead, including their memories. It assumes the universe is made of events and the laws that govern them, and seeks to map the brain/body states, up to the instant of death, of everyone in history.

Anticipating process technologies due in 20–40 years, it involves construction of the Quantum Archaeology Grid to plot known events, filling the gaps by cross-referencing heuristically within the laws of science. Specialist grids already exist waiting to be merged, including cosmic ones with trillions of moving evolution points. The result will be a mega-matrix good enough to describe and simulate the past. Quantum computers and super-recursive algorithms, both in their infancy, may allow vast calculation into the quantum world, and artificial intelligence has no upper limit to what it might do.

{ Transhumanity | Continue reading }

photo { Erwin Olaf }

‘Car Je est un autre. Si le cuivre s’éveille clairon, il n’y a rien de sa faute.’ –Rimbaud


All seems to indicate that the next decade, the 20s, will be the magic decade of the brain, with amazing science but also amazing applications. With the development of nanoscale neural probes and high-speed, two-way brain-computer interfaces (BCI), by the end of the next decade we may have our iPhones implanted in our brains and become a telepathic species. […]

Last month the New York Times revealed that the Obama Administration may soon seek billions of dollars from Congress for a Brain Activity Map (BAM) project. […] The project may be partly based on the paper “The Brain Activity Map Project and the Challenge of Functional Connectomics” (Neuron, June 2012) by six well-known neuroscientists. […]

A new paper “The Brain Activity Map” (Science, March 2013), written as an executive summary by the same six neuroscientists and five more, is more explicit: “The Brain Activity Map (BAM), could put neuroscientists in a position to understand how the brain produces perception, action, memories, thoughts, and consciousness… Within 5 years, it should be possible to monitor and/or to control tens of thousands of neurons, and by year 10 that number will increase at least 10-fold. By year 15, observing 1 million neurons with markedly reduced invasiveness should be possible. With 1 million neurons, scientists will be able to evaluate the function of the entire brain of the zebrafish or several areas from the cerebral cortex of the mouse. In parallel, we envision developing nanoscale neural probes that can locally acquire, process, and store accumulated data. Networks of “intelligent” nanosystems would be capable of providing specific responses to externally applied signals, or to their own readings of brain activity.”

{ IEET | Continue reading }

photo { Adam Broomberg & Oliver Chanarin }

‘Between grief and nothing I will take grief.’ –Faulkner


There are good reasons for any species to think darkly of its own extinction. […]

Simple, single-celled life appeared early in Earth’s history. A few hundred million whirls around the newborn Sun were all it took to cool our planet and give it oceans, liquid laboratories that run trillions of chemical experiments per second. Somewhere in those primordial seas, energy flashed through a chemical cocktail, transforming it into a replicator, a combination of molecules that could send versions of itself into the future.

For a long time, the descendants of that replicator stayed single-celled. They also stayed busy, preparing the planet for the emergence of land animals, by filling its atmosphere with breathable oxygen, and sheathing it in the ozone layer that protects us from ultraviolet light. Multicellular life didn’t begin to thrive until 600 million years ago, but thrive it did. In the space of two hundred million years, life leapt onto land, greened the continents, and lit the fuse on the Cambrian explosion, a spike in biological creativity that is without peer in the geological record. The Cambrian explosion spawned most of the broad categories of complex animal life. It formed phyla so quickly, in such tight strata of rock, that Charles Darwin worried its existence disproved the theory of natural selection.

No one is certain what caused the five mass extinctions that glare out at us from the rocky layers atop the Cambrian. But we do have an inkling about a few of them. The most recent was likely borne of a cosmic impact, a thudding arrival from space, whose aftermath rained exterminating fire on the dinosaurs. […]

Nuclear weapons were the first technology to threaten us with extinction, but they will not be the last, nor even the most dangerous. […] There are still tens of thousands of nukes, enough to incinerate all of Earth’s dense population centers, but not enough to target every human being. The only way nuclear war will wipe out humanity is by triggering nuclear winter, a crop-killing climate shift that occurs when smoldering cities send Sun-blocking soot into the stratosphere. But it’s not clear that nuke-levelled cities would burn long or strong enough to lift soot that high. […]

Humans have a long history of using biology’s deadlier innovations for ill ends; we have proved especially adept at the weaponisation of microbes. In antiquity, we sent plagues into cities by catapulting corpses over fortified walls. Now we have more cunning Trojan horses. We have even stashed smallpox in blankets, disguising disease as a gift of good will. Still, these are crude techniques, primitive attempts to loose lethal organisms on our fellow man. In 1993, the death cult that gassed Tokyo’s subways flew to the African rainforest in order to acquire the Ebola virus, a tool it hoped to use to usher in Armageddon. In the future, even small, unsophisticated groups will be able to enhance pathogens, or invent them wholesale. Even something like corporate sabotage could generate catastrophes that unfold in unpredictable ways. Imagine an Australian logging company sending synthetic bacteria into Brazil’s forests to gain an edge in the global timber market. The bacteria might mutate into a dominant strain, a strain that could ruin Earth’s entire soil ecology in a single stroke, forcing 7 billion humans to the oceans for food. […]

The average human brain can juggle seven discrete chunks of information simultaneously; geniuses can sometimes manage nine. Either figure is extraordinary relative to the rest of the animal kingdom, but completely arbitrary as a hard cap on the complexity of thought. If we could sift through 90 concepts at once, or recall trillions of bits of data on command, we could access a whole new order of mental landscapes. It doesn’t look like the brain can be made to handle that kind of cognitive workload, but it might be able to build a machine that could. […]

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. […] You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it. […]

‘The really impressive stuff is hidden away inside AI journals,’ Dewey said. He told me about a team from the University of Alberta that recently trained an AI to play the 1980s video game Pac-Man. Only they didn’t let the AI see the familiar, overhead view of the game. Instead, they dropped it into a three-dimensional version, similar to a corn maze, where ghosts and pellets lurk behind every corner. They didn’t tell it the rules, either; they just threw it into the system and punished it when a ghost caught it. ‘Eventually the AI learned to play pretty well,’ Dewey said. ‘That would have been unheard of a few years ago, but we are getting to that point where we are finally starting to see little sparkles of generality.’

{ Ross Andersen/Aeon | Continue reading }
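The trial-and-error training Dewey describes — dropping an agent into an environment it was never told the rules of, and punishing it when a ghost catches it — is, in spirit, reinforcement learning. A minimal tabular Q-learning sketch on a toy grid “maze” gives the flavor (the grid, the reward values, and all names here are illustrative assumptions, not the Alberta team’s actual setup):

```python
import random

# Toy 4x4 grid: the agent starts at (0, 0); a pellet (reward) sits at
# (3, 3); a stationary "ghost" at (1, 1) punishes the agent on contact.
SIZE = 4
START, PELLET, GHOST = (0, 0), (3, 3), (1, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; return (next_state, reward, episode_done)."""
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (r, c)
    if nxt == GHOST:
        return nxt, -10.0, True   # punished: caught by the ghost
    if nxt == PELLET:
        return nxt, +10.0, True   # rewarded: found the pellet
    return nxt, -0.1, False      # small per-move cost favors short paths

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Q-learning: the agent learns only from rewards, never the rules."""
    q = {}                        # (state, action_index) -> value estimate
    rng = random.Random(0)
    for _ in range(episodes):
        s = START
        for _ in range(50):
            if rng.random() < eps:                    # explore
                a = rng.randrange(len(ACTIONS))
            else:                                     # exploit estimates
                a = max(range(len(ACTIONS)),
                        key=lambda i: q.get((s, i), 0.0))
            nxt, reward, done = step(s, ACTIONS[a])
            best_next = max(q.get((nxt, i), 0.0)
                            for i in range(len(ACTIONS)))
            target = reward + (0.0 if done else gamma * best_next)
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (target - old)  # Bellman update
            s = nxt
            if done:
                break
    return q

def greedy_path(q):
    """Follow the learned policy from the start; return visited states."""
    s, path = START, [START]
    for _ in range(20):
        a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

After training, following the greedy policy walks from the start to the pellet while steering around the ghost — “pretty well,” with nothing but punishments and rewards as a teacher.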

The past died yesterday


The jobs waiting for me after I finished college simply aren’t there anymore. And yet the schools still act like they are.

{ Hugh MacLeod | Continue reading }

images { 1. Guy Bourdin | 2 }