future

Mike D with the rump shakin action


A person named “John Titor” started posting on the Internet one day, claiming to be from the future and predicting the end of the world. Then he suddenly disappeared, never to be heard from again. […]

He claimed he was a soldier sent from 2036, the year a computer virus wiped out the world. […]

Titor responded to every question other posters had, describing future events in poetically phrased ways, always with a general disclaimer that alternate realities do exist, so his reality may not be our own.

{ Pacific Standard | Continue reading | johntitor.com }

The patty duke, the wrench and then I bust the tango


We live in an age of unusually rapid fundamental discovery. This age cannot last long; it must soon slow down as we run out of basic things to discover. We may never run out of small things to discover, but there can be only so many big things.

Such discovery brings status. Many are proud to live in the schools, disciplines, cities, or nations from which discovery is seen to originate. We are also proud to live in this age of discovery. […]

This ability to unite via our discoveries is a scarce resource that we now greedily consume, at the cost of future generations to whom they will no longer be available. Some of these discoveries will give practical help, and aid our ability to grow our economy, and thereby help future generations. […] But many other sorts of discoveries are pretty unlikely to give practical help. […]

This all suggests that we consider delaying some sorts of discovery. The best candidates are those that produce great pride, are pretty unlikely to lead to any practical help, and for which the costs of discovery seem to be falling. The best candidate to satisfy these criteria is, as far as I can tell, cosmology.

While once upon a time advances in cosmology aided advances in basic physics, which led to practical help, over time such connections have gotten much weaker.

{ OvercomingBias | Continue reading }

Step aside for the flex Terminator X


From assembly line robots to ATMs and self-checkout terminals, each year intelligent machines take over more jobs formerly held by humans, and experts predict this trend will not stop anytime soon. […]

“By 2015, robots should be able to assist teachers in the classroom. By 2018, they should be able to teach on their own, and this will cause many teachers to lose their jobs.” […]

The ultimate tool to replace doctors could be the nanorobot, a microscopic machine that can whiz through veins replacing aging and damaged cells with new youthful ones. This nanowonder, expected in the mid-to-late 2030s, could eliminate nearly all need for human doctors. […]

Experts estimate that by 2035, 50 million jobs will be lost to machines […] and by the end of the century, or possibly much sooner, all jobs will disappear. Some believe the final solution will take the form of a Basic Income Guarantee, made available as a fundamental right for everyone. […] America should create a $25,000 annual stipend for every U.S. adult, author Marshall Brain says, which would be phased in over two to three decades. The payments could be paid for by ending welfare programs, taxing automated systems, adding a consumption tax, allowing ads on currency, and other creative ideas.
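
As a rough check on scale (assuming roughly 250 million U.S. adults, a figure not given in the excerpt): $25,000 × 250 million ≈ $6.25 trillion a year, which is why the proposal leans on several revenue sources at once rather than any single tax.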

{ IEET | Continue reading }

Canto III: The Gate of Hell


What technology, or potential technology, worries you the most?

In the nearer term I think various developments in biotechnology and synthetic biology are quite disconcerting. We are gaining the ability to create designer pathogens and there are these blueprints of various disease organisms that are in the public domain—you can download the gene sequence for smallpox or the 1918 flu virus from the Internet. So far the ordinary person will only have a digital representation of it on their computer screen, but we’re also developing better and better DNA synthesis machines, which are machines that can take one of these digital blueprints as an input, and then print out the actual RNA string or DNA string. Soon they will become powerful enough that they can actually print out these kinds of viruses. So already there you have a kind of predictable risk, and then once you can start modifying these organisms in certain kinds of ways, there is a whole additional frontier of danger that you can foresee.

{ Interview with Nick Bostrom | Continue reading }

Death is a sickness


Quantum Archaeology (QA) is the controversial science of resurrecting the dead, including their memories. It assumes the universe is made of events and the laws that govern them, and seeks to map brain/body states up to the instant of death for everyone in history.

Anticipating process technologies due in 20–40 years, it involves constructing a Quantum Archaeology Grid to plot known events, filling the gaps by cross-referencing heuristically within the laws of science. Specialist grids already exist, waiting to be merged, including cosmic ones with trillions of moving evolution points. The result will be a mega-matrix good enough to describe and simulate the past. Quantum computers and super-recursive algorithms, both in their infancy, may allow vast calculation into the quantum world, and artificial intelligence has no upper limit to what it might do.

{ Transhumanity | Continue reading }

photo { Erwin Olaf }

‘For I is another. If the brass wakes as a bugle, it is in no way its fault.’ –Rimbaud


All seems to indicate that the next decade, the 20s, will be the magic decade of the brain, with amazing science but also amazing applications. With the development of nanoscale neural probes and high speed, two-way Brain-Computer interfaces (BCI), by the end of the next decade we may have our iPhones implanted in our brains and become a telepathic species. […]

Last month the New York Times revealed that the Obama Administration may soon seek billions of dollars from Congress for a Brain Activity Map (BAM) project. […] The project may be partly based on the paper “The Brain Activity Map Project and the Challenge of Functional Connectomics” (Neuron, June 2012) by six well-known neuroscientists. […]

A new paper, “The Brain Activity Map” (Science, March 2013), written as an executive summary by the same six neuroscientists and five more, is more explicit: “The Brain Activity Map (BAM) could put neuroscientists in a position to understand how the brain produces perception, action, memories, thoughts, and consciousness… Within 5 years, it should be possible to monitor and/or to control tens of thousands of neurons, and by year 10 that number will increase at least 10-fold. By year 15, observing 1 million neurons with markedly reduced invasiveness should be possible. With 1 million neurons, scientists will be able to evaluate the function of the entire brain of the zebrafish or several areas from the cerebral cortex of the mouse. In parallel, we envision developing nanoscale neural probes that can locally acquire, process, and store accumulated data. Networks of “intelligent” nanosystems would be capable of providing specific responses to externally applied signals, or to their own readings of brain activity.”

{ IEET | Continue reading }

photo { Adam Broomberg & Oliver Chanarin }

‘Between grief and nothing I will take grief.’ –Faulkner


There are good reasons for any species to think darkly of its own extinction. […]

Simple, single-celled life appeared early in Earth’s history. A few hundred million whirls around the newborn Sun were all it took to cool our planet and give it oceans, liquid laboratories that run trillions of chemical experiments per second. Somewhere in those primordial seas, energy flashed through a chemical cocktail, transforming it into a replicator, a combination of molecules that could send versions of itself into the future.

For a long time, the descendants of that replicator stayed single-celled. They also stayed busy, preparing the planet for the emergence of land animals, by filling its atmosphere with breathable oxygen, and sheathing it in the ozone layer that protects us from ultraviolet light. Multicellular life didn’t begin to thrive until 600 million years ago, but thrive it did. In the space of two hundred million years, life leapt onto land, greened the continents, and lit the fuse on the Cambrian explosion, a spike in biological creativity that is without peer in the geological record. The Cambrian explosion spawned most of the broad categories of complex animal life. It formed phyla so quickly, in such tight strata of rock, that Charles Darwin worried its existence disproved the theory of natural selection.

No one is certain what caused the five mass extinctions that glare out at us from the rocky layers atop the Cambrian. But we do have an inkling about a few of them. The most recent was likely born of a cosmic impact, a thudding arrival from space, whose aftermath rained exterminating fire on the dinosaurs. […]

Nuclear weapons were the first technology to threaten us with extinction, but they will not be the last, nor even the most dangerous. […] There are still tens of thousands of nukes, enough to incinerate all of Earth’s dense population centers, but not enough to target every human being. The only way nuclear war will wipe out humanity is by triggering nuclear winter, a crop-killing climate shift that occurs when smoldering cities send Sun-blocking soot into the stratosphere. But it’s not clear that nuke-levelled cities would burn long or strong enough to lift soot that high. […]

Humans have a long history of using biology’s deadlier innovations for ill ends; we have proved especially adept at the weaponisation of microbes. In antiquity, we sent plagues into cities by catapulting corpses over fortified walls. Now we have more cunning Trojan horses. We have even stashed smallpox in blankets, disguising disease as a gift of good will. Still, these are crude techniques, primitive attempts to loose lethal organisms on our fellow man. In 1993, the death cult that gassed Tokyo’s subways flew to the African rainforest in order to acquire the Ebola virus, a tool it hoped to use to usher in Armageddon. In the future, even small, unsophisticated groups will be able to enhance pathogens, or invent them wholesale. Even something like corporate sabotage could generate catastrophes that unfold in unpredictable ways. Imagine an Australian logging company sending synthetic bacteria into Brazil’s forests to gain an edge in the global timber market. The bacteria might mutate into a dominant strain, a strain that could ruin Earth’s entire soil ecology in a single stroke, forcing 7 billion humans to the oceans for food. […]

The average human brain can juggle seven discrete chunks of information simultaneously; geniuses can sometimes manage nine. Either figure is extraordinary relative to the rest of the animal kingdom, but completely arbitrary as a hard cap on the complexity of thought. If we could sift through 90 concepts at once, or recall trillions of bits of data on command, we could access a whole new order of mental landscapes. It doesn’t look like the brain can be made to handle that kind of cognitive workload, but it might be able to build a machine that could. […]

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. […] You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it. […]
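
The chess example sketches the core loop of goal-directed search: model the available moves, predict how well each serves the goal, pick the best. A minimal sketch of that loop in Python, assuming a hypothetical game interface with moves(state), apply(state, move), is_over(state), and score(state) (none of these names come from the essay):

```python
# A toy minimax search: the agent models moves, predicts how well each one
# serves its goal, and selects accordingly. The `game` interface is
# hypothetical, invented for this sketch.
def best_action(state, game, depth=3, maximizing=True):
    """Return (move, value) for the side to play, looking `depth` plies ahead."""
    if depth == 0 or game.is_over(state):
        return None, game.score(state)  # score: higher is better for the maximizer
    best_move = None
    best_value = float("-inf") if maximizing else float("inf")
    for move in game.moves(state):
        _, value = best_action(game.apply(state, move), game, depth - 1, not maximizing)
        if (maximizing and value > best_value) or (not maximizing and value < best_value):
            best_move, best_value = move, value
    return best_move, best_value
```

Nothing in the loop cares what the moves mean; widen the domain it searches over and the same indifferent optimization applies, which is the essay’s point about specifying goals carefully.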

‘The really impressive stuff is hidden away inside AI journals,’ Dewey said. He told me about a team from the University of Alberta that recently trained an AI to play the 1980s video game Pac-Man. Only they didn’t let the AI see the familiar, overhead view of the game. Instead, they dropped it into a three-dimensional version, similar to a corn maze, where ghosts and pellets lurk behind every corner. They didn’t tell it the rules, either; they just threw it into the system and punished it when a ghost caught it. ‘Eventually the AI learned to play pretty well,’ Dewey said. ‘That would have been unheard of a few years ago, but we are getting to that point where we are finally starting to see little sparkles of generality.’
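
What Dewey describes is reinforcement learning: the agent is given no rules, only reward and punishment, and improves by trial and error. A minimal sketch of the general technique, tabular Q-learning on a toy grid with one “ghost” and one “pellet” (an illustration of the idea, not the Alberta team’s actual system, which used a 3D, partially observable version of the game):

```python
import random
from collections import defaultdict

# Toy 5x5 grid: start at (0, 0), pellet at (4, 4), stationary ghost at (2, 2).
# The agent is never told these rules; it only sees reward signals.
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action):
    nx = min(max(state[0] + action[0], 0), 4)
    ny = min(max(state[1] + action[1], 0), 4)
    if (nx, ny) == (2, 2):
        return (nx, ny), -10.0, True   # caught by the ghost: punished
    if (nx, ny) == (4, 4):
        return (nx, ny), 10.0, True    # reached the pellet: rewarded
    return (nx, ny), -0.1, False       # small step cost encourages short routes

Q = defaultdict(float)                  # Q[(state, action)] -> expected return
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Standard Q-learning update: move the estimate toward the observed
        # reward plus the discounted value of the best next action.
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt
```

After enough episodes, greedily following the learned Q-values steers around the ghost to the pellet: the same trial-and-error pattern, scaled far down, that the Alberta experiment illustrates.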

{ Ross Andersen/Aeon | Continue reading }

The past died yesterday


The jobs waiting for me after I finished college simply aren’t there anymore. And yet the schools still act like they are.

{ Hugh MacLeod | Continue reading }

images { 1. Guy Bourdin | 2 }

I am ruined. A few pastilles of aconite. The blinds drawn.


Do you feel that the world is, on balance, improved by technology?

Well, if you ask that question from the point of view of almost anything in this world that’s not a human being like you and me, the answer’s almost certainly No. You might get a few Yea votes from the likes of albino rabbits and gene-spliced tobacco plants. Ask any living thing that’s been around in the world since before the Greeks made up the word “technology,” like, say, a bristlecone pine or a coral reef. You would hear an awful tale of woe.

[…]

The number one trend in the world, the biggest, the most important trend, is climate change. People hate watching it; they either flinch in guilty fear or shudder away in denial, but it makes a deeper, more drastic difference to your future than anything else that is happening now.

[…]

Things that have already successfully lived a long time, such as the Pyramids, are likely to stay around longer than 99.9% of our things. It might be a bit startling to realize that it’s mostly our paper that will survive us as data, while a lot of our electronics will succumb to erasure, loss, and bit rot.

{ Bruce Sterling/40k | Continue reading }

Tossed to fat lips his chalice, drank off his tiny chalice, sucking the last fat violet syrupy drops


China has been running the world’s largest and most successful eugenics program for more than thirty years, driving China’s ever-faster rise as the global superpower. […] Chinese eugenics will quickly become even more effective, given its massive investment in genomic research on human mental and physical traits. BGI-Shenzhen employs more than 4,000 researchers. It has far more “next-generation” DNA sequencers than anywhere else in the world, and is sequencing more than 50,000 genomes per year. It recently acquired the California firm Complete Genomics to become a major rival to Illumina.

[…]

A new kind of misplaced worry is likely to become more and more common. The ever-accelerating current scientific and technological revolution results in a flow of problems and opportunities that presents unprecedented cognitive and decisional challenges. Our capacity to anticipate these problems and opportunities is swamped by their number, novelty, speed of arrival, and complexity.

[…]

If we have a million photos, we tend to value each one less than if we only had ten. The internet forces a general devaluation of the written word: a global deflation in the average word’s value on many axes. As each word tends to get less reading-time and attention and to be worth less money at the consumer end, it naturally tends to absorb less writing-time and editorial attention on the production side. Gradually, as the time invested by the average writer and the average reader in the average sentence falls, society’s ability to communicate in writing decays. And this threat to our capacity to read and write is a slow-motion body-blow to science, scholarship, the arts—to nearly everything, in fact, that is distinctively human, that muskrats and dolphins can’t do just as well or better.

[…]

I know that my own perception of time has been changed by technology. If I go from using a fast computer or web connection to using even a slightly slower one, processes that take just a second or two longer—waking the machine from sleep, launching an application, opening a web page—seem almost intolerably slow. Never before have I been so aware of, and annoyed by, the passage of mere seconds. […] As we experience faster flows of information online, we become, in other words, less patient people. But it’s not just a network effect. The phenomenon is amplified by the constant buzz of Facebook, Twitter, texting, and social networking in general. Society’s “activity rhythm” has never been so harried. Impatience is a contagion spread from gadget to gadget.

{ What Should We Be Worried About? | Edge }

I’m sorry, I wasn’t listening


Slowly, but surely, robots (and virtual ’bots that exist only as software) are taking over our jobs; according to one back-of-the-envelope projection, in ninety years “70 percent of today’s occupations will likewise be replaced by automation.” […]

If history repeats itself, robots will replace our current jobs, but, says Kelly, we’ll have new jobs that we can scarcely imagine:

In the coming years robot-driven cars and trucks will become ubiquitous; this automation will spawn the new human occupation of trip optimizer, a person who tweaks the traffic system for optimal energy and time usage. Routine robosurgery will necessitate the new skills of keeping machines sterile. When automatic self-tracking of all your activities becomes the normal thing to do, a new breed of professional analysts will arise to help you make sense of the data.

Well, maybe. Or maybe the professional analysts will be robots (or at least computer programs), and ditto for the trip optimizers and sterilizers.

{ The New Yorker | Continue reading }

‘The old woman dies, the burden is lifted.’ –Schopenhauer


This report spells out what the world would be like if it warmed by 4 degrees Celsius, which scientists nearly unanimously predict will happen by the end of the century without serious policy changes.

The 4°C scenarios are devastating: the inundation of coastal cities; increasing risks for food production potentially leading to higher malnutrition rates; many dry regions becoming drier, wet regions wetter; unprecedented heat waves in many regions, especially in the tropics; substantially exacerbated water scarcity in many regions; increased frequency of high-intensity tropical cyclones; and irreversible loss of biodiversity, including coral reef systems. […]

The science is unequivocal that humans are the cause of global warming, and major changes are already being observed: global mean warming is 0.8°C above pre-industrial levels; oceans have warmed by 0.09°C since the 1950s and are acidifying; sea levels rose by about 20 cm since pre-industrial times and are now rising at 3.2 cm per decade; an exceptional number of extreme heat waves occurred in the last decade; major food crop growing areas are increasingly affected by drought.

{ World Bank | PDF }