
In the autumn we shall go back to live in Paris. How strange it is.


“While I remained at Paris, near you, my father,” said Fleur-de-Marie, “I was so happy, oh! so completely happy, that those delicious days would not be too well paid for by years of suffering. You see I have at least known what happiness is.”

“During some days, perhaps?”

“Yes, but what pure and unmingled felicity! Love surrounded me then, as ever, with the tenderest care. I gave myself up without fear to the emotions of gratitude and affection which every moment raised my heart to you. The future dazzled me: a father to adore, a second mother to love doubly.

{ Eugene Sue, Mysteries of Paris, 1842-1843 | Continue reading }

George Bernard Shaw to Winston Churchill: ‘I am enclosing two tickets to the first night of my new play, bring a friend… if you have one.’ Winston Churchill, in response to George Bernard Shaw: ‘Cannot possibly attend first night; will attend second, if there is one.’


The first thing to remember about probability questions is that everyone finds them mind-bending, even mathematicians. The next step is to try to answer a similar but simpler question so that we can isolate what the question is really asking.

So, consider this preliminary question: “I have two children. One of them is a boy. What is the probability I have two boys?”

This is a much easier question, though a controversial one as I later discovered. After the gathering ended, Foshee’s Tuesday boy problem became a hotly discussed topic on blogs around the world. The main bone of contention was how to properly interpret the question. The way Foshee meant it is, of all the families with one boy and exactly one other child, what proportion of those families have two boys?

To answer the question you need to first look at all the equally likely combinations of two children it is possible to have: BG, GB, BB or GG. The question states that one child is a boy. So we can eliminate the GG, leaving us with just three options: BG, GB and BB. One out of these three scenarios is BB, so the probability of two boys is 1/3.

Now we can repeat this technique for the original question. Let’s list the equally likely possibilities of children, together with the days of the week they are born on. Let’s call a boy born on a Tuesday a BTu. Our possible situations are:

▪ When the first child is a BTu and the second is a girl born on any day of the week: there are seven different possibilities.

▪ When the first child is a girl born on any day of the week and the second is a BTu: again, there are seven different possibilities.

▪ When the first child is a BTu and the second is a boy born on any day of the week: again there are seven different possibilities.

▪ Finally, there is the situation in which the first child is a boy born on any day of the week and the second child is a BTu – and this is where it gets interesting. There are seven different possibilities here too, but one of them – when both boys are born on a Tuesday – has already been counted when we considered the first to be a BTu and the second on any day of the week. So, since we are counting equally likely possibilities, we can only find an extra six possibilities here.

Summing up the totals, there are 7 + 7 + 7 + 6 = 27 different equally likely combinations of children with specified gender and birth day, and 13 of these combinations are two boys. So the answer is 13/27, which is very different from 1/3.
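
A brute-force enumeration is a quick way to check both counts. The short Python sketch below is an editorial illustration, not part of the New Scientist article; the choice of 1 to stand for Tuesday is arbitrary. It lists every equally likely (sex, weekday) combination for two children and filters it:

```python
from itertools import product

# Each child is a (sex, weekday) pair; all 14 x 14 = 196 combinations
# for a two-child family are taken as equally likely.
SEXES = "BG"
DAYS = range(7)                      # 0..6, with 1 arbitrarily meaning Tuesday
families = list(product(product(SEXES, DAYS), repeat=2))

def boy(child):
    return child[0] == "B"

def tuesday_boy(child):
    return child == ("B", 1)

# Preliminary question: families with at least one boy
with_boy = [f for f in families if any(boy(c) for c in f)]
print(sum(all(boy(c) for c in f) for f in with_boy), "/", len(with_boy))   # 49 / 147 = 1/3

# Tuesday-boy question: families with at least one boy born on a Tuesday
with_btu = [f for f in families if any(tuesday_boy(c) for c in f)]
print(sum(all(boy(c) for c in f) for f in with_btu), "/", len(with_btu))   # 13 / 27
```

Swapping the trait for a rarer one shrinks the double-counted case in which both boys share it, which is why, as noted below, the answer creeps toward 1/2.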

It seems remarkable that the probability of having two boys changes from 1/3 to 13/27 when the birth day of one boy is stated – yet it does, and it’s quite a generous difference at that. In fact, if you repeat the question but specify a trait rarer than being born on a Tuesday (which has a 1/7 chance), the probability of two boys gets even closer to 1/2.

Which is surprising, weird… and, to recreational mathematicians at least, delightfully entertaining.

{ Magic numbers: A meeting of mathemagical tricksters | NewScientist | Continue reading }

photo { Bill Owen }

Across the page the symbols moved in grave morrice, in the mummery of their letters, wearing quaint caps of squares and cubes


If you have an imagination, you don’t have to be the protagonist in H. G. Wells’ The Time Machine to travel through time. If you’ve ever eagerly awaited (or painfully dreaded) an upcoming birthday, recalled winning the lottery (don’t you wish), or fantasized about eating lunch while listening to your teacher drone on about why the river in the novel symbolizes the birth of modern civilization, you’ve mentally traveled through time.

Mental time travel is what got me through some of my postdoctoral days; watching spots (proteins) move across a computer screen was usually less than riveting. Besides alleviating boredom, the ability to use past experience (retrospection) to predict future scenarios (prospection) is extremely useful; you’ll be far more careful about when and where you leave your bicycle outside if it gets stolen.

{ National Association of Science Writers | Continue reading }

photo { Richard Kalvar }

Secrets weary of their tyranny: tyrants willing to be dethroned.


The claim that “no one understands quantum mechanics” is often attributed to Richard Feynman, who said it to illustrate the perceived “randomness” that is at the heart of quantum mechanics and the Copenhagen Interpretation of QM. The unfortunate consequence of this phrase is that we now have people using it to claim that we know NOTHING about QM, and that no one understands it.

Without even going into what QM is, let’s consider the following first and foremost: we have used QM to produce a zoo of devices and techniques ranging from your modern electronics to medical procedures such as MRI and PET scans, etc. Already one can question whether this sounds like something that no one understands. When was the last time you placed your life and the lives of your loved ones in something that NO ONE understands? That is what you do when you fly in an airplane or drive in a car, both of which nowadays use modern electronics. All of these depend on QM for their operation!

The issue here is what is meant by the word “understand”. In physics, and among physicists, we usually consider something to be “fully understood” when there is a universal consensus that this is the most valid description of a phenomenon. We say that we understand Newton’s Laws because they are well tested and we know that they definitely work within a certain range of conditions. (…)

So in physics, the criteria for saying that we understand something are very, very strict. They require a well-verified theory that matches practically all of the empirical observations, and a general consensus among experts in the field who agree with it. This means that in many instances, physicists would tend to say that we don’t understand so-and-so, because there are many areas of physics that haven’t been fully answered or verified, or haven’t reached a general consensus. To us, this does not allow us to say that we have understood it. But it certainly does not mean we know NOTHING about it. (…)

Do we understand QM? Damn right we do! Do we understand it COMPLETELY? Sure, if what we mean by “completely” only includes things that we can test and measure. QM is THE most successful theory of the physical world that humans have invented up to now, and no experimental observation so far has contradicted it. So that alone is a very strong argument that we DO understand QM. However, if we ask whether we understand how QM comes up with all the correct predictions of what nature does, or whether there’s anything underlying all of QM’s predictions, then no, we don’t. (…)

I’ve been known to reply, whenever I get another question such as this, that we understand QM MORE than you understand your own family members. Why? I can use QM to make QUANTITATIVE predictions, not just qualitative ones, and make these predictions uncannily accurate. When was the last time you could do that with your family members consistently, day in, day out, a gazillion times a second? We use QM to do that and more.

{ Physics and physicists | Continue reading }

artwork { Joseph Beuys }

Will dub me a new name: the billbefriending bard.

{ Bill Evans, 1966 }

The lions couchant on the pillars as he passed out through the gate; toothless terrors. Still I will help him in his fight.


The darkness was like nothing I’d ever seen. I couldn’t see my hand in front of my face; after a while I could barely believe that my hand was there, in front of my face, waving.

That darkness is what I think about when I think of black. I was going to write, the color black, but as every child knows black isn’t a color. Black is a lack, a void of light. When you think about it, it’s surprising that we can see black at all: our eyes are engineered to receive light; in its absence, you’d think we simply wouldn’t see, any more than we taste when our mouths are empty. Black velvet, charcoal black, Ad Reinhardt’s black paintings, black-clad Goth kids with black fingernails: how do we see them?

According to modern neurophysiology, the answer is that photoreceptors in our retinas respond to photons of light, and we see black in those areas of the retina where the photoreceptors are relatively inactive. But what happens when no photoreceptors are working—as happens in a cave? Here we turn to Aristotle, who notes that sight, unlike touch or taste, continues to operate in the absence of anything visible:

Even when we are not seeing, it is by sight that we discriminate darkness from light, though not in the same way as we distinguish one colour from another. Further, in a sense even that which sees is coloured; for in each case the sense-organ is capable of receiving the sensible object without its matter. That is why even when the sensible objects are gone the sensings and imaginings continue to exist in the sense-organs.

We “see” in total darkness because sight itself has a color, Aristotle suggests, and that color is black: the feedback hum that lets us know the machine is still on.

{ Paul LaFarge/Cabinet Magazine | Continue reading }

Well she’s up against the register, with an apron and a spatula


{ Google Books | Nietzsche, The Antichrist, 1888 }

She said, How you gonna like ‘em, over medium or scrambled?


During the winter of 2007, a UCLA professor of psychiatry named Gary Small recruited six volunteers—three experienced Web surfers and three novices—for a study on brain activity. He gave each a pair of goggles onto which Web pages could be projected. Then he slid his subjects, one by one, into the cylinder of a whole-brain magnetic resonance imager and told them to start searching the Internet. As they used a handheld keypad to Google various preselected topics—the nutritional benefits of chocolate, vacationing in the Galapagos Islands, buying a new car—the MRI scanned their brains for areas of high activation, indicated by increases in blood flow.

The two groups showed marked differences. Brain activity of the experienced surfers was far more extensive than that of the newbies, particularly in areas of the prefrontal cortex associated with problem-solving and decisionmaking. Small then had his subjects read normal blocks of text projected onto their goggles; in this case, scans revealed no significant difference in areas of brain activation between the two groups. The evidence suggested, then, that the distinctive neural pathways of experienced Web users had developed because of their Internet use.

The most remarkable result of the experiment emerged when Small repeated the tests six days later. In the interim, the novices had agreed to spend an hour a day online, searching the Internet. The new scans revealed that their brain activity had changed dramatically; it now resembled that of the veteran surfers. “Five hours on the Internet and the naive subjects had already rewired their brains,” Small wrote. He later repeated all the tests with 18 more volunteers and got the same results.

When first publicized, the findings were greeted with cheers. By keeping lots of brain cells buzzing, Google seemed to be making people smarter. But as Small was careful to point out, more brain activity is not necessarily better brain activity. The real revelation was how quickly and extensively Internet use reroutes people’s neural pathways. “The current explosion of digital technology not only is changing the way we live and communicate,” Small concluded, “but is rapidly and profoundly altering our brains.”

What kind of brain is the Web giving us? That question will no doubt be the subject of a great deal of research in the years ahead. Already, though, there is much we know or can surmise—and the news is quite disturbing. Dozens of studies by psychologists, neurobiologists, and educators point to the same conclusion: When we go online, we enter an environment that promotes cursory reading, hurried and distracted thinking, and superficial learning. Even as the Internet grants us easy access to vast amounts of information, it is turning us into shallower thinkers, literally changing the structure of our brain. (…)

What we’re experiencing is, in a metaphorical sense, a reversal of the early trajectory of civilization: We are evolving from cultivators of personal knowledge into hunters and gatherers in the electronic data forest. In the process, we seem fated to sacrifice much of what makes our minds so interesting.

{ Nicholas Carr/Wired | Continue reading }

In an ideal world, I would sit down at my computer, do my work, and that would be that. In this world, I get entangled in surfing and an hour disappears. (…)

For years I would read during breakfast, the coffee stirring my pleasure in the prose. You can’t surf during breakfast. Well, maybe you can. Now I don’t have coffee and I don’t eat breakfast. I get up and check my e-mail, blog comments and Twitter.

{ Roger Ebert/Chicago Sun-Times }

photo { Stephen Shore }

Why not rather from its last? From today?


Tell about places you have been, strange customs. The other one, jar on her head, was getting the supper: fruit, olives, lovely cool water out of the well stonecold like the hole in the wall at Ashtown. Must carry a paper goblet next time I go to the trottingmatches. She listens with big dark soft eyes.

{ James Joyce, Ulysses, published in 1922 | Continue reading | Ulysses contains approximately 265,000 words from a lexicon of 30,030 words (including proper names, plurals and various verb tenses), divided into eighteen episodes. | Wikipedia | Continue reading }

images { Hilo Chen, Beach, 2005 | Huata, Open The Gates of Shambhala EP }

Put that in your pipe and smoke it


Under such circumstances, we are simply determined in our ideas by our fortuitous and haphazard encounter with things in the external world. This superficial acquaintance will never provide us with knowledge of the essences of those things. In fact, it is an invariable source of falsehood and error. This “knowledge from random experience” is also the origin of great delusions, since we –thinking ourselves free– are, in our ignorance, unaware of just how we are determined by causes.

{ Stanford Encyclopedia of Philosophy | Continue reading }

Knowledge of the first kind is based on sense experience and imagination. It includes all our knowledge of the moment-to-moment state of our body (warmth, cold, hunger, thirst, desire, etc.) as well as our knowledge of the properties of external bodies. All knowledge of this sort is partial or inadequate.

Knowledge of the second kind is based on reason or understanding. It includes all our knowledge of the common properties of bodies and minds (and of the sciences that concern these). Knowledge of the second kind also includes our knowledge of the definitions of substance, mode, God [Nature], etc. (…) According to Spinoza, this knowledge is always adequate and necessarily true. Spinoza discusses two important characteristics of knowledge of the second kind: 1) reason always regards things as necessary; 2) reason perceives things “in the light of eternity,” i.e. without any relationship to time. (…)

What we do not acquire in this way, however, is an understanding of our own existence “in the light of eternity.” Instead, we are generally caught up in the flow of time–the past, the present and the future–and this, Spinoza believes, often leads to unhappiness. Regarding the present as the only reality, we regret the loss of the past and either hope or fear for what the future will bring. But this is a confused (or inadequate) conception of our existence.

{ Don Rutherford, Notes on Part II of the Ethics | Continue reading }

more { Deleuze on Spinoza, 1978 | Deleuze on Spinoza, 1981 }

Was that then real? The only true thing in life? She was no more.


A double bind is a dilemma in communication in which an individual (or group) receives two or more conflicting messages, with one message negating the other. This creates a situation in which a successful response to one message results in a failed response to the other, so that the person will be automatically wrong regardless of response. The nature of a double bind is that the person cannot confront the inherent dilemma, and therefore can neither comment on the conflict, nor resolve it, nor opt out of the situation.

A double bind generally includes different levels of abstraction in orders of messages, and these messages can be stated or implicit within the context of the situation, or conveyed by tone of voice or body language. Further complications arise when frequent double binds are part of an ongoing relationship to which the person or group is committed.

Double bind theory is more clearly understood in the context of complex systems and cybernetics because human communication and also the mind itself function in an interactive manner similar to ecosystems. Complex systems theory helps us understand the interdependence of the parts of a message and provides “an ordering of what to the Newtonian looks like chaos.”

{ Wikipedia | Continue reading }

photo { Charlize Theron and Patty Jenkins photographed by Richard Avedon, 2004 | more }

Possess her once take the starch out of her


Over the past year or so, Stanley Fish has occasionally devoted his New York Times blog to the notion that, as he put it recently, higher education is “distinguished by the absence” of a relationship between its activities and any “measurable effects in the world.” (…) The humanities, Fish claimed, do not have an extrinsic utility—an instrumental value—and therefore cannot increase economic productivity, fashion an informed citizenry, sharpen moral perceptions, or reduce prejudice and discrimination. (…)

The real issue, as Fish concedes, is not whether art, music, history, or literature has instrumental value, but whether academic research into those subjects has such value. Few would claim that art and literature have no intrinsic worth, and very few would claim that they possess no measurable utility. Students at Harvard Medical School, for instance, like students at a growing number of medical schools across the country, now take art courses. Studying works of art, researchers believe, makes students more observant, more open to complexity, and more-flexible thinkers—in short, better doctors. (…)

In fact, humanities research already has instrumental value. That value, however, is rarely immediate or predictable. Consider the following examples: (…)

“Stream of consciousness” was a phrase first used by William James, in 1890, to describe the flow of perception in the human mind. It was later adopted by literary critics like Melvin J. Friedman, author of the 1955 book Stream of Consciousness: A Study in Literary Method, who used the term to explain the unedited forms of interior monologue common in modernist novels of the 1920s. (…) However, by his own acknowledgment, the computer scientist Donald Knuth’s innovations were most clearly influenced by the work of the Belgian computer scientist Pierre-Arnoul de Marneffe, who was in turn inspired by Arthur Koestler’s 1967 book, The Ghost in the Machine, on the structure of complex organisms. And that book took its title and its point of departure from a key piece of 20th-century humanities research, Gilbert Ryle’s The Concept of Mind (1949), which challenged Cartesian dualism.

There is, then, a visible legacy of utility that begins with research into Descartes and leads to important innovations in computer science. (…)

Examples of research with unclear instrumental value abound, in all disciplines. Scientists at the University of British Columbia have found that working in front of a blue wall (and not a red one) improves creative thinking.

{ Stephen J. Mexal/The Chronicle of Higher Education | Continue reading }

Did you know, Mr. Torrance, that your son is attempting to bring an outside party into this situation? Did you know that?


In the social sciences (following the work of Michel Foucault), a discourse is considered to be a formalized way of thinking that can be manifested through language, a social boundary defining what can be said about a specific topic, or, as Judith Butler puts it, “the limits of acceptable speech”—or possible truth.

Discourses are seen to affect our views on all things; it is not possible to escape discourse. For example, two notably distinct discourses can be used about various guerrilla movements, describing them either as “freedom fighters” or “terrorists.” In other words, the chosen discourse delivers the vocabulary, expressions and perhaps also the style needed to communicate. Discourse is closely linked to different theories of power and state, at least as long as defining discourses is seen to mean defining reality itself. It also helped some of the world’s greatest thinkers express their thoughts and ideas in what is now called “public orality.”

This conception of discourse is largely derived from the work of French philosopher Michel Foucault (see below).

{ Wikipedia | Continue reading }

photo { Richard Kalvar }

Every single show she out there reppin’ like a mascot


Alberto Giacometti’s six-foot-tall bronze Walking Man I sold at Sotheby’s London in February 2010 for the equivalent of $104.3 million, and was briefly (until overtaken this month by a Picasso) the most expensive artwork ever sold at auction. It remains, by far, the most expensive work available in multiple examples. The sculpture, cast by the artist himself in 1961, was reportedly bought by Lily Safra, widow of banker Edmund Safra. (…)

But by most people’s standards it is a very large sum of money; and in relation to the production cost of the sculpture it is an absurdly large sum of money. To make a copy of Walking Man I today (I am told by Morris Singer Foundry, which does much casting for artists in the UK) would cost in the region of $25,000, including the price of the bronze. If we allow for Giacometti’s time to make the piece, it does not substantially alter the enormity of the disparity between production cost and market price. Given that the sculpture exists in an edition of six, it would seem that, back in the early 1960s, Giacometti single-handedly created more than half a billion dollars of goods, at today’s prices, in at most a few weeks. (…)

Writing about the Giacometti sale, the Australian journalist Andrew Frost posed the question that no doubt many people ask themselves even if they do not utter it out loud: “Since the material value of art is negligible, we’re paying for something — but what?”

Pablo Picasso, legend has it, had an answer to this: you are paying for “a lifetime of experience.” But this explanation fails when we consider the market in work by Picasso himself. The most expensive works by Picasso that have been sold at auction are the 1932 Nude, Green Leaves and Bust, purchased recently for $106.5 million; and a 1905 Rose Period painting, Garcon a la Pipe, which was auctioned for $104.2 million in 2004. Allowing for inflation, the Garcon cost more in real terms than the Nude. Picasso, born in 1881, was roughly 24 years old when he painted it, and he eventually lived to 91: the art he produced in his old age, with a “lifetime of experience” behind him, is worth less, not more.

A sense of the disconnect between the production cost and market price of artworks has already become a part of modern consciousness. (…)

Recently a number of books have been published by economists who aim to reveal the mechanism that leads to the formation of staggering prices for art, especially modern art. As part of their analyses, these economists have attempted to define exactly what the quality or qualities are that collectors pay for. Among these books are Don Thompson’s The $12 Million Stuffed Shark, David W. Galenson’s Artistic Capital, and Olav Velthuis’s Talking Prices. Of course, theories of art can be more sophisticated than those proposed in the above books, but as a rule they don’t directly address the question “What are we paying for?” By applying the principle of Follow The Money, maybe we can arrive at an insight into art itself.

Thompson’s theory is that the price of the preserved shark to which his title alludes (a work by Damien Hirst) and of other expensive works of contemporary art is a reflection of “brand equity” produced by marketing and publicity. He compares explicitly the purchase of a “branded” artwork, i.e. one blessed by the gallery-auction-museum-press apparatus, to the purchase of a Louis Vuitton handbag, and suggests that branding is relied upon by buyers as a substitute for their own judgment, about which they feel insecure.

{ Matthew Bown | Continue reading }

photo { Isabelle Pateer }

Never yet have I found the woman by whom I should like to have children, unless it be this woman whom I love: for I love you, O Eternity!

Going to bed with every dream that dies here every morning


“Creativity is a complex concept; it’s not a single thing,” he said, adding that brain researchers needed to break it down into its component parts. Dr. Kounios, who studies the neural basis of insight, defines creativity as the ability to restructure one’s understanding of a situation in a nonobvious way.

Everyone agrees that no single measure for creativity exists. While I.Q. tests, though controversial, are still considered a reliable test of at least a certain kind of intelligence, there is no equivalent when it comes to creativity — no Creativity Quotient, or C.Q.

Dr. Jung’s lab uses a combination of measures as proxies for creativity. One is the Creativity Achievement Questionnaire, which asks people to report their own aptitude in 10 fields, including the visual arts, music, creative writing, architecture, humor and scientific discovery.

Another is a test for “divergent thinking,” a classic measure developed by the pioneering psychologist J. P. Guilford. Here a person is asked to come up with “new and useful” functions for a familiar object, like a brick, a pencil or a sheet of paper.

Dr. Jung’s team also presents subjects with weird situations. Imagine people could instantly change their sex, or imagine clouds had strings; what would be the implications?

In another assessment, a subject is asked to draw the taste of chocolate or write a caption for a humorous cartoon, as is done in The New Yorker magazine’s weekly contest. “Humor is an important part of creativity,” Dr. Jung said.

{ NY Times | Continue reading }

related { How did one ape 45,000 years ago happen to turn into a planet dominator? The answer lies in an epochal collision of creativity. | Why They Triumphed | full story }

photo { Michael Casker }

Only thing missin is a Missus


What do brains and computer chips have in common? Not that much. Sure, both use electricity, but in neurons the origin of electrical pulses is chemical, while for computer chips it comes from electrical currents. Neurons are highly plastic, rearranging their connections to adapt to new information, while computer chips are locked in their arrangement for their entire existence. But one thing they do share is the pattern of connections in their overall structure: specifically, both brains and computer chips use the shortest and most efficient pathways they can to avoid the costs associated with taking long detours for signals to get to their destination. Evolution and chip designers seem to have reached the same conclusions when bumping up against the same very basic and very important limits, says a recent research paper from a small international team in PLoS. (…)

First, the human brain, the nematode’s nervous system, and the computer chip all had a Russian doll-like architecture, with the same patterns repeating over and over again at different scales. Second, all three showed what is known as Rent’s scaling, a rule used to describe relationships between the number of elements in a given area and the number of links between them.
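
For readers unfamiliar with it, Rent’s rule is usually written T = t·N^p, where N is the number of elements inside a region, T is the number of links crossing the region’s boundary, t is the average number of links per element, and p is the Rent exponent. The Python sketch below is only an editorial illustration of how such an exponent can be estimated from a log-log fit; the numbers are made up and are not data from the PLoS paper:

```python
import numpy as np

# Rent's rule: T = t * N**p, i.e. log T = log t + p * log N.
# N = elements inside a nested region, T = links crossing its boundary.
# Illustrative numbers only -- not data from the PLoS study.
N = np.array([16, 64, 256, 1024])
T = np.array([22, 60, 165, 450])

p, log_t = np.polyfit(np.log(N), np.log(T), 1)   # slope of the log-log fit is the Rent exponent
print(f"Rent exponent p = {p:.2f}, links per element t = {np.exp(log_t):.2f}")
```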

The first finding confirms the research being done on intelligence and cognition in insects and mammals. (…) The second finding also seems to confirm something we know about evolution, namely that natural selection tends to trim down waste and excess if it can, and over a long period of trial and error it will eventually arrive at efficient solutions to basic problems.

{ Greg Fish | Continue reading }

video { Jackson Pollock painting, 1950 | more }

It will only be parlayed into a memory


Dr. Raymond Mar, of On Fiction: An Online Magazine on the Psychology of Fiction, published a research bulletin the other day summarizing a psychological study whose results apparently suggest that, in the words of the blog headline, “words reveal the personality of the writers.” After presenting the background, experimental procedure, and findings, Dr. Mar concludes that “From these findings, it appears that creative writing can indeed reveal aspects of the author’s personality to readers. An encouraging result for those of us who feel we’ve come to know an author by reading his or her books.”

I was excited by the headline. Typically, I approach reading as entering into a relationship with a writer and, when it comes to reading the works of cherished writers, I often work with the fantasy that I’m getting to know them better, more intimately. I even once had the experience of hallucinating an encounter with a dead author while visiting with his widow in their apartment in Paris. But as I read through Dr. Mar’s report of the study I was left with some questions and even some objections.

First, the background to the study.  According to Dr. Mar:

A fascinating study currently in press in the Journal of Research in Personality (Kufner et al., in press) provides evidence that, in some ways, we can infer what an author is like based solely on their writing. Although previous studies on inferring personality from written text have been conducted, this was the first study to look at creative writing as opposed to personal essays.

{ Yago Colás/ScientificBlogging | Continue reading | Thanks Emma }

For 40 years, Barnes & Noble has dominated bookstore retailing. In the 1970s it revolutionized publishing by championing discount hardcover best sellers. In the 1990s, it helped pioneer book superstores with selections so vast that they put many independent bookstores out of business.

Today it boasts 1,362 stores, including 719 superstores with 18.8 million square feet of retail space—the equivalent of 13 Yankee Stadiums.

But the digital revolution sweeping the media world is rewriting the rules of the book industry, upending the established players which have dominated for decades. Electronic books are still in their infancy, comprising an estimated 3% to 5% of the market today. But they are fast accelerating the decline of physical books, forcing retailers, publishers, authors and agents to reinvent their business models or be painfully crippled.

“By the end of 2012, digital books will be 20% to 25% of unit sales, and that’s on the conservative side,” predicts Mike Shatzkin, chief executive of the Idea Logical Co., publishing consultants. “Add in another 25% of units sold online, and roughly half of all unit sales will be on the Internet.”

Nowhere is the e-book tidal wave hitting harder than at bricks-and-mortar book retailers. The competitive advantage Barnes & Noble spent decades amassing—offering an enormous selection of more than 150,000 books under one roof—was already under pressure from online booksellers.

It evaporated with the recent advent of e-bookstores, where readers can access millions of titles for e-reader devices.

{ Wall Street Journal | Continue reading }

related { Book sales, frumpy readers, and mental rotation of book titles }

illustration { vb infinite swell in infinite indumentum | Imp Kerr & Associates, NYC }

No, you’re not thinkin’. You’re too busy being a smart aleck to be thinkin’. Now I want ya to “think” and stop bein’ a smart aleck. Can ya try that for me?


{ Gilles Deleuze, Dialogues, 1995 | Continue reading | Gilles Deleuze | Wikipedia }

Has her roses probably. Or sitting all day typing.


So, this is about the word “so.” (…) “So” may be the new “well,” new “um,” new “oh” and new “like.” No longer content to lurk in the middle of sentences, it has jumped to the beginning, where it can portend many things: transition, certitude, logic, attentiveness, a major insight.

{ NY Times | Continue reading }

Last year, grammatical tragedy struck in the heart of England when Birmingham City Council decreed that apostrophes were to be forever banished from public addresses. To the horror of purists and pedants alike, place names such as St Paul’s Square were banned and unceremoniously replaced with an apostrophe-free version: St Pauls Square.

The council’s reasoning was that nobody understands apostrophes and their misuse was so common in public signs that they were a hindrance to effective navigation. Anecdotes abounded of ambulance drivers puzzling over how to enter St James’s Street into a GPS navigation system while victims of heart attacks, strokes and hit ‘n’ run drivers passed from this world into the (presumably apostrophe-free) next.

Why the confusion? Part of the reason is that apostrophes are not particularly common in the English language: In French they occur at a rate of more than once per sentence on average. In English, they occur about once in every 20 sentences. So English speakers get less practice.

But the rules governing apostrophes are also more complex in English. In both French and English, apostrophes indicate a missing letter, such as the missing i in that’s or the v in e’er. But in English, apostrophes also indicate the possessive (or genitive) case. They are used to show that one noun owns another: St James’s Street is the street belonging to St James.

The complexity is compounded because in English, the plural is often formed by adding an s. So the word boys means more than one boy. How then do you form the possessive to indicate, for example, a ball belonging to the boys? Is it the boy’s ball or the boys’s ball or the boys’ ball?

And then there are the exceptions. Pronouns, for example, do not take a possessive apostrophe: you can’t say I’s ball or me’s bat. The truth is that knowing when to use an apostrophe is not always easy.

{ The Physics arXiv Blog | Continue reading | Mind your p’s and q’s: or the peregrinations of an apostrophe in 17th Century English | PDF }

photo { Marco Ovando }


