
‘Give me love, give me all that you’ve got.’ –Cerrone


What is sadness? What is anger? What is fear? Are they just words, or is there something more? In principle, sadness, anger, and fear are emotions, and so is love. Emotions are usually considered natural bodily experiences that are then expressed through language, and that language, in turn, is often described as irrational and subjective. That is, what we first feel in our bodies later comes out of our mouths in the form of a discourse which is, in some way, opposed to reason.

Emotions are also said to be gestated in the unconscious rather than in the will. Thus, they are more spontaneous than artificial, more “sincere” than “thinking.” Sometimes they are mixed with rational behaviors, whose existential status belongs to the order of the non-emotional. Recently, emotions have come to be considered not as the exclusive preserve of the individual’s interiority, but as discursive social constructions. Indeed, the social psychology of emotions has shown that the processes, causes, and consequences of emotions depend on language use.

Thus, we will deal with the close relationship between emotions and language. In particular, we will deal with an emotion that has been enormously important in the history of Western culture: “love,” understood in the broadest sense. Love has helped to define the essence of human beings.

“There are some who never would have loved if they never had heard it spoken of,” said La Rochefoucauld. Without a history of love and lovers, we would know nothing about how to cope with such a fundamental emotion, about why this particular emotion has been investigated in its various aspects, or about the strength of the interest when it comes to the relationship between emotions and language.

{ What is love? Discourse about emotions and social sciences | PDF | Continue reading }

artwork { Ingres, La Grande Odalisque, 1814 }

Still you have to get rid of it someway. They don’t care. Complimented perhaps.


Light swearing at the start or end of a persuasive speech can help influence an audience.

The problem is that we run the risk of losing credibility and appearing unprofessional.

To see whether swearing can help change attitudes, Scherer and Sagarin (2006) divided 88 participants into three groups to watch one of three slightly different speeches. The only difference between the speeches was that one contained a mild swear word at the start: “…lowering of tuition is not only a great idea, but damn it, also the most reasonable one for all parties involved.” The second speech contained the ‘damn it’ at the end and the third had neither.

When participants’ attitudes were measured, they were most influenced by the speeches with the mild obscenity included, either at the beginning or the end.

{ PsyBlog | Continue reading }

Is expressing thanks a powerful motivator or just a social nicety?

According to positive psychologists, saying ‘thank you’ is no longer just good manners, it is also beneficial to the self.

Studies have suggested that being grateful can improve well-being and physical health, strengthen social relationships, produce positive emotional states, and help us cope with stressful times in our lives.

{ PsyBlog | Continue reading }

artwork { Roy Lichtenstein, Grrrrrrrrrrr, 1965 }

Bertha Supple told that once to Edy Boardman, a deliberate lie

The preface paradox was introduced by David Makinson in 1965.

The argument runs along these lines:

It is customary for authors of academic books to include in the preface of their books statements such as “any errors that remain are my sole responsibility.” Occasionally they go further and actually claim there are errors in the books, with statements such as “the errors that are found herein are mine alone”.

(1) Such an author has written a book that contains many assertions, and has factually checked each one carefully, submitted it to reviewers for comment, etc. Thus, he has reason to believe that each assertion he has made is true.

(2) However, he knows, having learned from experience, that, in spite of his best efforts, there are very likely undetected errors in his book. So he also has good reason to believe that at least one assertion in his book is not true.

Thus he can rationally believe that the book both does and does not contain at least one error.
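The paradox has a simple probabilistic shape. As a rough numerical sketch (an illustration, not part of the Wikipedia argument): suppose the author is 99 percent confident in each assertion and, for the sake of arithmetic, treats the assertions as independent. His confidence that the whole book is error-free then collapses as the number of assertions grows:

```python
# Illustrative arithmetic only; it assumes the assertions are probabilistically
# independent, which real claims in a book are not.

def prob_error_free(per_claim_confidence: float, num_claims: int) -> float:
    """Probability that every single assertion is true."""
    return per_claim_confidence ** num_claims

for n in (100, 500, 1000):
    p = prob_error_free(0.99, n)
    print(f"{n:4d} claims at 99% each -> P(no errors) = {p:.5f}, "
          f"P(at least one error) = {1 - p:.5f}")
```

At 1,000 assertions the chance of at least one error exceeds 99.99 percent, so believing each individual claim while disclaiming their conjunction is, numerically at least, exactly the position the author takes in the preface.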

{ Wikipedia }

Four dinky sets, three garments and nighties extra, and each set slotted with different coloured ribbons, rosepink, pale blue, mauve and peagreen


Just ask yourself: Which colour do you prefer? Have you always preferred it, or did your preference change? Can you tell why you prefer pink to, let’s say, yellow? If you have no answer to these questions, you may wonder what’s so interesting about colour preferences. And if you have no answer, or no interest in the questions, it’s perhaps because they are not very well shaped.

Let’s first agree that color preference is an important aspect of human behavior. It influences a large number of decisions people make on a daily basis, including the clothes and makeup they wear, the way they decorate their homes, and the artifacts they buy or create, to name but a few examples. What is more interesting is that color is, in some sense, a superficial quality that seldom influences the practical function of artifacts. What’s more interesting for psychologists is that we still know very little about which factors actually determine these preferences. We still don’t have a good grasp of what they are, and how to capture them descriptively: some studies have reported universal preferences (for blue rather than red); others, for highly saturated colors; some, finally, stress cultural and individual differences.

The problem may be that testing for colour preferences has something to do with colour perception, colour labeling and cultural associations - and all these problems are hard to disconnect. Elderly people, for instance, tend to change their colour preferences, but this may have to do with visual impairment. (…)

Another theory suggests that women, as caregivers who need to be particularly sensitive to, say, a child flushed with fever, have developed a sensitivity to reddish changes in skin color, a skill that enhances their abilities as the “empathizer.”

Other arguments for innate colour preferences come from animal studies - with some recent surprising discoveries. Animal colour preferences in sexual or social contexts are assumed to have arisen owing to preferences for specific kinds of food, representing a sort of sensory bias.

{ Cognition and Culture | Continue reading }

photo { Tim Barber }

She’ll do no jugglywuggly with her war souvenir postcards to help to build me murial, tippers! I’ll trip your traps!


During World War II, Allied forces readily admitted German tanks were superior to their own. The big question for Allied forces, then, was how many tanks Germany was producing. Knowing that would help them counter the threat. Here’s how they reverse-engineered serial numbers to find out.

To solve the problem of determining production numbers, Allied forces initially tried conventional intelligence gathering: spying, intercepting and decoding transmissions and interrogating captured enemies.

Using these methods, the Allies deduced that the German military industrial complex churned out around 1,400 tanks each month from June 1940 through September 1942. That just didn’t seem right.

To put that number in context, Axis forces used 1,200 tanks during the Battle of Stalingrad, an eight-month battle that resulted in almost two million casualties. That meant the estimate of 1,400 most likely was too high.

Obviously skeptical of that result, the Allies looked for other methods of estimation. That’s when they found a critical clue: serial numbers.

Allied intelligence noticed each captured tank had a unique serial number. With careful observation, the Allies were able to determine the serial numbers had a pattern denoting the order of tank production. Using this data, the Allies created a mathematical model to determine the rate of German tank production. They used it to estimate that the Germans produced 255 tanks per month between the summer of 1940 and the fall of 1942.
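The excerpt doesn’t spell the model out, but the estimator now taught as the “German tank problem” is simple enough to sketch: if k captured tanks carry a maximum serial number m, estimate total production as m + m/k - 1. A minimal illustration in Python, with simulated capture data rather than the wartime figures:

```python
import random

def estimate_total(serials):
    """Minimum-variance unbiased estimate of the largest serial number:
    m + m/k - 1, where m is the biggest serial observed and k the sample size."""
    m, k = max(serials), len(serials)
    return m + m / k - 1

# Simulated capture: suppose true production is 255 tanks in a month and the
# Allies recover 10 of them at random (illustrative numbers, not historical data).
random.seed(0)
true_total = 255
captured = random.sample(range(1, true_total + 1), 10)
print("captured serials:", sorted(captured))
print(f"estimated production: {estimate_total(captured):.0f}")
```

Averaged over many simulated captures the estimate lands close to the true figure, which is the point of the passage: a handful of serial numbers can beat conventional intelligence.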

Turns out the serial-number methodology was spot on. After the war, internal German data put der Führer’s production at 256 tanks per month—one more than the estimate.

{ Wired | Continue reading }

‘The shortness of life, so often lamented, may be the best thing about it.’ –Arthur Schopenhauer


So why do women live longer than men? One idea is that men drive themselves to an early grave with all the hardship and stress of their working lives. If this were so, however, then in these days of greater gender equality, you might expect the mortality gap would vanish or at least diminish. Yet there is little evidence that this is happening. Women today still outlive men by about as much as their stay-at-home mothers outlived their office-going fathers a generation ago.

Furthermore, who truly believes that men’s work lives back then were so much more damaging to their health than women’s home lives? Just think about the stresses and strains that have always existed in the traditional roles of women: a woman’s life in a typical household can be just as hard as a man’s.

Indeed, statistically speaking, men get a much better deal out of marriage than their wives—married men tend to live many years longer than single men, whereas married women live only a little bit longer than single women. So who actually has the easier life?

It might be that women live longer because they develop healthier habits than men—for example, smoking and drinking less and choosing a better diet. But the number of women who smoke is growing and plenty of others drink and eat unhealthy foods. In any case, if women are so healthy, why is it that despite their longer lives, women spend more years of old age in poor health than men do? The lifestyle argument therefore does not answer the question either.

As an experimental gerontologist, I approach this issue from a wider biological perspective, by looking at other animals. It turns out that the females of most species live longer than the males. This phenomenon suggests that the explanation for the difference within humans might lie deep in our biology.

{ Scientific American | Continue reading }

‘Even God can’t change the past.’ –Agathon


In 1904, King Gillette — who names their kid King? — received two patents on razors, blades, and the combination of the two. As the patents make clear, Gillette had a clear vision of the markets that he would create: “Hence,” stated the patent application, “I am able to produce and sell my blades so cheaply that the user may buy them in quantities and throw them away when dull without making the expense … as great as that of keeping the prior blades sharp.”

But Gillette did more than invent a new razor and a new blade. As Chris Anderson notes in his recent business bestseller, Free, Gillette invented an entire business strategy, one that’s still invoked in business schools and implemented today across many industries — from VCRs and DVD players to video game systems like the Xbox and now ebook readers. It’s pretty simple: invest in an installed base by selling a product at a low price or even giving it away, then sell a related product at high prices to recoup the prior investment. King Gillette launched us down this road.

Or did he?

{ Randy Picker/Harvard Business School | Continue reading }

artwork { Roy Lichtenstein, Half Face with Collar, 1963 }

I’m not the guy you kill. I’m the guy you buy.


In fact, most of the life on the planet is probably composed of bacteria. They have been found making a living in Cretaceous-era sediments below the bottom of the ocean and in ice-covered Antarctic lakes, inside volcanoes, miles high in the atmosphere, teeming in the oceans — and within every other life-form on Earth.

These facts by themselves may trigger existential shock: People are partly made of pond scum. But beyond that psychic trauma, a new and astonishing vista unfolds. In a series of recent findings, researchers describe bacteria that communicate in sophisticated ways, take concerted action, influence human physiology, alter human thinking and work together to bioengineer the environment. These findings may foreshadow new medical procedures that encourage bacterial participation in human health. They clearly set out a new understanding of the way in which life has developed on Earth to date, and of the power microbes have to regulate both the global environment and the internal environment of the human beings they inhabit and influence so profoundly.

Science has determined that life arose and became complex through a process generally known as evolution, but biologists are engaged in an energetic debate about the form of that evolution. In essence, the argument centers on whether the biosphere should be characterized as a tree of life or an interactive web. In the tree construct, every living thing springs from a common ancestor, organisms evolve slowly by means of random mutations, and genes are passed on from parent to offspring (that is to say, vertically). The farther away from the common ancestor, generally speaking, the more complex the life-form, with humans at the apex of complexity.

The tree-of-life notion remains a reasonable fit for the eukaryotes, but emerging knowledge about bacteria suggests that the micro-biosphere is much more like a web, with information of all kinds, including genes, traveling in all directions simultaneously. Microbes also appear to take a much more active role in their own evolution than the so-called “higher” animals. (…)

Recent research has shown that gut microbes control or influence nutrient supply to the human host, the development of mature intestinal cells and blood vessels, the stimulation and maturation of the immune system, and blood levels of lipids such as cholesterol. They are, therefore, intimately involved in the bodily functions that tend to be out of kilter in modern society: metabolism, cardiovascular processes and defense against disease. Many researchers are coming to view such diseases as manifestations of imbalance in the ecology of the microbes inhabiting the human body. If further evidence bears this out, medicine is about to undergo a profound paradigm shift, and medical treatment could regularly involve kindness to microbes.

Still, in practice, the medical notion of friendly microbes has yet to extend much past the idea that eating yogurt is good for you. For most doctors and medical microbiologists, microbes are enemies in a permanent war. Medicine certainly has good reason to view microbes as dangerous, since the germ theory of disease and the subsequent development of antibiotics are two of medical science’s greatest accomplishments.

But there’s a problem: The paradigm isn’t working very well anymore. Not only are bacteria becoming antibiotic-resistant, but antibiotics are creating other problems. Approximately 25 percent of people treated with antibiotics for an infection develop diarrhea. Moreover, people who contract infections just by being hospitalized are at risk of developing chronic infections in the form of biofilms.

{ Miller-McCune | Continue reading | Thanks Constantine }

All these things that nobody wants any more, September’s reminding July

To be wrong is to be surprised, and I think in day-to-day life, we’re often disorientated and upset by that experience. But in literature or art it’s safe to explore that disorientation. 

{ Kathryn Schulz | Five Books | Continue reading }

Where do we go from here, time ain’t nothing but time


Strangely, history has never figured into the equation when it comes to exploring why some countries prosper and others don’t. To understand how it might relate, Diego Comin at Harvard Business School, Erick Gong at the University of California, Berkeley, and I started by compiling a list of 11 ancient technologies that were around in 1000 B.C.: Was there written language? The wheel? Agriculture and iron tools? We drew today’s boundaries on the ancient world and assigned each separate technology history to the future country that would form within that territory. Then we expanded the survey to 1500 A.D., looking for the adoption of 24 technologies, including oceangoing ships, paper, printing, firearms, artillery, the magnetic compass, and steel.

We found that there was a remarkably strong association between countries with the most advanced technology in 1500 and countries with the highest per capita income today. Europe already had steel, printed books, and oceangoing ships then, while large parts of Africa did not yet have writing or the wheel. Britain had all 24 of our sample technologies in 1500. The Democratic Republic of the Congo, Papua New Guinea, and Tonga had none of them. But technology also travels. North America, Australia, and New Zealand had among the world’s most backward technology in 1500; today, they are among the wealthiest regions on Earth, reflecting the principle that it’s the people who matter, not the places. As migration has transformed parts of the world that were nearly empty in the Middle Ages, technology has migrated with them.
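As a rough sketch of the kind of association being described (the country scores and incomes below are invented for illustration; the study’s actual inputs are the 24-technology adoption histories and real per capita incomes), one would score each country by the share of the 1500 A.D. technologies it had adopted and correlate that score with income today:

```python
from math import log
from statistics import correlation  # available in Python 3.10+

# Hypothetical toy data, invented for illustration (not the study's figures):
# share of the 24 sample technologies adopted by 1500, and per capita income
# today in US dollars.
adoption_1500 = {"A": 1.00, "B": 0.75, "C": 0.40, "D": 0.10, "E": 0.00}
income_today = {"A": 45000, "B": 30000, "C": 9000, "D": 2500, "E": 1200}

countries = sorted(adoption_1500)
x = [adoption_1500[c] for c in countries]
y = [log(income_today[c]) for c in countries]
print(f"correlation(adoption share, log income) = {correlation(x, y):.2f}")
```

With five made-up points the number itself means little; the study’s claim is that the real version of this correlation turns out to be remarkably strong.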

{ Foreign Policy | Continue reading }

‘Obviously the facts are never just coming at you but are incorporated by an imagination that is formed by your previous experience.’ –Philip Roth


Ask any Ashkenazi American Jew about his family’s arrival in the United States, and you’re likely to hear a certain story. With minor variations, it goes something like this: “My great-grandfather was called Rogarshevsky, but when he arrived at Ellis Island, the immigration officer couldn’t understand his accent. So he just wrote down ‘Rogers,’ and that became my family’s name.”

Most American Jews accept such stories as fact. The truth, however, is that they’re fiction. Ellis Island, New York City’s historic immigrant-absorption center, processed up to 11,000 immigrants daily between 1892 and 1924. Yet despite this incessant flow of newcomers, the highest standards of professionalism were demanded of those who worked there. All inspectors—many of whom were themselves immigrants, or children of immigrants—were required to know at least two languages; many knew far more, and all at the native-speaker level. Add to that the hundreds of auxiliary interpreters, and together you’ve covered nearly every possible language one might hear at Ellis Island. Yiddish, Russian, and Polish, in this context, were a piece of cake.

{ Azure Online | Continue reading }

‘A poet must leave traces of his passage, not proof.’ –René Char


We do hate to give up control over the most important things in our lives. And viewing happiness as subject to external influence limits our control — not just in the sense that whether you get to live happily might depend on how things go, but also in the sense that what happiness is is partly a matter of how things beyond you are. We might do everything we can to live happily — and have everything it takes on our part to be happy, all the right thoughts and feelings — and yet fall short, even unbeknownst to us. That’s a threatening idea.

{ Opinionator/NY Times | Continue reading }

We can actually reprogram our brains to be happier, says Achor. “The brain is like a single processor in a computer.” Someone who is chronically negative or pessimistic is merely scanning first for the stresses and the hassles of life. And because the brain has finite resources, it cannot also scan for the positive elements. As a result, that person continuously reinforces his own negativity, causing himself to feel unhappy.

{ Big Think | Continue reading }

Birds too never find out what they say. Like our small talk.


Understanding how ant colonies actually function means that we have to abandon explanations based on central control. This takes us into difficult and unfamiliar terrain. We are deeply attached to the idea that any system of interacting agents must be organized through hierarchy. Our metaphors for describing the behavior of such systems are permeated with notions of a chain of command. For example, we explain what our bodies do by talking about genes as “blueprints,” unvarying instructions passed from an architect to a builder. But we know that instructions from genes constantly change, as genes turn off and on in response to local interactions among cells.

Ant colonies, like genes, work without blueprints or programming. No ant understands what needs to be done or what its actions mean for the welfare of the colony. An ant colony has no teams of workers dedicated to fighting or foraging. Although it is still commonly believed that each ant is assigned a task for life, ant biologists now know that ants move from one task to another.

Colonies are regulated by networks of interaction. Ants respond only to their immediate surroundings and to their interactions with the other ants nearby. What matters is the rhythm of interactions, not their meaning. Ants respond to the pattern and rate of their encounters with each other, as well as to the smells they perceive in the world, such as the picnic sandwiches. (…)

A real ant colony is not a society of scheming, self-sacrificing individuals. It is more like an office that communicates by meaningless text messaging in which each worker’s task is determined by how many messages she just received. The colony has no central purpose. Each ant responds to the rate of her brief encounters with other ants and has no sense of the condition or the goals of the whole colony. Unlike the ants in Anthill, no ant really cares if the queen dies.
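To make that picture concrete, here is a toy simulation (a sketch, not a model from the article): each ant follows one local rule based on its own recent encounters, nobody tracks what the colony needs, and a stable division of labor appears anyway.

```python
import random

# Toy sketch of task allocation by encounter rate (an illustration, not a
# published model): a nest worker starts foraging with probability proportional
# to how many foragers it bumped into during its last few encounters, and a
# forager gives up and returns to nest work at a small fixed rate.
random.seed(1)
N_ANTS, STEPS, MEETINGS = 300, 60, 5
RECRUIT_PER_ENCOUNTER, QUIT_RATE = 0.10, 0.20

is_forager = [i < 15 for i in range(N_ANTS)]     # a small seed group of foragers
for step in range(STEPS):
    snapshot = is_forager[:]                     # decisions use last step's state
    for i in range(N_ANTS):
        encounters = random.sample(range(N_ANTS), MEETINGS)
        foragers_met = sum(snapshot[j] for j in encounters)
        if snapshot[i]:
            is_forager[i] = random.random() > QUIT_RATE
        else:
            is_forager[i] = random.random() < RECRUIT_PER_ENCOUNTER * foragers_met
    if step % 10 == 0:
        print(f"step {step:2d}: {sum(is_forager):3d} / {N_ANTS} ants foraging")
```

Every decision uses only an ant’s last few encounters, yet the number of foragers grows from the seed group and then levels off; that is the kind of colony-level regulation the passage describes, with no ant holding the blueprint.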

Ant colonies are not the only complex systems that function without central control. Brains, too, have no chain of command. (…) No one really knows how intelligence is distributed in the human brain. (…) The outstanding scientific questions about ants and brains are the same ones we have about many other biological systems that function without hierarchy, such as the immune system, the communities of bacteria in our bodies, and the patterns we see in the diversity of tropical forests. For all of these systems, we still don’t understand how the parts work together to produce the dynamics, the history, and the development of the whole system.

{ Boston Review | Continue reading }

Yesterday never comes back


{ Michel Foucault, interview, 1982 | Continue reading }


{ Nietzsche, Ecce Homo, 1888 }


{ Friedrich Nietzsche born October 15, 1844 | bio | Michel Foucault born October 15, 1926 | bio }

The right to contradict oneself, and to walk away


{ Jean Eustache, The Mother and the Whore, 1973 }

Such as she is, from former times, nine hosts in herself


In principle, to be cool means to remain calm even under stress. But this doesn’t explain why there is now a global culture of cool. What is cool?

The aesthetics of cool developed mainly as a behavioral attitude practiced by black men in the United States at the time of slavery. Slavery made necessary the cultivation of special defense mechanisms which employed emotional detachment and irony. A cool attitude helped slaves and former slaves to cope with exploitation or simply made it possible to walk the streets at night. During slavery, and long afterwards, overt aggression by blacks was punishable by death. Provocation had to remain relatively inoffensive, and any level of serious intent had to be disguised or suppressed. So cool represents a paradoxical fusion of submission and subversion. It’s a classic case of resistance to authority through creativity and innovation. (…)

In spite of the ambiguity, it seems that we remain capable of distinguishing cool attitudes from uncool ones. So what is cool? Let me say that cool resists linear structures. Thus a straightforward, linear search for power is not cool. Constant loss of power is not cool either. Winning is cool; but being ready to do anything to win is not. Both moralists and totally immoral people are uncool, while people who maintain moral standards in straightforwardly immoral environments are most likely to be cool. A CEO is not cool, unless he is a reasonable risk-taker and refrains from pursuing success in a predictable fashion. Coolness is a nonconformist balance that manages to square circles and to personify paradoxes. This has been well known since at least the time of cool jazz. This paradoxical nature has much to do with cool’s origins being the fusion of submission and subversion. (…)

In ancient Greece, the Stoic philosophers supported a vision of coolness in a turbulent world. The Stoic indifference to fate can be interpreted as the supreme principle of coolness, and has even been viewed as such in the context of African American culture. The style of the jazz musician Lester Young, for example, was credible mostly because Young was neither proud nor ashamed.

{ Philosophy Now | Continue reading }

photo { Randy P. Martin }

With a little difference, till the latest up to date so early in the morning


1. It’s going to get worse
No silver linings and no lemonade. The elevator only goes down. The bright note is that the elevator will, at some point, stop.

2. The future isn’t going to feel futuristic
It’s simply going to feel weird and out-of-control-ish, the way it does now, because too many things are changing too quickly. The reason the future feels odd is because of its unpredictability. If the future didn’t feel weirdly unexpected, then something would be wrong.

{ Douglas Coupland’s guide to the next 10 years | The Globe and Mail | Continue reading }

photo { Ali Bosworth }

‘The human mind has no knowledge of the body, and does not know it to exist, save through the ideas of the modifications whereby the body is affected.’ –Spinoza


The brain has long enjoyed a privileged status as psychology’s favorite body organ. This is, of course, unsurprising given that the brain instantiates virtually all mental operations, from understanding language, to learning that fire is dangerous, to recalling names, to categorizing fruits and vegetables, to predicting the future. Arguing for the importance of the brain in psychology is like arguing for the importance of money in economics.

More surprising, however, is the role of the entire body in psychology and the capacity for body parts inside and out to influence and regulate the most intimate operations of emotional and social life. The stomach’s gastric activity, for example, corresponds to how intensely people experience feelings such as happiness and disgust. The hands’ manipulation of objects that vary in temperature and texture influences judgments of how “warm” or “rough” people are. And the ovaries and testes’ production of progesterone and testosterone shapes behavior ranging from financial risk-taking to shopping preferences.

Psychology’s recognition of the body’s influence on the mind coincides with a recent focus on the role of the heart in our social psychology. It turns out that the heart is not only critical for survival, but also for how people relate to one another.

{ Scientific American | Continue reading }

One of the pressing questions in seventeenth century philosophy, and perhaps the most celebrated legacy of Descartes’s dualism, is the problem of how two radically different substances such as mind and body enter into a union in a human being and cause effects in each other. How can the extended body causally engage the unextended mind, which is incapable of contact or motion, and “move” it, that is, cause mental effects such as pains, sensations and perceptions?

Spinoza, in effect, denies that the human being is a union of two substances. The human mind and the human body are two different expressions—under Thought and under Extension—of one and the same thing: the person. And because there is no causal interaction between the mind and the body, the so-called mind-body problem does not, technically speaking, arise.

{ Stanford Encyclopedia of Philosophy | Continue reading }

photo { Philippe Halsman }

‘A glance, a word from you, gives greater pleasure than all the wisdom of this world.’ –Goethe


The Critique of Judgment, published in 1790, not only closes off Kant’s system as the end toward which Enlightenment thought had always tended, but also, in Deleuze’s interpretation, inaugurates Romanticism (…) and represents nothing less than “the foundation of Romanticism.”

{ Kant, Romantic irony, Unheimlichkeit | Cambridge University Press | Continue reading | PDF }

Kant (1724–1804) defines his theory of perception in his influential 1781 work The Critique of Pure Reason, which has often been cited as the most significant volume of metaphysics and epistemology in modern philosophy. Kant asserts that experience is based both upon the perception of external objects and a priori knowledge.

The Critique of Practical Reason (1788), the second of Kant’s three critiques, deals with his moral philosophy. While the first Critique suggested that God, freedom, and immortality are unknowable, the second Critique mitigates this claim.

Kant’s contribution to aesthetic theory is developed in the Critique of Judgment (1790) where he investigates the possibility and logical status of “judgments of taste.” After A. G. Baumgarten, who wrote Aesthetica (1750–58), Kant was one of the first philosophers to develop and integrate aesthetic theory into a unified and comprehensive philosophical system.

{ Wikipedia | Continue reading }

To understand the project of the Critique better, let us consider the historical and intellectual context in which it was written. Kant wrote the Critique toward the end of the Enlightenment, which was then in a state of crisis. Hindsight enables us to see that the 1780’s was a transitional decade in which the cultural balance shifted decisively away from the Enlightenment toward Romanticism, but of course Kant did not have the benefit of such hindsight.

The Enlightenment was a reaction to the rise and successes of modern science in the sixteenth and seventeenth centuries. The spectacular achievement of Newton in particular engendered widespread confidence and optimism about the power of human reason to control nature and to improve human life. One effect of this new confidence in reason was that traditional authorities were increasingly questioned. For why should we need political or religious authorities to tell us how to live or what to believe, if each of us has the capacity to figure these things out for ourselves? (…)

Enlightenment is about thinking for oneself rather than letting others think for you. (…) The Enlightenment was about replacing traditional authorities with the authority of individual human reason, but it was not about overturning traditional moral and religious beliefs. (…)

So modern science, the pride of the Enlightenment, the source of its optimism about the powers of human reason, threatened to undermine traditional moral and religious beliefs that free rational thought was expected to support. This was the main intellectual crisis of the Enlightenment.

The Critique of Pure Reason is Kant’s response to this crisis. Its main topic is metaphysics because, for Kant, metaphysics is the domain of reason – it is “the inventory of all we possess through pure reason, ordered systematically” (Axx) — and the authority of reason was in question. Kant’s main goal is to show that a critique of reason by reason itself, unaided and unrestrained by traditional authorities, establishes a secure and consistent basis for both Newtonian science and traditional morality and religion. (…)

{ Stanford Encyclopedia of Philosophy | Continue reading }

But just then there was a slight altercation between Master Tommy and Master Jacky


{ 1. Joseph Jastrow’s Duck-Rabbit, 1899, based on a drawing published in the German humor magazine Fliegende Blätter, 1892 | 2. Taxidermied rabbit–duck | via Richard Wiseman }

I noticed a depiction of the famous “duck-rabbit” figure, described as an “illusion” and attributed to Wittgenstein (Malach, Levy, & Hasson, 2002).
 
Technically, the duck-rabbit figure is an ambiguous (or reversible, or bistable) figure, not an illusion (Peterson, Kihlstrom, Rose, & Glisky, 1992). The two classes of perceptual phenomena have quite different theoretical implications. From a constructivist point of view, many illusions illustrate the role of unconscious inferences in perception, while the ambiguous figures illustrate the role of expectations, world-knowledge, and the direction of attention (Long & Toppino, 2004).

For example, children tested on Easter Sunday are more likely to see the figure as a rabbit; if tested on a Sunday in October, they tend to see it as a duck or similar bird (Brugger & Brugger, 1993).

But the more important point of this letter concerns attribution: the duck-rabbit was “originally noted” not by Wittgenstein, but rather by the American psychologist Joseph Jastrow in 1899.

{ John F. Kihlstrom | Continue reading }


