One of the increasingly famous paradoxes in science is named after the German mathematician Dietrich Braess, who noted that adding extra roads to a network can lead to greater congestion. Conversely, removing roads can improve travel times.

Traffic planners have recorded many examples of Braess’ paradox in cities such as Seoul, Stuttgart, New York and London. And in recent years, physicists have begun to study how it might apply in other areas too: power transmission, sporting performance (where removing one player can sometimes improve a team’s results) and materials science (where the network of forces within a material can be modified in counterintuitive ways, to make it expand under compression, for example).

Today, Krzysztof Apt at the University of Amsterdam in the Netherlands and a couple of pals reveal an entirely new version of this paradox that occurs in social networks where people choose products based on the decisions made by their friends.

They show mathematically that adding extra products can reduce the outcome for everyone and that reducing product choice can lead to better outcomes for all. That’s a formal equivalent to Braess’ paradox for consumers.

MATHEMATICS PRIZE: Dorothy Martin of the USA (who predicted the world would end in 1954), Pat Robertson of the USA (who predicted the world would end in 1982), Elizabeth Clare Prophet of the USA (who predicted the world would end in 1990), Lee Jang Rim of KOREA (who predicted the world would end in 1992), Credonia Mwerinde of UGANDA (who predicted the world would end in 1999), and Harold Camping of the USA (who predicted the world would end on September 6, 1994 and later predicted that the world will end on October 21, 2011), for teaching the world to be careful when making mathematical assumptions and calculations.

The temperature of heaven can be rather accurately computed. Our authority is the Bible. Isaiah 30:26 reads:

Moreover, the light of the moon shall be as the light of the sun and the light of the sun shall be sevenfold as the light of seven days.

Thus, heaven receives from the moon as much radiation as the earth does from the sun, and in addition seven times seven (forty-nine) times as much as the earth does from the sun, or fifty times in all. The light we receive from the moon is one ten-thousandth of the light we receive from the sun, so we can ignore that. With these data we can compute the temperature of heaven: the radiation falling on heaven will heat it to the point where the heat lost by radiation is just equal to the heat received by radiation. In other words, heaven loses fifty times as much heat as the earth by radiation. Using the Stefan-Boltzmann fourth-power law for radiation,

(H/E)^{4} = 50

where E is the absolute temperature of the earth, 300 K (273 + 27). This gives H, the absolute temperature of heaven, as 798 K (525°C).
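The arithmetic can be checked in a few lines (a sketch; the 300 K figure for the earth is the article’s own assumption):

```python
# Stefan-Boltzmann: radiated power scales as T^4, so if heaven
# receives (and re-radiates) fifty times Earth's flux, (H/E)^4 = 50.
E = 273 + 27            # Earth's absolute temperature, 300 K
H = E * 50 ** 0.25      # temperature of heaven in kelvin

print(round(H))         # ~798 K
print(round(H - 273))   # ~525 degrees C
```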

The exact temperature of hell cannot be computed, but it must be less than 444.6°C, the temperature at which brimstone, or sulfur, changes from a liquid to a gas. Revelation 21:8 reads: “But the fearful and unbelieving… shall have their part in the lake which burneth with fire and brimstone.” A lake of molten brimstone [sulfur] means that its temperature must be at or below the boiling point, which is 444.6°C. (Above that point, it would be a vapor, not a lake.)

We have then, temperature of heaven, 525°C. Temperature of hell, less than 445°C. Therefore heaven is hotter than hell.

Refutation:

In Applied Optics (1972, 11, A14) there appeared a calculation of the respective temperatures of Heaven and Hell. That of Heaven was computed by substituting the values given in Isaiah 30:26 into the Stefan-Boltzmann radiation law. […] This is hard to find fault with. The assessment of the temperature of Hell stands, I suggest, on less firm ground.

What number is halfway between 1 and 9? Is it 5 — or 3?

Ask adults from the industrialized world what number is halfway between 1 and 9, and most will say 5. But pose the same question to small children, or people living in some traditional societies, and they’re likely to answer 3.

Cognitive scientists theorize that this is because it’s actually more natural for humans to think logarithmically than linearly: 3^{0} is 1, and 3^{2} is 9, so on a logarithmic scale the number halfway between them is 3^{1}, or 3. Neural circuits seem to bear out that theory. For instance, psychological experiments suggest that multiplying the intensity of some sensory stimuli causes a linear increase in perceived intensity.
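The two notions of “halfway” correspond to the arithmetic and geometric means, which a couple of lines make concrete:

```python
import math

a, b = 1, 9
linear_mid = (a + b) / 2   # arithmetic mean: halfway on a linear scale
log_mid = math.sqrt(a * b) # geometric mean: halfway on a logarithmic scale

print(linear_mid)  # 5.0 -- the adult's answer
print(log_mid)     # 3.0 -- the child's answer
```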

In a paper that appeared online last week in the Journal of Mathematical Psychology, researchers from MIT’s Research Laboratory of Electronics (RLE) use the techniques of information theory to demonstrate that, given certain assumptions about the natural environment and the way neural systems work, representing information logarithmically rather than linearly reduces the risk of error.

Grigori Perelman is one of the greatest mathematicians of our time, a Russian genius who solved the Poincaré Conjecture, which plagued the brightest minds for a century. At the height of his fame, he refused a million-dollar award for his work. Then he disappeared. Our writer hunts him down on the streets of St. Petersburg. […]

Word was that someone had solved an unsolvable math problem. The Poincaré conjecture concerns three-dimensional spheres, and it has broad implications for spatial relations and quantum physics, even helping to explain the shape of the universe. For nearly 100 years the conjecture had confused the sharpest minds in math, many of whom claimed to have proven it, only to have their work discarded upon scrutiny. The problem had broken spirits, wasted lives. By the time Perelman defeated the conjecture, after many years of concentrated exertion, the Poincaré had affected him so profoundly that he appeared broken too.

Perelman, now 46, had a certain flair. When he completed his proof, over a number of months in 2002 and 2003, he did not publish his findings in a peer-reviewed journal, as protocol would suggest. Nor did he vet his conclusions with the mathematicians he knew in Russia, Europe and the U.S. He simply posted his solution online in three parts—the first was named “The Entropy Formula for the Ricci Flow and Its Geometric Applications”—and then e-mailed an abstract to several former associates, many of whom he had not contacted in nearly a decade.

Every day, crucial business and political decisions are made on the basis of numerical data. Only rarely do the key decision makers produce that data; rather they rely on others, not only to produce it, but to present it to them. Yet how many quants – the data producers – know how to present data effectively? To put it another way, how many of them know how to tell a story using numbers?

This is a surprisingly ancient question. It was Aristotle who first introduced a clear distinction to help make sense of it. He distinguished between two varieties of infinity. One of them he called a potential infinity: this is the type of infinity that characterises an unending Universe or an unending list, for example the natural numbers 1, 2, 3, 4, 5, …, which go on forever. These are lists or expanses that have no end or boundary: you can never reach the end of all numbers by listing them, or the end of an unending universe by travelling in a spaceship. Aristotle was quite happy about these potential infinities; he recognised that they existed, and they didn’t create any great scandal in his way of thinking about the Universe.

Aristotle distinguished potential infinities from what he called actual infinities. These would be something you could measure, something local, for example the density of a solid, or the brightness of a light, or the temperature of an object, becoming infinite at a particular place or time. You would be able to encounter this infinity locally in the Universe. Aristotle banned actual infinities: he said they couldn’t exist. This was bound up with his other belief, that there couldn’t be a perfect vacuum in nature. If there could, he believed you would be able to push and accelerate an object to infinite speed because it would encounter no resistance. […]

But in the world of mathematics things changed towards the end of the 19th century when the mathematician Georg Cantor developed a more subtle way of defining mathematical infinities. Cantor recognised that there was a smallest type of infinity: the unending list of natural numbers 1,2,3,4,5, … . He called this a countable infinity. […] This idea had some funny consequences. For example, the list of all even numbers is also a countable infinity. Intuitively you might think there are only half as many even numbers as natural numbers because that would be true for a finite list. But when the list becomes unending that is no longer true.
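Cantor’s point can be made concrete: the map n ↦ 2n pairs every natural number with exactly one even number, missing none and repeating none, so the two unending lists are the same size. A sketch of the pairing:

```python
# Pair each natural number n with the even number 2n.
# The pairing uses no natural number twice and reaches every
# even number, so the evens are "as numerous" as the naturals.
naturals = range(1, 6)
pairs = [(n, 2 * n) for n in naturals]
print(pairs)  # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
```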

There are several trends that might suggest a diminishing role for mathematics in engineering work. First, there is the rise of software engineering as a separate discipline. It just doesn’t take as much math to write an operating system as it does to design a printed circuit board. Programming is rigidly structured and, at the same time, an evolving art form—neither of which is especially amenable to mathematical analysis.

Some scientists think that subway networks are an emergent phenomenon of large cities; each network is the product of hundreds of rational but uncoordinated decisions that take place over many years. And whereas small cities rarely have subway networks, 25 percent of medium-sized cities (with populations between one million and two million) do have them. And all the world’s megacities—those with populations of 10 million or more—have subway systems.

It’s famously tough getting through the Google interview process. But now we can reveal just how strenuous the mental acrobatics demanded of prospective employees can be. Job-seekers can expect to face open-ended riddles, seemingly impossible mathematical challenges and mind-boggling estimation puzzles. (…)

1. You are shrunk to the height of a 2p coin and thrown into a blender. Your mass is reduced so that your density is the same as usual. The blades start moving in 60 seconds. What do you do? (…)

3. Design an evacuation plan for San Francisco. (…)

5. Imagine a country where all the parents want to have a boy. Every family keeps having children until they have a boy; then they stop. What is the proportion of boys to girls in this country? (…)

6. Use a programming language to describe a chicken. (…)

7. What is the most beautiful equation you have ever seen? (…) Most would agree this is a lame answer:
E = mc^{2}
It’s like a politician saying his favorite movie is Titanic.
You want Einstein? A better reply is:
G = 8πT (…)

8. You want to make sure that Bob has your phone number. You can’t ask him directly. Instead you have to write a message to him on a card and hand it to Eve, who will act as a go-between. Eve will give the card to Bob and he will hand his message to Eve, who will hand it to you. You don’t want Eve to learn your phone number. What do you ask Bob? (…)

11. How much would you charge to wash all the windows in Seattle? (…)
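Question 5 above yields to a quick simulation (a sketch; any unbiased coin-flip model of births will do). The stopping rule doesn’t change the fact that each birth is an independent 50/50 event, so the proportion of boys comes out at one half:

```python
import random

# Each family has children until the first boy, then stops.
rng = random.Random(0)
boys = girls = 0
for _ in range(100_000):
    while rng.random() < 0.5:   # a girl is born: the family keeps trying
        girls += 1
    boys += 1                   # the first boy ends the family

proportion = boys / (boys + girls)
print(proportion)  # close to 0.5
```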

Reliable and unbiased random numbers are needed for a range of applications, from numerical modelling to cryptographic communications. While algorithms can generate pseudo-random numbers, their output can never be perfectly random: it is fully determined by the algorithm and its seed.
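That determinism is easy to demonstrate: seed a pseudo-random generator the same way twice and it emits exactly the same “random” sequence.

```python
import random

# Two generators with the same seed produce identical output --
# nothing here is unpredictable to someone who knows the seed.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]
print(seq_a == seq_b)  # True
```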

Researchers at the ANU are generating true random numbers from a physical quantum source. We do this by splitting a beam of light into two beams and then measuring the power in each beam. Because light is quantised, the light intensity in each beam fluctuates about the mean. Those fluctuations, due ultimately to the quantum vacuum, can be converted into a source of random numbers. Every number is randomly generated in real time and cannot be predicted beforehand.

{ We show that if a car stops at a stop sign, an observer, e.g., a police officer, located at a certain distance perpendicular to the car trajectory, must have an illusion that the car does not stop, if the following three conditions are satisfied: (1) the observer measures not the linear but angular speed of the car; (2) the car decelerates and subsequently accelerates relatively fast; and (3) there is a short-time obstruction of the observer’s view of the car by an external object, e.g., another car, at the moment when both cars are near the stop sign. | Dmitri Krioukov/arXiv | PDF }

Math can be a fun, logic puzzle for some people. But for others, doing math is a headache-inducing experience. Scientists at the Stanford University School of Medicine have recently shown that people who experience math anxiety may have brains that are wired a little differently from those who don’t, and this difference in brain activity may be what’s making people sweat over equations.

Benford’s Law, also known as the first-digit rule, says that in data sets drawn from real life (sales of coffee, say, or payments to a vendor), the number 1 appears as the first digit approximately 30% of the time, rather than the roughly 11% you would expect if each first digit from one to nine were equally likely.

The rule was first noticed by Simon Newcomb, who observed that in his logarithm books the first pages showed much greater signs of use than the pages at the end. The physicist Frank Benford later stumbled on the same pattern and collected some 20,000 observations to test it.

Benford found that the first-digits of a variety of things in nature, like elemental atomic weights, the areas of rivers, and the numbers that appeared on front pages of newspapers, started with a one more often than any other digit.

The reason lies in the percentage difference between consecutive leading digits. Say a firm is valued at $1 billion. For the first digit to become a two (that is, for the market cap to reach $2 billion), the value of the firm needs to increase by 100%. But once it reaches that $2 billion mark, it only needs to increase by 50% to get to $3 billion. The required percentage gain keeps shrinking as the leading digit grows, so growing quantities spend more time with a leading 1 than with any other digit.
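That scale argument leads to the standard formula P(d) = log10(1 + 1/d) for the probability that the leading digit is d; a quick check (a sketch):

```python
import math

# Benford's Law: probability that the leading digit is d.
probs = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

print(round(probs[1], 3))             # 0.301 -- about 30% for a leading 1
print(round(probs[9], 3))             # 0.046 -- under 5% for a leading 9
print(round(sum(probs.values()), 6))  # 1.0 -- the nine digits exhaust all cases
```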

{ Balls in boxes offer a simple system for studying geometry across a series of spatial dimensions. A ball is the solid object bounded by a sphere; the boxes are cubes with sides of length 2, which makes them just large enough to accommodate a ball of radius 1. In one dimension (top left) the ball and the cube have the same shape: a line segment of length 2. In two dimensions (top right) and three dimensions (bottom) the ball and cube are more recognizable. As dimension increases, the ball fills a smaller and smaller fraction of the cube’s internal volume. In three dimensions the filled fraction is about half; in 100-dimensional space, the ball has all but vanished, filling only 1.8 × 10^{–70} of the cube’s volume. | An Adventure in the Nth Dimension | American Scientist | full story }
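The caption’s figures can be reproduced from the standard formula for the volume of a unit n-ball, V_n = π^{n/2} / Γ(n/2 + 1), divided by the cube’s volume 2^{n} (a sketch):

```python
import math

def ball_fraction(n):
    """Fraction of the n-cube of side 2 filled by the inscribed unit n-ball."""
    ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    return ball / 2 ** n

print(ball_fraction(3))    # ~0.524 -- about half, as the caption says
print(ball_fraction(100))  # ~1.9e-70 -- the ball has all but vanished
```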