Monday, March 19, 2007

The Google Hype-Meter

Greetings, West Nile mosquito-swatters.

I hope that by now you're familiar with my style of putting over-hyped risks in their place. But how do you determine just how over-hyped a problem is? Today I'm going to introduce a new metric for assessing how out-of-proportion the fear of a particular cause of death is: the Google Hits per Annual Fatality, or GHAF, metric.

Google and Hype

First of all, let me admit that reducing such a nebulous idea as "hype" to a number is an inexact science at best. However, I happen to be an inexact scientist: the perfect blogger for the job.

The people who post web content are not representative of the human race as a whole, so if there's something netizens preferentially talk about, Google is going to reflect that bias. However, in most cases this bias will distort reality by at most about a factor of 10, so any enormous differences in the whole-world hype devoted to certain risk factors should also show up in the Internet chatter about them. Luckily, some small risks are so enormously exaggerated that even an inexact measure like the GHAF can find them with confidence.

Calculating the GHAF

So, if we're agreed that Google hits will approximate the amount of talk on a subject, we can divide the number of hits by the annual death rate of a scare to get the GHAF, a relative measure of how much that particular problem has been overblown. Let's take a look at a few real-world examples of the GHAF.

Raw Data
  • Malaria in Africa (GHAF = 1.5, 3 million Google hits [1] per approximately 2 million annual deaths [2])
  • Cancer in the United States (GHAF = 94, 54 million hits [3] per 570 280 annual deaths: page 1 of [4], .pdf warning: 6 MB)
  • West Nile Virus in the United States (GHAF = 5 500, 911 000 hits [5] per 165 annual deaths [6])
  • vCJD, the human disease from eating a mad cow, worldwide (GHAF = 81 000, 1.4 million hits [7] for 139 cases over 8 years [8] - see my blog entry for an editorial [9])
  • Alligator Attacks in the United States (GHAF = 293 000, 461 000 hits [10] per 1.57 annual deaths [11] - the fatality rate may be underestimated by this list, and many of the Google hits may come from attacks on non-human targets)
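
For anyone who wants to check my arithmetic or add new risks, here's a quick Python sketch using only the figures quoted in the list above (the ghaf helper is just the hits-per-annual-death formula written out):

    def ghaf(google_hits, annual_deaths):
        # Google Hits per Annual Fatality: hits divided by deaths per year.
        return float(google_hits) / annual_deaths

    # Approximate figures from the raw-data list above.
    risks = [
        ("Malaria in Africa",           3000000,  2000000),
        ("Cancer in the US",            54000000, 570280),
        ("West Nile virus in the US",   911000,   165),
        ("vCJD worldwide",              1400000,  139.0 / 8),  # 139 cases over 8 years
        ("Alligator attacks in the US", 461000,   1.57),
    ]

    for name, hits, deaths in risks:
        print("%s: GHAF = %.0f" % (name, ghaf(hits, deaths)))
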
Summary

The GHAF varies enormously: it is a few thousand times greater for West Nile in the US than for malaria in Africa. Working from the assumption that human lives should all be treated with roughly the same degree of care, these wildly differing GHAFs indicate that we spend far too much time worrying about the wrong things. With the GHAF, we can measure just how skewed our fears are.

The above list is far from exhaustive; does anybody want to look into adding traffic deaths or killer bees? I've set up a wiki page to keep track of the GHAFs of various risks. Feel free to add to it!

In any case, there's a huge variation in how much hype a risk gets compared with the actual danger involved. I realize there are only so many articles one can read about a certain risk before becoming inured to it, so one would expect the GHAF to be lower for major risks, since the millionth victim gets far less press than the first. However, the number of Google hits a risk gets isn't even an increasing function of its body count, which shows that our problem runs deeper than mere weariness over old news.

Conclusions

I've already introduced two new measures of danger, the life expectancy decrease (LED) and the equivalent driving distance (EDD). However, these measures only ask how dangerous an activity is; they do not report how much that danger has been magnified by the media. With the GHAF, we can quantify just how out-of-proportion the hype is around a certain fear, and perhaps allow this measure of exaggeration to shape policy.

I look forward to your additions to my wiki page. What will my intelligent readers discover?

Tuesday, March 13, 2007

ExChange in the Weather

Greetings, rain-dodgers.

A few posts ago, I made a case for future-proof policy; that is, policy which automatically keeps up with the best that today's technology has to offer. I've advocated the use of results-based prizes to reward the discovery of useful medical treatments, since they tend to align public and private interests. Today I'm going to talk about another way we could make our policy future-proof by harnessing the free market: have our government meteorological systems switch over to prediction-market-based weather forecasting.

The Status Quo

Today, it is typically large institutions or governments that hire meteorologists whose full-time job is to interpret computer models based (largely) on publicly available data. It takes a relatively long time for new weather-prediction models to gain acceptance: each one must be academically verified and promoted, and the uptake of better weather-prediction techniques seems to be a patchwork affair.

At the same time, there are hundreds of math and physics geeks with computer power to spare who like to try their hand at predicting just about anything. Even the private sector has been unable to tap this latent talent pool in-house, as evidenced by how quickly the Netflix Prize drew predictions that beat Netflix's own.

Netflix Prize Exhibits Geek Talent

The Netflix Prize rewards people for discovering new ways of predicting the ratings people will give movies, based on which other movies they have liked. The contest started on October 2nd, 2006, and by October 15th one team had already beaten Netflix's predictions by enough to claim a prize. If even a private-sector firm can't efficiently harness the best numerical prediction methods out there on its own, what hope does a government agency have of staying on the cutting edge?

Prediction Markets for Weather Prediction

Imagine instead that any math nerd with a computer and an Internet connection could instantaneously profit by predicting weather better than rivals without having to apply for meteorology jobs. There are already small-scale weather prediction contests (like the WxChallenge), but these are still mostly for bragging rights, not for general-purpose weather and climate prediction.

Prediction markets are like stock markets. You can already buy and sell shares in, for example, Hillary Clinton becoming the Democratic nominee, in an online prediction market. The shares are worth $1 if the event takes place, and nothing if it doesn't. The fact that Hillary's shares trade at about 40¢ means there's a market consensus that there's a 40% chance Hillary will be nominated. Prices fluctuate with every factor that may influence her nomination probability.
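
To make the payoff arithmetic concrete, here's a tiny Python sketch of how a binary prediction-market contract turns a share price into an implied probability (the 55% figure is just a made-up personal estimate for illustration):

    # A binary contract pays $1.00 if the event happens and $0.00 otherwise.
    PAYOFF_IF_TRUE = 1.00

    def implied_probability(share_price):
        # With a $1 payoff, the trading price of a share is the market's
        # consensus probability that the event will happen.
        return share_price / PAYOFF_IF_TRUE

    def expected_profit_per_share(share_price, your_probability):
        # Expected gain from buying one share if your own probability
        # estimate is right and the market's is wrong.
        return your_probability * PAYOFF_IF_TRUE - share_price

    price = 0.40  # "Hillary Clinton wins the nomination" trading at 40 cents
    print(implied_probability(price))              # 0.40 -> a 40% consensus chance
    print(expected_profit_per_share(price, 0.55))  # about 0.15 -> worth buying if you believe 55%
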

If we did the same for weather (and maybe even sweetened the pot a bit to provide incentives for high-volume trading and good predictions), we could find the geeks' market consensus on the chances of it raining tomorrow. Other predictions could be made too, like the total rainfall in a season, or any other weather-related quantity that might be economically, socially or environmentally relevant. Storm warnings could be automatically posted through regular weather channels when the price of storm stocks rose above some (low) threshold, like 20¢.
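
And here's what an automatic warning hook might look like; this is just a sketch, assuming some hypothetical feed of storm-contract prices (none of these names correspond to a real service):

    STORM_WARNING_THRESHOLD = 0.20  # post a warning once "storm tomorrow" shares trade above 20 cents

    def storm_warning(location, storm_share_price):
        # storm_share_price: latest trade price, in dollars, of the binary
        # "storm at this location tomorrow" contract.
        if storm_share_price >= STORM_WARNING_THRESHOLD:
            return "Storm warning for %s (market-implied chance: %.0f%%)" % (
                location, 100 * storm_share_price)
        return None

    print(storm_warning("Halifax", 0.35))  # triggers a warning
    print(storm_warning("Halifax", 0.05))  # prints None: no warning
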

What would probably happen is that a few centralized weather servers would emerge to make predictions for many different locations, while local "old salts" with a feel for the weather could also make a quick buck by letting the world in on their secret, quasi-instinctual weather-sense.

Conclusions

There is no method as efficient as the anarchy of the market for predicting the value of a commodity. If we commoditize knowledge about the weather, we will automatically harness all the disparate knowledge about our turbulent atmosphere, rewarding the weather-seers and keeping the rest of us dry under umbrellas when appropriate.

Stay dry!

LeDopore

PS I have to add a final caveat: if some foreign power (like a government) had deep enough pockets and had a desire to manipulate the market (by, for instance, ruining 4th of July plans by buying shares in it raining everywhere), they could do so as long as they were prepared to sustain a virtually unlimited financial loss. Perhaps a good safeguard in the system would be to include automatic "bizarreness detectors" which would sound an alarm if some fishy market activity starts.
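
A "bizarreness detector" could start out as something as crude as flagging trading volume that's wildly out of line with recent history. Here's a minimal sketch with entirely made-up numbers:

    def looks_fishy(recent_volumes, todays_volume, tolerance=4.0):
        # Flag today's trading volume if it sits more than `tolerance`
        # standard deviations above the recent average.
        n = float(len(recent_volumes))
        mean = sum(recent_volumes) / n
        std_dev = (sum((v - mean) ** 2 for v in recent_volumes) / n) ** 0.5
        return todays_volume > mean + tolerance * std_dev

    # Example: volume on a "rain on the 4th of July" contract suddenly explodes.
    history = [120, 95, 130, 110, 105, 140, 115]
    print(looks_fishy(history, 125))   # False: business as usual
    print(looks_fishy(history, 2500))  # True: sound the alarm
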

Wednesday, March 7, 2007

Consuming to Curb Consumption: the Case for a new Prius

Greetings, fellow humans.

Today's post is by request. One of my readers, who is interested in minimizing his environmental impact, asked whether the energy cost of manufacturing a new car outweighs the extra energy cost of running an older, less fuel-efficient vehicle. The reader in question bikes to his law firm in all weather but snow, so he's already taken the cheapest (and probably most significant) step towards reducing his transportation-related energy consumption. However, many of us need cars at least once in a while, so it will be fun figuring out how many miles of driving you'd have to do to make buying a new car worthwhile.

Manufacturing a New Car

The Internet's too powerful these days. I thought I'd have to sift through details about modern steel-making techniques to get an estimate of how much energy goes into making a new car. It turns out that Google Answers beat me to it, though: the average energy consumption associated with making a new car is 73 gigajoules. Given that a liter of gas holds about 32 megajoules of energy, the energy that goes into manufacturing a new car equals the energy content of about 610 US gallons of gas. Since fossil-fuel-burning power plants are only about 40% efficient, the energy cost of making a new car is equivalent to that of burning about 1500 gallons of gas. (Aside: making cars from recycled steel reduces this energy cost by about 20%.)
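
Here is that arithmetic spelled out in Python, using the round constants above (so expect it to land near, not exactly on, the figures quoted):

    CAR_MANUFACTURING_ENERGY_J = 73e9   # 73 gigajoules per new car (Google Answers figure)
    GASOLINE_ENERGY_J_PER_L = 32e6      # about 32 megajoules per liter of gas
    LITERS_PER_US_GALLON = 3.785
    POWER_PLANT_EFFICIENCY = 0.40       # fossil-fuel plants turn ~40% of fuel energy into useful energy

    liters_equivalent = CAR_MANUFACTURING_ENERGY_J / GASOLINE_ENERGY_J_PER_L
    gallons_equivalent = liters_equivalent / LITERS_PER_US_GALLON
    gallons_burned = gallons_equivalent / POWER_PLANT_EFFICIENCY

    print(round(gallons_equivalent))  # about 600 gallons' worth of energy
    print(round(gallons_burned))      # about 1500 gallons actually burned at 40% efficiency
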

Comparing Manufacturing Energy to Use Energy

Now that we know how much energy it takes to make a car, let's see how much you would have to drive a new, fuel-efficient car to make up for the extra energy used in producing it. Suppose that your new car gets about 45 miles per gallon while the old one got only 30. Then, for every 90 miles you travel, you'd save one gallon of gas from the fact that you bought a new car. Since making the new car consumed the equivalent of 1500 gallons of gas, you'd have to drive 135 000 miles to get to the break-even point, energy-wise.
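
The break-even mileage, as a sketch under the same assumptions (45 mpg new, 30 mpg old, and a manufacturing cost equivalent to 1500 gallons of gas):

    OLD_MPG = 30.0
    NEW_MPG = 45.0
    MANUFACTURING_GALLONS = 1500.0  # energy cost of building the new car, in gallons of gas burned

    # Gallons saved per mile by switching cars: 1/30 - 1/45 = 1/90.
    savings_per_mile = 1.0 / OLD_MPG - 1.0 / NEW_MPG

    break_even_miles = MANUFACTURING_GALLONS / savings_per_mile
    print(round(break_even_miles))  # 135000 miles before the new car pays back its manufacturing energy
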

Adding Emissions to the Mix

One thing I haven't factored in is that power plants tend to have lower emissions than vehicles, since some power plants are zero-emission and others may have scrubbers (i.e., they may clean their exhaust of the worst polluting chemicals before dumping it into the air). This report says that 68% of the CO2 emissions from the life cycle of a typical car come from fuel consumption, 21% come from fuel processing and only 11% come from vehicle manufacturing, based on a vehicle lifetime of 120 000 miles. Since manufacturing accounts for 11% and driving for the other 89%, building a car emits roughly as much CO2 as about 15 000 miles of driving (120 000 × 11/89 ≈ 14 800). That means that, from an emissions standpoint, you have to drive your new hybrid only about 15 000 miles to reduce your net CO2 output.
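
Spelling out the arithmetic behind that figure, using the report's percentages and its 120 000-mile vehicle lifetime:

    LIFETIME_MILES = 120000.0
    MANUFACTURING_SHARE = 0.11   # share of lifecycle CO2 from building the car
    DRIVING_SHARE = 0.68 + 0.21  # fuel consumption plus fuel processing

    # Manufacturing CO2 expressed as an equivalent number of miles of driving.
    equivalent_miles = LIFETIME_MILES * MANUFACTURING_SHARE / DRIVING_SHARE
    print(round(equivalent_miles))  # about 14800 miles, i.e. roughly the 15 000-mile figure
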

Conclusions

Before doing the math, I had guessed that the energy cost of operating an old vehicle would be much greater than the cost of making a new, fuel-efficient one. The marketing behind new hybrid cars is slick: it had me thinking about ditching old clunkers in the name of environmental responsibility. It's almost as if there's no corporate muscle behind the message "don't buy a new car while your old one still works." I guess commercial culture will never miss a chance to tell us to buy something new, even when hiding behind the message "consume less!"

It's true that many new cars will probably make it beyond the 135 000-mile mark, meaning that you could ditch your old car for a new hybrid and rest assured that your net energy usage would probably go down eventually. It's also true that if you're worried about emissions as well as consumption, you would only have to drive about 15 000 miles to break even. Still, the environmental impact of buying additional vehicles, even if they're hybrids, is not insignificant, and it should be factored into any decision over "going green" by ditching an old but still usable car.