Sunday, December 30, 2007

Getting a D in Health

Greetings, indoor computer-reader.

The Sunbathing Ape

It's natural for us to feel drawn to amazing photographs of sun-soaked landscapes. We humans evolved as mostly-naked, outdoorsy folks in sunny Africa. Although the cultures (and to some degree the genes) of migrating peoples have done what they can to adapt to more polar environments, if you dig at all deeply into our biology you'll see we still aren't fine-tuned for indoor, boreal lives.

One biological expectation that goes unmet for humans in the USA and Canada is ample sun exposure, and the shortfall leads to vitamin D deficiency. This article in the Globe and Mail suggests that Canadians typically have about one third of the optimal concentration of vitamin D in their bloodstream.

What's D Good For?

Until recently, it was thought that the major effect of vitamin D deficiency was rickets. Since (last time I checked) rickets wasn't endemic among North Americans, vitamin D was seen as a non-issue; the mandated additions of vitamins D and A to dairy products seemed sufficient to keep bone formation normal in children.

However, vitamin D has more functions than merely regulating bone density. It's also a chemical precursor to a host of important cellular signaling molecules: not having sufficient vitamin D is like trying to run a government when there's a shortage of notepads to write on.

D and Cancer Rates

One of the worst consequences of scrambled chemical messages is impaired natural anti-cancer cellular mechanisms. Does our D-deprived culture in fact have increased cancer rates? The only way to know for sure is a double-blind experiment: give groups vitamin D or a placebo at random, and track the prevalence of cancers in the two groups. That's exactly what this study did, and what they found is almost unbelievable. Giving 1.5 g of supplemental calcium along with 1100 IU of vitamin D per day (about 3 liters of milk's worth, though the study used pills) decreased cancer rates by 77% after one year.

Great moons of Neptune! That's a huge decrease! The cautious part of me finds it hard to believe that one factor could be responsible for over half of cancers, and to be fair the study tracked only 1200 women over 4 years, and thus didn't observe enough cases of cancer to have really tight confidence intervals: the range of cancer decreases still consistent with the study is 40% to 91%, 19 times out of 20. Still, I've started feeding vitamin D to my wife as well as taking it myself, if not daily, then at least often.

Public Health Consequences

Suppose the study's numbers hold up, and that about half of cancers could be prevented by 1100 IU of vitamin D per day. Would it be a good policy for health insurers (or friendly socialist governments like Canada's) to simply hand out vitamin D supplements? The cost of vitamin D is pretty much nothing: this bottle of 250 pills (almost a year's supply) of 1000 IU vitamin D is only $10. On the flip side, the annual cancer rate in the US is about 1 in 200. If that could be halved by the D supplement, and if treatment costs on average $40 000 per case, that's an expected savings of $100 per person per year. Pay one dollar into prevention, get ten out in avoided treatment. (Oh, and then there's the whole increase in lifespan and quality of life too.)
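Here's that back-of-envelope arithmetic as a quick Python sketch; every input is this post's assumption, not epidemiological data.

```python
# All inputs are this post's assumptions, not epidemiological data.
annual_cancer_rate = 1 / 200   # US incidence per person per year (post's figure)
fraction_prevented = 0.5       # the cautious "about half" assumption
treatment_cost = 40_000        # assumed average cost per cancer case ($)
supplement_cost = 10           # roughly a year of 1000 IU pills ($)

expected_savings = annual_cancer_rate * fraction_prevented * treatment_cost
print(f"Expected treatment savings: ${expected_savings:.0f} per person per year")
print(f"Return per dollar of prevention: {expected_savings / supplement_cost:.0f}x")
```

Plug in your own treatment-cost estimate; the tenfold return survives even fairly pessimistic inputs.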

Last Word

I suppose the cautious policy approach would be to conduct a larger study to figure out where within the wide confidence interval the truth lies. However, I'm inclined to start ramping up vitamin D production and consumption programs, maybe even with heavy subsidy by governments, HMOs or otherwise. Let's get a D in health.

Thursday, September 27, 2007

Editing Miranda

Greetings, lawmakers.

When was the last time democracy had a makeover? Arguably, there hasn't been a big change to the way Americans govern themselves for 200 years. Well, except maybe effectively enfranchising black voters in 1965 and women in 1920; OK, maybe there have been some 20th-century improvements in terms of who can vote. However, in almost every one of today's democracies, the only governmental venue where the electorate at large expresses its views is the polling station.

The Checkbox Menace

Yes or no questions can distort your position. Don't believe me? Try:

Does your wife know you have a mistress?

Elections so far allow only really coarse-grained opinions to pass from the people to the law. Politicians don't have a monopoly on good policy ideas, ergo there are some great ideas floating around which will never see the light of day as long as votes are the only way to influence laws.

Until now, this restriction of opinions has been necessary to keep procedures streamlined: there's been no way of having an intelligent exchange of opinions with 200 million voters. Until now?

Kiwi Wiki

That's right, the New Zealand government has opened up a wiki site where they let you draft the law. Its power so far is only advisory (I think that's wise, at least for now), but it allows good ideas and intelligent debate to percolate up from the people without the government doing a thing (apart from setting up MediaWiki or some such).

I'll be interested to see what comes of the New Zealand experiment. Does anyone care to register predictions below as to whether this will be fabulous or a flop?

Keep on wikin',

LeDopore

Thursday, September 20, 2007

Rail Blues

Greetings, Takers of the A-train,

Last night I went to a show in the big city. As you know, I don't own a car, so I took a subway car back, and noticed how very odd it was that about 100 rich patrons waited with me for over 20 minutes for a train. (My metropolitan area has notoriously infrequent trains, especially at night.) It got me wondering: how crazy is it (financially) to run trains infrequently at night?

Extra Costs


It's easy to say "Let's just run trains every minute," but the whole reason public transit systems work is that many people going the same way can share one vehicle and driver. Let's figure out the cost of splitting one infrequent but long subway train into two shorter trains.

If the run lasts two hours and the driver costs $40/hour (including overhead, training, etc.) the personnel expense is $80. The necessary force (hence electricity cost) of pushing an additional train front through the air can be calculated using:

F = .5 A D p v^2,

where A is the area of the train front, D is a drag coefficient (usually about .25 for aerodynamic shapes), p is the density of the medium and v is the speed. Using A D = 4 m^2, p = 2 kg/m^3 (a generous round-up; sea-level air is about 1.2 kg/m^3), v = 20 m/s, electricity at $.15/kWh and 100 km traveled, the extra energy cost is a measly $7. Let's round up and say an extra $100 would be needed to run an extra train.
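For the curious, here's the drag estimate as a short Python sketch, using the round numbers above:

```python
# Drag-energy cost of one extra train front; inputs are the post's round numbers.
A_times_D = 4.0        # frontal area x drag coefficient (m^2)
rho = 2.0              # post's rounded-up density (kg/m^3; sea-level air is ~1.2)
v = 20.0               # train speed (m/s)
distance = 100_000.0   # length of the run (m)
price_per_kwh = 0.15   # electricity price ($/kWh)

force = 0.5 * A_times_D * rho * v**2      # drag force (N)
energy_kwh = force * distance / 3.6e6     # work against drag, joules -> kWh
cost = energy_kwh * price_per_kwh
print(f"Force: {force:.0f} N, energy: {energy_kwh:.1f} kWh, cost: ${cost:.2f}")
```

With realistic air density the cost would be even lower, so $7 is on the safe side.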

Extra Benefits

So, how much shorter a commute would there be if an extra train came? The trains come every 24 minutes, and by the time a train came last night there were over 100 people waiting at my station and over 100 at the next station (the two busiest, mind you); let's lowball the estimate and say 250 people would be able to catch a train on average 6 minutes sooner if train frequency were doubled. Doubling train frequency means 25 commuter hours would have been saved at a cost of $100 to the system: $.40 per passenger or $4/hour of passenger time saved.
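Here's that waiting-time arithmetic in Python, for anyone who wants to plug in their own transit system's numbers (all inputs are my estimates above):

```python
# Marginal value of doubling train frequency; inputs are the post's lowball estimates.
riders = 250            # passengers who would catch an earlier train
minutes_saved = 6       # average wait reduction when a 24-minute headway is halved
extra_train_cost = 100  # rounded-up marginal cost of one extra train ($)

hours_saved = riders * minutes_saved / 60
print(f"{hours_saved:.0f} commuter-hours saved")
print(f"${extra_train_cost / riders:.2f} per passenger, "
      f"${extra_train_cost / hours_saved:.2f} per hour of passenger time")
```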

However, increasing service might increase ridership, so it's possible that the extra train would (at least partially) pay for itself. Increasing ridership eases burdens on parking and roads too; public transit is overall a good bargain.

Summary


It's time commuters got vocal about being willing to pay a fair amount to reclaim the time sluggish schedules waste. I wish we had a consistent metric of how much our time is worth, and used it to make policy decisions. Check out my post on machine wages for equivalent cost-time comparisons; this concept may evolve into a wiki page soon.

Keep on track!

LeDopore

Thursday, September 13, 2007

Life in the Slow Lane

Greetings, road warriors.
Hold infinity in the palm of your hand
And eternity in an hour.
-- William Blake, Auguries of Innocence

This post is about commuting; specifically, why I refuse to do long commutes. I don't really understand why people put up with them. Some of my coworkers get up at 4 AM to get to work on time (8 AM), wasting more time in traffic, twice a day, than it takes to travel by train from Rome to Naples. That kind of squandering of human life is so egregious that I hardly know where to start attacking it: res ipsa loquitur! If that res still isn't very loquacious, read on.

In an earlier post, I quantified how much you pay labor-saving machines for each hour of chores they save you; here I'm going to figure out how much you're paying yourself to live in a cheap neighborhood and commute to a good job.

Tinned Nation

On average last year, Americans spent 50 minutes per work day commuting to and from their jobs, mostly in a sitting position within oversized metallic cans on wheels. However, like the aforementioned coworkers, over 3.4 million Americans spent more than three hours per day commuting to work. If these extreme commuters value their time at $25/hour and work 20 days per month, they're spending $1500 worth of their time every month for the privilege of living where it's cheap. Unless housing is drastically more expensive where they work, it's just not worth their time.
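In Python, for anyone who wants to price out their own commute (the $25/hour wage is just my assumption):

```python
# Monthly time-cost of an extreme commute; the wage is an assumed value of time.
hours_per_day = 3          # round-trip commute time
work_days_per_month = 20
wage = 25                  # assumed value of the commuter's time ($/hour)

monthly_time_cost = hours_per_day * work_days_per_month * wage
print(f"Commuting time is worth ${monthly_time_cost} per month")
```

If your rent savings from living far out are smaller than that number, the commute is a losing trade.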

Leisure sucker

Commuting is more than just a monetary problem, however: the less free time you have the more precious it becomes. If you're awake 16 hours a day, work 8 hours and spend 2 hours keeping the household together that leaves just 6 hours of discretionary free time per day. If three hours is sucked up in commuting, you have half the time to pursue self-development.

Greeks to the Rescue?

Aristotle thought the best division of waking hours was 6 hours working, 6 hours resting and 6 hours pursuing some leisured activity: being creative and exercising parts of your body and intellect for the sheer joy of it. The 8-hour work day already overbalances this ancient ideal; why tip it further into job-is-everything territory?

Personally I'm dismayed with the fact that the remainder of peoples' time usually has to go towards wakeful resting (like watching the tube), and not the active creation of interesting life, tradition and culture. I want life to be more participatory: we should be having a good time with the freeboard that going to work gives, if the work itself isn't fun (a harsh reality I'm trying to avoid).

Staying Un-Canned


What are some ways to keep our commuting hours down? How about the following for a start; leave comments if you have more ideas.

  • Arrange to spend one day a week telecommuting (if possible)
  • Use a home office
  • Rent an apartment close to your job (and price out the time cost of your commute if you live far from your job - you might consider moving then)
  • Live/work arrangements are also great
I hope I'm going to be able to dodge nasty commutes. We'll have to see if that's going to be possible.

Take care, and stay out of those cars as much as possible!

LeDopore

Friday, August 31, 2007

Heating with Flops

Greetings, fellow old-world primates.

Today I'm going to flesh out an idea a dear friend of mine had: that waste heat from high-performance computer facilities could be used to heat cold regions of the world.

We humans evolved our big brains in Africa, where it's nice and warm. For better or for worse, these big brains have allowed us to develop means of keeping our bodies at African temperatures even at polar latitudes, allowing us to conquer the planet.

Technology (be it fire or clothing) has always been a factor in allowing our spread into frozen zones; today I'm going to look into something a little higher-end.

Computing in Vegas

A friend of mine works for Cafe Press, an online clothing-designing company which recently moved their main data centers to Nevada. The reason for this move was an unusual one: proximity to the Hoover Dam means cheap power to run the site's computational muscle.

In fact, data centers can generate an enormous amount of heat. A thousand processors working at 100 W each consume 100 kW of electricity: about 50 households' worth. With electricity costing 15¢ per kWh, that's over $130 000 per year in electricity costs alone, and that's before factoring in cooling costs.

Computing in Siberia


Suppose instead that data centers were built where you want to generate a lot of heat anyway. That same 100 kW data center could potentially provide most of the heat a shopping mall requires. How economically feasible is this? Let's look at two possible scenarios. For the sake of simplicity I'm going to assume enough people will soon do the Cafe Press trick that electricity costs even out globally at around 15¢/kWh.

Scenario A: 1 fixed data center in a place with cold winters.

Here, you'd build a 1000-CPU data center for $500 000, and half of the year you'd be able to use 80% of the waste heat from the data center to heat a mall. Together, you and the mall would save $52 560 in heating costs per year of operation; more if you can use some waste heat more than half the year. Even in the summer, data centers could be used to provide hot water.

Scenario B: 1 data center in a shipping container, moving from pole to pole

Sun Microsystems' "Project Blackbox" will already build you a data center in a shipping container. Imagine having a deal with two different malls, one in each hemisphere, so that the data center's waste heat could always be put to use. You'd have two extra expenses: container shipping (about $10 000 for the round trip) and four weeks annually of down-time, but you'd save $100 000 in heating costs. Overall, you'd have to spend 8% more (or $40 000 as a one-time expense) on your computer hardware to compensate for the downtime, but that should be just about recouped after the first year of operation.
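Here's a quick Python sketch of both scenarios' heating savings, using the round numbers above (the 80% heat-capture fraction is my assumption throughout):

```python
# Waste-heat savings for both scenarios; inputs are the post's round numbers.
power_kw = 100          # 1000 CPUs x 100 W each
price_per_kwh = 0.15    # assumed globally-evened-out electricity price ($/kWh)
capture = 0.8           # assumed fraction of waste heat usefully delivered
hours_per_year = 8760

# Scenario A: heat one mall for the cold half of the year.
savings_a = power_kw * (hours_per_year / 2) * capture * price_per_kwh

# Scenario B: chase winter between hemispheres, minus 4 weeks of shipping downtime.
uptime_hours = (52 - 4) * 7 * 24
savings_b = power_kw * uptime_hours * capture * price_per_kwh

print(f"Scenario A: ${savings_a:,.0f}/year, Scenario B: ${savings_b:,.0f}/year")
```

Scenario B's roughly $97 000/year is where the "$100 000 in heating costs" figure comes from.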

Three Moore's Laws

Technologically, the use of waste heat is only going to make more sense in the future. There are three relevant Moore's laws here: performance/watt, performance/$ and bandwidth/$.

The most familiar Moore's law statement is that computing power doubles every 18-24 months, but one should also look at power efficiency and bandwidth trends. Power efficiency has been climbing slower than computing efficiency, so today's $1500 PC uses more energy than a $1500 PC from a decade ago. Conversely, bandwidths have been doubling more frequently than CPU speeds. Therefore, in the future, bandwidth (especially on the backbone of the Internet) will be too cheap to matter, and the ratio of power expenditure to computer hardware expenditure is only going to increase.

Therefore re-using computer waste heat is only going to become increasingly lucrative in the future, so it's a technology that should be on the up-and-up.

Conclusions

It's already cheaper to operate data centers where power is cheap. I think it's now also cheaper to coöperate with public buildings (which tend to be big enough to act as thermal flywheels, smoothing out diurnal heat supply needs) to supply them with waste heat from data centers. I haven't factored in the added cost of the data center's floorspace, so it's not yet a complete no-brainer to use computers instead of (or alongside) traditional furnaces, but future trends certainly seem to be pointing in that direction.

Yours for a Greener Cyberspace,

LeDopore

PS This post printed with 100% recycled electrons

Monday, August 27, 2007

How Much do You Pay Your Machines?

Greetings, Robot Lords.

The Pitch

How much do you value your free time? In Stephen King's The Stand, the Walkin' Dude comes by Mother Abigail's place in the guise of a vacuum cleaner salesman, making the pitch that he's not actually selling her labor-saving vacuum cleaners. Instead, he's selling her cool lemonade sipped in the shade on a hot day, time to lazily read a novel, or time to do essentially whatever Mother Abigail likes best. The premise is simple and appealing: we buy (or make) machines which save us drudgery, and then allegedly have more free time.

The Catch


In reality, our time-savers often end up owning us instead. It takes a lot of time, energy and money to maintain every piece of equipment we buy. On the other hand, I have a lot more free time than any subsistence farmer I've heard about, so there must be a good side to this tech too. How do we know if a given piece of labor-saving tech is worthwhile?

The Players' Salaries


One interesting analysis is to figure out what effective wage you're paying your labor-saving device for your extra free time. Not everything boils down to money: it's not as if all chores are equally onerous (I enjoy gardening more than cleaning the bathroom), but quantifying the hourly rate of free-time-saving makes for an interesting analysis nonetheless. Here's a summary table, then I'll talk a little more about some entries. Here "machine wage" isn't how much you pay per hour of operation: it's how much you pay for every hour of labor the machine saves you. I've ordered this list in descending order of utility.

| Item                 | Price ($) | Cost/y ($) | Lifetime (y) | Hours saved/week | Hours saved | Total cost ($) | Machine wage ($/h) |
|----------------------|-----------|------------|--------------|------------------|-------------|----------------|--------------------|
| Dishwasher           | 500       | 0          | 5            | 3                | 780         | 500            | 0.64               |
| Non-stick fry pan    | 100       | 0          | 20           | 0.05             | 52          | 100            | 1.92               |
| Lawnmower (electric) | 400       | 1          | 10           | 0.25             | 130         | 410            | 3.15               |
| Newer computer       | 1500      | 0          | 2            | 1                | 104         | 1500           | 14.42              |
| Kitchen mixer        | 200       | 1          | 20           | 0.0096           | 10          | 220            | 22                 |
| Melon baller         | 10        | 0          | 10           | 0.00064          | 0.33        | 10             | 30                 |
| Car                  | 5000      | 2000       | 5            | 1                | 260         | 15000          | 57.69              |
  • I'm a dishwasher evangelist. I've been responsible for (or at least influential in) the decisions of no fewer than 5 households I know to acquire a dishwasher by hook or by crook. (If you're renting, look into portable dishwashers - that's what I own.) Until today I just always had a hunch that dishwashers were good time-savers, but the hard numbers really nail it for me. Operating a dishwasher (in hot water and dish soap) costs about the same as washing by hand, and by my analysis my $500 dishwasher will save me 780 hours of scrubbing. Since I value my free time more than 64¢/hour, owning a dishwasher is a no-brainer.
  • I just bought a $100 super-high-quality frying pan, with (I kid you not) embedded diamonds as the non-stick coating. So far I have no complaints performance-wise: I get an even heat and the food has been scrumptious every time. As a side effect, I estimate that I spend about 3 minutes per week less cleaning, since now I can use this pan instead of my older stainless steel pan (which was a pain to scrub). Those 3 minutes per week over the 20 years the pan should last amount to 52 total hours saved, so I'm "paying" this machine $1.92/hour for the privilege of not washing dishes.
  • If I were to buy a new computer (something I dream about way too often) I might spend about 1 hour less per week waiting for my numbers to crunch (I'm a "power user": I run intensive numerical operations regularly; for word-processing I doubt a newer computer would save more than a minute or two per week). If I kept the new computer for about 2 years, it would save me about 104 total hours, so upgrading now (for $1500) would be like paying the machine $14.42 for every hour I save not waiting for that progress bar to end.
  • I broke down and bought one of those designer kitchen mix machines the other year. We barely use it, truth be told. If it were to save 15 minutes twice a year, this $200 machine would save us 10 hours of work over its 20-year lifetime. (Privately, I doubt it will save even that much time, but the truth hurts sometimes.) It really sucks power too, so I guess its lifetime cost (price + power) will be $220, meaning we're paying it $22 per hour that it saves us. (Note: this mixer brings an invaluable quantity of ancillary joy to my better half merely by gracing our kitchen, which is precisely why this kind of analysis didn't have the last word.)
  • We also own a melon baller! If it saves us 2 minutes a year (we hardly use it), we're paying it $30/hour for the privilege. Maybe single-purpose kitchen gadgets should be contraband.
  • Last (but not least) we don't own a car. I can get to and from work without one (and, considering the parking around where I work, biking is faster), even though it makes doing errands a little more tricky. I estimate I spend about 1 hour more per week doing errands because I can't just hop into a rust bucket. (Aside: if I were to offset time I don't have to spend in the gym because I bike, this 1-hour figure could very well be negative!) In any case, were I to buy a car, over 5 years it could well cost $15,000 in insurance, depreciation, maintenance, parking and fuel. For each of the 260 hours it would save me, I'd be paying it $57.69: a pretty lousy deal. I guess I won't be buying a car until my lifestyle requires one.
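If you want to build your own version of the table, here's the machine-wage arithmetic as a small Python sketch (the example rows are mine, from the table above):

```python
# "Machine wage": dollars paid to a machine per hour of labor it saves.
def machine_wage(price, cost_per_year, lifetime_y, hours_saved_per_week):
    """Lifetime cost divided by lifetime hours of labor saved."""
    total_cost = price + cost_per_year * lifetime_y
    total_hours_saved = hours_saved_per_week * 52 * lifetime_y
    return total_cost / total_hours_saved

# Two rows from the table: the best and worst deals.
print(f"Dishwasher: ${machine_wage(500, 0, 5, 3):.2f}/hour saved")
print(f"Car:        ${machine_wage(5000, 2000, 5, 1):.2f}/hour saved")
```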
The Last Inning

On that note, let me hand it over to you. The numbers I've presented are highly personalized and might not apply to you. I do a lot of serious computing, and even for me a new computer only barely makes sense. I live close to work, so biking is a great option. We do a lot of entertaining, and thus a dishwasher is pretty much essential. If you truly need a car, or if you don't entertain or run scientific computing experiments, your personal table is likely to be quite different. Still, I encourage you to do the same sorts of analyses before listening to the Walkin' Dude.

Rule your 'bots with an iron fist!

LeDopore

Sunday, August 19, 2007

Middle Ground Meat?

Greetings, omnivores.

Today I'm going to talk a little about animal welfare, and how we might improve it without getting angry at anyone.

Lately some animal rights protesters have been harassing someone close to me (I'm not going to go into details), and it started me thinking about ways in which animal rights activists might improve animal welfare most effectively. I think there are two main inefficiencies with current animal rights activity: extremism and lack of perspective.

Problem 1: Animal Rights Activists Tend to be Extremists

"You catch more flies with honey than you do with vinegar...*"

There are three general stances you could take on animal welfare:
  1. Animals suffering is equivalent to human suffering, so feed lots are equivalent to the WWII concentration camps (reflected in some ad campaigns).
  2. Animals are cute but tasty: let's try to make them reasonably happy as long as we can still eat bacon.
  3. Animals are here for human use. Some animals (mosquitoes come to mind) use us without the slightest regard for our well-being, so to hell with our looking after them.
While you can legitimately defend all of the above ethically, those who take stance #1 often feel entitled to, say, bomb the houses of animal researchers. While I acknowledge that you can't "prove" any value system is right or wrong, in general it's a good idea not to espouse any belief system which tells you it's OK to murder, for practical (if not ethical) reasons.

Even if you do believe murder is justified in a few circumstances, it's bad PR to kill your enemies. Homicide undermines your soft power like nothing else. I bet the above bombing did more to inoculate potential animal rights supporters against the cause than the combined forces of all the starched-shirted science-defending geeks ever to mumble through a justification of animal use at every cocktail party the world has seen.

Problem 2: Animal Rights Groups Lack Perspective

There are over 250 million egg-laying hens in the United States: almost one per human. There are likewise millions of dairy cows and livestock pigs. Many of these animals suffer for the sake of thrift: it's cheapest to pack animals into the smallest space that won't kill them.

Still, the three biggest animal rights causes I hear about are fur, pâté de foie gras, and medical research. A more quantitative statement: a Google search for "fur activism" turns up more hits than "laying activism."

What do fur, foie gras and medical research have in common? Not everybody comes home from their day quantifying drug toxicity to grab their mink stole on their way to a nice bistro: animal rights activists figure they can win more sympathy from people to counter less-common animal uses. They even turn poverty into a virtue: most of us can't afford blue fox coats while we're starting out, but PETA would have us believe we haven't yet bought fur because we instinctively know it's wrong. Moreover, a lot of rich people feel guilty about being rich (I can treat the root of this problem, incidentally. Please leave your contact info below.), making it easier to attack the morals of self-doubting millionaires.

In short, animal rights groups attack fringe animal usage since these are the issues they think they can "win." If I were them, and if I were really interested in animal welfare, I would recognize that many of us want to reduce animal suffering and would pay to do so (at least a little), so what we really need to do is have animal rights organizations set up a scoring system for farm animal welfare.

A lot of people would pay 20¢ more for eggs from hens which suffered 50% less pain. However, we have no way of really knowing how good each farm is. The time has passed (or hasn't yet come) to paint each farmer Joe as a miniature Hitler: what I'd like to see are livestock comfort ratings on beef, milk, pork, chicken and eggs. Make them fair and standardized, and just watch if you don't find a significant minority of consumers supporting farms that would improve the lives of tens of millions of our fellow creatures.

Conclusions

Polarizing the debate on animal usage is a losing strategy: too many of us won't give up using animals in some form. Many animal activists use terrorist tactics to intimidate minority animal users. Regardless of whether you think animal rights should be equivalent to human rights, it's a better strategy to use market forces to relieve some suffering from mainstream animal uses; that's the easiest way to reduce animal suffering overall.

Chomping tenderly,

LeDopore

Reader poll: Who's a vegetarian, and why? How much extra would you spend on your daily food knowing your meat animals suffered less? How many of you would like the taste of happy meat better, if only through the placebo effect? Please leave me comments.

* Maybe animal rights activists think of the stuck fly's suffering, and so use vinegar deliberately to warn them from the trap?

Tuesday, July 24, 2007

"-", not "/": In Search of Luxury on the Cheap

Greetings, lotus eaters.

It's been a long time since I've done a post! Let's start this one with some raw data.

| Item                        | Egg  | Chocolate | Cheese | Wine | Car/day |
|-----------------------------|------|-----------|--------|------|---------|
| Cheap price/serving ($)     | 0.15 | 1         | 1      | 3    | 10      |
| Expensive price/serving ($) | 0.85 | 5         | 7      | 20   | 60      |

Today's post is about how to achieve luxury on the cheap. Some people would approach this subject by talking about secret bargain-hunting techniques: how to obtain the latest and greatest without paying through the nose. However, an often ignored avenue to getting a taste of the "good" life is to spend the money where it counts the most. Something that's worked for me is to start using more of the "-" sign on my mental calculator and less of the "/" sign. Let me give an example to explain what I mean.

Eggsample

Let's talk egg selection. My local supermarkets sell eggs for as little as 79¢ a dozen. These eggs are of decent quality in that they're not risky to your health but, since they're made with such narrow profit margins, every choice the farmer makes has to maximize quantity, not quality.

In the same supermarkets it's possible to buy artisanal eggs for about $5 a dozen. Nestled in post-consumer-recycled cardboard carton packaging on which is printed (in natural, organically-farmed vegetable-based dyes) idyllic odes to the chicken pastoral ideal, these poultry-gems might not quite live up to their billing. However, in this humble blogger's opinion a free range egg is a cut above a factory-farmed egg.

Where Classical Economics Fails

The consumer is left with the following dilemma: should she buy eggs which are six times as expensive, even if they taste better? In an ECON 101 class, the critical question would be "do I enjoy the expensive eggs six times as much?" Since I'd derive less than six times the pleasure from a free-range egg meal as from a factory-farmed meal, classical economics says my utility-to-cost ratio is higher for the cheap eggs, so I should buy those. Many shoppers I see have an ECON 101 attitude as well: they eschew the good eggs because they cost a whopping 6 times as much as utility-grade eggs.

Classical economics fails, however, as soon as one realizes that demand for breakfast is woefully inelastic. In fact, my demand for eggs is met after spending such a minuscule fraction of my capital acquiring them that neglecting the inelasticity of my appetite is absurd. Assuming about 2 eggs per serving, the extra cost of having free-range eggs is only 70¢. Is it worth a 70¢ premium to have superior-quality eggs? Absolutely! And yet in the supermarkets I see shoppers clamoring for the best bargain eggs. Why are people being so cheap?

Using the "-" Sign

The root of the problem is the division operator. People balk at the idea of spending six times more, while their finances are affected not by this multiplicative factor but by the arithmetic difference between prices.

From the table at the start of this post, you can see that while the relative price differences between the cheap and expensive items tend to be about 6, the arithmetic differences vary wildly. The premium on buying high-quality eggs is nearly two orders of magnitude less than the premium on driving a sports car daily. Are you willing, therefore, to sacrifice roughly 70 free-range-egg-level luxuries each day for the privilege of driving a sports car? If not, you should follow my lead: splurge on the cheap stuff and skimp on the expensive stuff.
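Here's the table's point in a few lines of Python: the ratios all hover around 6, but the absolute premiums span two orders of magnitude.

```python
# (cheap, expensive) price per serving in dollars, from the table above.
items = {
    "egg": (0.15, 0.85),
    "chocolate": (1, 5),
    "cheese": (1, 7),
    "wine": (3, 20),
    "car/day": (10, 60),
}
for name, (cheap, fancy) in items.items():
    # The "/" view (ratio) looks similar everywhere; the "-" view (premium) doesn't.
    print(f"{name:9s} ratio {fancy / cheap:4.1f}x, premium ${fancy - cheap:5.2f}")
```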

Enjoy the good life!

LeDopore

Tuesday, May 22, 2007

Hiatus

Greetings, California Dreamers

I thought I'd let you know that I'm going to be on a trip for the next few weeks. Many Ideas is going to go dormant until mid July, so tune in then when I'll have a new slab of quirky considerations for your surfing pleasure.

Sincerely,

LeDopore

PS If you come up with any more brilliant ideas for posts (the best ones seem to be user-submitted), please leave them in a comment here.

Friday, May 11, 2007

Liberty and Bandwidth for All

Greetings, YouTubers*.

In my last post I outlined how our current system of private Internet Service Provider (ISP) companies is economically wasteful. Although in general the private sector is better than the public sector at providing higher-quality services at a lower cost, with ISPs the product (Internet bandwidth) costs roughly 1000 times less than what ISPs charge. Almost all the ISPs' operational costs come from advertising, distributing and charging for this cheap-as-dirt Internet backbone bandwidth. In other words, ISPs are intrinsically so wasteful that publicly-owned networks make sense. This post is going to tackle how I think we should implement municipal data networks.

Letting Demand Drive Expansion

Internet technology changes so fast that it would be unwieldy for a council to try to have a sane policy of technology roll-out which took advantage of the latest and greatest. It would also be hard to periodically gauge the service levels residents truly want. It's much better to make technology policy future-proof, meaning that no new laws or regulations will be required to implement better technology where it's wanted as soon as it's developed.

Here's an example of a future-proof network-building scheme. Have residents pledge (with holds on their credit cards) that they would be willing to pay X dollars for Y service. As soon as a private company notices that enough residents in a neighborhood have pledged enough money to make granting the service worthwhile, they can install whatever hardware they choose which is able to meet or exceed the bandwidth demands Y of all the people who pledged X dollars. The money from the credit card holds would then go into a trust which would pay the hardware companies annuities for as long as the service works (or maybe the trust should be invested with low risk, and 25% of the total equity should be paid to the hardware builder/maintainer each year; since Internet technology becomes obsolescent so much faster than roads it makes sense to make the payment schedule accelerated).
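The pledge trigger itself is simple enough to sketch in a few lines of Python (the function names, pledge amounts and build cost below are purely illustrative assumptions, not a real system):

```python
# A minimal sketch of the pledge mechanism: a provider builds once pledges
# cover its cost, and must meet every pledger's requested service level.
def serviceable(pledges, build_cost):
    """True if the pledged money in a neighborhood covers the build."""
    return sum(amount for amount, _bandwidth in pledges) >= build_cost

def required_bandwidth(pledges):
    """A provider must meet or exceed every pledger's requested bandwidth."""
    return max(bandwidth for _amount, bandwidth in pledges)

# (dollars pledged, Mbit/s requested) per resident; illustrative numbers.
pledges = [(200, 5), (150, 2), (300, 10), (100, 1)]
if serviceable(pledges, build_cost=700):
    print(f"Build it: must deliver >= {required_bandwidth(pledges)} Mbit/s")
```

A real version would also handle releasing the credit-card holds into the annuity trust, but the build-trigger logic is the heart of the scheme.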

The city would provide all of the (essentially free) backbone bandwidth in exchange for the fact that all Internet services using that bandwidth must be broadcast over authentication-free wireless Internet or users must be able to plug in wired connections for free in publicly-accessible points. (Perhaps encryption could be optional to prevent people from spying, but it shouldn't be mandatory, and passwords must not be secret. With good crypto you can have every user use a different session key, so that even if they know each others' passwords they can't snoop on each other.)

Miscellaneous Points

Here are a few guidelines for details of the policy which might help:
  • The quality of service could be specified by three numbers: bandwidth, reliability and latency-to-backbone; that way users can communicate what's most important for them to the free market.
  • Perhaps users should pay on a sliding scale, with payments tied to the quality of service received, so that there is always an explicit incentive to provide better Internet service.
  • Assuming only 25% of residents who want a given service would pledge for it, maybe the city should match pledges paid out of property tax.
  • Since optical fiber is cheap but expensive to lay, it's a common practice to lay cables with many more fibers than will be needed in the near future. These "dark" fibers can later be cheaply lit if needed. Policies should probably specify that some percentage (like 95%) of the fiber laid to make a network must be dark.
  • Depending on political will, it might make sense to pay for city-wide phone-level coverage off the bat through taxes, and let people pledge for upgrades as desired.
Conclusions

In the Chicago example of last post we saw that the entire city could have a free data network for a one-time cost of under $15 per person. People are probably willing to pay a lot more for much faster connections; the plan outlined in this post shows a way in which a publicly-owned network can deliver the services the public wants as soon as they're feasible to deliver, without wasting money on advertising and accounting.

This plan isn't anti-business either. The local companies which would spring up to supply the network services asked for by the people would have a leg up spreading to other municipalities where this same incentive policy gets implemented. (I am fairly confident that other municipalities would want to emulate the digital utopias which would come from this type of municipal Internet service.)

With some organization, the people can have cake and eat it too: they can pay a pittance in extra tax in exchange for hassle-free, state-of-the-art Internet connectivity. Everybody wins except old-school ISP shareholders. (Sell!)

*Web 2.0 couch potatoes?

Wednesday, May 9, 2007

The Answer is Blowin' in the Windy City

Greetings, chatterboxes.

Today I'm going to outline why I think municipal wireless networks are a good idea. We depend more and more on Internet connectivity for our everyday lives; it's no longer the case that bandwidth is a luxury item only a small niche desires. However, the way we typically pay for bandwidth (through private Internet Service Providers, or ISPs) is tremendously inefficient. I'm going to outline an estimate of how inefficient privately-owned ISPs are, then in the next few posts I'll talk about a way in which publicly-owned networks can be financially and technologically sustainable.

Getting Hosed by ISPs

Bandwidth at Internet backbones is ridiculously cheap: about $1 per terabyte (TB) and falling fast. (Based on estimates of web-hosting costs which allow 3 TB of transfer per month for $5 per month - the $1 per TB might not be accurate to within more than an order of magnitude. I don't specifically endorse the web hosting company I linked to - it's just an example of how cheap backbone bandwidth can be.) A heavy home user might transfer about 20 GB of bandwidth per month, costing their ISPs no more than a few pennies per customer per month.

However, the rates which ISPs charge their customers are three orders of magnitude higher: $20 per month is considered a good deal. That's a markup factor of at least 1000.
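To make that markup concrete, here's the back-of-envelope arithmetic as a quick Python sketch (all figures are the ones quoted above):

```python
# Sanity check of the ISP markup claim: $1/TB backbone bandwidth,
# a heavy user transferring 20 GB/month, and a $20/month bill.
backbone_cost_per_tb = 1.00       # dollars, from the hosting estimate
monthly_transfer_tb = 20 / 1000   # 20 GB expressed in TB
wholesale_cost = backbone_cost_per_tb * monthly_transfer_tb  # about $0.02/mo
retail_price = 20.00              # a "good deal" monthly ISP bill
markup = retail_price / wholesale_cost
print(f"wholesale: ${wholesale_cost:.2f}/mo, markup: {markup:.0f}x")
```

Two cents of backbone bandwidth sold for twenty dollars: a factor-of-1000 markup.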

There are at least three main expenses other than backbone bandwidth which contribute to the costs of running ISPs:
  1. The "last mile" connectivity between multiple homes and a backbone connection point
  2. Advertising and promotion
  3. Billing customers
Going Public

If a publicly-operated free (as in beer) municipal Internet network existed, there would be no need for costs # 2 and 3, and I postulate that #1 could take a big hit too by allowing better technology to be used. I think that one of the major reasons private ISPs are scared to deploy city-wide mesh wireless networks is that if users shared their passwords with friends, they could lose customers. Instead they've opted for wired networks (through DSL or cable) which are probably a lot more expensive than wireless mesh networks so they can be sure you don't share your account with friends.

Why do I think mesh networks are cheaper? The City of Chicago plans to roll out a city-wide wireless mesh network for only $18.5 million. A city-wide network would supplant not only ISP communication, but if a few Asterisk servers were part of the setup you could replace aging telephone lines and cellphones with voice over IP (VoIP), obviating the need for phone companies, whose costs are also dominated by the three numbered items above.

Savings

How much do Chicago's 3 million residents currently pay for phone, Internet and cell phones? If we assume one ISP line (at $20/mo.) and one land line (also at $20/mo.) for every 4 residents and one cellphone (at $30/mo.) for every two residents, we'd estimate that Chicago spends $900 million per year on combined data services. Even assuming Chicago's network costs double the estimate with a one-time cost of $40 million, a municipally-funded wireless network is an exceedingly good deal.
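Here's that estimate spelled out (the per-line prices and per-resident ratios are the assumptions stated above):

```python
# Chicago's annual spending on data services under the stated assumptions.
residents = 3_000_000
isp_lines = residents / 4    # one $20/mo ISP line per 4 residents
land_lines = residents / 4   # one $20/mo land line per 4 residents
cellphones = residents / 2   # one $30/mo cellphone per 2 residents
monthly = isp_lines * 20 + land_lines * 20 + cellphones * 30
annual = monthly * 12
print(f"${annual / 1e6:.0f} million per year")
# Even doubling the network's one-time cost to $40 million:
print(f"current spending covers the build-out {annual / 40e6:.1f}x over in year one")
```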

If the backbone bandwidth cost were approximately one penny per resident per month it would not be worth the city's while to try to charge people for their individual bandwidth usage, just as we don't try to charge people who use streetlights more for their fair share of electricity costs to the city.

Conclusions

Even if implemented poorly, a publicly-owned data network would give astronomical cost savings over the current arrangement. There are still potential pitfalls: a publicly-owned network might be run in a cost-inefficient way, or it might not give the quality of service expected by the residents. However, in my next post I will unveil a plan which addresses both of these woes.

Until then!

LeDopore

Saturday, May 5, 2007

Keeping Your Autograph Yours

Greetings, John Hancock.

In today's post I'm going to talk a little about digital signatures and hashes: what hashes are, how they're used in cryptosystems, and some crypto “recommendations” for staying one step ahead of potential digital signature forgers.

I'm not really sure why, but I just love to learn about security mechanisms, how they can be beat, and how you can really foil intruders. If you don't share my passions for math and security, maybe this post won't hold your attention; I promise I'll post something more sensational soon. If, on the other hand, you read Cryptonomicon and were starved for the numerical details behind the characters' plots, read on, and be satisfied!

Hype Warning

Let me get one thing clear before we start. Current digital cryptography is secure enough for you to rest easy – your weakest link is not going to be that somebody spends thousands of CPU hours to do a direct attack on your data. If somebody wants to steal your information it's much easier to use a “side channel attack,” in other words it's easier for a data thief to push malicious key-logging software onto your Windows computer, infiltrate your organization, or record the sound of your keyboard to get sensitive information than it is to do a brute-force attack on even a relatively weak cryptosystem.

However, I think cryptology is fun, so today I'll talk about a security practice which will keep your digitally-signed documents über-safe. If that appeals, read on!

Digital Signatures


The Internet provides a remarkable degree of anonymity to its users, which can be both a blessing and a curse. (LeDopore isn't my real name, by the way. I enjoy being able to post unfiltered opinions that will never be tied to C.V.-related Google searches. I can prepare a face to meet the faces that [I] meet and have only a select few be able to link my masks.)

The Internet would be much less useful if we couldn't establish the authenticity of any particular source. Because of its decentralized nature (which is one of the reasons it's so robust - 0 seconds downtime since the 1970s is pretty impressive), there's no way to have a trusted path between sender and receiver; we must let the message itself testify to its authenticity.

Public Key Cryptology

When a message is digitally signed with public key (asymmetric) encryption, you can quickly verify that only a particular sender (actually, a sender with access to the key's corresponding private key) could have sent it. We say "asymmetric" and "public" because for every secure channel, there's one public (in other words, you want everyone to know it) and one private (secret) key; these keys are different (hence "asymmetric").

Let me give a hallway analogy to explain what public key encryption can do. Imagine an apartment building hallway with rows of doors with mail slots, and with glassed-in locked message boards beside every door. You can slip a message into anybody's slot without anyone else being able to read it, and you can post anything you like in your locked message board so that anybody can see it and know you sent it. Slipping a message into others' slots is equivalent to encrypting it with their public key: they need their private key to read it. Posting behind glass is equivalent to encrypting it with your private key: if you need your public key to decrypt it, it's impossible that the message was generated with anything but your unique private key.

For a fully-secure connection, you can encrypt a message first with your private key and then with the receiver's public key; then only they will be able to receive it, and they can be sure that the message came from you. (The hallway metaphor breaks down, since with public key cryptology you can do the equivalent of slipping a message board of yours into someone else's mail slot.) Thus people who have never met can exchange fully private information, which is why you can buy things with your credit card online. (A mixed blessing?)

One of the big potential holes in public key cryptography is that you have to be sure you know what public key to use when sending a message to somebody. The only way around this conundrum is to go through a security broker like VeriSign, whose job it is to physically go to companies to sign hand-delivered public keys with their master VeriSign private key, which your computers are pre-programmed to trust. (Man, talk about a single point-of-failure; if anybody managed to factor VeriSign's product-of-primes it would be "game over" for lots of digital security. If you don't like VeriSign's game, you can always physically share symmetric keys through a trusted, i.e. non-Internet, connection first. That's how I've set up my ssh into work; not that I don't trust VeriSign, but you never know...)

The RSA Algorithm

Signatures typically work through the RSA public key algorithm, which gains its cryptologic strength from the fact that it's easy to check if a number is prime, easy to do modular exponentiation (which I'm not even going to define here), but difficult to factor the product of two large prime numbers. (If you're interested in the math behind RSA, try chapter 1, page 42 of Algorithms, by S. Dasgupta, C. H. Papadimitriou, and U. V. Vazirani, freely available online and very well written).
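To make the mechanics concrete, here's a toy RSA signature in Python using the textbook-sized primes p = 61 and q = 53. (Real keys are thousands of bits long and real implementations add padding; never use anything this bare in practice.)

```python
# Toy RSA signature: sign with the private key, verify with the public key.
p, q = 61, 53               # two primes (primality is easy to check)
n = p * q                   # 3233: easy to compute, hard to factor at scale
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent; (e, n) is the public key
d = pow(e, -1, phi)         # private exponent, 2753; (d, n) is the private key
message = 65                # a small number standing in for a (hashed) message
signature = pow(message, d, n)    # modular exponentiation with the private key
recovered = pow(signature, e, n)  # anyone can check with the public key
assert recovered == message       # only the private key could have made this
print(signature, recovered)
```

Factoring n = 3233 back into 61 × 53 is trivial here, but with 2048-bit n it's computationally infeasible: that asymmetry is the whole trick.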

The Need for Hashes

Theoretically it would be possible to sign entire documents with RSA, and to conduct whole conversations by passing messages through public-key cryptosystems. However, although using RSA with the proper keys is orders of magnitude faster than cracking it (which, as far as I know, has never been done with long enough keys), it still takes quite a few clock cycles. Typically, then, you don't send your secret info directly through RSA, but use a symmetric cipher like AES (Advanced Encryption Standard).

Aside: AES

AES takes a secret, shared 128-bit key and generates a bitstream of random-looking data from it. The sender and receiver first share that random 128-bit key through a secure method like RSA; since both sides hold the same key, AES is a type of symmetric key encryption.

AES is in many ways like a random-number generator on steroids: the 128-bit number is the random seed, and from it you can generate as long a random-looking bitstream as you like. The sender of messages ("Alice:" in cryptology it's always Alice who has some interesting secret message) then takes her digital message and the random bitstream and performs the exclusive OR operation between them. (I.e., if the random bit in the bitstream is 1, flip the message bit from 1 to 0 or vice versa, but if the random bit is 0 do nothing.) The result is totally unintelligible to everyone but Bob (the ever-listening, trusted confidant of Alice), with whom Alice has shared the 128-bit key. Since Bob can use AES to make exactly the same bitstream as Alice, he knows which bits have been flipped, and thus can recover the original message by flipping the bits back.
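Here are the XOR mechanics in miniature. Python's standard library doesn't include AES, so this sketch stands in a SHA-256-based counter-mode keystream for the AES bitstream; everything else is exactly the flip-the-bits scheme described above.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Stand-in for AES: hash (key, counter) pairs to get random-looking
    # bytes. A real system would run AES in counter mode here instead.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

key = b"shared 128-bit secret"  # Alice and Bob both know this
plaintext = b"attack at dawn"
stream = keystream(key, len(plaintext))
ciphertext = bytes(m ^ s for m, s in zip(plaintext, stream))  # Alice flips bits
decrypted = bytes(c ^ s for c, s in zip(ciphertext, stream))  # Bob flips them back
assert decrypted == plaintext
```

XOR is its own inverse, which is why the same keystream both encrypts and decrypts.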

Aside Over

Back to digital signatures. Just as it's impractical to encrypt everything with RSA directly, signing entire documents with RSA would take a lot of CPU cycles. Instead, usually you'll sign a hash of the document you want to verify came from you. In the hallway analogy, think of it as distributing a book you liked to everyone, and then slipping the title page into your secure glassed-in message board so that everyone can see that you endorsed it.

Making a Good Hash Function

There's an immediate problem with the title-page strategy: other people could write a different book with the same title page. If the new interloping book contained inflammatory remarks, you could get into a heap of trouble. Ideally, what we want is some digest, or hash, of your book (other than just ripping out the title page) which had the following properties:

  1. Relatively fast to calculate
  2. Sensitive to the entire document, not just one page of it
  3. Small enough to fit into a message box
  4. Nearly impossible to reverse, i.e. find another book with the same hash
If you have a hash function which satisfies all of the above properties, you can speedily sign documents by distributing the bulk of the document through insecure channels, and then making a hash of the entire document and signing just the hash with your private key. Then receivers wanting to check the authenticity of your document can take the insecure copy of the document, hash it in exactly the same way you hashed it (there are publicly available hash algorithms like MD5, SHA-1, and WHIRLPOOL), and then compare that hash to the one you signed (recovered by passing your signed hash through your public key).

Let's go over why each of the four above points is important. #1: if it takes a long time to calculate the hash, you waste time. (That's why we don't sign whole messages with RSA, right?) #2: every bit of the hash must depend on every bit of the original document in a unique way. (This way changing even a single character in the document produces a completely different hash, making forgery difficult.) #3: hashes are typically only a few bytes long. (MD5 is 128 bits long, SHA-1 is 160 bits and WHIRLPOOL is 512 bits - all small enough that signing them is no big deal.) #4: the hash should be computationally easy in only one direction, so that it's hard to make a message with a specified hash. When I say "hard," I mean that ideally it would take about 2^(hash length) tries to find a message which would have a specified hash.
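Points #2 and #3 are easy to see in action with Python's hashlib: even a one-character edit to the document gives a completely unrelated digest, and the digest length is fixed no matter how long the document is.

```python
import hashlib

doc = "Pay to the order of LeDopore: $10"
tampered = "Pay to the order of LeDopore: $90"  # one character changed

h1 = hashlib.sha1(doc.encode()).hexdigest()
h2 = hashlib.sha1(tampered.encode()).hexdigest()
print(h1)  # 160 bits = 40 hex characters, regardless of document size
print(h2)  # completely different: every output bit depends on every input bit
assert h1 != h2
```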

This last point is vital: when you make a digital signature, you're claiming authorship of every message which has that hash, since the hash is the only thing you sign. (If somebody distributed a forged document with the same hash as a document you signed, they could claim you signed the forgery.)

Hashes are also used to protect passwords. You want programs to be able to identify if a password was correct, but for security reasons it's a bad idea to store the password itself on your disk. To get around this problem, most software stores only a hash of your password on your disk. To check that subsequently entered passwords are correct, programs do a hash of the entered text and compare it to the stored hash. As long as the hash has good crypto strength, people with read-only access to the file containing the passwords will not be able to guess the password from the hash. (Aside: Windows by default uses an infamously insecure algorithm for storing password hashes, the LM hash, which requires only about 2^36 operations to crack. Even a general-purpose modern computer can brute-force Windows passwords in a few hours, and you can speed that up to a few minutes by using pre-computed rainbow tables. Insane!)
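A minimal sketch of password storage along these lines, using hashlib with a random salt. (Plain SHA-256 is used here just for illustration; real systems should prefer a deliberately slow password hash like bcrypt or scrypt, precisely so brute-force attacks like the LM-hash one above stay expensive.)

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    # Store a salted hash, never the password itself. The random salt
    # defeats precomputed rainbow tables like the ones that break LM hashes.
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = store_password("hunter2")
assert check_password("hunter2", salt, digest)
assert not check_password("hunter3", salt, digest)
```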

Potential Pitfall: Birthday Attacks

SHA-1 (Secure Hash Algorithm) is used industry-wide as a purportedly secure hash algorithm (i.e. one that satisfies #4 above). It's still pretty good, but it's starting to show its age. One of the best ways to attack the signed hash cryptosystem is to use what's called a birthday attack, named after the birthday paradox (which says that with just 23 people in a room, chances are better than even that two people will share the same birthday - the trick works because the number of possible birthday collisions is 23 * (23 - 1)/2 = 253 - the number of potential pairs grows as the square of the number of people).

To do a birthday attack, the villain chooses a message you'd be happy to sign (A, which could be some innocuous legal document to be signed by a lawyer) and an evil message he wants you to sign (B which can be anything). He then looks for strings of invisible fluff (c and d) which he can append to A and B such that the hash of Ac will be the same as the hash of Bd. (The invisible fluff can be a string of mixed spaces and non-breaking spaces, or comments in an .html document, or tons of other things which won't affect the appearance of A and B but will change their hashes.)

Here's the bad news: although for a good 160-bit hash you'd have to make about 2^159 guesses of d such that A and Bd would have the same hash, for a birthday attack the villain generates only about 2^80 fluff strings c and d. Chances are that for one of the 2^160 pairs of c's and d's, the hash of Ac will be the same as the hash of Bd.
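You can watch this square-root advantage work on a miniature scale: with a hash truncated to 24 bits, a collision typically turns up after only a few thousand tries, rather than the roughly 2^24 a preimage search would need.

```python
import hashlib

# Birthday attack in miniature: find two different "fluff" strings whose
# SHA-1 hashes collide when truncated to 24 bits. Expect roughly
# sqrt(2^24) ≈ a few thousand tries, not 2^24.
seen = {}
i = 0
while True:
    fluff = f"fluff-{i}"
    tag = hashlib.sha1(fluff.encode()).digest()[:3]  # keep only 24 bits
    if tag in seen:
        print(f"collision after {i + 1} tries: {seen[tag]!r} vs {fluff!r}")
        break
    seen[tag] = fluff
    i += 1
```

The same scaling is what cuts a 160-bit hash's security from 2^160 down to about 2^80 against collisions.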

Real World Implications

Already there have been successful birthday attacks against MD5 (the 128-bit hash I mentioned), and SHA-1, which is an industry standard, is starting to show cracks as well. In 2005, Xiaoyun Wang, Andrew Yao and Frances Yao found a shortcut to do birthday attacks on SHA-1 such that only about 2^63 computations are needed. (If SHA-1 were a better hash function, no attack faster than brute force would be possible, and that would take 2^80 operations.) Even today, 2^63 operations is feasible with the right hardware: if a teraflop special-purpose machine (like the GeForce 8800) costs about $500, then to make a malicious pair of messages Ac and Bd in a year you'd need about a billion dollars in computer resources.

Computers are going to get faster, and cryptanalysts (maybe) are going to find faster-than-2^63 attacks on SHA-1. My prediction is that birthday attacks against SHA-1 are going to become widespread some time within the next 20 years.

Staying One Step Ahead


Replacements for SHA-1 are in the works. There are hashes with longer digests which are already public standards: SHA-256, SHA-512 and WHIRLPOOL (with 256, 512 and 512 bit digest lengths), but they haven't been as widely scrutinized as SHA-1. (I bet they're all pretty good, but I'm not a pro cryptanalyst.) If you're a programmer, consider coding software in a modular enough way that you can easily drop different hash functions into your code, and that different hash lengths don't mess up your program.
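In Python, for instance, hashlib already supports this style: the algorithm is just a name you pass in, so upgrading to a longer hash later is a configuration change rather than a rewrite. A sketch:

```python
import hashlib

def document_digest(document: bytes, algorithm: str = "sha256") -> bytes:
    # The algorithm is a parameter, so swapping in a longer hash later
    # is a one-line change. Callers must not assume a fixed digest length.
    h = hashlib.new(algorithm)
    h.update(document)
    return h.digest()

doc = b"my important contract"
print(len(document_digest(doc, "sha1")))    # 20 bytes (160 bits)
print(len(document_digest(doc, "sha512")))  # 64 bytes (512 bits)
```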

Until better software comes along, I'd recommend that people working on big, secret important stuff adopt the following two policies:
  1. Always edit a document sent to you before signing it in some unpredictable way.
  2. Always keep a copy of the document you actually do sign.
Point 1 will protect you from birthday attacks. If you change Ac even slightly, the billion-dollar crack attempt made by the villain will be completely worthless, since he has a Bd which hashes to the unmodified Ac. Point 2 will make sure that even if your signature gets broken and somebody claims you signed Bd, you can whip out the document you actually did sign and show that somebody made a pair of hash-colliding documents. (You won't be able to prove if it was you or the villain, but at least there would be reasonable doubt.)

Back to the Real World

Of course, these days it's much cheaper to hire a spy to infiltrate your organization than to generate a billion-dollar hash-colliding document pair and hope it's signed without modification. I'm a silly crypto-hobbyist for suggesting you should worry about anything but a side channel attack. And even then, I've found that the vast majority of folks are too concerned with their own business to try to hack yours. I routinely accidentally leave my door not just unlocked but wide open, and I have yet to be stolen from at home. The world's a safe place; you don't need to worry about digital security. I just have a little-kid-in-treehouse mentality when it comes to fancy computational methods for making rock-solid crypto. How about you? Do any of my readers think crypto is fun? Should I blog more about it?

Sunday, April 29, 2007

Cookie Dough: Cold Killer?

Greetings, bowl-lickers.

A friend of mine who enjoys the odd clandestine spoonful of uncooked cookie dough suggested to me last night that I look into the risks involved in his filthy habit. (Just kidding - I regularly eat raw cookie dough by the scoop.)

We're told never to eat cookie dough because raw eggs may contain the bacterium Salmonella enterica, which can make you sick. Despite all the warnings, cookie dough eating is rampant in North America. Does cookie dough cause widespread poisoning deaths, or is it just another paper tiger? Read on to find out.



Salmonellosis: Symptoms and Rates

Any medical condition with a Latin name sounds scary. However, the majority of Salmonella infections cause gastro-intestinal upset and a fever for 4 to 7 days and then go away without formal medical intervention. If you're old, an infant, or have a weak immune system, you could need antibiotics to make your infection go away, and a particularly bad Salmonella infection can cause lasting conditions like arthritis or death. However, these big-ticket fears are relatively uncommon; this CDC study says the ratio of illnesses to hospitalizations to deaths for nontyphoidal salmonellosis is roughly 2,426 to 28 to 1.

The same CDC study estimates that the number of cases of salmonellosis in the United States is about 182 000 per year, or about 1 in 1 500; but since most infections go unreported it's really hard to tell. Its best guess is that salmonellosis from shell eggs causes about 2000 hospitalizations and 70 deaths per year: in other words, salmonella from eggs is about 1000 times less deadly than the flu (from this .pdf, page 2; this comparison is apt since both flu and salmonellosis are grave threats mostly to people with compromised immune systems).

Is Cookie Dough a Big Culprit?

Most of the salmonellosis outbreaks that make the news come from large-scale slip-ups where dozens of people get ill, rather than from small families tasting the occasional batch of cookie dough. Is this just because it takes a certain number of cases before a story is newsworthy, or is there another cause at work?

This CDC page warns that in large-batch recipes where 500 eggs are used the Salmonella risk is greater, since one contaminated egg could taint the whole batch. So what's the risk of getting salmonellosis from eating cookie dough from a two-egg recipe?

This study estimates that only 1 in 30 000 eggs is potentially contaminated with Salmonella, so at most there is a 1 in 15 000 chance that your dough is going to have any Salmonella bacteria. (If the first egg doesn't have Salmonella, the second egg has a smaller than 1 in 30 000 chance of having it too, so 1 in 15 000 is an over-estimate of the risk.) Assuming that it's certain that you will catch an infection from tainted dough, that puts your risk of death from tasting the dough at less than 1 in 36 million; if you have a healthy immune system your risk is considerably smaller. The daily chance of getting a flu as bad as a non-fatal flu-like Salmonella infection is 1 in a few hundred, so you really don't need to worry about salmonella from cookie dough: background risk levels are much higher.
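The arithmetic, chaining together the figures quoted above:

```python
# Worst-case risk of death from a two-egg batch of cookie dough,
# using the contamination and fatality figures quoted above.
p_contaminated_batch = 1 / 15_000  # two eggs, 1-in-30,000 each (over-estimate)
deaths_per_illness = 1 / 2_426     # CDC illness-to-death ratio for salmonellosis
p_death = p_contaminated_batch * deaths_per_illness
print(f"risk of death: 1 in {1 / p_death:,.0f}")
```

That works out to about 1 in 36 million, and only under the pessimistic assumption that tainted dough always causes an infection.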

EDD, LED and GHAF

Let's put that 1 in 36 million figure in perspective. The Equivalent Driving Distance (EDD) is just under 2 miles (for those new here, that means a 2-mile car trip is as likely to kill you on average as eating 2 raw eggs) and the Life Expectancy Decrease (LED) is less than 37 seconds (eating 2 raw eggs decreases your life expectancy by only 37 seconds - here I assumed on average my readers might have 42 years left in life and divided by 36 million). For more on the LED and EDD risk metrics, see this introductory blog post and this wiki page for recording risk levels.
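The LED figure follows directly from the stated assumptions (42 remaining years, 1-in-36-million risk of death):

```python
# Life Expectancy Decrease: remaining lifetime divided by the death risk.
seconds_remaining = 42 * 365.25 * 24 * 3600  # 42 years in seconds
led_seconds = seconds_remaining / 36_000_000  # 1-in-36-million risk
print(f"LED: about {led_seconds:.0f} seconds")
```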

So on average the risk of being killed by your baking is negligible. But is the fear over-hyped? Considering there are 294 000 Google hits for "salmonella raw eggs America" and only 70 Americans die of Salmonella from raw eggs, the Google Hits per Annual Fatality (GHAF) hype-metric is 4 200: about as high as for West Nile virus. (See an introduction to the GHAF metric here and a list of GHAFs for various risks here.)

Conclusion: Lick On!


Eating cookie dough gives you a negligible risk unless you have a particularly weak immune system. Whip yourself up a batch and eat it all: it really doesn't matter. Oh, and please save me a spoonful while you're at it.

Bon Appetit!

LeDopore

Tuesday, April 24, 2007

Contest Results

Greetings, Knaldskalle.

Hi all! This is just a quick post to confirm that Knaldskalle has won the metrics contest with his two entries of "terror" and "school shootings," both of which reveal that we over-hype interpersonal violence. As your reward, Knaldskalle, do you have any ideas for a "Many Ideas" blog post?

Cheers,

LeDopore

Saturday, April 21, 2007

Taking Ears Off Your Life

Greetings, colonels.

Today's post is going to look at some of the dietary consequences of US corn subsidies. The United States corn industry is politically untouchable since so many processed foods are made from corn derivatives. (If you're interested in more details about factory foods, I thoroughly recommend Michael Pollan's book The Omnivore's Dilemma.)

While many wary eaters know that corn products like corn-fed beef and high fructose corn syrup (HFCS) are wreaking dietary havoc among the American people, it's difficult to assault the entrenched food industry without convincing facts about just how much direct damage corn subsidies do to our health. In this post I'm going to show that we can blame pretty much all of our HFCS woes on corn subsidies, and I'm going to show how much damage HFCS really does.

Corn Subsidies


Ever since 1975, the United States has been paying farmers to grow corn in excess of the quantities which the market would naturally bear. Taxpayers make up the difference between the market price and a government-guaranteed price, which is often in the neighborhood of twice the buying price of corn. Americans pay over $5 billion per year (about $17 per capita) to keep farmers producing way more corn than we could ever safely consume.

Consequences of Corn Subsidies


Corn farmers aren't the ones getting rich; the net effect of corn subsidies is to ensure a huge surplus of raw biomass to be used to manufacture higher-value food products. From The Omnivore's Dilemma, I learned that about 60% of the corn grown in the United States goes to animal feed, and much of the remainder goes into producing HFCS. If you drink diet soda or if you steer clear of US-grown meat, your taxes are paying for someone else's unhealthy diet. (Show of hands: would anyone out there resent subsidizing tobacco?)

HFCS Created by Corn Subsidies

If I'm going to accuse subsidies of making us eat unhealthy corn and corn-fed meat, I'd better be sure the subsidies are actually to blame. There are three factors which make me think corn subsidies are the root cause of pretty much all the HFCS consumed by Americans. First, HFCS is cheaper than cane sugar in the US due to subsidies. Second, in Europe, where corn isn't favored like it is in North America, HFCS is almost never used as a processed food sweetener. Third, the timing of the introduction of the corn subsidy coincides with the explosive growth of HFCS consumption in the US, as is evident in this graph (from this USDA site):



Corn subsidies were introduced in 1975, before which it's plain that HFCS was a bit player. Also note that soft drinks began phasing in HFCS as a sweetener, a transformation completed by 1984. (I fancy I can see the kink in the HFCS curve around 1984 - I wonder if that's caused by saturating the soda market.)

Fat Caused by HFCS

If HFCS were like normal unhealthy food, at least a calorie of HFCS consumed would displace a calorie from some other source, meaning that HFCS wouldn't be more responsible for today's obesity epidemic than any other unhealthy food. However, as I mentioned in this post, a recent study showed that HFCS doesn't make you feel full, so consuming HFCS will not make you eat less of other things. (The 95% confidence limit to this study was that 100 HFCS calories may displace 24 other food calories, but the study's best estimate is that people actually eat 17 more calories of other foods for every 100 HFCS calories they consume. Also note that other liquefied sugars may be just as bad as HFCS at displacing other calories.)

Even if you take the most charitable view towards HFCS allowed by the study's margin of error, 76% of the HFCS calories consumed by Americans go to fat. The average annual per capita consumption of HFCS in the United States is 59 pounds. Even assuming half of that gets wasted, that means annually an extra 22 lbs of sugar per American is consumed just because HFCS happens to be today's sweetener of choice. According to this publication (page 13 - also interesting because it claims HFCS might be not worse than other liquid sugars), HFCS is about 4/9 as calorie-dense as fat, so the availability of HFCS means that on average Americans gain an extra 10 lbs per year.
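The arithmetic, using the figures above (59 lbs of HFCS per capita, half wasted, 76% of calories not displacing other food, and the 4/9 calorie-density ratio between sugar and fat):

```python
# Extra annual weight gain attributable to HFCS, per the figures above.
annual_hfcs_lbs = 59             # per-capita US consumption
consumed = annual_hfcs_lbs / 2   # assume half gets wasted
extra_sugar = consumed * 0.76    # worst case: 76% doesn't displace other food
fat_lbs = extra_sugar * 4 / 9    # sugar is ~4/9 as calorie-dense as fat
print(f"{extra_sugar:.0f} lbs of extra sugar, ~{fat_lbs:.0f} lbs of fat per year")
```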

Conclusion

On average, $17 of your taxes every year go to a subsidy which causes people to gain an astonishing 10 lbs per year just through the HFCS mechanism I've outlined. (I expect subsidized animal feed also makes Americans fatter, but the story there is harder to untangle.) Moreover, the over-fertilized Iowa corn monocultures are horrible on the environment, and have killed Mexican farms which can't compete with American corn prices. (Those of you who object to Mexican farm labor should throw your lot in with the anti-subsidy crowd: it's just the subsidies which enable Americans to pay migrant workers $4 an hour while just across the border no farmer can afford to hire at $1 an hour. It's not something magic in the soil which makes American farms magically 4 times as efficient at turning labor into food - it's the subsidies.)

In conclusion, corn subsidies do enormous harm. While I haven't supported every anti-subsidy argument in this post, I've shown that without corn subsidies the average American would be spared the equivalent of 10 lbs of weight gain per year. (I suspect many Americans diet harder because of their HFCS-related weight gain - imagine getting an extra 10 lbs of "free" fat per year! Mmmm... what I'd do!)

It's going to be a tough fight against the food industry, but there are lots of good reasons to abandon our current destructive corn-driven Leviathan. Let's ditch the subsidies and let 'em howl.

Wednesday, April 18, 2007

Local Produce vs. International Peace

Greetings, Macaroni Munchers.

A lot of my friends are concerned about buying food from too far away, in the interests both of helping the local economy and of reducing fossil fuel consumption. It's scary to think how much our food supply depends on non-renewable resources like transportation fuel, and it's appealing to have the visceral connection to what you eat that comes only from being able to visit the place where your food grows.

Agriculture and the Developing World


The unfortunate consequence of favoring domestic produce, however, is that you deprive the developing world of the much-needed foreign exchange which comes from agricultural exports. In fact, in non-industrialized areas of the third world, pretty much the only thing they produce that we consume is food.

A typical Nicaraguan farm worker earns about $.25 an hour (a quarter of the minimum wage of neighboring Costa Rica). The cost of living there may be quite low, but I'm still disgusted by the fact that they could pick coffee for 8 hours and not earn enough money for a single espresso shot in an American café.

By insisting on buying domestic food, we're just driving developing-world wages down further. Americans have plenty of options: they don't all need agricultural work to stave off extreme poverty. Giving meaningful work to developing nations promotes the sense of coöperation which leads to good feelings and peace.

Dependence on developing nations for food can also lead to peace-making policy. You're less likely to invade another country if you need the food they produce to survive.

Aside: I'm being overly dramatic. Americans consume on average 3790 calories per day (although some of that is spoilage), so losing even a third of food imports wouldn't spell widespread famine. At the same time, you're less likely to go to war with an entrenched trading partner; the European Union may have ushered in an age of post-historicism, now that individual countries are so economically entwined that it would be sillier than ever to go to war.

Fuel Costs by Sea and Land

Trade and peace aside, many of my friends want to consume as little fossil fuel as possible in getting their food delivered, so they're careful to buy only locally-grown produce. However, raw distance-from-home is a poor proxy for fuel consumed, since freight by sea is so much more efficient than freight by land. Let's figure out just how much more efficient it is to ship a container one mile by sea than by land.

By land, a typical mileage rating for a semi truck hauling a 53-foot trailer is about 6 miles per gallon. Page 5 of this document has all of the relevant information for sea freight: an ultra-sized container ship traveling at 22.5 knots burns 180 tonnes of fuel per day, and carries 10 000 twenty-foot equivalent units (TEU) of cargo. After a little math, we find the ship transports the same 53-ft container at about 44 miles per gallon.
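Here's a sketch of that "little math". The fuel density and unit conversions are my assumptions (marine diesel at roughly 0.89 kg/L), and I'm treating a 53-foot trailer as 2.65 TEU:

```python
# Rough per-container fuel efficiency of an ultra-sized container ship.
# Assumptions (mine, not from the cited document): marine diesel ~0.89 kg/L,
# 1 gallon = 3.785 L, 1 knot = 1.151 statute mph, 53 ft trailer = 2.65 TEU.
FUEL_TONNES_PER_DAY = 180
SHIP_TEU = 10_000
SPEED_KNOTS = 22.5

miles_per_day = SPEED_KNOTS * 1.151 * 24          # statute miles traveled per day
containers = SHIP_TEU / 2.65                      # 53-ft-trailer equivalents aboard
fuel_kg_per_container = FUEL_TONNES_PER_DAY * 1000 / containers
fuel_gal_per_container = fuel_kg_per_container / 0.89 / 3.785

ship_mpg = miles_per_day / fuel_gal_per_container
print(f"ship: {ship_mpg:.0f} miles per gallon per container")  # ~44 mpg
```

About 44 mpg per container, versus 6 mpg for the truck: sea freight wins by a factor of seven or so.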

A ship coming to the United States from Chile burns about the same amount of fuel per container as a semi truck traveling about 700 miles, and if people drive 8 miles to the grocery store to buy 50 lbs of groceries in a car rated at 30 miles per gallon, they burn as much fuel per grocery item as that container ship from Chile.

Conclusions

Before jumping on the "local food" bandwagon, please consider the impact of shunning the developing world. Also, consider biking, busing or walking to the grocery store when possible if you're really interested in reducing fossil fuel consumption.

Bon Appetit!

Tuesday, April 17, 2007

Reacting to the Virginia Tech Massacre

Greetings, fellow clods.
No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend's or of thine own were: any man's death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.
John Donne, Meditation XVII
And who by brave assent, who by accident,
who in solitude, who in this mirror,
who by his lady's command, who by his own hand,
who in mortal chains, who in power,
and who shall I say is calling?
Leonard Cohen, "Who By Fire"

It's a terrible thing that every once in a while, a human mind snaps so violently that its owner takes out others in their passing from this world. Shootings such as those at Virginia Tech yesterday, which claimed 33 lives, provoke deep reflection in those who hear about them. We all want to know what could make a person so down on humanity, so destitute, that they would deliberately try to do it as much raw damage as possible.

Mental Earthquakes

However, people who go on shooting sprees and then kill themselves are exceedingly rare. Only a tiny fraction of people carry out one-person massacres; it is usually institutions like governments which do the majority of the killing (see below). Mental breakdowns of different magnitudes might follow the same sort of power law as earthquakes, perhaps for analogous reasons: small releases of tension, such as swearing, are vastly more frequent than going on a shooting spree.

Obligatory Dig at Institutionalized Violence

It's interesting to compare the Virginia Tech massacre to the situation in Iraq in terms of raw mortality. It is tempting to visit sites like "Iraq Body Count" for raw data. However, Iraq Body Count lists only confirmed deaths registered with western-style authorities, and its upper limit on civilian casualties is 67 703 as of today. I'm not sure what their motives are ("never attribute to malice that which can be explained by stupidity"), but a Lancet article shows they're off by an order of magnitude.

On October 11, 2006, this article in the Lancet took a different approach: a cross-sectional study of 50 clusters in Iraq chosen at random. They essentially asked "who was alive before the invasion?" and "who has since died due to the war?", then extrapolated to estimate the true number of Iraqis killed to date. Since the clusters were chosen at random, it's possible to compute statistical confidence intervals.

Their estimate is that, as of July 2006, 654 965 (95% confidence interval 392 979–942 636) Iraqis had been killed as a consequence of the war. (About 90% of these were through direct violence, not through secondary causes like the famine and health-care breakdowns which accompany war.) That's an average of over 500 per day, or a death rate of more than one Virginia Tech-scale massacre every two hours, sustained for more than one thousand days straight. In other terms, that's 218 times the total death toll of the September 11, 2001 attacks. I feel a little hypocritical devoting a whole post to the Virginia Tech massacre when the war in Iraq causes so much more senseless violence.
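You can check the arithmetic yourself. I'm taking March 20, 2003 as the invasion date and July 1, 2006 as the study's cutoff; the tolls are the Lancet central estimate, the Virginia Tech count, and the September 11 count:

```python
from datetime import date

# Sanity-check the death-rate arithmetic from the Lancet figures.
DEATHS = 654_965       # Lancet central estimate, as of July 2006
VT_TOLL = 33           # Virginia Tech death toll
SEPT_11_TOLL = 2_996   # September 11, 2001 death toll

days = (date(2006, 7, 1) - date(2003, 3, 20)).days
per_day = DEATHS / days
massacres_per_day = per_day / VT_TOLL

print(f"{days} days of war")                 # 1199 days
print(f"{per_day:.0f} deaths per day")       # ~546
print(f"one VT-scale massacre every {24 / massacres_per_day:.1f} hours")
print(f"{DEATHS / SEPT_11_TOLL:.0f} times the September 11 toll")
```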

My Recommendation: Don't Change Domestic Policy

It seems like the appropriate reflex at times like this is to seek some remedy: some change in domestic policy which will ensure that shootings such as these never happen again. The anti-gun activists will use this massacre to justify harsher restrictions on weapons, while the libertarians will claim that if each student had been armed, one of them would have been able to drop the gunman before he had shot too many people. The Virginia governor has declared a state of emergency (as if that's going to help now). People are madly using this event as a fulcrum to leverage their own political agendas, because there's a public consensus that something must be done.

Even if a non-invasive policy could totally eliminate rampage shootings, it wouldn't change life appreciably. However scary rampage shootings are, they kill few people: on the order of 10 per year. In contrast, traffic claims about 44 000 lives a year (US DOT report, page 8 of a .pdf). If on average Americans devoted about 1 hour to thinking about the Virginia shootings, a collective 438 lifetimes (assuming 300 million Americans and a 78-year lifespan) would be spent mourning the passing of the 33 victims. Allowing politicians to push through new measures to monitor us under the auspices of keeping us safe is at best a waste of time: we are already safe.
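The lifetime figure falls out of one division, assuming 300 million Americans and a 78-year lifespan:

```python
# How much collective lifetime goes into one hour of national attention?
POPULATION = 300_000_000
LIFESPAN_HOURS = 78 * 365.25 * 24    # ~684 000 hours in a 78-year life

lifetimes = POPULATION * 1 / LIFESPAN_HOURS  # one hour of attention per person
print(f"{lifetimes:.0f} lifetimes")          # ~439
```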

Conclusions

My heart goes out to everyone who has experienced a loss: the Iraqi victims especially (who I'm sure mostly just want a chance to live a life untorn), and to the many fewer traffic casualties, whose deaths are as senseless as any. Let's not allow our fascination with criminal psychology to obscure the truth: the vast majority of Americans live free from the risk (if not the fear) of violence, whereas 2.5% of the Iraqi population has been killed by the invasion. Don't let the deaths of 33 Virginia Tech victims become a political bargaining chip. Keep things in perspective, or we're going to offer up our freedoms and cheerfulness in exchange for the appearance of removing a risk that's insignificant in the first place.

Monday, April 16, 2007

Metrics Contest

Greetings, high-rollers!

In today's post, I'm announcing a new, exciting contest for my Many-Ideas readers: the Metrics Contest!

LED, EDD and GHAF Recap

If you're new here, let me fill you in a bit on the history and aims of this blog. I'm interested in putting risks into perspective and in deflating overhyped issues.

But how do we know how risky a certain activity is in a way that's easily understood? And how do you quantify hype?

To give risk probabilities a human touch, I've introduced two new metrics: the life expectancy decrease (LED), which gives the expected amount by which your lifespan shrinks from engaging in a given risky behavior, and the equivalent driving distance (EDD), which gives the distance you would have to drive to accrue a risk comparable to the activity in question. The LED is calculated by multiplying 85 years by the chance the measured risk will kill or seriously maim you, while the EDD is calculated by multiplying the risk by 1 billion (10^9) miles and dividing by 14.6, since in 2005 in the US there were 14.6 fatalities per billion miles driven.

The GHAF measures undue hype, not just pure risk. Bigger risks deserve more attention, but they don't always get it. "GHAF" stands for "Google Hits per Annual Fatality," and measures the ratio of the attention an issue gets to the real threat it poses. It's a very approximate measure, but the GHAF of different risks is so variable (from about 1 to over 100 000) that it's still useful in identifying overhyped issues.
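For contest entrants, here are the three metrics as one-line functions, straight from the definitions above (the example risk at the bottom is a made-up one-in-a-million chance, purely for illustration):

```python
# The blog's three metrics, exactly as defined above.
LIFESPAN_YEARS = 85
FATALITIES_PER_BILLION_MILES = 14.6   # US driving, 2005

def led_years(risk):
    """Life expectancy decrease: expected years lost to a given risk."""
    return LIFESPAN_YEARS * risk

def edd_miles(risk):
    """Equivalent driving distance: miles of US driving with the same risk."""
    return risk * 1e9 / FATALITIES_PER_BILLION_MILES

def ghaf(google_hits, annual_fatalities):
    """Google Hits per Annual Fatality: a rough hype-to-threat ratio."""
    return google_hits / annual_fatalities

# Hypothetical example: a one-in-a-million chance of death
risk = 1e-6
print(f"LED: {led_years(risk) * 365.25 * 24:.1f} hours")  # ~0.7 hours
print(f"EDD: {edd_miles(risk):.0f} miles")                # ~68 miles
```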

The inaugural LED and EDD post is here, and the post introducing the GHAF is here. If you're new here, check them out to get an idea of how they can be calculated.

Risk and Hype Lists

There's a wiki to keep track of both the risks of certain activities (here) and the GHAF of certain phenomena (here). There are many fascinating cases of hype which these lists miss, which is why I'm holding this contest.

Contest Rules


Calculate the EDD, LED and GHAF for a risk of your choosing, and send it to me along with an idea for a Many Ideas blog post. On April 23rd I'll announce my favorite, and I'll do a full investigation and post on the winner's topic.

Submissions will be rated for originality (5 pts), accuracy (5 pts), and for how much they reveal about our risk biases (15 pts). Entering your EDD, LED and GHAF on the wiki pages earns an extra 2 points.

May the juiciest entry win!

Sunday, April 15, 2007

Killer Cellphones?

"Pronto? MoshiMoshi? Hello?"

I was reading Digg today, which pointed me to an article speculating that cellphones are causing "colony collapse disorder," the name for an alarming phenomenon whereby the majority of honeybees in colony after colony are mysteriously disappearing. (By the way, this isn't just about the honey bees produce. The value of their crop pollination is in the billions per year; would anyone like to post a comment with a more exact figure?) The article sounded interesting until it went off the deep end by vilifying cellphones with a few cherry-picked, debunked claims:

Evidence of dangers to people from mobile phones is increasing. But proof is still lacking, largely because many of the biggest perils, such as cancer, take decades to show up.

Most research on cancer has so far proved inconclusive. But an official Finnish study found that people who used the phones for more than 10 years were 40 per cent more likely to get a brain tumour on the same side as they held the handset.

Equally alarming, blue-chip Swedish research revealed that radiation from mobile phones killed off brain cells, suggesting that today's teenagers could go senile in the prime of their lives.

Chilling. Let's go into an account of how much damage a cellphone can do, and let me cite a few studies of my own.

Traffic Dangers


We have lots of evidence that cellphones impair driving ability. A University of Utah study found that cellphone conversations impair driving about as much as a 0.08% blood alcohol content, the threshold for legal drunk driving in many North American states. The World Health Organization says talking on a cellphone while driving increases your risk of accidents by a factor of 3 to 4. Taking a 100-mile drive decreases your life expectancy on average by about one hour, i.e., it has an LED of one hour (see posts with the tags LED and EDD for more, or this one which introduces them). Talking on your cellphone bumps the LED up to three or four hours, meaning that the driving-related risk starts to overtake the old-age-related risk you'd incur anyway if you call people while driving.
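Here's that LED arithmetic for a 100-mile drive, with and without a phone conversation. The 3.5x multiplier is the middle of the WHO's 3-to-4 range:

```python
# LED of a 100-mile drive, with and without a cellphone conversation.
FATALITIES_PER_MILE = 14.6 / 1e9          # US driving fatality rate, 2005
LIFESPAN_HOURS = 85 * 365.25 * 24         # the 85-year lifespan from the LED definition

def drive_led_hours(miles, risk_multiplier=1.0):
    risk = miles * FATALITIES_PER_MILE * risk_multiplier
    return LIFESPAN_HOURS * risk

print(f"phone-free: {drive_led_hours(100):.1f} hours")        # ~1.1 hours
print(f"on the phone: {drive_led_hours(100, 3.5):.1f} hours") # ~3.8 hours
```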

Other than impairing driving ability (and repetitive stress injuries from thumb-typing), cellphones aren't going to hurt you. Let's take a look first at the physics of cellphones (which will show them to be benign) and then take a look at the epidemiology of cancer among cellphone users, citing the most thorough study ever done, which happens to be Danish (Long Live Fear-Dispelling Vikings!).

The Physics of Cellphones

Cellphones communicate by broadcasting microwaves to cell towers. They use one of two frequency ranges: either about 850 MHz (the cellular band) or about 1900 MHz (the PCS band). The peak power of a cellphone's transmission is about 2 Watts, so the amount it broadcasts into your head isn't more than about 1 Watt.

There are three potential concerns which make cellphones potential health risks: heat, chemical damage, and brain interference. Let's assess each potential risk.

Of Cellphones and Sunbeams

It turns out many of you non-hat-wearers heat your heads with electromagnetic radiation on a daily basis. A fusion-powered blob of gases about 150 million km away bakes your melon with an intensity of over 1000 Watts per square meter on a cloudless day. If the cross-sectional area of your head is about 3% of a square meter, the sun warms your head with over 30 times the power of a cellphone. If cellphone-related heat could cause damage, so could the sun.

Mutagenic Conversations?

The next most commonly-feared etiology of cellphone-related cancer is microwave photons causing genetic damage to our DNA. However, the energy of even the highest-energy cellphone photons is far too low: a 1900 MHz photon has an energy of less than 8 microelectronvolts, about 100 000 times less than the kind of photon needed to make any chemical change. At body temperature, random thermal fluctuations give every molecule constant kicks of over 25 millielectronvolts: over 1000 times as powerful as a cellphone photon. No cellphone is going to turn you into a toxic avenger.
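Those energies fall out of Planck's relation E = hf and the thermal scale kT; the ~1 eV figure for a chemical bond is a rough round number, not a precise threshold:

```python
# Photon energy of the highest cellphone frequency vs chemical and thermal scales.
PLANCK_EV_S = 4.1357e-15      # Planck's constant, in eV·s
BOLTZMANN_EV_K = 8.617e-5     # Boltzmann's constant, in eV/K

photon_ev = PLANCK_EV_S * 1.9e9     # 1900 MHz photon
thermal_ev = BOLTZMANN_EV_K * 310   # kT at body temperature (~37 C)
BOND_EV = 1.0                       # rough energy needed to drive a chemical change

print(f"cellphone photon: {photon_ev * 1e6:.1f} microelectronvolts")   # ~7.9
print(f"thermal kick:     {thermal_ev * 1e3:.1f} millielectronvolts")  # ~26.7
print(f"chemistry needs ~{BOND_EV / photon_ev:.0f}x a cellphone photon")
```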

Nokia Mind Control?

I've seen one more way in which fear-mongers propose that cellphones could harm you: they think it's possible that the pulses of electromagnetic energy could interfere with brain function. It's true that neuroscientists use pulses of transcranial magnetic stimulation to temporarily (and, we hope, reversibly) poke an area of gray matter to try to figure out what it does. Could cellphones be doing the same?

Again, the relative magnitudes are way off: transcranial magnetic stimulation uses field strengths of around 1 to 2 Tesla, while cellphones produce much smaller magnetic fields: around 50 Gauss, or 5 mTesla (1 Tesla = 10 000 Gauss: one of those metric-system anomalies). Once again there's a yawning, factor-of-a-few-hundred gulf between the strength of a cellphone's field and the strength needed to make worrying sane. It's even worse when you take into account that the energy density of a magnetic field goes as the square of the field strength, so it's more like a factor-of-100 000 difference between what a cellphone produces and what we'd worry about.

Epidemiology

By now, it shouldn't surprise you to find that the most extensive study done on cellphones (the Danish one I alluded to) "found no evidence for an association between tumor risk and cellular telephone use among either short-term or long-term users." The study followed 420 095 persons for up to 21 years each, and found that their cancer rates were no higher than those of the general population. Breathe a sigh of relief, and don't believe the fear-mongers who say cellphones are risky.

What about studies which do show a correlation between cancer and cellphone use? There's a dirty little secret in science called publication bias. In a nutshell, it's precisely the stories which defy common thinking that seem most newsworthy, get the most press, and get published. In cases where there's a lot of public interest and attention, it's a good policy to disregard studies with small sample sizes, since there are probably 20 unpublished small studies with null results for every 1 study with a stunning effect that's significant at the 5% level.
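You can watch publication bias in action with a toy simulation: run many small studies of an effect that doesn't exist, and count how many clear the 5% significance bar by luck alone. This sketch uses a simple two-sided z-test on two groups drawn from the same distribution; all the numbers here are illustrative:

```python
import random

# Toy publication-bias demo: simulate many small studies of a NULL effect
# and count how many look "significant" at the 5% level purely by chance.
random.seed(1)

def null_study_significant(n=50):
    """One study comparing two groups drawn from the SAME distribution."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # z-test on the difference of means (variance known to be 1 per sample)
    z = (sum(a) / n - sum(b) / n) / (2 / n) ** 0.5
    return abs(z) > 1.96  # two-sided 5% threshold

trials = 2000
false_alarms = sum(null_study_significant() for _ in range(trials))
print(f"{false_alarms / trials:.1%} of null studies look significant")  # ~5%
```

Publish only those lucky 5% and you've manufactured a scare out of pure noise.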

Conclusions

I don't know that much about bees, but cellphones are safe for humans, provided the user's attention isn't needed elsewhere and that having a cellphone doesn't over-stress them. It's not totally outrageous to guess that bees might be confused by cellphones, since the Earth's magnetic field is only about 0.3 Gauss. I'm not an expert on bee navigation, but it shouldn't be too hard to experimentally verify the connection between active cellphones and bee deaths. In the meantime, color me skeptical, especially considering that the article repeats loony fears.