Thursday, May 05, 2005

On the Huddle House Paradox, or, the Perils of Progress in Statistical Analysis

Yesterday, a colleague dropped by my office to ask a question. Usually, when folks do this, it's either about judicial politics, statistics, or movies. I can answer questions on these subjects with (respectively) decreasing levels of accuracy, but increasing degrees of confidence.

Anyway, my colleague had a statistical puzzle. He had some old results, run a couple of years ago on some data, and had just re-run the same estimation on a different computer, using a newer version of the same software, and gotten results that were exactly the same except for one coefficient, which was a different number with the opposite sign. It was a random effects, time series regression model, and everything, even the error component summaries and the model fit statistics, was identical to six, seven, eight decimal places except for that one variable's coefficient (and its standard error).

I couldn't come up with an explanation, since a conventional error, like the data being altered in the transfer between computers or some such, should have changed more than just that one coefficient estimate. My guess was that it had something to do with the updated software or the new computer (blaming the software or hardware being the easiest thing to do), and I recommended he call the software company's help line.

Sure enough, the guy on tech support told him that it was probably due to some change made in subsequent updates to the routine he was using. He didn't say it, but I assume the implication was that the current results were right and the previous results were wrong. This was a bit reassuring, but more than a bit unsettling.

It's unsettling due to what I'm choosing to call "The Huddle House Paradox." Huddle House, as some may know, is a chain of restaurants across the Southeast that is sort of like a combo of Waffle House and Shoney's: they serve diner-type food and are open 24 hours. I believe their current motto is "Always Open, Always Fresh" (there's a corporate headquarters near where I live), but for many years their motto was "Best Food Yet," and that older slogan can still be seen at various locations and at the headquarters. One way of reading "Best Food Yet" is in its likely physical context, a string of fast food eateries scanned in search of a quick bite, in which case the motto assures the searcher that Huddle House will provide better food than anything he or she has seen so far. Another way of understanding it (more natural, given the language used) is chronological: Huddle House's current fare is the best they have yet devised. Taken to its Zeno-esque extreme, this means that Huddle House's food at time T is not as good as its food at time T + 1 (using whatever units of time one wishes, even extremely tiny intervals).

Without knowing the food quality limit (the highest possible quality that Huddle House food could theoretically achieve) or the functional form of its time-dependent improvement (the impact of an increase in T on food quality), beyond the fact that quality is asymptotically increasing in T toward the FQL, a potential customer is in a tough position. For someone seeking to maximize the quality of a meal, the rational strategy is always to wait another unit of T and enjoy whatever increase results. So, with the constant promise of the "best food yet," one should always defer eating at Huddle House.
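To put the trap in notation (the symbols are mine, not anything Huddle House has committed to): write $q(t)$ for food quality at time $t$ and $\bar{q}$ for the FQL, and assume only that $q$ is strictly increasing and approaches $\bar{q}$ asymptotically. Then

\[
q(t) < q(t + \delta) < \bar{q} \qquad \text{for all } t \ge 0 \text{ and } \delta > 0,
\]

so for any candidate mealtime $t^{*}$ there is a strictly better later one, and

\[
\max_{t \ge 0} \, q(t)
\]

has no solution; the limit $\bar{q}$ is approached but never attained, and the quality-maximizing diner never eats.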

Before anyone thinks that this post itself is the peril of statistical analysis, I'd like to point out that I thought of this when I was about 12 years old, although I wouldn't have expressed it in these terms at that time.

You can escape this paradox by adding conditions, like an increasing hunger function, independently changing prices, or satisficing (although, food being an experience good, you can never know where you are on the food quality function without eating it), but taken as is, Huddle House's apparently strong promise is fatal both to their enterprise and to their potential customers' interests.
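To sketch how the first of those escapes works, under assumptions of my own choosing: give the customer a hunger (or waiting) cost $h(t)$ that grows over time, and let the payoff from eating at $t$ be net of that cost. Then

\[
U(t) = q(t) - h(t), \qquad U'(t) = q'(t) - h'(t) = 0 \;\Longrightarrow\; q'(t^{*}) = h'(t^{*}).
\]

Because $q'(t)$ shrinks toward zero as quality approaches the FQL while $h'(t)$ does not, the marginal gain from waiting is eventually outweighed by the marginal pain of hunger, and a finite optimal mealtime $t^{*}$ exists. None of this is in the motto, of course; it's just one way to formalize the rescue.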

My colleague's statistical problem raises another version of the same issue. As successive versions of statistical software correct the estimation procedures of canned routines and introduce more sophisticated versions of familiar models (handling more data issues, or handling them more satisfactorily), one is always advised not to run any particular model until the next version of the software corrects the problems in the last one.

Sure, the improvement function for software quality isn't as smooth as in the Huddle House Paradox, but the basic idea is the same, especially now that updating the software online is increasingly common. You could always avoid the problems inherent in canned estimators by programming your own, but even when I write my own programs for a model, I don't devise the language they're written in or code the Metropolis-Hastings algorithm or whatever else is making my model go. Add in hardware improvements, advances in statistical theory and in the known properties of estimators, and better (and more) data collection, and you get the same problem compounded across several dimensions.

In short, to maximize the correctness of any statistical analysis, it is always advisable to defer doing it.

That's why I've taken this bit of time to share this with you.
