A recent post by Scientific American writer and blogger John Horgan got me thinking about Bayesian statistics again.
My favorite explanation of Bayesian statistics was Nate Silver's in The Signal and the Noise. The basic approach involves incorporating prior estimates of probability into new measures of probability. The opposing approach, which does not rely on prior knowledge, is termed "frequentist" statistics and is exemplified by Fisher's standard significance test with p=0.05 (meaning that, if chance alone were at work, a result at least this extreme would occur in only 5 of every 100 such tests).
Horgan uses the standard example of cancer screening to illustrate the importance and power of Bayesian thinking. An astute commenter points out, though, that the real power of Bayesian thinking emerges in an iterative process: test, update the probabilities, and test again, so that each new test incorporates what the previous tests have already taught us. The sketch below illustrates the idea.
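To make that iterative idea concrete, here is a minimal Python sketch of Bayesian updating on the cancer-screening example. The specific numbers (1% prevalence, 90% sensitivity, 9% false-positive rate) are my own illustrative assumptions, not figures from Horgan's post; the point is only that each test's posterior becomes the next test's prior.

```python
def update(prior, sensitivity, false_positive_rate):
    """Return P(disease | positive test) via Bayes' theorem."""
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

prior = 0.01           # assumed prevalence of the disease (illustrative)
sensitivity = 0.90     # assumed P(positive test | disease)
false_positive = 0.09  # assumed P(positive test | no disease)

# Each positive result's posterior becomes the prior for the next test,
# so every round incorporates the learning from the previous rounds.
for test in range(1, 4):
    prior = update(prior, sensitivity, false_positive)
    print(f"After positive test {test}: P(disease) = {prior:.3f}")
```

With these assumed numbers, a single positive test only raises the probability of disease to about 9%, but a second and third positive result push it past 50% and then 90%. That is the "test, update, test again" process the commenter describes.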
Silver offered a similar example in his book, but a review in the New Yorker points out that Silver got it wrong. Silver applies Bayesian statistics to the probability that global warming is occurring. But the prior probability there is merely estimated, and Bayesian approaches only improve on standard statistics when the prior probabilities are well known. So while Silver does present a rational means of updating beliefs, the original belief is not based on statistical data, and the resulting analysis cannot be called statistically valid.
Both the New Yorker review and Horgan's post highlight the inherent power of confirmation bias to trump any statistical test, even a Bayesian one.