Does this seem like a wrong conclusion to you? It should - hell, I had trouble even typing it out, since I knew it was so absurd. Of course, blogs catch on and become popular! Sure, some fail, but many do not!
The above example is one of anecdotal evidence. That is, it is a story (or anecdote) that one invokes to try to prove a point. Personally, and as a professional scientist, I absolutely hate this type of argument. Why? Because it doesn't make any sense - it is a failure of logic to deduce a strong conclusion from so little evidence, especially evidence that may be subjective in nature (like my hatred for Background Dominated).
To get a little more political, consider the universal health care debate. (I am not going to take sides in this debate - this is just an example.) I've heard people say, "Universal health care is an absolute failure. I have known people to stand in lines for hours, just waiting to be treated." But then I've also heard things like, "Universal health care is absolutely necessary. Without it, people will not be able to receive the treatment they need because they cannot afford it."
Who is right in this debate? Well, each person could be telling the absolute truth, but the problem is that these people are relying on only ONE story or ONE point of view in their arguments. To get a true representation of the success of universal health care, we have to look at the big picture - how does universal health care help or harm society as a whole? Overall, does it work?
Moving on from this provocative subject, my point is this: in any complex situation (which is what most things are), there are many, many variables that come into play. One way to account for (and in some sense "smooth over") these variables is to do statistical studies. I won't get into statistics here so as to avoid boring you (I hope the word statistics hasn't already scared you away...).
Instead, I will provide you with another example, again based in politics. Consider the 2012 election. There were many, many polls trying to predict who would win the presidency. Some would show Romney as a clear winner, others would show Obama. Many would show that the race was pretty close. The problem with a lot of these polls is exactly how they were conducted. Maybe some of the polls didn't include as many Republicans as Democrats (just as an example). So, each poll had some error associated with it.
However, people like Nate Silver carried out calculations that would in a sense "average" over all the variations in these different polls. There were other things that went into the calculations too of course (read about his calculations here for more info), but the point is that by taking a large set of data and accounting for all the fluctuations and all the variations in the different polls, Silver was able to absolutely nail the 2012 presidential election. He got EVERY state correct.
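Silver's actual model is far more sophisticated than a simple average, but the core idea of smoothing over poll-to-poll fluctuations can be sketched with a few lines of code. This is a minimal illustration, not Silver's method: the poll numbers below are made up, and the only "model" here is weighting each poll's margin by its sample size, since larger polls tend to have smaller random error.

```python
# Hypothetical polls: (candidate A %, candidate B %, sample size).
# These numbers are invented for illustration only.
polls = [
    (50.0, 47.0, 800),
    (48.0, 49.0, 600),
    (51.0, 46.0, 1200),
    (49.0, 48.0, 500),
]

def weighted_margin(polls):
    """Average the A-minus-B margin across polls, weighting each
    poll by its sample size so larger polls count for more."""
    total_n = sum(n for _, _, n in polls)
    return sum((a - b) * n for a, b, n in polls) / total_n

margin = weighted_margin(polls)
print(f"Aggregated margin: {margin:+.2f} points")  # here, about +2.68
```

Notice what the aggregation buys you: one of the invented polls shows candidate B ahead, but taken together the data point clearly toward candidate A. No single poll (no single anecdote) tells you that.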
So, before coming to any strong conclusion about something, one MUST look at as many variables and as many possibilities as possible. Relying on personal stories or subjective viewpoints is simply not enough!
With that (and a nice picture of Silver's predicted electoral map to be compared with the final result), I conclude my lecture ;) Just be skeptical always... especially with anecdotal evidence!
Courtesy: Nate Silver, 538 Blog on the NY Times website