Market researcher Tom Ewing offers some advice that applies equally well to statisticians: be careful when you use the word "significant" in its technical sense. Depending on the audience, it can lead to misunderstandings:
Non-researchers tend to misread “significant” as “important” or simply “big”. Which isn’t the case - a significant result can be trivial or small, it’s just unlikely to be a fluke or coincidence.
Researchers tend to read “significant” as “interesting”. Which isn’t the case either - even big results can be utterly banal, especially if they simply confirm something you could have guessed, or if they repeat information you already have.
For example, suppose we give 1,000 people an IQ test and ask whether there is a significant difference between male and female scores. The mean score for males is 98 and the mean score for females is 100. We use an independent groups t-test and find that the difference is significant at the .001 level. The big question is, "So what?" The difference between 98 and 100 on an IQ test is a very small difference...so small, in fact, that it's not even important.
Then why did the t-statistic come out significant? Because there was a large sample size. When you have a large sample size, very small differences will be detected as significant. This means that you are very sure that the difference is real (i.e., it didn't happen by fluke). It doesn't mean that the difference is large or important. If we had only given the IQ test to 25 people instead of 1,000, the two-point difference between males and females would not have been significant.
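You can see this directly from the formula for the t statistic, since the sample size sits in the denominator of the standard error. The sketch below is illustrative only: the post doesn't give the standard deviation, so I've assumed an SD of 15 (typical for IQ scales) and an even split of 500 per group versus roughly 12 per group for the smaller sample.

```python
import math

def t_statistic(mean_diff, sd, n_per_group):
    """Independent-groups t statistic for two equal-sized groups
    with a common (pooled) standard deviation."""
    standard_error = sd * math.sqrt(2.0 / n_per_group)
    return mean_diff / standard_error

# Assumed numbers: a 2-point difference with SD 15 (not stated in the post)
diff, sd = 2.0, 15.0

t_large = t_statistic(diff, sd, 500)  # 1,000 people, 500 per group
t_small = t_statistic(diff, sd, 12)   # ~25 people, ~12 per group

print(round(t_large, 2))  # about 2.11 -- above the ~1.96 critical value at .05
print(round(t_small, 2))  # about 0.33 -- nowhere near significance
```

The same two-point difference clears the conventional .05 threshold with the big sample and falls far short with the small one; only the sample size changed. (Whether it reaches the .001 level also depends on the SD, which the original example doesn't state.)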
Thanks for the link! I think I ought to point out that the IQ example isn't mine, and I agree it's an odd one to choose, because at the very least it begs an explanation!
My point about researchers was more about their tendency to treat technical significance as the end of the filtering process rather than the beginning.
Posted by: Tom Ewing | May 06, 2009 at 03:41