
March 30, 2010


Comments


I agree with you, but in my opinion the big problem lies in the lack of communication between statisticians and biologists. And the communication is faulty on both sides, obviously.

I had to smile a little when I read your sentence saying: "But more often than not, these "arcane" issues (which are actually part of any statistical training) go ignored in scientific journals."
Well, try asking 100 biologists what a Bayesian method is and see how many can give you even a vague definition. Not many, I can tell you. And most people are not interested enough (or are too scared) to go and read something about it.

"Authors abusing P-values to conflate statistical significance with practical significance. A for example, a drug may uncritically be described as "significantly" reducing the risk of some outcome, but the the actual scale of the statistically significant difference is so small that is has no real clinical implication."

This reads like the issue is simply a misunderstanding of jargon. "Significance" means statistical significance to the academic audience.

The problem is more fundamental: there is a tendency to focus on hypothesis testing rather than parameter estimation.
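To make that concrete, here is a minimal R sketch (the data, the sample size, and the effect size are all invented for illustration): with a large enough sample, a clinically trivial difference produces a "significant" p-value, while the parameter estimate and its confidence interval make the negligible size of the effect plain.

    # Hypothetical illustration: a tiny true effect becomes "significant" at large n
    set.seed(42)
    n <- 50000
    control   <- rnorm(n, mean = 10.00, sd = 2)  # simulated outcome, control arm
    treatment <- rnorm(n, mean = 10.05, sd = 2)  # true difference of only 0.05 units

    tt <- t.test(treatment, control)
    tt$p.value    # tiny: "statistically significant"
    tt$estimate   # group means differ by about 0.05: practically negligible
    tt$conf.int   # the interval shows how small the effect actually is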

Regarding multiple comparisons, many corrections could be performed by the reader from the uncorrected p-values, so long as the tests reported are all the tests that were conducted. If you want to be conservative, the Bonferroni correction is simple: just count up the number of p-values and multiply each one by that count.
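As a quick R sketch (the p-values here are made up), that manual rule matches what the built-in p.adjust() does with method = "bonferroni", except that adjusted values are capped at 1:

    # Hypothetical uncorrected p-values from five reported tests
    p <- c(0.004, 0.03, 0.19, 0.41, 0.012)

    # Manual Bonferroni: multiply each p-value by the number of tests, cap at 1
    p_manual <- pmin(p * length(p), 1)

    # Built-in equivalent
    p_builtin <- p.adjust(p, method = "bonferroni")

    all.equal(p_manual, p_builtin)  # TRUE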

The co-operation of peer reviewers and editorial boards may be disturbing, but it is understandable. Most of the reviewers and editors will have studied statistics many years ago and forgotten more than they remember. The result: stick with what was done before, even if it could actually be wrong. The status quo is self-reinforcing.

I like this account from "The Cult of Statistical Significance" p.112:

We asked William Kruskal a couple of years before his death, "Why did significance testing get so badly mixed up, even in the hands of professional statisticians? ..." "Well," replied Kruskal, smiling sadly, "I guess it's a cheap way to get marketable results."

Hi David,
Good post on an important subject.

Just one small point: notice that your sentence:

"... the "false discovery rate" may be higher than we think"

is (probably) correct,

But my guess is that the more probable scenario (and the type of error people are actually making) is that they think the "familywise error" (FWE) rate in the article is about 5%, when in fact it is not. That is, each time people see a p-value < .05, they may think it can be interpreted the way they would interpret a single p-value.

While the FDR of the article might be kept at q < .05, people could (too easily) misinterpret that as meaning the article's FWE is less than .05.

I am not sure this was clear to anyone not familiar with the subject, but I hope at least that I was able to raise some questions for people to go and find the answers to :)
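As a small R sketch of the difference (the p-values are invented): the Benjamini-Hochberg adjustment keeps the FDR at q < .05, while the Bonferroni adjustment, which controls the FWE, flags fewer of the very same tests:

    # Hypothetical p-values from ten tests reported in one article
    p <- c(0.001, 0.008, 0.012, 0.025, 0.041, 0.09, 0.22, 0.35, 0.60, 0.85)

    # FDR control (Benjamini-Hochberg): limits the expected share of false discoveries
    sum(p.adjust(p, method = "BH") < 0.05)          # 3 tests pass at q < .05

    # FWE control (Bonferroni): limits the chance of even one false positive
    sum(p.adjust(p, method = "bonferroni") < 0.05)  # only 1 test passes: stricter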

Learned a lot.

The comments to this entry are closed.

