
July 03, 2012



I read it more as, "They found a trace of the God particle," but they did not nail it down. Somewhat of a difference, but nonetheless, they have been making steady improvements over the past few years.

Which is good, considering the money they have spent: in the billions.

I really hope that, with all those billions of dollars and cutting-edge technologies, they don't use an outdated/backward statistical approach like null-hypothesis testing, which is what I suspect when I hear them talking about "sigma" levels...
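For what it's worth, the "sigma" levels physicists quote correspond directly to tail probabilities of a standard normal distribution; the famous 5-sigma discovery threshold is a one-sided tail probability of roughly 3 in 10 million. A minimal sketch of that conversion (assuming the one-sided convention used in particle physics):

```python
import math

def sigma_to_p(z):
    """One-sided tail probability of a standard normal beyond z sigma."""
    return 0.5 * math.erfc(z / math.sqrt(2))

for z in (3, 5):
    print(f"{z} sigma -> one-sided p = {sigma_to_p(z):.2e}")
# 3 sigma ("evidence") is about p = 1.3e-3; 5 sigma ("discovery") about p = 2.9e-7
```

So the sigma language is just null-hypothesis significance testing restated on the z-scale, which is exactly the commenter's point.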

This discovery assumes that the measurement errors are distributed normally, right? What are the chances that this is not the case (dependent or unbounded errors, etc.)?

Let's assume, shall we, that these people are smart enough to know what kind of statistical methods they should be using.

Btw this:

"what you need is a HUGE amount of data"

reminded me of this

"In some ways I think that scientists have misled themselves into thinking that if you collect enormous amounts of data you are bound to get the right answer. You are not bound to get the right answer unless you are enormously smart. You can narrow down your questions; but enormous sets of data often consist of enormous numbers of small sets of data, none of which by themselves are enough to solve the thing you are interested in, and they fit together in some complicated way."

Brad Efron (2010), Significance magazine

