Just a couple of quick notes about the first day of talks at useR! 2010. It's been a jam-packed schedule -- so many good talks to see and people to meet, I just wish I had more time for it all!
One stand-out for me so far has been Frank Harrell's keynote lecture, Information Allergy, on the dangers of misusing statistics in medicine. (Update: video here.) You know a talk is thought-provoking when you're still mulling over its consequences in free moments the day after. It's worthy of an entire blog post on its own.
I've also been excited to see the number of real-life applications of R presented at the conference. In one session alone, I saw how R is used to precisely locate earthquakes (by comparing the actual arrival times of signals in seismograph data to their predicted arrival times); how it's used to measure and report on water quality in Australia; and even how it's used to estimate the amount of greenhouse gas escaping from landfills, based on LIDAR measurement data. Really fascinating stuff.
The launch party for inside-R.org last night was a lot of fun too: getting about 150 R users together to drink and chat was a great way to learn new things and meet some wonderful people. Thanks to everyone who came along. (If you're at JSM in Vancouver, we'll be hosting another social event on Tuesday, August 3.)
Overall, it's been a really outstanding conference so far: smooth organization, great people, interesting talks, and a palpable sense of excitement about R. Anyway, I have to run now to give my talk. I'll write more when I get a free moment.
David,
it was nice talking to you after the talk you gave on R in commercial environments.
One observation I've had over the past few years when bringing up R (or any other new software, open-source or not) is that big companies love the status quo. Most of the advantages of OSS for personal computers, or even for small-to-mid-size companies, don't really matter to them. For them, change is a big problem: the loss of productivity, and the potential loss of revenue if something new fails, can cost millions or billions. This level of risk doesn't exist in small-scale settings and is not understood by a lot of open-source evangelists. I am saying this because I was one of them. In commercial situations, suggesting an alternative that can do much, much more than the status quo (e.g. R vs. S*) backfires if there is even one tiny bit of the process that is not just as good. What I am trying to say is that all the wealth of analysis R can provide becomes of no concern if it can't do some really basic things (e.g. out-of-memory calculations) easily. Yes, there are packages out there, and one can do all sorts of wizardry to get around this problem in R (something like the sketch below), but that just creates a steep learning curve that people with short deadlines will not invest in.
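To give a concrete sense of what I mean by "wizardry", here is a minimal sketch of one such workaround, assuming one reaches for the bigmemory package (just one of several options; the file names and sizes here are made up for illustration):

    # Keep a large matrix on disk instead of in RAM, via the bigmemory package.
    library(bigmemory)

    # Create a file-backed matrix: the data live in "data.bin" on disk,
    # and only a small descriptor object is held in memory.
    x <- filebacked.big.matrix(nrow = 1e7, ncol = 5, type = "double",
                               backingfile = "data.bin",
                               descriptorfile = "data.desc")

    # Fill one column and summarize it; subscripting pulls just that
    # column into RAM as an ordinary numeric vector.
    x[, 1] <- rnorm(1e7)
    mean(x[, 1])

It works, but it is exactly the kind of extra layer that an analyst with a short deadline, coming from the status quo, will not sit down and learn.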
I was hoping to hear from Luke that some out-of-memory support will make it into base R soon.
Posted by: nikhil | July 22, 2010 at 21:00