Earlier this year, data scientist Ryan Rosario gave a talk on parallel computing with R to the Los Angeles R User Group, and he recently made the slides from the talk available online. They're a great resource for anyone looking to make use of multi-processor systems or a Hadoop-based architecture to speed up computations with big data. Ryan's talk was divided into three parts:

**Explicit parallelism** makes the R programmer responsible for dividing the problem into independent chunks (to be run in parallel), and also for aggregating the results from each chunk. It's especially suited to "embarrassingly parallel" problems like large-scale simulations and by-group analyses. Ryan explains how to use the parallel package in R to perform explicit parallelism, using random cross-subset validation (to train a spam-detection algorithm) as an example.

**Implicit parallelism** is easier for programmers than explicit parallelism because, as Ryan writes, "most of the messy legwork in setting up the system and distributing data is abstracted away." In this section, Ryan shows how to use the mclapply function from the multicore package. It works just like the regular lapply function to iterate across the elements of a list, but the iterations automatically run in parallel to speed up the computations. In the Appendix at the end of the slides, Ryan also shows how to use Revolution Analytics' foreach package with doMC for parallel programming, with some neat examples of bootstrapping and a parallel implementation of the quicksort algorithm.

**Map-Reduce** is a somewhat complex but very powerful paradigm for processing large data stores in parallel. It's best known as the programming framework for Hadoop-based systems, and Ryan shows how to use Revolution Analytics' RHadoop project to implement map-reduce for Hadoop using R. An implementation of K-means clustering with Hadoop is given as an example.
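To give a flavor of the explicit style, here's a minimal sketch (not taken from Ryan's slides; the by-group mean is a placeholder for a real per-chunk computation) in which the programmer does all three jobs by hand: divide, compute in parallel, aggregate.

```r
library(parallel)

# Explicit parallelism: the programmer splits the work, runs the chunks,
# and aggregates the results. Here, a by-group mean, one group per task.
df <- data.frame(g = rep(letters[1:4], each = 250), x = rnorm(1000))
chunks <- split(df, df$g)                            # 1. divide into independent chunks

cl <- makeCluster(2)                                 # 2. start a small cluster of workers
res <- parLapply(cl, chunks, function(d) mean(d$x))  # 3. run the chunks in parallel
stopCluster(cl)

means <- unlist(res)                                 # 4. aggregate the per-chunk results
```

The setup and teardown (makeCluster/stopCluster) is exactly the "messy legwork" that the implicit approaches below abstract away.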
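The implicit style can be sketched in a couple of lines: mclapply is a drop-in replacement for lapply (the function originated in the multicore package and is now part of base R's parallel package). The toy squaring function here is just a stand-in for an expensive computation.

```r
library(parallel)  # mclapply came from multicore and now ships in base R

# Same interface as lapply; only the scheduling differs.
serial <- lapply(1:8, function(i) i^2)

# Iterations are forked onto mc.cores workers (Unix-alikes only;
# on Windows, mc.cores must be 1).
parallel <- mclapply(1:8, function(i) i^2, mc.cores = 2)

identical(serial, parallel)  # TRUE: same results, potentially less wall time
```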
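In the same spirit as the bootstrapping examples in Ryan's appendix, a foreach loop with the doMC backend looks like an ordinary loop, with %dopar% marking the iterations to run in parallel. (The data and the bootstrapped statistic here are placeholders, not the example from the slides.)

```r
library(foreach)
library(doMC)          # multicore backend for foreach (Unix-alikes)
registerDoMC(cores = 2)

# Bootstrap the median: each iteration resamples the data and
# recomputes the statistic; .combine = c collects them into a vector.
x <- rnorm(500)
boots <- foreach(i = 1:200, .combine = c) %dopar% {
  median(sample(x, replace = TRUE))
}

quantile(boots, c(0.025, 0.975))  # a rough bootstrap confidence interval
```

Swapping registerDoMC for another backend (e.g. doParallel) changes where the iterations run without touching the loop itself.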
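The map-reduce paradigm itself can be sketched in plain R with the classic word-count example. The real thing, via RHadoop's rmr2 package, distributes these phases across a Hadoop cluster; this in-memory version only shows the shape of the computation.

```r
lines <- c("to be or not to be", "that is the question")

# Map: emit a (word, 1) pair for every word in every line
words <- unlist(lapply(lines, function(l) strsplit(l, " ")[[1]]))

# Shuffle: group the emitted values by key (the word)
grouped <- split(rep(1, length(words)), words)

# Reduce: sum the counts within each key
counts <- vapply(grouped, sum, numeric(1))
counts["to"]  # 2
```

In Hadoop, the map and reduce functions run on many machines at once, and the shuffle moves data between them; the programmer writes only the two functions.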

Many thanks to Ryan for sharing these slides, which you can find at the link below.

ByteMining: Parallelization in R, Revisited
