MapReduce is a powerful programming framework for efficiently processing very large amounts of data stored in the Hadoop distributed filesystem. But while several programming frameworks for Hadoop exist, few are tuned to the needs of data analysts who typically work in the R environment as opposed to general-purpose languages like Java.
That's why the dev team at Revolution Analytics created the RHadoop project: to give R programmers powerful open-source tools for analyzing data stored in Hadoop. RHadoop provides a new R package called rmr, whose goals are:
- To provide map-reduce programmers the easiest, most productive, and most elegant way to write map-reduce jobs. Programs written using the rmr package may need one to two orders of magnitude less code than their Java equivalents, while being written in a readable, reusable and extensible language.
- To give R programmers access to the map-reduce programming paradigm, and a way to work on big data sets that is natural for data analysts working in R.
Together with its companion packages rhdfs and rhbase (for working with HDFS and HBase datastores, respectively, from R), the rmr package gives data analysts access to massive, fault-tolerant parallelism without requiring them to master distributed programming. By providing an abstraction layer on top of the Hadoop implementation details, rmr lets the R programmer focus on analyzing very large data sets.
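For instance, browsing HDFS from R with rhdfs might look like the following minimal sketch. It assumes rhdfs is installed, a Hadoop installation is available, and the HADOOP_CMD environment variable points at your hadoop binary (the path shown is an assumption); function names may vary across rhdfs versions.

```r
# Sketch only: requires a Hadoop installation and the rhdfs package.
Sys.setenv(HADOOP_CMD = "/usr/bin/hadoop")  # path to hadoop is an assumption
library(rhdfs)

hdfs.init()    # initialize the connection to HDFS using the Hadoop config
hdfs.ls("/")   # list the files at the HDFS root
```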
If you want to get started with MapReduce programming in R, this tutorial on rmr shows simple equivalents to the R functions lapply and tapply in map-reduce form. It also gives some simplified but practical examples of linear and logistic regression and k-means clustering via map-reduce. For more advanced map-reduce programmers, these pages on efficient rmr techniques and writing composable map-reduce jobs will also be of interest.
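As a taste of what the tutorial covers, here is a minimal sketch of a map-reduce equivalent of lapply, squaring a small vector of numbers. It assumes a working Hadoop cluster with the rmr package configured; exact function signatures vary between rmr releases, so treat this as an outline rather than a definitive implementation.

```r
# Sketch only: requires a configured Hadoop cluster and the rmr package;
# signatures may differ between rmr versions.
library(rmr)

small.ints = to.dfs(1:10)    # write a small vector to HDFS

# The map function receives (key, value) pairs and emits new ones; here each
# value is emitted keyed by itself, squared. No reduce phase is needed.
squares = mapreduce(
  input = small.ints,
  map = function(k, v) keyval(v, v^2))

from.dfs(squares)            # read the results back into the R session
```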
The rmr package is available for download from the GitHub repository under the open-source Apache license, and we encourage other Hadoop developers to get involved with the RHadoop project.
Note: As an introduction to RHadoop, project lead Antonio Piccolboni will join Revolution Analytics CTO David Champagne for a webinar on Wednesday, September 21. Register here for a live introduction to the rmr package and how to use it to analyze big data sets within the map-reduce framework.
GitHub: Revolution Analytics RHadoop Project
Learn more about Revolution R Enterprise and Hadoop for Big Data Analytics.
This is quite enticing!
Is it compatible with alternative DFS implementations?
(I guess my question is about rhdfs in particular)
My team uses the DataStax Brisk distribution of Hadoop+Cassandra, which uses a Cassandra CFS implementation of HDFS.
This alternate HDFS implementation makes it tricky to access the DFS from client programs (such as Sqoop, a SQL<->Hadoop bridge)
To connect to a CFS DFS, we configure our client apps like so (in sqoop-site.xml or similar):
fs.cfs.impl = org.apache.cassandra.hadoop.CassandraFileSystem
fs.default.name = cfs://localhost:9160
(XML translated to .properties format for your blog comment system.)
and we drop a jar containing CassandraFileSystem on the client program's CLASSPATH.
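In XML form (which is what sqoop-site.xml actually contains, before the translation to .properties for the comment system), the same settings look roughly like:

```xml
<property>
  <name>fs.cfs.impl</name>
  <value>org.apache.cassandra.hadoop.CassandraFileSystem</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>cfs://localhost:9160</value>
</property>
```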
Is there a way to configure RHadoop like this to access an alternative DFS implementation?
Posted by: Mike B | September 30, 2011 at 04:39
Dear
Thank you very much for this interesting post. I have installed and tried rmr. It works really fine.
However, I am having some trouble running the fast k-means example (https://github.com/RevolutionAnalytics/RHadoop/blob/master/rmr/pkg/tests/kmeans.R) on my own data, stored as a data frame.
For example, if I execute the following lines provided in your code, everything works fine:
input = to.dfs(lapply(1:100, function(i)
  keyval(NULL, cbind(sample(0:2, recsize, replace = T) + rnorm(recsize, sd = .1),
                     sample(0:3, recsize, replace = T) + rnorm(recsize, sd = .1)))))
kmeans(input, 12, iterations = 5, fast = T)
However, when I try to execute the following line on data already stored in HDFS:
kmeans("/tmp/mydata", 12, iterations = 5, fast = T)
I get the error:
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, reduce.on.data.frame = reduce.on.data.frame, :
hadoop streaming failed with error code 256
Could you please give any suggestions that can help me to overcome this problem?
Thank you very much in advance.
Posted by: Clea | July 05, 2012 at 08:41
Dear Mike B.,
unfortunately we are unable to confirm compatibility with Cassandra CFS. If your team experiments with it, please share the results. Thanks
Dear Clea,
you don't say what's in mydata, neither which columns the data frame had nor how it was saved, so I am going to guess that you need to convert it to the format that works, which is the one you report. This is just an example; it has no ambition to work as a library function that can read from a wide range of input sources, and it works fine as such. As far as debugging goes, please follow the guidelines in https://github.com/RevolutionAnalytics/RHadoop/wiki/Debugging-rmr-programs Among other things, they explain that the console output in distributed mode, which is probably what you shared here, is rarely informative. Thanks
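For example, a conversion along these lines should produce the format the example expects. This is just a sketch: mydata and its columns are made up here, so adapt it to your actual data frame.

```r
# Sketch only: 'mydata' and its columns are hypothetical placeholders.
library(rmr)

mydata = data.frame(x = rnorm(100), y = rnorm(100))

# The kmeans example expects keyval pairs whose values are numeric matrices,
# so convert the data frame to a matrix before writing it to HDFS:
input = to.dfs(list(keyval(NULL, as.matrix(mydata))))
kmeans(input, 12, iterations = 5, fast = T)
```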
Posted by: Antonio Piccolboni | July 05, 2012 at 15:05