R is designed as an in-memory application: all of the data you work with must be hosted in the RAM of the machine you're running R on. This design optimizes for performance and flexibility, but it does place constraints on the size of the data you can work with (since it must all fit in RAM). When working with large data sets in R, it's important to understand how R allocates, duplicates and consumes memory. This guide to R memory usage, from Hadley Wickham's forthcoming book Advanced R Programming, is a useful resource for R developers struggling with R's memory usage. It covers how R allocates memory for objects (and how much), the situations in which R makes copies of data (instead of just passing references), and how to track and control the amount of memory R uses.
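As a quick illustration, the base R tools object.size() and tracemem() make allocation and copy-on-modify behavior visible (tracemem() requires an R build with memory profiling enabled, which the standard CRAN binaries have). This is a minimal sketch, not an example from the book:

x <- runif(1e6)        # one million doubles: roughly 8 MB (8 bytes each, plus overhead)
print(object.size(x))  # report how much memory x occupies

y <- x                 # no copy yet: x and y point to the same memory
tracemem(x)            # print a message whenever R duplicates this object
y[1] <- 0              # modifying y forces a duplication (copy-on-modify)
untracemem(x)          # stop tracing
gc()                   # run the garbage collector and report memory usage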
By contrast, Revolution R Enterprise ScaleR works with data out-of-memory: either in a file on disk, or stored in a database or other data repository. By streaming data through memory in chunks rather than loading it all into memory at once, Revolution R Enterprise calculates descriptive statistics, machine learning models, and statistical models on large data sets without being limited by available RAM, and without having to worry about the details of R memory usage. Learn more about Revolution R Enterprise here.
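For comparison, here is a minimal sketch of the out-of-memory style, assuming the RevoScaleR package included with Revolution R Enterprise; the file name and column names are illustrative (drawn from the commonly used airline demo data), not a prescribed workflow:

library(RevoScaleR)
airData <- RxXdfData("airline.xdf")      # a reference to data on disk, not an object in RAM
rxSummary(~ ArrDelay, data = airData)    # descriptive statistics, computed by streaming chunks
fit <- rxLinMod(ArrDelay ~ DayOfWeek, data = airData)  # linear model fit the same way
summary(fit)

Only one chunk of the .xdf file is in memory at any time, so the size of the data set is bounded by disk, not RAM.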
Advanced R Programming by Hadley Wickham: Memory