## July 17, 2009


You probably meant to have "1:" before the 1e6, like this:

```r
for (i in 1:1e6) do.stuff(i)
```

and it is "only" 4MB, since 1:1e6 creates an int[] rather than a double (object.size(1:1e6) returns 4000040 bytes on my machine).

You can, of course, rewrite it as a while() loop without the allocation:

```r
i <- 1L; while (i <= 1e6L) { do.stuff(i); i <- i + 1L }
```

but the control flow statements in R are pitiful, so it is good to see you expanding the language. And of course your approach extends to parallel processing.
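As a hedged sketch of the allocation-free approach (assuming the foreach and iterators packages are installed), icount() from the iterators package yields indices one at a time, so the full index vector is never materialized:

```r
# Looping without allocating the index vector: icount() produces
# 1, 2, 3, ... lazily, one value per iteration.
library(foreach)
library(iterators)

squares <- foreach(i = icount(5), .combine = c) %do% i^2
squares
# [1]  1  4  9 16 25
```

Substituting icount(1e6) covers the case from the post; memory use stays flat because only the current index exists at any time.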

Aha, good points, thanks Allan. I've made corrections to the original article.

```r
i = 1
while (i <= 1e6) { do.stuff(i); i = i + 1 }
```

Hi,

I executed the following script to try the foreach package, but encountered an error:

```r
> library(foreach)
> foreach(i=1:3) %do% sqrt(i)
```

The error message means "subscript out of bounds" in Japanese. My environment is as follows:
```
> sessionInfo()
R version 2.9.1 (2009-06-26)
i386-apple-darwin9.7.0

locale:
ja_JP.UTF-8/ja_JP.UTF-8/C/C/ja_JP.UTF-8/ja_JP.UTF-8

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] foreach_1.2.1 codetools_0.2-2 iterators_1.0.1

loaded via a namespace (and not attached):
[1] tcltk_2.9.1 tools_2.9.1
```

So I changed LANG as follows:

```
/Users/syou6162% LANG=C
/Users/syou6162% echo $LANG
C
```

Then I restarted R and executed the same script. This time, it went well.

```
/Users/syou6162% R

R version 2.9.1 (2009-06-26)
Copyright (C) 2009 The R Foundation for Statistical Computing
ISBN 3-900051-07-0

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> library(foreach)
foreach: simple, scalable parallel programming from REvolution Computing
Use REvolution R for scalability, fault tolerance and more.
http://www.revolution-computing.com
> foreach(i=1:3) %do% sqrt(i)
[[1]]
[1] 1

[[2]]
[1] 1.414214

[[3]]
[1] 1.732051
```
So I think the foreach package's behavior depends on the locale. Could you investigate the cause of this?

Thanks for letting us know about that, I agree it looks like a problem with locales. I'll have the developers take a look and get in touch with you directly.

Actually, a standard 'while' loop in R also does not consume RAM, yet it is about a factor of 300 (three hundred) faster than 'foreach' with icount.
'foreach' with icount is about a factor of 2000 (two thousand) slower than a standard for loop, so the moderate RAM gain comes at a very expensive loss of speed, too expensive to be healed by parallelization.
One of the fastest AND most RAM-efficient ways of looping is chunked looping, as facilitated by the function 'chunk' in package 'bit': the same speed as 'for', but a factor of 1000 less RAM.
The examples below also show that, for example, calculating a sum with chunked looping can be up to a factor of 60,000 (sixty thousand) faster than a 'foreach' implementation employing '.combine="+"', while still saving a factor of 1000 in RAM compared to a simple call to 'sum'.

Cheers
J.O.

```
> require(bit)
> require(foreach)
> m <- 10000
> k <- 1000
> n <- m*k
> cat("Four ways to loop from 1 to n. Slowest foreach to fastest chunk is 1700:1 on a dual core notebook with 3GB RAM\n")
Four ways to loop from 1 to n. Slowest foreach to fastest chunk is 1700:1 on a dual core notebook with 3GB RAM
> z <- 0L; k*system.time({it <- icount(m); foreach (i = it) %do% { z <- i; NULL }})
User System verstrichen
6180 0 6320
> z <- 0L; system.time({i <- 0L; while (i < n) { i <- i + 1L; z <- i }})
User System verstrichen
20.25 0.00 20.78
> z <- 0L; system.time(for (i in 1:n) z <- i)
User System verstrichen
3.30 0.02 3.40
> z <- 0L; n <- m*k; system.time(for (ch in chunk(1, n, by=m)){for (i in ch[1]:ch[2]) z <- i})
User System verstrichen
3.14 0.00 3.35
> cat("Seven ways to calculate sum(1:n). Slowest foreach to fastest chunk is 61000:1 on a dual core notebook with 3GB RAM\n")
Seven ways to calculate sum(1:n). Slowest foreach to fastest chunk is 61000:1 on a dual core notebook with 3GB RAM
> k*system.time({it <- icount(m); foreach (i = it, .combine="+") %do% { i }})
User System verstrichen
9460 0 9780
> z <- 0; k*system.time({it <- icount(m); foreach (i = it) %do% { z <- z + i; NULL }})
User System verstrichen
6280 0 6410
> z <- 0; system.time({i <- 0L; while (i < n) { i <- i + 1L; z <- z + i }})
User System verstrichen
27.06 0.00 28.40
> z <- 0; system.time(for (i in 1:n) z <- z + i)
User System verstrichen
9.47 0.02 9.62
> system.time(sum(as.double(1:n)))
User System verstrichen
0.13 0.01 0.15
> z <- 0; n <- m*k; system.time(for (ch in chunk(1, n, by=m)){for (i in ch[1]:ch[2]) z <- z + i})
User System verstrichen
8.92 0.00 9.42
> z <- 0; n <- m*k; system.time(for (ch in chunk(1, n, by=m)){z <- z + sum(as.double(ch[1]:ch[2]))})
User System verstrichen
0.14 0.01 0.16
```

Jens, thanks for the info about the "bit" package, but I don't think your timings make a lot of sense, as the body of each of your loops is trivial: you're really only measuring the overhead of the different kinds of loops, which in any real application (simulations, etc.) is a trivial component of the overall execution time.

The real advantage of using foreach comes in the parallelization, where any overhead in the loop is far outweighed by running iterations in parallel (provided they do more than trivial amounts of calculation). In particular, if you're writing code for others to use (say, in a package), it allows OTHERS to run your code in parallel, too. Even if you don't have a parallel system to run on, if you use foreach/%dopar%, someone using your code can register multicore or NetWorkSpaces as the parallel backend and get significant speedups. (doMC, the multicore backend, is available on CRAN for all platforms except Windows; doNWS is included in REvolution R Enterprise and is available for all platforms.)
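As a minimal sketch of that point (backend package names as mentioned above; registering a backend is optional), the very same %dopar% loop runs sequentially when no backend is registered and in parallel when one is:

```r
library(foreach)

# With no backend registered, %dopar% falls back to sequential execution
# (with a warning). Registering one, e.g. doMC on Mac/Linux, parallelizes
# the identical code without changing it:
# library(doMC); registerDoMC(cores = 2)

results <- foreach(i = 1:4) %dopar% {
  sqrt(i)  # stands in for a non-trivial per-iteration computation
}
```

This is the portability argument: the loop author writes %dopar% once, and each user picks whatever backend their system supports.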
