
Overhead of future_lapply relative to parLapply/mclapply, etc. #68

Open
traversc opened this issue Nov 13, 2020 · 4 comments

@traversc

I have a number of tasks that look like: lapply(long_list, fast_function) and I'd like to get away from using mclapply (for reasons you've talked about before).

However, in my benchmarks I see that future_lapply has a larger overhead compared to parLapply/mclapply.

Are there parameters I can tune to improve the performance on these types of tasks?

An example:

library(dplyr)
library(parallel)
library(future.apply)
library(microbenchmark)
plan(multisession(workers=4))
cl <- parallel::makeCluster(4)

v <- paste0(paste0("gene", 1:100), "*", 1:3)
v <- sample(v, 10000, replace=T)

parL <- function(v) {
  parallel::clusterExport(cl, varlist = "%>%")
  v <- parallel::parLapply(cl, v, function(.x) {
    gsub("\\*$", "", .x) %>% gsub("\\*.+$", "", .) %>% unique %>% 
      paste0(collapse = ",")
  })
}

serial <- function(v) {
  v <- lapply(v, function(.x) {
    gsub("\\*$", "", .x) %>% gsub("\\*.+$", "", .) %>% unique %>% 
      paste0(collapse = ",")
  })
}

mcl <- function(v) {
  v <- mclapply(v, function(.x) {
    gsub("\\*$", "", .x) %>% gsub("\\*.+$", "", .) %>% unique %>% 
      paste0(collapse = ",")
  }, mc.cores=4)
}


fut <- function(v) {
  v <- future_lapply(v, function(.x) {
    gsub("\\*$", "", .x) %>% gsub("\\*.+$", "", .) %>% unique %>% 
      paste0(collapse = ",")
  })
}

microbenchmark(parL = parL(v), mcl = mcl(v), serial = serial(v), fut = fut(v), times = 5, setup=gc())

Unit: milliseconds
   expr       min        lq      mean    median        uq       max neval cld
   parL  529.5245  534.1097  677.4822  640.9563  746.9266  935.8941     5  a 
    mcl  445.8535  451.9500  464.9154  459.4391  474.9048  492.4295     5  a 
 serial 1339.9738 1451.7585 1467.4781 1461.9080 1517.0687 1566.6813     5   b
    fut 1059.6930 1060.1854 1342.6222 1064.8015 1456.4210 2072.0099     5   b
@HenrikBengtsson
Collaborator

There's a fair bit of overhead from the capturing and relaying of conditions, e.g. messages and warnings.

When using the low-level Future API, these can be disabled by:

f <- future(..., conditions = NULL)

where the default is conditions = "condition", which means all types of conditions are captured.
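
For example, a minimal sketch of the low-level API (assuming a multisession plan, and that dropping conditions behaves as described above):

```r
library(future)
plan(multisession, workers = 2)

## With conditions = NULL, messages and warnings produced in the worker
## are not captured, and therefore not re-signaled in the parent session
f <- future({
  message("this message is not relayed to the parent session")
  42L
}, conditions = NULL)

value(f)  ## 42L; the message above is silently dropped
```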

For historical reasons (this made it possible to roll out condition relaying gradually in future.apply, furrr, and doFuture), attempting to do the same there, e.g.

f <- future_lapply(..., future.conditions = NULL)

will end up as future.conditions = "condition". In other words, condition relaying currently cannot be disabled in these map-reduce APIs.

There's also an overhead from capturing and relaying the standard output. That one can be disabled by setting future.stdout = NA.

Now, I've just pushed an update to the develop branch where it's possible to disable the condition relaying mechanism as well. Install that version and try with:

fut2 <- function(v) {
  v <- future_lapply(v, function(.x) {
    gsub("\\*$", "", .x) %>% gsub("\\*.+$", "", .) %>% unique %>% 
      paste0(collapse = ",")
  }, future.stdout = NA, future.conditions = NULL)
}

I think that'll shave off 10-15% of the overhead. You can also specify globals and packages manually via the future.globals and future.packages arguments - but I don't expect any dramatic speedup from that.
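
As a sketch of what that manual specification would look like for this example (fut3 is a hypothetical variant; the worker function's only external dependency is dplyr, which provides %>%):

```r
library(dplyr)
library(future.apply)
plan(multisession, workers = 4)

fut3 <- function(v) {
  future_lapply(v, function(.x) {
    gsub("\\*$", "", .x) %>% gsub("\\*.+$", "", .) %>% unique %>%
      paste0(collapse = ",")
  },
  future.globals = FALSE,     ## skip the automatic search for globals
  future.packages = "dplyr")  ## attach dplyr on the workers, providing %>%
}
```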

There's more optimization that can be done in the future package, so you can expect further improvements in a future (pun intended) release - not dramatic ones, but improvements.

cc/ @DavisVaughan

@HenrikBengtsson
Collaborator

FYI, I've shaved off some internal overhead that involves R expression manipulations in the develop version of future. You can expect that fut()/fut2() in your specific example will perform ~50% faster than before.

@DavisVaughan

@HenrikBengtsson FWIW, you can inject NULL into an expression (or list) with [ (not [[!) and list(NULL):

x <- quote(list(1, 2))
x[[2]] <- NULL
x
#> list(2)

x <- quote(list(1, 2))
x[[2]] <- list(NULL)
x
#> list(list(NULL), 2)

x <- quote(list(1, 2))
x[2] <- NULL
x
#> list(2)

# Aha!
x <- quote(list(1, 2))
x[2] <- list(NULL)
x
#> list(NULL, 2)

Created on 2021-03-15 by the reprex package (v1.0.0)
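
Presumably the same trick carries over to call objects, e.g. (a hypothetical sketch, assuming [<- on a call behaves like it does on the quoted list() calls above):

```r
x <- quote(f(a = 1, b = 2))
x[3] <- list(NULL)  ## single-bracket assignment injects a literal NULL
x
```

If [<- behaves as in the examples above, this prints f(a = 1, b = NULL) rather than dropping the argument, which x[3] <- NULL would do.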

@HenrikBengtsson
Collaborator

Hi. I eventually figured it out - it turned out that for certain types of expressions one has to do some coercion for that to work, cf. https://github.com/HenrikBengtsson/future/blob/5c52ff365fc2efcdb063e4cc98acd52d441437a5/R/000.bquote.R#L109-L116. (Now that I look at it, I can't recall exactly in what situations the is.call() was needed.)
