Implement reading of files in parallel chunks (most universally via future, e.g. future.apply::future_lapply()). If there are 100s or 1000s of files, reading could be distributed over multiple cores to speed up the process. Key points (let's discuss):
Dispatching a single file per core at a time has too much scheduling overhead relative to the time it takes to read one file.
Chunking could be done by splitting the list of files into groups according to the number of cores the user has registered.
Each chunk of files is read on its own worker and the results are recombined at the end (see the sketch below).
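A rough sketch of the chunked approach, assuming a hypothetical per-file reader `read_one_file()` and a character vector of paths `files` (neither is from this repo); worker count and backend are placeholders too:

```r
library(future.apply)

read_files_in_chunks <- function(files, read_one_file, workers = 4L) {
  future::plan(future::multisession, workers = workers)
  on.exit(future::plan(future::sequential), add = TRUE)

  ## Split the file list into one group per worker so each worker handles
  ## many files per dispatch rather than one file at a time.
  chunk_id <- sort(rep_len(seq_len(workers), length(files)))
  chunks <- split(files, chunk_id)

  ## Read each chunk on its own worker; the inner lapply() keeps the
  ## per-file overhead on the worker instead of the main R session.
  per_chunk <- future_lapply(chunks, function(chunk) lapply(chunk, read_one_file))

  ## Recombine the per-chunk results into a single flat list.
  unlist(per_chunk, recursive = FALSE, use.names = FALSE)
}
```

Note that future_lapply() already partitions its input across workers internally (see its future.scheduling / future.chunk.size arguments), so the explicit split above mainly illustrates the chunking idea rather than being strictly required.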
FWIW the current progress bar implementation (pbapply::pblapply) does support parallel processing. But we could equally switch to progressr + future_lapply.
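For comparison, a minimal sketch of the progressr + future.apply alternative; `read_one_file()` and `files` are again placeholders, not functions from this repository:

```r
library(future.apply)
library(progressr)

future::plan(future::multisession, workers = 4L)

read_with_progress <- function(files, read_one_file) {
  with_progress({
    p <- progressor(along = files)   # one progress step per file
    future_lapply(files, function(f) {
      res <- read_one_file(f)
      p()                            # progress signals are relayed from the workers
      res
    })
  })
}
```

The upside of progressr here is that the same progress reporting works unchanged whether the plan is sequential or parallel.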