fix number of chunks (JuliaLang#53413)
The manual claims that `a` is split into `nthreads()` chunks, but this is
not true in general. As written, you could get an error if `length(a) <
nthreads()` (since `length(a) ÷ nthreads()` is then zero), or more than
`nthreads()` chunks if `nthreads()` is smaller than `length(a)` but does
not divide it evenly. With `cld`, on the other hand, you always get at
most `nthreads()` chunks.
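The arithmetic behind this can be checked directly in the REPL. As a sketch (the values below are hypothetical, chosen for illustration rather than taken from the diff), take a vector of length 10 on 4 threads:

```julia-repl
julia> length(collect(Iterators.partition(1:10, 10 ÷ 4)))  # old code: 10 ÷ 4 == 2, so 5 chunks for 4 threads
5

julia> length(collect(Iterators.partition(1:10, cld(10, 4))))  # fixed code: cld(10, 4) == 3, so 4 chunks
4
```

And with a vector of length 3 on 4 threads, `3 ÷ 4 == 0`, so `Iterators.partition(a, 0)` throws an `ArgumentError`, while `cld(3, 4) == 1` yields 3 chunks, one per element.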
araujoms authored Mar 2, 2024
1 parent 8bf6a07 commit e3b2462
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions doc/src/manual/multi-threading.md
@@ -227,11 +227,11 @@ julia> sum_multi_bad(1:1_000_000)
Note that the result is not `500000500000` as it should be, and will most likely change each evaluation.

To fix this, buffers that are specific to the task may be used to segment the sum into chunks that are race-free.
-Here `sum_single` is reused, with its own internal buffer `s`. The input vector `a` is split into `nthreads()`
+Here `sum_single` is reused, with its own internal buffer `s`. The input vector `a` is split into at most `nthreads()`
chunks for parallel work. We then use `Threads.@spawn` to create tasks that individually sum each chunk. Finally, we sum the results from each task using `sum_single` again:
```julia-repl
julia> function sum_multi_good(a)
-           chunks = Iterators.partition(a, length(a) ÷ Threads.nthreads())
+           chunks = Iterators.partition(a, cld(length(a), Threads.nthreads()))
tasks = map(chunks) do chunk
Threads.@spawn sum_single(chunk)
end
