From 127de5956ffd709e97d1fdda8fba71541843b4fc Mon Sep 17 00:00:00 2001
From: Peter Andreas Entschev
Date: Tue, 2 Apr 2024 20:23:30 +0200
Subject: [PATCH] Fix broken links in docs (#1329)

It was brought to our attention that some links in our docs are broken; this change fixes those issues.

Closes https://github.com/rapidsai/dask-cuda/issues/1328

Authors:
  - Peter Andreas Entschev (https://github.com/pentschev)
  - Ray Douglass (https://github.com/raydouglass)

Approvers:
  - Lawrence Mitchell (https://github.com/wence-)
  - Ray Douglass (https://github.com/raydouglass)

URL: https://github.com/rapidsai/dask-cuda/pull/1329
---
 ci/release/update-version.sh   | 5 +++++
 docs/source/api.rst            | 3 +++
 docs/source/examples/ucx.rst   | 8 ++++----
 docs/source/explicit_comms.rst | 4 ++--
 docs/source/index.rst          | 2 +-
 docs/source/spilling.rst       | 2 +-
 docs/source/ucx.rst            | 8 ++++----
 7 files changed, 20 insertions(+), 12 deletions(-)

diff --git a/ci/release/update-version.sh b/ci/release/update-version.sh
index 9f40318b2..0d1b8b1a5 100755
--- a/ci/release/update-version.sh
+++ b/ci/release/update-version.sh
@@ -56,3 +56,8 @@ for FILE in .github/workflows/*.yaml; do
   sed_runner "/shared-workflows/ s/@.*/@branch-${NEXT_SHORT_TAG}/g" "${FILE}"
 done
 sed_runner "s/RAPIDS_VERSION_NUMBER=\".*/RAPIDS_VERSION_NUMBER=\"${NEXT_SHORT_TAG}\"/g" ci/build_docs.sh
+
+# Docs referencing source code
+find docs/source/ -type f -name '*.rst' -print0 | while IFS= read -r -d '' filename; do
+  sed_runner "s|/branch-[^/]*/|/branch-${NEXT_SHORT_TAG}/|g" "${filename}"
+done
diff --git a/docs/source/api.rst b/docs/source/api.rst
index b9d9d6dfa..1594594cc 100644
--- a/docs/source/api.rst
+++ b/docs/source/api.rst
@@ -33,3 +33,6 @@ Explicit-comms
 .. currentmodule:: dask_cuda.explicit_comms.comms
 .. autoclass:: CommsContext
     :members:
+
+.. currentmodule:: dask_cuda.explicit_comms.dataframe.shuffle
+.. autofunction:: shuffle
diff --git a/docs/source/examples/ucx.rst b/docs/source/examples/ucx.rst
index 18c569ff1..7a0651173 100644
--- a/docs/source/examples/ucx.rst
+++ b/docs/source/examples/ucx.rst
@@ -2,7 +2,7 @@ Enabling UCX communication
 ==========================
 
 A CUDA cluster using UCX communication can be started automatically with LocalCUDACluster or manually with the ``dask cuda worker`` CLI tool.
-In either case, a ``dask.distributed.Client`` must be created for the worker cluster using the same Dask UCX configuration; see `UCX Integration -- Configuration <../ucx.html#configuration>`_ for details on all available options.
+In either case, a ``dask.distributed.Client`` must be created for the worker cluster using the same Dask UCX configuration; see `UCX Integration -- Configuration <../../ucx/#configuration>`_ for details on all available options.
 
 LocalCUDACluster with Automatic Configuration
 ---------------------------------------------
@@ -29,7 +29,7 @@ To connect a client to a cluster with automatically-configured UCX and an RMM pool:
 
 LocalCUDACluster with Manual Configuration
 ------------------------------------------
-When using LocalCUDACluster with UCX communication and manual configuration, all required UCX configuration is handled through arguments supplied at construction; see `API -- Cluster <../api.html#cluster>`_ for a complete list of these arguments.
+When using LocalCUDACluster with UCX communication and manual configuration, all required UCX configuration is handled through arguments supplied at construction; see `API -- Cluster <../../api/#cluster>`_ for a complete list of these arguments.
 To connect a client to a cluster with all supported transports and an RMM pool:
 
 .. code-block:: python
@@ -148,7 +148,7 @@ We communicate to the scheduler that we will be using UCX with the ``--protocol`
 
 Workers
 ^^^^^^^
-All UCX configuration options have analogous options in ``dask cuda worker``; see `API -- Worker <../api.html#worker>`_ for a complete list of these options.
+All UCX configuration options have analogous options in ``dask cuda worker``; see `API -- Worker <../../api/#worker>`_ for a complete list of these options.
 To start a cluster with all supported transports and an RMM pool:
 
 .. code-block:: bash
@@ -163,7 +163,7 @@ To start a cluster with all supported transports and an RMM pool:
 
 Client
 ^^^^^^
-A client can be configured to use UCX through ``dask_cuda.initialize``, a utility which takes the same UCX configuration arguments as LocalCUDACluster and adds them to the current Dask configuration used when creating it; see `API -- Client initialization <../api.html#client-initialization>`_ for a complete list of arguments.
+A client can be configured to use UCX through ``dask_cuda.initialize``, a utility which takes the same UCX configuration arguments as LocalCUDACluster and adds them to the current Dask configuration used when creating it; see `API -- Client initialization <../../api/#client-initialization>`_ for a complete list of arguments.
 To connect a client to the cluster we have made:
 
 .. code-block:: python
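As a concrete illustration of the client-side pattern the hunks above link to, here is a minimal sketch; the scheduler address is a placeholder, and the ``initialize`` arguments shown are only a subset of those the linked API section lists:

.. code-block:: python

    from dask.distributed import Client
    from dask_cuda.initialize import initialize

    # Mirror the UCX configuration the cluster was started with; the
    # client must use the same Dask UCX settings as the workers.
    initialize(
        enable_tcp_over_ucx=True,
        enable_infiniband=True,
        enable_nvlink=True,
    )

    client = Client("ucx://10.0.0.50:8786")  # placeholder scheduler address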
diff --git a/docs/source/explicit_comms.rst b/docs/source/explicit_comms.rst
index 56ad97758..aecbc1fd9 100644
--- a/docs/source/explicit_comms.rst
+++ b/docs/source/explicit_comms.rst
@@ -5,7 +5,7 @@ Communication and scheduling overhead can be a major bottleneck in Dask/Distribu
 The idea is that Dask/Distributed spawns workers and distributes data as usual while the user can submit tasks on the workers that communicate explicitly.
 
 This makes it possible to bypass Distributed's scheduler and write hand-tuned computation and communication patterns. Currently, Dask-CUDA includes an explicit-comms
-implementation of the Dataframe `shuffle `_ operation used for merging and sorting.
+implementation of the Dataframe `shuffle <../api/#dask_cuda.explicit_comms.dataframe.shuffle.shuffle>`_ operation used for merging and sorting.
 
 
 Usage
@@ -14,4 +14,4 @@ Usage
 In order to use explicit-comms in Dask/Distributed automatically, simply define the environment variable ``DASK_EXPLICIT_COMMS=True``
 or set the ``"explicit-comms"`` key in the `Dask configuration `_.
 
-It is also possible to use explicit-comms in tasks manually; see the `API `_ and our `implementation of shuffle `_ for guidance.
+It is also possible to use explicit-comms in tasks manually; see the `API <../api/#explicit-comms>`_ and our `implementation of shuffle `_ for guidance.
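A minimal sketch of the configuration switch this hunk describes; the key name comes from the text above, and the workload is an arbitrary placeholder:

.. code-block:: python

    import dask
    import dask.dataframe as dd
    import pandas as pd

    # Equivalent to exporting DASK_EXPLICIT_COMMS=True in the environment.
    dask.config.set({"explicit-comms": True})

    # A placeholder shuffle-based workload; with the key set and a
    # Dask-CUDA cluster attached, the explicit-comms shuffle can be used
    # in place of the scheduler-driven one.
    ddf = dd.from_pandas(
        pd.DataFrame({"key": [1, 2, 1, 2], "val": range(4)}), npartitions=2
    )
    shuffled = ddf.shuffle(on="key")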
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 37ba12139..0d415cb0d 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -11,7 +11,7 @@ While Distributed can be used to leverage GPU workloads through libraries such a
 
 - **Automatic instantiation of per-GPU workers** -- Using Dask-CUDA's LocalCUDACluster or ``dask cuda worker`` CLI will automatically launch one worker for each GPU available on the executing node, avoiding the need to explicitly select GPUs.
 - **Automatic setting of CPU affinity** -- The setting of CPU affinity for each GPU is done automatically, preventing memory transfers from taking suboptimal paths.
-- **Automatic selection of InfiniBand devices** -- When UCX communication is enabled over InfiniBand, Dask-CUDA automatically selects the optimal InfiniBand device for each GPU (see `UCX Integration `_ for instructions on configuring UCX communication).
+- **Automatic selection of InfiniBand devices** -- When UCX communication is enabled over InfiniBand, Dask-CUDA automatically selects the optimal InfiniBand device for each GPU (see `UCX Integration <ucx/>`_ for instructions on configuring UCX communication).
 - **Memory spilling from GPU** -- For memory-intensive workloads, Dask-CUDA supports spilling from GPU to host memory when a GPU reaches the default or user-specified memory utilization limit.
 - **Allocation of GPU memory** -- When using UCX communication, per-GPU memory pools can be allocated using `RAPIDS Memory Manager `_ to circumvent the costly memory buffer mappings that would be required otherwise.
diff --git a/docs/source/spilling.rst b/docs/source/spilling.rst
index 28f3562b9..a237adf74 100644
--- a/docs/source/spilling.rst
+++ b/docs/source/spilling.rst
@@ -37,7 +37,7 @@ JIT-Unspill
 The regular spilling in Dask and Dask-CUDA has some significant issues. Instead of tracking individual objects, it tracks task outputs.
 This means that a task returning a collection of CUDA objects will either spill all of the CUDA objects or none of them.
 Other issues include *object duplication*, *wrong spilling order*, and *non-tracking of shared device buffers*
-(see: https://github.com/dask/distributed/issues/4568#issuecomment-805049321).
+(`see discussion <https://github.com/dask/distributed/issues/4568#issuecomment-805049321>`_).
 
 In order to address all of these issues, Dask-CUDA introduces JIT-Unspilling, which can improve
 performance and memory usage significantly. For workloads that require significant spilling
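A minimal sketch of opting into the JIT-Unspilling machinery this hunk discusses, assuming the ``jit_unspill`` keyword accepted by LocalCUDACluster:

.. code-block:: python

    from dask.distributed import Client
    from dask_cuda import LocalCUDACluster

    # jit_unspill=True tracks and spills individual objects just in time,
    # rather than whole task outputs as criticized above.
    cluster = LocalCUDACluster(jit_unspill=True)
    client = Client(cluster)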
diff --git a/docs/source/ucx.rst b/docs/source/ucx.rst
index d9cacdc77..cf798e5dc 100644
--- a/docs/source/ucx.rst
+++ b/docs/source/ucx.rst
@@ -37,7 +37,7 @@ Automatic
 ~~~~~~~~~
 
 Beginning with Dask-CUDA 22.02 and assuming UCX >= 1.11.1, specifying UCX transports is now optional.
-A local cluster can now be started with ``LocalCUDACluster(protocol="ucx")``, implying automatic UCX transport selection (``UCX_TLS=all``). Starting a cluster separately -- scheduler, workers and client as different processes -- is also possible, as long as the Dask scheduler is created with ``dask scheduler --protocol="ucx"``; connecting a ``dask cuda worker`` to the scheduler will then imply automatic UCX transport selection, but this requires the Dask scheduler and client to be started with ``DASK_DISTRIBUTED__COMM__UCX__CREATE_CUDA_CONTEXT=True``. See `Enabling UCX communication `_ for more detailed examples of UCX usage with automatic configuration.
+A local cluster can now be started with ``LocalCUDACluster(protocol="ucx")``, implying automatic UCX transport selection (``UCX_TLS=all``). Starting a cluster separately -- scheduler, workers and client as different processes -- is also possible, as long as the Dask scheduler is created with ``dask scheduler --protocol="ucx"``; connecting a ``dask cuda worker`` to the scheduler will then imply automatic UCX transport selection, but this requires the Dask scheduler and client to be started with ``DASK_DISTRIBUTED__COMM__UCX__CREATE_CUDA_CONTEXT=True``. See `Enabling UCX communication <../examples/ucx/>`_ for more detailed examples of UCX usage with automatic configuration.
 
 Configuring transports manually is still possible; please refer to the subsection below.
 
@@ -79,12 +79,12 @@ However, some will affect related libraries, such as RMM:
 .. note::
     These options can be used with mainline Dask.distributed.
     However, some features are exclusive to Dask-CUDA, such as the automatic detection of InfiniBand interfaces.
-    See `Dask-CUDA -- Motivation `_ for more details on the benefits of using Dask-CUDA.
+    See `Dask-CUDA -- Motivation <../#motivation>`_ for more details on the benefits of using Dask-CUDA.
 
 Usage
 -----
 
-See `Enabling UCX communication `_ for examples of UCX usage with different supported transports.
+See `Enabling UCX communication <../examples/ucx/>`_ for examples of UCX usage with different supported transports.
 
 Running in a fork-starved environment
 -------------------------------------
@@ -97,7 +97,7 @@ this when using Dask-CUDA's UCX integration, processes launched via
 multiprocessing should start processes using the
 `"forkserver"
 <https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods>`_
-method. When launching workers using `dask cuda worker `_, this can be
+method. When launching workers using `dask cuda worker <../quickstart/#dask-cuda-worker>`_, this can be
 achieved by passing ``--multiprocessing-method forkserver`` as an argument. In
 user code, the method can be controlled with the
 ``distributed.worker.multiprocessing-method`` configuration key in
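Tying the two ucx.rst sections above together, a minimal sketch of starting an automatically-configured UCX cluster while also selecting the forkserver start method from user code; the configuration key is the one named in the final hunk:

.. code-block:: python

    import dask
    from dask.distributed import Client
    from dask_cuda import LocalCUDACluster

    # Equivalent to passing --multiprocessing-method forkserver
    # to dask cuda worker.
    dask.config.set({"distributed.worker.multiprocessing-method": "forkserver"})

    # protocol="ucx" implies automatic UCX transport selection (UCX_TLS=all).
    cluster = LocalCUDACluster(protocol="ucx")
    client = Client(cluster)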