
Commit 31b83cf

Document.
trivialfis committed Sep 22, 2024
1 parent fee2665 commit 31b83cf
Showing 6 changed files with 180 additions and 102 deletions.
8 changes: 4 additions & 4 deletions demo/guide-python/external_memory.py
@@ -53,8 +53,8 @@ def __init__(self, device: str, file_paths: List[Tuple[str, str]]) -> None:

self._file_paths = file_paths
self._it = 0
-# XGBoost will generate some cache files under current directory with the prefix
-# "cache"
+# XGBoost will generate some cache files under the current directory with the
+# prefix "cache"
super().__init__(cache_prefix=os.path.join(".", "cache"))

def load_file(self) -> Tuple[np.ndarray, np.ndarray]:
@@ -81,7 +81,7 @@ def next(self, input_data: Callable) -> int:
# return 0 to let XGBoost know this is the end of iteration
return 0

-# input_data is a function passed in by XGBoost who has the similar signature to
+# input_data is a function passed in by XGBoost and has a similar signature to
# the ``DMatrix`` constructor.
X, y = self.load_file()
input_data(data=X, label=y)
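For context (not part of this commit): a minimal, self-contained sketch of the iterator protocol the comments above describe, assuming an in-memory stand-in for the demo's file-backed iterator. ``SketchIter`` and its random batches are illustrative; the demo's own class loads ``.npy`` files from disk.

import numpy as np
import xgboost

class SketchIter(xgboost.DataIter):
    """An in-memory stand-in for the demo's file-backed iterator."""

    def __init__(self, batches):
        self._batches = batches
        self._it = 0
        # Cache files are written under the current directory with the prefix "cache".
        super().__init__(cache_prefix="cache")

    def next(self, input_data) -> int:
        if self._it == len(self._batches):
            return 0  # no more batches; signal the end of iteration
        X, y = self._batches[self._it]
        # input_data accepts the same keyword arguments as the DMatrix constructor.
        input_data(data=X, label=y)
        self._it += 1
        return 1

    def reset(self) -> None:
        self._it = 0

rng = np.random.default_rng(0)
batches = [(rng.normal(size=(256, 8)), rng.normal(size=256)) for _ in range(4)]
it = SketchIter(batches)

# The DMatrix constructor consumes the iterator and caches pages on disk.
Xy = xgboost.DMatrix(it, missing=np.nan)
booster = xgboost.train({"tree_method": "hist", "max_depth": 4}, Xy, num_boost_round=10)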
@@ -151,7 +151,7 @@ def main(tmpdir: str, args: argparse.Namespace) -> None:
import rmm
from rmm.allocators.cupy import rmm_cupy_allocator

-# It's important to use RMM for GPU-based external memory for good performance.
+# It's important to use RMM for GPU-based external memory to improve performance.
# If XGBoost is not built with RMM support, a warning will be raised.
mr = rmm.mr.PoolMemoryResource(rmm.mr.CudaAsyncMemoryResource())
rmm.mr.set_current_device_resource(mr)
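For context (not part of this commit): a sketch of the RMM setup the hunk above points at, routing RMM, CuPy, and XGBoost allocations through one async memory pool. The ``config_context(use_rmm=True)`` wrapper is an assumption about how the demo ties the pieces together, not a quotation of it.

import cupy as cp
import rmm
import xgboost
from rmm.allocators.cupy import rmm_cupy_allocator

# A pool on top of the CUDA async allocator, as in the demo.
mr = rmm.mr.PoolMemoryResource(rmm.mr.CudaAsyncMemoryResource())
rmm.mr.set_current_device_resource(mr)
# Route CuPy allocations through the same pool.
cp.cuda.set_allocator(rmm_cupy_allocator)

# Assumption: let XGBoost allocate through RMM as well while training.
with xgboost.config_context(use_rmm=True):
    pass  # build the external-memory DMatrix and call xgboost.train here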
4 changes: 2 additions & 2 deletions doc/jvm/xgboost_spark_migration.rst
@@ -55,9 +55,9 @@ When submitting the XGBoost application to the Spark cluster, you only need to s
--jars xgboost-spark_2.12-3.0.0.jar \
... \
-**************
+***************
XGBoost Ranking
-**************
+***************

Learning to rank using XGBoostRegressor has been replaced by a dedicated `XGBoostRanker`, which is specifically designed
to support ranking algorithms.
10 changes: 5 additions & 5 deletions doc/parameter.rst
@@ -247,17 +247,17 @@ Parameters for Non-Exact Tree Methods

* ``external_memory_concat_pages``, [default = ``false``]

-This parameter is only used for the ``hist`` tree method with ``device=cuda``. Before
-3.0, pages were always concatenated.
+This parameter is only used for the ``hist`` tree method with ``device=cuda`` and
+``subsample != 1.0``. Before 3.0, pages were always concatenated.

.. versionadded:: 3.0.0

Whether the GPU-based ``hist`` tree method should concatenate the training data into a
single batch instead of fetching data on-demand when external memory is used. For GPU
-devices that don't support address translation services, external memory training can be
+devices that don't support address translation services, external memory training is
expensive. This parameter can be used in combination with subsampling to reduce overall
-memory usage without significant overhead. See :doc:`/tutorials/external_memory` for
-more information
+memory usage without significant overhead. See :doc:`/tutorials/external_memory` for
+more information.
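For context (not part of this commit): a sketch of how the parameter described above might be passed. The name is taken from this diff; the exact accepted values and its interaction with sampling are assumptions.

import xgboost

params = {
    "device": "cuda",
    "tree_method": "hist",
    "subsample": 0.5,  # the parameter only applies when subsample != 1.0
    "external_memory_concat_pages": True,  # concatenate pages into one batch
}
# Xy_ext would be an external-memory DMatrix built from a DataIter, e.g.:
# booster = xgboost.train(params, Xy_ext, num_boost_round=100)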

.. _cat-param:

4 changes: 4 additions & 0 deletions doc/python/python_api.rst
@@ -28,6 +28,10 @@ Core Data Structure
:members:
:show-inheritance:

+.. autoclass:: xgboost.ExtMemQuantileDMatrix
+:members:
+:show-inheritance:

.. autoclass:: xgboost.Booster
:members:
:show-inheritance:
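For context (not part of this commit): a sketch of the newly documented ``ExtMemQuantileDMatrix`` in use. The constructor arguments are assumed to mirror ``QuantileDMatrix``, and ``it`` stands for a ``DataIter`` such as the one in ``demo/guide-python/external_memory.py``.

import xgboost

def train_extmem(it: xgboost.DataIter) -> xgboost.Booster:
    # Build quantized, external-memory pages directly from the iterator.
    Xy = xgboost.ExtMemQuantileDMatrix(it, max_bin=256)
    return xgboost.train(
        {"device": "cuda", "tree_method": "hist", "max_bin": 256},
        Xy,
        num_boost_round=50,
    )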
