diff --git a/README.md b/README.md index e166b4f53..ffc226268 100644 --- a/README.md +++ b/README.md @@ -11,7 +11,7 @@ For guided demos and basics walkthroughs, check out the following links: - these demos can be copied into your current working directory when using the `codeflare-sdk` by using the `codeflare_sdk.copy_demo_nbs()` function - Additionally, we have a [video walkthrough](https://www.youtube.com/watch?v=U76iIfd9EmE) of these basic demos from June, 2023 -Full documentation can be found [here](https://project-codeflare.github.io/codeflare-sdk/detailed-documentation) +Full documentation can be found [here](https://project-codeflare.github.io/codeflare-sdk/index.html) ## Installation @@ -32,11 +32,10 @@ It is possible to use the Release Github workflow to do the release. This is gen The following instructions apply when doing release manually. This may be required in instances where the automation is failing. - Check and update the version in "pyproject.toml" file. -- Generate new documentation. -`pdoc --html -o docs src/codeflare_sdk && pushd docs && rm -rf cluster job utils && mv codeflare_sdk/* . && rm -rf codeflare_sdk && popd && find docs -type f -name "*.html" -exec bash -c "echo '' >> {}" \;` (it is possible to install **pdoc** using the following command `poetry install --with docs`) - Commit all the changes to the repository. - Create Github release (). - Build the Python package. `poetry build` - If not present already, add the API token to Poetry. `poetry config pypi-token.pypi API_TOKEN` - Publish the Python package. `poetry publish` +- Trigger the [Publish Documentation](https://github.com/project-codeflare/codeflare-sdk/actions/workflows/publish-documentation.yaml) workflow diff --git a/docs/sphinx/index.rst b/docs/sphinx/index.rst index fdf4c15b0..3c6fe876f 100644 --- a/docs/sphinx/index.rst +++ b/docs/sphinx/index.rst @@ -16,14 +16,16 @@ The CodeFlare SDK is an intuitive, easy-to-use python interface for batch resour modules .. 
toctree:: - :maxdepth: 2 + :maxdepth: 1 :caption: User Documentation: user-docs/authentication user-docs/cluster-configuration + user-docs/ray-cluster-interaction user-docs/e2e user-docs/s3-compatible-storage user-docs/setup-kueue + user-docs/ui-widgets Quick Links =========== diff --git a/docs/sphinx/user-docs/authentication.rst b/docs/sphinx/user-docs/authentication.rst index d07063d91..82441d564 100644 --- a/docs/sphinx/user-docs/authentication.rst +++ b/docs/sphinx/user-docs/authentication.rst @@ -39,7 +39,7 @@ a login command like ``oc login --token= --server=`` their kubernetes config file should have updated. If the user has not specifically authenticated through the SDK by other means such as ``TokenAuthentication`` then the SDK will try to use their default -Kubernetes config file located at ``"/HOME/.kube/config"``. +Kubernetes config file located at ``"$HOME/.kube/config"``. Method 3 Specifying a Kubernetes Config File -------------------------------------------- @@ -62,5 +62,5 @@ Method 4 In-Cluster Authentication ---------------------------------- If a user does not authenticate by any of the means detailed above and -does not have a config file at ``"/HOME/.kube/config"`` the SDK will try +does not have a config file at ``"$HOME/.kube/config"`` the SDK will try to authenticate with the in-cluster configuration file. diff --git a/docs/sphinx/user-docs/cluster-configuration.rst b/docs/sphinx/user-docs/cluster-configuration.rst index 1fe28c643..6d27b0f41 100644 --- a/docs/sphinx/user-docs/cluster-configuration.rst +++ b/docs/sphinx/user-docs/cluster-configuration.rst @@ -29,13 +29,14 @@ requirements for creating the Ray Cluster. labels={"exampleLabel": "example", "secondLabel": "example"}, )) -Note: ‘quay.io/modh/ray:2.35.0-py39-cu121’ is the default image used by -the CodeFlare SDK for creating a RayCluster resource. If you have your -own Ray image which suits your purposes, specify it in image field to -override the default image. 
If you are using ROCm compatible GPUs you -can use ‘quay.io/modh/ray:2.35.0-py39-rocm61’. You can also find -documentation on building a custom image -`here `__. +.. note:: + ``quay.io/modh/ray:2.35.0-py39-cu121`` is the default image used by + the CodeFlare SDK for creating a RayCluster resource. If you have your + own Ray image which suits your purposes, specify it in the image field to + override the default image. If you are using ROCm-compatible GPUs you + can use ``quay.io/modh/ray:2.35.0-py39-rocm61``. You can also find + documentation on building a custom image + `here `__. The ``labels={"exampleLabel": "example"}`` parameter can be used to apply additional labels to the RayCluster resource. @@ -46,7 +47,8 @@ After creating their ``cluster``, a user can call ``cluster.up()`` and Deprecating Parameters ---------------------- -The following parameters of the ``ClusterConfiguration`` are being deprecated. +The following parameters of the ``ClusterConfiguration`` are being +deprecated. .. list-table:: :header-rows: 1 diff --git a/docs/sphinx/user-docs/e2e.rst b/docs/sphinx/user-docs/e2e.rst index e64032e20..846536f11 100644 --- a/docs/sphinx/user-docs/e2e.rst +++ b/docs/sphinx/user-docs/e2e.rst @@ -11,7 +11,7 @@ On KinD clusters Pre-requisite for KinD clusters: please add in your local ``/etc/hosts`` file ``127.0.0.1 kind``. This will map your localhost IP address to the -KinD cluster’s hostname. This is already performed on `GitHub +KinD cluster's hostname. This is already performed on `GitHub Actions `__ If the system you run on contains NVidia GPU then you can enable the GPU
poetry install --with test,docs poetry run pytest -v -s ./tests/e2e/mnist_raycluster_sdk_kind_test.py - - If the cluster doesn’t have NVidia GPU support then we need to + - If the cluster doesn't have NVidia GPU support then we need to disable NVidia GPU tests by providing proper marker: :: @@ -124,8 +124,8 @@ If the system you run on contains NVidia GPU then you can enable the GPU support on OpenShift, this will allow you to run also GPU tests. To enable GPU on OpenShift follow `these instructions `__. -Currently the SDK doesn’t support tolerations, so e2e tests can’t be -executed on nodes with taint (i.e. GPU taint). +Currently the SDK doesn't support tolerations, so e2e tests can't be +executed on nodes with taint (i.e. GPU taint). - Test Phase: @@ -203,8 +203,9 @@ On OpenShift Disconnected clusters AWS_STORAGE_BUCKET= AWS_STORAGE_BUCKET_MNIST_DIR= - Note : When using the Python Minio client to connect to a minio - storage bucket, the ``AWS_DEFAULT_ENDPOINT`` environment - variable by default expects secure endpoint where user can use - endpoint url with https/http prefix for autodetection of - secure/insecure endpoint. + .. note:: + When using the Python Minio client to connect to a Minio + storage bucket, the ``AWS_DEFAULT_ENDPOINT`` environment + variable expects a secure endpoint by default; users can provide an + endpoint URL with an ``https``/``http`` prefix for autodetection of + secure/insecure endpoints.
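The secure/insecure autodetection described in the note above can be sketched with the Python standard library. This is a minimal illustration only (the endpoint values are hypothetical), not the SDK's or the Minio client's actual code:

```python
from urllib.parse import urlparse

def is_secure_endpoint(endpoint: str) -> bool:
    # An https:// prefix indicates a secure endpoint; http:// an insecure one.
    return urlparse(endpoint).scheme == "https"

# Hypothetical endpoints, for illustration only
print(is_secure_endpoint("https://minio.example.com:9000"))  # True
print(is_secure_endpoint("http://minio.example.com:9000"))   # False
```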
diff --git a/docs/sphinx/user-docs/images/ui-buttons.png b/docs/sphinx/user-docs/images/ui-buttons.png new file mode 100644 index 000000000..a27492920 Binary files /dev/null and b/docs/sphinx/user-docs/images/ui-buttons.png differ diff --git a/docs/sphinx/user-docs/images/ui-view-clusters.png b/docs/sphinx/user-docs/images/ui-view-clusters.png new file mode 100644 index 000000000..259d2dc11 Binary files /dev/null and b/docs/sphinx/user-docs/images/ui-view-clusters.png differ diff --git a/docs/sphinx/user-docs/ray-cluster-interaction.rst b/docs/sphinx/user-docs/ray-cluster-interaction.rst new file mode 100644 index 000000000..8e7929b4d --- /dev/null +++ b/docs/sphinx/user-docs/ray-cluster-interaction.rst @@ -0,0 +1,90 @@ +Ray Cluster Interaction +======================= + +The CodeFlare SDK offers multiple ways to interact with Ray Clusters, +including the methods below. + +get_cluster() +------------- + +The ``get_cluster()`` function is used to initialise a ``Cluster`` +object from a pre-existing Ray Cluster/AppWrapper. Below is an example +of its usage: + +:: + + from codeflare_sdk import get_cluster + cluster = get_cluster(cluster_name="raytest", namespace="example", is_appwrapper=False, write_to_file=False) + -> output: Yaml resources loaded for raytest + cluster.status() + -> output: + 🚀 CodeFlare Cluster Status 🚀 + ╭─────────────────────────────────────────────────────────────────╮ + │ Name │ + │ raytest Active ✅ │ + │ │ + │ URI: ray://raytest-head-svc.example.svc:10001 │ + │ │ + │ Dashboard🔗 │ + │ │ + ╰─────────────────────────────────────────────────────────────────╯ + (, True) + cluster.down() + cluster.up() # This function will create an exact copy of the retrieved Ray Cluster only if the Ray Cluster has been previously deleted. + +| These are the parameters the ``get_cluster()`` function accepts: +| ``cluster_name: str # Required`` -> The name of the Ray Cluster. +| ``namespace: str # Default: "default"`` -> The namespace of the Ray Cluster.
+| ``is_appwrapper: bool # Default: False`` -> When set to ``True`` the function will attempt to retrieve an AppWrapper instead of a Ray Cluster. +| ``write_to_file: bool # Default: False`` -> When set to ``True`` the Ray Cluster/AppWrapper will be written to a file similar to how it is done in ``ClusterConfiguration``. + +list_all_queued() +----------------- + +| The ``list_all_queued()`` function returns (and prints by default) a list of all currently queued-up Ray Clusters in a given namespace. +| It accepts the following parameters: +| ``namespace: str # Required`` -> The namespace you want to retrieve the list from. +| ``print_to_console: bool # Default: True`` -> Allows the user to print the list to their console. +| ``appwrapper: bool # Default: False`` -> When set to ``True`` allows the user to list queued AppWrappers. + +list_all_clusters() +------------------- + +| The ``list_all_clusters()`` function returns (and prints to the console by default) a list of detailed Ray Cluster descriptions. +| It accepts the following parameters: +| ``namespace: str # Required`` -> The namespace you want to retrieve the list from. +| ``print_to_console: bool # Default: True`` -> A boolean that allows the user to print the list to their console. + +.. note:: + + The following methods require a ``Cluster`` object to be + initialised. See :doc:`./cluster-configuration`. + +cluster.up() +------------ + +| The ``cluster.up()`` function creates a Ray Cluster in the given namespace. + +cluster.down() +-------------- + +| The ``cluster.down()`` function deletes the Ray Cluster in the given namespace. + +cluster.status() +---------------- + +| The ``cluster.status()`` function prints out the current state of the Ray Cluster with a link to the Ray Dashboard. + +cluster.details() +----------------- + +| The ``cluster.details()`` function prints out a detailed description of the Ray Cluster's status, worker resources and a link to the Ray Dashboard.
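The ``cluster.status()`` example earlier prints a client URI of the form ``ray://raytest-head-svc.example.svc:10001``. As a rough sketch of where that URI comes from, the snippet below rebuilds it from the cluster name and namespace; note the ``<name>-head-svc.<namespace>.svc`` naming scheme is inferred from that example output, not taken from the SDK's source:

```python
def head_svc_uri(cluster_name: str, namespace: str, port: int = 10001) -> str:
    # Builds a Ray client URI matching the format shown in the cluster.status() output.
    # The "<name>-head-svc.<namespace>.svc" pattern is an assumption based on that output.
    return f"ray://{cluster_name}-head-svc.{namespace}.svc:{port}"

print(head_svc_uri("raytest", "example"))  # ray://raytest-head-svc.example.svc:10001
```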
+ +cluster.wait_ready() +-------------------- + +| The ``cluster.wait_ready()`` function waits for the requested cluster to be ready, checking every 5 seconds, up to an optional timeout. +| It accepts the following parameters: +| ``timeout: Optional[int] # Default: None`` -> Allows the user to define a timeout for the ``wait_ready()`` function. +| ``dashboard_check: bool # Default: True`` -> If enabled, the ``wait_ready()`` function will also wait until the Ray Dashboard is ready. diff --git a/docs/sphinx/user-docs/s3-compatible-storage.rst b/docs/sphinx/user-docs/s3-compatible-storage.rst index 60937441b..0ca2cc0d1 100644 --- a/docs/sphinx/user-docs/s3-compatible-storage.rst +++ b/docs/sphinx/user-docs/s3-compatible-storage.rst @@ -82,5 +82,5 @@ Lastly the new ``run_config`` must be added to the Trainer: To find more information on creating a Minio Bucket compatible with RHOAI you can refer to this `documentation `__. -Note: You must have ``sf3s`` and ``pyarrow`` installed in your +Note: You must have ``s3fs`` and ``pyarrow`` installed in your environment for this method. diff --git a/docs/sphinx/user-docs/setup-kueue.rst b/docs/sphinx/user-docs/setup-kueue.rst index 86956e011..1f2bdc041 100644 --- a/docs/sphinx/user-docs/setup-kueue.rst +++ b/docs/sphinx/user-docs/setup-kueue.rst @@ -11,10 +11,9 @@ Kueue resources, namely Cluster Queue, Resource Flavor, and Local Queue. 1. Resource Flavor: ------------------- -Resource Flavors allow the cluster admin to define different types of -resources with specific characteristics, such as CPU, memory, GPU, etc. -These can then be assigned to workloads to ensure they are executed on -appropriate resources. +Resource Flavors allow the cluster admin to reflect differing resource capabilities +of nodes within a cluster, such as CPU, memory, GPU, etc. These can then be assigned +to workloads to ensure they are executed on nodes with appropriate resources.
The YAML configuration provided below creates an empty Resource Flavor named default-flavor. It serves as a starting point and does not specify diff --git a/docs/sphinx/user-docs/ui-widgets.rst b/docs/sphinx/user-docs/ui-widgets.rst new file mode 100644 index 000000000..6c797e043 --- /dev/null +++ b/docs/sphinx/user-docs/ui-widgets.rst @@ -0,0 +1,55 @@ +Jupyter UI Widgets +================== + +Below are some examples of the Jupyter UI Widgets that are included in +the CodeFlare SDK. + +.. note:: + To use the widgets functionality you must be using the CodeFlare SDK in a Jupyter Notebook environment. + +Cluster Up/Down Buttons +----------------------- + +The Cluster Up/Down buttons appear after successfully initialising your +`ClusterConfiguration `__. +There are two buttons, ``Cluster Up`` and ``Cluster Down``, and a +``Wait for Cluster?`` checkbox, which mimic the +`cluster.up() `__, +`cluster.down() `__ and +`cluster.wait_ready() `__ +functionality. + +After initialising their ``ClusterConfiguration`` a user can select the +``Wait for Cluster?`` checkbox then click the ``Cluster Up`` button to +create their Ray Cluster and wait until it is ready. The cluster can be +deleted by clicking the ``Cluster Down`` button. + +.. image:: images/ui-buttons.png + :alt: An image of the up/down ui buttons + +View Clusters UI Table +---------------------- + +The View Clusters UI Table allows a user to see a list of Ray Clusters +with information on their configuration, including the number of workers and +CPU requests and limits, along with the cluster's status. + +.. image:: images/ui-view-clusters.png + :alt: An image of the view clusters ui table + +Above is a list of two Ray Clusters, ``raytest`` and ``raytest2``. Each of +those headings is clickable and will update the table to view the +selected Cluster's information. There are three buttons under the table: +``Cluster Down``, ``View Jobs`` and ``Open Ray Dashboard``. + +* The ``Cluster Down`` button will delete the selected Cluster.
+* The ``View Jobs`` button will try to open the Ray Dashboard's Jobs view in a +web browser. The link will also be printed to the console. +* The ``Open Ray Dashboard`` button will try to open the Ray Dashboard view in +a web browser. The link will also be printed to the console. + +The UI Table can be viewed by calling the following function: + +.. code:: python + + from codeflare_sdk import view_clusters + view_clusters() # Accepts a namespace parameter but will try to gather the namespace from the current context