Commit 9f9c0f8: fixing broken links

bcdurak committed Jan 12, 2024
1 parent 7332c7a commit 9f9c0f8
Showing 7 changed files with 13 additions and 13 deletions.
2 changes: 1 addition & 1 deletion huggingface-sagemaker/README.md
@@ -93,7 +93,7 @@ zenml secret create huggingface_creds --username=HUGGINGFACE_USERNAME --token=HU
 <details>
 <summary><h3>Set up your local stack</h3></summary>
 
-To run this project, you need to create a [ZenML Stack](https://docs.zenml.io/user-guide/starter-guide/understand-stacks) with the required components to run the pipelines.
+To run this project, you need to create a [ZenML Stack](https://docs.zenml.io/user-guide/production-guide/understand-stacks) with the required components to run the pipelines.
 
 ```shell
 make install-stack
4 changes: 2 additions & 2 deletions langchain-llamaindex-slackbot/README.md
@@ -94,15 +94,15 @@ example](https://github.com/zenml-io/zenml/tree/develop/examples/generative_chat
 
 It is much more ideal to run a pipeline such as the
 `zenml_docs_index_generation` on a regular schedule. In order to achieve that,
-you have to [deploy ZenML](https://docs.zenml.io/user-guide/starter-guide/switch-to-production)
+you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml)
 and set up a stack that supports
 [our scheduling
 feature](https://docs.zenml.io/user-guide/advanced-guide/pipelining-features/schedule-pipeline-runs). If you
 wish to deploy the slack bot on GCP Cloud Run as described above, you'll also
 need to be using [a Google Cloud Storage Artifact
 Store](https://docs.zenml.io/stacks-and-components/component-guide/artifact-stores/gcp). Note that
 certain code artifacts like the `Dockerfile` for this project will also need to
-be adapted for your own particular needs and requirements. Please check [our docs](https://docs.zenml.io/user-guide/starter-guide/follow-best-practices)
+be adapted for your own particular needs and requirements. Please check [our docs](https://docs.zenml.io/user-guide/advanced-guide/best-practices)
 for more information.
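The scheduling link in the hunk above is the key piece: conceptually, a "regular schedule" is just an anchor time plus a repeat interval that the orchestrator expands into concrete run times. A minimal stdlib sketch of that expansion (illustrative only; this is not ZenML's scheduling API, which takes a schedule object or cron expression on a deployed server):

```python
from datetime import datetime, timedelta

def expand_schedule(start: datetime, interval: timedelta, count: int) -> list[datetime]:
    """Expand a (start, interval) schedule into its first `count` run times,
    roughly what an orchestrator does when materializing recurring runs."""
    return [start + i * interval for i in range(count)]

# Three daily runs of a docs-index-generation job, starting Jan 12, 2024.
runs = expand_schedule(datetime(2024, 1, 12), timedelta(days=1), 3)
print([r.date().isoformat() for r in runs])  # 2024-01-12 through 2024-01-14
```

In practice you would hand the cron expression or interval to the scheduling feature linked above rather than expanding run times yourself; the sketch only shows what the orchestrator does with it.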

## Slack Bot In Action!
8 changes: 4 additions & 4 deletions nba-pipeline/README.md
@@ -125,12 +125,12 @@ python run_pipeline.py infer # Run inference pipeline
 
 ## :rocket: From Local to Cloud Stack
 In ZenML you can choose to run your pipeline on any infrastructure of your choice.
-The configuration of the infrastructure is called a [Stack](https://docs.zenml.io/user-guide/starter-guide/understand-stacks).
+The configuration of the infrastructure is called a [Stack](https://docs.zenml.io/user-guide/production-guide/understand-stacks).
 By switching the Stack, you can choose to run your pipeline locally or in the cloud.
 
-In any Stack, there must be at least two basic [Stack Components](https://docs.zenml.io/user-guide/starter-guide/understand-stacks#components-of-a-stack):
-* [Orchestrator](https://docs.zenml.io/user-guide/starter-guide/understand-stacks#orchestrator) - Coordinates all the steps to run in a pipeline.
-* [Artifact Store](https://docs.zenml.io/user-guide/starter-guide/understand-stacks#artifact-store) - Stores all data that pass through the pipeline.
+In any Stack, there must be at least two basic [Stack Components](https://docs.zenml.io/user-guide/production-guide/understand-stacks#components-of-a-stack):
+* [Orchestrator](https://docs.zenml.io/user-guide/production-guide/understand-stacks#orchestrator) - Coordinates all the steps to run in a pipeline.
+* [Artifact Store](https://docs.zenml.io/user-guide/production-guide/understand-stacks#artifact-store) - Stores all data that pass through the pipeline.
 
 ZenML comes with a default local stack with a local orchestrator and local artifact store.
 ![local](_assets/local_cloud.png)
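To make the "switch the Stack, keep the pipeline" idea from this hunk concrete, here is a conceptual sketch modeling a stack as a named pair of components (plain dataclasses with assumed names; this is not ZenML's actual `Stack` class or its registration CLI):

```python
from dataclasses import dataclass

@dataclass
class Stack:
    """Minimal model of a ZenML-style stack: a name plus its two mandatory components."""
    name: str
    orchestrator: str    # coordinates which steps run, where, and in what order
    artifact_store: str  # persists the data passed between pipeline steps

# The default local stack versus a hypothetical cloud stack: the pipeline code
# is unchanged; only the stack it targets differs.
local_stack = Stack("default", orchestrator="local", artifact_store="local")
cloud_stack = Stack("gcp_stack", orchestrator="vertex", artifact_store="gcs")
```

The design point the sketch illustrates: pipelines reference a stack by name, so swapping `local_stack` for `cloud_stack` changes where runs execute without touching pipeline code.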
2 changes: 1 addition & 1 deletion orbit-user-analysis/README.md
@@ -65,7 +65,7 @@ python run.py
 
 It is much more ideal to run a pipeline such as the
 `community_analysis_pipeline` on a regular schedule. In order to achieve that,
-you have to [deploy ZenML](https://docs.zenml.io/user-guide/starter-guide/switch-to-production)
+you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml)
 and set up a stack that supports
 [our scheduling feature](https://docs.zenml.io/user-guide/advanced-guide/pipelining-features/schedule-pipeline-runs).
 Please check [our docs](https://docs.zenml.io/getting-started/introduction)
6 changes: 3 additions & 3 deletions sign-language-detection-yolov5/README.md
@@ -32,18 +32,18 @@ installed on your local machine:
 * [Docker](https://www.docker.com/)
 * [GCloud CLI](https://cloud.google.com/sdk/docs/install) (authenticated)
 * [MLFlow Tracking Server](https://mlflow.org/docs/latest/tracking.html#mlflow-tracking-servers) (deployed remotely)
-* [Remote ZenML Server](https://docs.zenml.io/user-guide/starter-guide/switch-to-production): a Remote Deployment of the ZenML HTTP server and database
+* [Remote ZenML Server](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml): a Remote Deployment of the ZenML HTTP server and database
 
 ### :rocket: Remote ZenML Server
 
 For advanced use cases where we have a remote orchestrator or step operators such as Vertex AI
 or to share stacks and pipeline information with a team we need to have a separated non-local remote ZenML Server that can be accessible from your
 machine as well as all stack components that may need access to the server.
-[Read more information about the use case here](https://docs.zenml.io/user-guide/starter-guide/switch-to-production)
+[Read more information about the use case here](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml)
 
 In order to achieve this there are two different ways to get access to a remote ZenML Server.
 
-1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/user-guide/starter-guide/switch-to-production)/
+1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml)/
 2. Sign up for [ZenML Enterprise](https://zenml.io/pricing) and get access to a hosted
 version of the ZenML Server with no setup required.
2 changes: 1 addition & 1 deletion supabase-openai-summary/README.md
@@ -29,7 +29,7 @@ pip install -r src/requirements.txt
 ## Connect to Your Deployed ZenML
 
 In order to run a ZenML pipeline remotely (e.g. on the cloud), we first need to
-[deploy ZenML](https://docs.zenml.io/user-guide/starter-guide/switch-to-production). One of the
+[deploy ZenML](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml). One of the
 easiest ways to do this is to [deploy ZenML with HuggingFace spaces](https://docs.zenml.io/deploying-zenml/zenml-self-hosted/deploy-using-huggingface-spaces).
 
 Afterward, establish a connection with your deployed ZenML instance:
2 changes: 1 addition & 1 deletion zen-news-summarization/README.md
@@ -127,7 +127,7 @@ and use the `VertexOrchestrator` to schedule the pipeline.
 
 Before you start building the stack, you need to deploy ZenML on GCP. For more
 information on how you can achieve do that, please check
-[the corresponding docs page](https://docs.zenml.io/user-guide/starter-guide/switch-to-production).
+[the corresponding docs page](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml).
 
 ## ZenNews Stack
