From 781c5ae7ec0d33f20dec9b5d94e439090f1a683d Mon Sep 17 00:00:00 2001
From: Hamza Tahir
Date: Wed, 24 Jan 2024 14:27:19 +0100
Subject: [PATCH] Removed dead links

---
 langchain-llamaindex-slackbot/README.md  | 8 ++------
 llm-finetuning/README.md                 | 4 ++--
 orbit-user-analysis/README.md            | 2 +-
 sign-language-detection-yolov5/README.md | 6 +++---
 stack-showcase/run_deploy.ipynb          | 2 +-
 supabase-openai-summary/README.md        | 2 +-
 zen-news-summarization/README.md         | 2 +-
 7 files changed, 11 insertions(+), 15 deletions(-)

diff --git a/langchain-llamaindex-slackbot/README.md b/langchain-llamaindex-slackbot/README.md
index b2e403bd..8c766f11 100644
--- a/langchain-llamaindex-slackbot/README.md
+++ b/langchain-llamaindex-slackbot/README.md
@@ -9,11 +9,7 @@ By addressing data ingestion and indexing, LangChain and LlamaIndex provide a st
 These tools bridge the gap between external data and LLMs, ensuring seamless integration while maintaining performance. By utilizing LangChain and LlamaIndex, developers can unlock LLMs' true potential and build cutting-edge applications tailored to specific use cases and datasets. 🛣️
 
 The project we built uses both `langchain` and `llama_index` as well as some
-extra code for the Slack bot itself. If you want to get your hands dirty
-and try out a simpler version, feel free to check out [our Generative Chat
-example](https://github.com/zenml-io/zenml/tree/develop/examples/generative_chat)
-that was released previously.
-
+extra code for the Slack bot itself.
 ## ZenML 🤝 LLM frameworks
 There are various terms being tried out to describe this new paradigm — from LLMOps to Big Model Ops. Not only are the words used to describe this new way of engineering new, but the underlying structures and frameworks are also being developed from the ground up. We wanted to witness these changes first-hand by participating and getting our hands dirty.
@@ -94,7 +90,7 @@ example](https://github.com/zenml-io/zenml/tree/develop/examples/generative_chat
 It is much better to run a pipeline such as the
 `zenml_docs_index_generation` on a regular schedule. In order to achieve that,
-you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml)
+you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)
 and set up a stack that supports
 [our scheduling feature](https://docs.zenml.io/user-guide/advanced-guide/pipelining-features/schedule-pipeline-runs).
 
 If you
diff --git a/llm-finetuning/README.md b/llm-finetuning/README.md
index 4fb395c9..bd146451 100644
--- a/llm-finetuning/README.md
+++ b/llm-finetuning/README.md
@@ -31,7 +31,7 @@ One of the first jobs of somebody entering MLOps is to convert their manual scri
 2. Type annotating the steps properly
 3. Connecting the steps together in a pipeline
 4. Creating the appropriate YAML files to [configure your pipeline](https://docs.zenml.io/user-guide/production-guide/configure-pipeline)
-5. Developing a Dockerfile or equivalent to encapsulate [the environment](https://docs.zenml.io/user-guide/advanced-guide/environment-management/containerize-your-pipeline).
+5. Developing a Dockerfile or equivalent to encapsulate [the environment](https://docs.zenml.io/user-guide/advanced-guide/infrastructure-management/containerize-your-pipeline).
 
 Frameworks like [ZenML](https://github.com/zenml-io/zenml) go a long way in alleviating this burden by abstracting much of the complexity away.
 However, recent advancements in Large Language Model-based Copilots offer hope that even more repetitive aspects of this task can be automated.
@@ -110,7 +110,7 @@ The [ZenML Cloud](https://zenml.io/cloud) was used to manage the pipelines, mode
 This project recently made a [call for volunteers](https://www.linkedin.com/feed/update/urn:li:activity:7150388250178662400/). This TODO list can serve as a source of collaboration. If you want to work on any of the following, please [create an issue on this repository](https://github.com/zenml-io/zenml-projects/issues) and assign it to yourself!
 
 - [x] Create a functioning data generation pipeline (initial dataset with the core [ZenML repo](https://github.com/zenml-io/zenml) scraped and pushed [here](https://huggingface.co/datasets/htahir1/zenml-codegen-v1))
-- [x] Deploy the model on a [HuggingFace inference endpoint](https://ui.endpoints.huggingface.co/welcome) and use it in the [VS Code Extension](https://github.com/huggingface/llm-vscode#installation) using a deployment pipeline.
+- [x] Deploy the model on a HuggingFace inference endpoint and use it in the [VS Code Extension](https://github.com/huggingface/llm-vscode#installation) using a deployment pipeline.
 - [x] Create a functioning training pipeline.
 - [ ] Curate a set of 5-10 repositories that use the latest ZenML syntax, and use the data generation pipeline to push a dataset to HuggingFace.
 - [ ] Create a Dockerfile for the training pipeline with all requirements installed, including ZenML, torch, CUDA etc. Currently I am having trouble creating this in this [config file](configs/finetune.yaml). It might make sense to create a Docker image with the right CUDA version and requirements, including ZenML. See here: https://sdkdocs.zenml.io/0.54.0/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings
diff --git a/orbit-user-analysis/README.md b/orbit-user-analysis/README.md
index 1f00b30b..da01582e 100644
--- a/orbit-user-analysis/README.md
+++ b/orbit-user-analysis/README.md
@@ -65,7 +65,7 @@ python run.py
 It is much better to run a pipeline such as the
 `community_analysis_pipeline` on a regular schedule. In order to achieve that,
-you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml)
+you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)
 and set up a stack that supports
 [our scheduling feature](https://docs.zenml.io/user-guide/advanced-guide/pipelining-features/schedule-pipeline-runs).
 
 Please check [our docs](https://docs.zenml.io/getting-started/introduction)
diff --git a/sign-language-detection-yolov5/README.md b/sign-language-detection-yolov5/README.md
index d7d7e1e9..141131fe 100644
--- a/sign-language-detection-yolov5/README.md
+++ b/sign-language-detection-yolov5/README.md
@@ -32,18 +32,18 @@ installed on your local machine:
 * [Docker](https://www.docker.com/)
 * [GCloud CLI](https://cloud.google.com/sdk/docs/install) (authenticated)
 * [MLFlow Tracking Server](https://mlflow.org/docs/latest/tracking.html#mlflow-tracking-servers) (deployed remotely)
-* [Remote ZenML Server](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml): a remote deployment of the ZenML HTTP server and database
+* [Remote ZenML Server](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml): a remote deployment of the ZenML HTTP server and database
 
 ### :rocket: Remote ZenML Server
 
 For advanced use cases, such as when we have a remote orchestrator or step operators like Vertex AI, or when we want to share stacks and pipeline information with a team, we need a separate, non-local ZenML Server that is accessible from your machine as well as from all stack components that may need access to it.
-[Read more information about the use case here](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml)
+[Read more information about the use case here](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)
 
 In order to achieve this, there are two different ways to get access to a remote ZenML Server:
 
-1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml).
+1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml).
 2. Sign up for [ZenML Enterprise](https://zenml.io/pricing) and get access to a hosted version of the ZenML Server with no setup required.
diff --git a/stack-showcase/run_deploy.ipynb b/stack-showcase/run_deploy.ipynb
index 2ae0f9f6..281f7507 100644
--- a/stack-showcase/run_deploy.ipynb
+++ b/stack-showcase/run_deploy.ipynb
@@ -140,7 +140,7 @@
 "This was just the tip of the iceberg of what ZenML can do; check out the [**docs**](https://docs.zenml.io/) to learn more\n",
 "about the capabilities of ZenML. For example, you might want to:\n",
 "\n",
-"- [Deploy ZenML](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml) to collaborate with your colleagues.\n",
+"- [Deploy ZenML](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml) to collaborate with your colleagues.\n",
 "- Run the same pipeline on a [cloud MLOps stack in production](https://docs.zenml.io/user-guide/production-guide/cloud-stack).\n",
 "- Track your metrics in an experiment tracker like [MLflow](https://docs.zenml.io/stacks-and-components/component-guide/experiment-trackers/mlflow).\n",
 "\n",
diff --git a/supabase-openai-summary/README.md b/supabase-openai-summary/README.md
index 02f9e09e..aca3b948 100644
--- a/supabase-openai-summary/README.md
+++ b/supabase-openai-summary/README.md
@@ -29,7 +29,7 @@ pip install -r src/requirements.txt
 ## Connect to Your Deployed ZenML
 
 In order to run a ZenML pipeline remotely (e.g. on the cloud), we first need to
-[deploy ZenML](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml). One of the
+[deploy ZenML](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml). One of the
 easiest ways to do this is to [deploy ZenML with HuggingFace spaces](https://docs.zenml.io/deploying-zenml/zenml-self-hosted/deploy-using-huggingface-spaces).
 
 Afterward, establish a connection with your deployed ZenML instance:
diff --git a/zen-news-summarization/README.md b/zen-news-summarization/README.md
index 3b288ed4..0c045318 100644
--- a/zen-news-summarization/README.md
+++ b/zen-news-summarization/README.md
@@ -127,7 +127,7 @@ and use the `VertexOrchestrator` to schedule the pipeline.
 Before you start building the stack, you need to deploy ZenML on GCP. For
 more information on how you can do that, please check
-[the corresponding docs page](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml).
+[the corresponding docs page](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml).
 
 ## ZenNews Stack
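
Several of the retargeted pages describe the same workflow: connect to a deployed ZenML server, then run a pipeline on a regular schedule. The sketch below is illustrative only and not part of the patch; it assumes ZenML's post-0.40 Python API, the `fetch_docs` step and the cron expression are placeholders, and `zenml_docs_index_generation` borrows its name from the README above.

```python
# Minimal sketch: schedule a pipeline against a deployed ZenML server.
# Assumes you have already connected to the server, e.g.:
#   zenml connect --url https://<your-zenml-server>
# and that the active stack's orchestrator supports schedules.
from zenml import pipeline, step
from zenml.config.schedule import Schedule


@step
def fetch_docs() -> str:
    # Placeholder standing in for the real document-scraping logic.
    return "zenml docs contents"


@pipeline
def zenml_docs_index_generation() -> None:
    fetch_docs()


if __name__ == "__main__":
    # Attach a cron schedule (here: daily at 06:00) and trigger the run.
    scheduled = zenml_docs_index_generation.with_options(
        schedule=Schedule(cron_expression="0 6 * * *")
    )
    scheduled()
```

Whether the schedule is actually honored depends on the orchestrator in the active stack: cloud orchestrators such as Vertex AI support cron schedules, while the default local orchestrator does not.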