Removed dead links

htahir1 committed Jan 24, 2024
1 parent 2f2db2c commit 781c5ae
Showing 7 changed files with 11 additions and 15 deletions.
8 changes: 2 additions & 6 deletions langchain-llamaindex-slackbot/README.md
@@ -9,11 +9,7 @@ By addressing data ingestion and indexing, LangChain and LlamaIndex provide a st
These tools bridge the gap between external data and LLMs, ensuring seamless integration while maintaining performance. By utilizing LangChain and LlamaIndex, developers can unlock LLMs' true potential and build cutting-edge applications tailored to specific use cases and datasets.

🛣️ The project we built uses both `langchain` and `llama_index` as well as some
-extra code for the Slack bot itself. If you want to get your hands dirty
-and try out a simpler version, feel free to check out [our Generative Chat
-example](https://github.com/zenml-io/zenml/tree/develop/examples/generative_chat)
-that was released previously.
-
+extra code for the Slack bot itself.
## ZenML 🤝 LLM frameworks

There are various terms being tried out to describe this new paradigm — from LLMOps to Big Model Ops. Not only are the words used to describe this way of working new; the underlying structures and frameworks are also being developed from the ground up. We wanted to witness these changes first-hand by participating and getting our hands dirty.
@@ -94,7 +90,7 @@ example](https://github.com/zenml-io/zenml/tree/develop/examples/generative_chat

It is much more practical to run a pipeline such as
`zenml_docs_index_generation` on a regular schedule. In order to achieve that,
-you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml)
+you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)
and set up a stack that supports
[our scheduling
feature](https://docs.zenml.io/user-guide/advanced-guide/pipelining-features/schedule-pipeline-runs). If you
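
For reference, scheduling a pipeline in ZenML of that era looked roughly like the sketch below. This is a minimal, assumed example: the step body and cron expression are placeholders, and schedules only take effect on an orchestrator that supports them.

```python
from zenml import pipeline, step
from zenml.config.schedule import Schedule


@step
def generate_index() -> None:
    """Placeholder step standing in for the real index-generation logic."""
    ...


@pipeline
def zenml_docs_index_generation():
    generate_index()


if __name__ == "__main__":
    # Run nightly at 03:00; assumes a deployed ZenML server and a stack whose
    # orchestrator supports schedules (e.g. Vertex AI or Airflow).
    zenml_docs_index_generation.with_options(
        schedule=Schedule(cron_expression="0 3 * * *")
    )()
```
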
4 changes: 2 additions & 2 deletions llm-finetuning/README.md
@@ -31,7 +31,7 @@ One of the first jobs of somebody entering MLOps is to convert their manual scri
2. Type annotating the steps properly
3. Connecting the steps together in a pipeline
4. Creating the appropriate YAML files to [configure your pipeline](https://docs.zenml.io/user-guide/production-guide/configure-pipeline)
-5. Developing a Dockerfile or equivalent to encapsulate [the environment](https://docs.zenml.io/user-guide/advanced-guide/environment-management/containerize-your-pipeline).
+5. Developing a Dockerfile or equivalent to encapsulate [the environment](https://docs.zenml.io/user-guide/advanced-guide/infrastructure-management/containerize-your-pipeline).

Frameworks like [ZenML](https://github.com/zenml-io/zenml) go a long way in alleviating this burden by abstracting much of the complexity away. However, recent advancement in Large Language Model based Copilots offer hope that even more repetitive aspects of this task can be automated.
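
As a rough illustration of steps 1-5 above, a manual script converted to ZenML might look like this minimal sketch; the step bodies and the config path are assumptions, not the project's actual code:

```python
from zenml import pipeline, step


@step
def load_dataset() -> dict:
    """Hypothetical loading step returning a toy dataset."""
    return {"texts": ["def foo(): ...", "def bar(): ..."]}


@step
def finetune(dataset: dict) -> float:
    """Hypothetical fine-tuning step returning a dummy eval loss."""
    return 0.42


@pipeline
def finetuning_pipeline():
    dataset = load_dataset()
    finetune(dataset)


if __name__ == "__main__":
    # Step parameters, Docker settings, etc. can live in a YAML config file.
    finetuning_pipeline.with_options(config_path="configs/finetune.yaml")()
```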

@@ -110,7 +110,7 @@ The [ZenML Cloud](https://zenml.io/cloud) was used to manage the pipelines, mode
This project recently did a [call for volunteers](https://www.linkedin.com/feed/update/urn:li:activity:7150388250178662400/). This TODO list can serve as a starting point for collaboration. If you want to work on any of the following, please [create an issue on this repository](https://github.com/zenml-io/zenml-projects/issues) and assign it to yourself!

- [x] Create a functioning data generation pipeline (initial dataset with the core [ZenML repo](https://github.com/zenml-io/zenml) scraped and pushed [here](https://huggingface.co/datasets/htahir1/zenml-codegen-v1))
-- [x] Deploy the model on a [HuggingFace inference endpoint](https://ui.endpoints.huggingface.co/welcome) and use it in the [VS Code Extension](https://github.com/huggingface/llm-vscode#installation) using a deployment pipeline.
+- [x] Deploy the model on a HuggingFace inference endpoint and use it in the [VS Code Extension](https://github.com/huggingface/llm-vscode#installation) using a deployment pipeline.
- [x] Create a functioning training pipeline.
- [ ] Curate a set of 5-10 repositories that use the latest ZenML syntax and use the data generation pipeline to push a dataset to HuggingFace.
- [ ] Create a Dockerfile for the training pipeline with all requirements installed, including ZenML, torch, CUDA, etc. Currently I am having trouble creating this in this [config file](configs/finetune.yaml). It probably makes sense to create a Docker image with the right CUDA version and requirements, including ZenML (see the sketch below). See here: https://sdkdocs.zenml.io/0.54.0/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings
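
For the Dockerfile/CUDA item above, one plausible approach (an assumption, not the project's confirmed solution) is ZenML's `DockerSettings` with a CUDA-enabled parent image; the image tag and requirements list are illustrative:

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Illustrative only: a CUDA-enabled base image plus the training requirements.
docker_settings = DockerSettings(
    parent_image="pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime",
    requirements=["zenml", "transformers", "peft", "accelerate"],
)


@pipeline(settings={"docker": docker_settings})
def finetune_pipeline():
    ...
```
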
2 changes: 1 addition & 1 deletion orbit-user-analysis/README.md
@@ -65,7 +65,7 @@ python run.py

It is much more practical to run a pipeline such as
`community_analysis_pipeline` on a regular schedule. In order to achieve that,
-you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml)
+you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)
and set up a stack that supports
[our scheduling feature](https://docs.zenml.io/user-guide/advanced-guide/pipelining-features/schedule-pipeline-runs).
Please check [our docs](https://docs.zenml.io/getting-started/introduction)
6 changes: 3 additions & 3 deletions sign-language-detection-yolov5/README.md
@@ -32,18 +32,18 @@ installed on your local machine:
* [Docker](https://www.docker.com/)
* [GCloud CLI](https://cloud.google.com/sdk/docs/install) (authenticated)
* [MLFlow Tracking Server](https://mlflow.org/docs/latest/tracking.html#mlflow-tracking-servers) (deployed remotely)
-* [Remote ZenML Server](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml): a Remote Deployment of the ZenML HTTP server and database
+* [Remote ZenML Server](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml): a Remote Deployment of the ZenML HTTP server and database

### :rocket: Remote ZenML Server

For advanced use cases where we have a remote orchestrator or step operators such as Vertex AI,
or to share stacks and pipeline information with a team, we need a separate, non-local remote ZenML Server
that is accessible from your machine as well as from all stack components that may need access to the server.
-[Read more information about the use case here](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml)
+[Read more information about the use case here](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)

In order to achieve this, there are two different ways to get access to a remote ZenML Server.

-1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml)/
+1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)/
2. Sign up for [ZenML Enterprise](https://zenml.io/pricing) and get access to a hosted
version of the ZenML Server with no setup required.

2 changes: 1 addition & 1 deletion stack-showcase/run_deploy.ipynb
@@ -140,7 +140,7 @@
"This was just the tip of the iceberg of what ZenML can do; check out the [**docs**](https://docs.zenml.io/) to learn more\n",
"about the capabilities of ZenML. For example, you might want to:\n",
"\n",
"- [Deploy ZenML](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml) to collaborate with your colleagues.\n",
"- [Deploy ZenML](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml) to collaborate with your colleagues.\n",
"- Run the same pipeline on a [cloud MLOps stack in production](https://docs.zenml.io/user-guide/production-guide/cloud-stack).\n",
"- Track your metrics in an experiment tracker like [MLflow](https://docs.zenml.io/stacks-and-components/component-guide/experiment-trackers/mlflow).\n",
"\n",
2 changes: 1 addition & 1 deletion supabase-openai-summary/README.md
@@ -29,7 +29,7 @@ pip install -r src/requirements.txt
## Connect to Your Deployed ZenML

In order to run a ZenML pipeline remotely (e.g. on the cloud), we first need to
-[deploy ZenML](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml). One of the
+[deploy ZenML](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml). One of the
easiest ways to do this is to [deploy ZenML with HuggingFace spaces](https://docs.zenml.io/deploying-zenml/zenml-self-hosted/deploy-using-huggingface-spaces).

Afterward, establish a connection with your deployed ZenML instance:
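
A typical connection command at the time, with a placeholder URL:

```bash
zenml connect --url https://your-zenml-server.example.com
```
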
2 changes: 1 addition & 1 deletion zen-news-summarization/README.md
@@ -127,7 +127,7 @@ and use the `VertexOrchestrator` to schedule the pipeline.

Before you start building the stack, you need to deploy ZenML on GCP. For more
information on how you can do that, please check
-[the corresponding docs page](https://docs.zenml.io/user-guide/production-guide/connect-deployed-zenml).
+[the corresponding docs page](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml).
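
A hypothetical stack setup for scheduled runs on Vertex AI, assuming the GCP integration is installed; the component names, project, region, and artifact store below are placeholders:

```bash
zenml integration install gcp
zenml orchestrator register vertex_orchestrator --flavor=vertex --project=<GCP_PROJECT_ID> --location=<GCP_REGION>
zenml stack register zen_news_stack -o vertex_orchestrator -a <REMOTE_ARTIFACT_STORE>
zenml stack set zen_news_stack
```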

## ZenNews Stack

