PYPI multiple installation options #62

Merged 5 commits on Sep 19, 2023
75 changes: 58 additions & 17 deletions README.md
@@ -42,40 +42,50 @@
Reinforcement Learning with Human Feedback (RLHF) is a unique training paradigm


## Installation
To get started with `pykoi`, choose one of the installation options below based on the features you need (e.g., RAG, RLHF, or all of them) and the compute you have: a CPU (e.g., your laptop) or a GPU (e.g., AWS EC2 or SageMaker).

### Option 1: RAG (CPU)
This option allows you to run RAG on a CPU using either the OpenAI API or the Anthropic Claude 2 API. Installation is simple if you have conda; if not, install [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html) for your operating system.

First, create a conda environment on your terminal using:
```
conda create -n pykoi python=3.10 -y
conda activate pykoi  # some OSes require `source activate pykoi`
```

Then install `pykoi` and the compatible [pytorch based on your OS](https://pytorch.org/get-started):
```
pip3 install pykoi[rag]  # on zsh, quote the extras: pip3 install 'pykoi[rag]'
pip3 install torch
```
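Which `pip` extras you need follows directly from the feature/compute combination described above. The sketch below is a hypothetical helper (not part of `pykoi` itself) that encodes the mapping used by Options 1–3:

```python
def pykoi_pip_command(feature: str, gpu: bool) -> str:
    """Return the pip command matching the installation options.

    feature: "rag" or "rlhf"; gpu: whether a CUDA GPU is available.
    """
    if feature == "rag":
        # Option 1 (CPU, API-based RAG) vs. Option 2 (GPU, HuggingFace LLMs)
        return "pip3 install pykoi[huggingface]" if gpu else "pip3 install pykoi[rag]"
    if feature == "rlhf":
        if not gpu:
            raise ValueError("RLHF training (Option 3) requires a GPU")
        return "pip3 install pykoi[rlhf]"
    raise ValueError(f"unknown feature: {feature!r}")

print(pykoi_pip_command("rag", gpu=False))
# → pip3 install pykoi[rag]
```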

### Option 2: RAG (GPU)
This option allows you to run RAG on a GPU using an open-source LLM from HuggingFace. Here's a quick [tutorial](#ec2-dev-setup) on setting up an EC2 GPU instance for the installation below.

On your GPU instance terminal, create a conda environment using:
```
conda create -n pykoi python=3.10 -y && source activate pykoi
```

Then install `pykoi` and [pytorch based on your CUDA version](https://pytorch.org/get-started). You can find your CUDA version via `nvcc -V`:
```
pip3 install pykoi[huggingface]

# install torch based on cuda (e.g. cu118 means cuda 11.8)
pip3 install torch --index-url https://download.pytorch.org/whl/cu118
```
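The wheel tag in the index URL follows a simple pattern (`cu118` ⇔ CUDA 11.8). A small, hypothetical helper that derives the PyTorch index URL from the version reported by `nvcc -V`:

```python
def torch_index_url(cuda_version: str) -> str:
    """Map a CUDA version string (e.g. "11.8") to the matching
    PyTorch wheel index URL (e.g. .../whl/cu118)."""
    major, minor = cuda_version.split(".")[:2]
    return f"https://download.pytorch.org/whl/cu{major}{minor}"

print(torch_index_url("11.8"))
# → https://download.pytorch.org/whl/cu118
```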

### Option 3: RLHF (GPU)
This option allows you to train an LLM via RLHF on a GPU. Here's a quick [tutorial](#ec2-dev-setup) on setting up an EC2 GPU instance for the installation below.

On your GPU instance terminal, create a conda environment using:
```
conda create -n pykoi python=3.10 -y && source activate pykoi
```

Then install `pykoi` and [pytorch based on your CUDA version](https://pytorch.org/get-started). You can find your CUDA version via `nvcc -V`.
```
pip3 install pykoi[rlhf]

# install torch based on cuda (e.g. cu118 means cuda 11.8)
pip3 install torch --index-url https://download.pytorch.org/whl/cu118
@@ -92,13 +102,44 @@
```
conda create -n pykoi python=3.10
conda activate pykoi
cd pykoi
pip3 install poetry
```

Then, based on the feature you need to develop, run one or more of the installation options below. We recommend installing all of them, although it may take ~3 minutes longer.

- Option 1: RAG (CPU)
```
poetry install --no-root --extras rag
```
- Option 2: RAG (GPU)
```
poetry install --no-root --extras huggingface
```
- Option 3: RLHF (GPU)
```
poetry install --no-root --extras rlhf
```
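If you do install all three feature sets as recommended, poetry accepts a repeated `--extras` flag, so one invocation covers everything. A hypothetical, stdlib-only helper that composes such a command from the extras named above:

```python
def poetry_install_command(extras: list[str]) -> str:
    """Compose one `poetry install` invocation enabling several extras.

    Extras names (rag, huggingface, rlhf) follow the options above.
    """
    flags = " ".join(f"--extras {name}" for name in extras)
    return f"poetry install --no-root {flags}".strip()

print(poetry_install_command(["rag", "huggingface", "rlhf"]))
# → poetry install --no-root --extras rag --extras huggingface --extras rlhf
```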

Finally, if you are on a GPU, install `pykoi` and [pytorch based on your CUDA version](https://pytorch.org/get-started). You can find your CUDA version via `nvcc -V`.
```
pip3 install pykoi[huggingface]

# install torch based on cuda (e.g. cu118 means cuda 11.8)
pip3 install torch --index-url https://download.pytorch.org/whl/cu118
```


### Frontend Dev Setup
To build the frontend:
```
cd pykoi/pykoi/frontend
npm install
npm run build
```

### EC2 Dev Setup
If you are on EC2, you can launch a GPU instance with the following config:
- EC2 `g4dn.xlarge` (if you want to run a pretrained LLM with 7B parameters)
- Deep Learning AMI PyTorch GPU 2.0.1 (Ubuntu 20.04)
<img src="example/image/readme_ec2_ami.jpg" alt="Alt text" width="50%" height="50%"/>
- EBS: at least 100 GB
<img src="example/image/readme_ec2_storage.png" alt="Alt text" width="50%" height="50%"/>

34 changes: 22 additions & 12 deletions example/chatbot/demo_launch_app_cpu_openai.ipynb
@@ -1,5 +1,24 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Launch an LLM UI and a Database via OpenAI\n",
"\n",
"`pykoi` provides a simple UI to launch a chatbot based on your LLMs, including your own finetuned LLM, a pretrained LLM from huggingface, or the OpenAI/Anthropic/Bedrock APIs. This demo shows how to create and launch an LLM chatbot UI and database for the OpenAI/Anthropic/Bedrock APIs. Let's get started!\n",
"\n",
"\n",
"### Prerequisites\n",
"To run this jupyter notebook, you need a `pykoi` environment with the `rag` option. You can follow [the installation guide](https://github.com/CambioML/pykoi/tree/install#option-1-rag-cpu) to set up the environment. \n",
"\n",
"You may also need `pip install ipykernel` to run the kernel environment.\n",
"\n",
"\n",
"### (Optional) Developer setup\n",
"If you are a normal user of `pykoi`, you can skip this step. However, if you modify the pykoi code and want to test your changes, you can uncomment the code below."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -9,19 +28,11 @@
"# %reload_ext autoreload\n",
"# %autoreload 2\n",
"\n",
"# sys.path.append(\".\")\n",
"# sys.path.append(\"..\")\n",
"# sys.path.append(\"../..\")"
]
},
{
@@ -30,7 +41,6 @@
"metadata": {},
"outputs": [],
"source": [
"from pykoi import Application\n",
"from pykoi.chat import ModelFactory\n",
"from pykoi.chat import QuestionAnswerDatabase\n",
@@ -1,4 +1,18 @@
"""
Demo for the chatbot application using OpenAI endpoint.

- Prerequisites:
To run this demo, you need a `pykoi` environment with the `rag` option.
You can follow [the installation guide](https://github.com/CambioML/pykoi/tree/install#option-1-rag-cpu)
to set up the environment.
- Run the demo:
1. Enter your OpenAI API key in the `api_key` below.
    2. In a terminal, from the `~/pykoi` directory, run
```
python -m example.chatbot.demo_launch_app_cpu_openai
```
"""

from pykoi import Application
from pykoi.chat import ModelFactory
from pykoi.chat import QuestionAnswerDatabase
52 changes: 29 additions & 23 deletions example/chatbot/demo_model_comparator_cpu_openai.ipynb
@@ -1,5 +1,23 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Demo: LLMs Comparison via OpenAI\n",
"\n",
"`pykoi` provides a simple API to compare LLMs, including your own finetuned LLM, a pretrained LLM from huggingface, or the OpenAI/Anthropic/Bedrock APIs. This demo shows how to create and launch an LLM comparison app for the OpenAI/Anthropic/Bedrock APIs. Let's get started!\n",
"\n",
"### Prerequisites\n",
"To run this jupyter notebook, you need a `pykoi` environment with the `rag` option. You can follow [the installation guide](https://github.com/CambioML/pykoi/tree/install#option-1-rag-cpu) to set up the environment. \n",
"\n",
"You may also need `pip install ipykernel` to run the kernel environment.\n",
"\n",
"\n",
"### (Optional) Developer setup\n",
"If you are a normal user of `pykoi`, you can skip this step. However, if you modify the pykoi code and want to test your changes, you can uncomment the code below."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -9,19 +27,11 @@
"# %reload_ext autoreload\n",
"# %autoreload 2\n",
"\n",
"# sys.path.append(\".\")\n",
"# sys.path.append(\"..\")\n",
"# sys.path.append(\"../..\")"
]
},
{
@@ -39,13 +49,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load LLMs\n",
"\n",
"#### 1. Creating an OpenAI model (requires an OpenAI API key)"
]
@@ -108,6 +114,13 @@
"nest_asyncio.apply()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Launch the App"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -137,13 +150,6 @@
" <img src=\"../image/comparisonDemoSmall_2x.gif\" width=\"75%\" height=\"75%\" />\n",
"</p>"
]
}
],
"metadata": {
16 changes: 15 additions & 1 deletion example/chatbot/demo_model_comparator_cpu_openai.py
@@ -1,4 +1,18 @@
"""
Demo for the chatbot application using multiple OpenAI models.

- Prerequisites:
    To run this demo, you need a `pykoi` environment with the `rag` option.
You can follow [the installation guide](https://github.com/CambioML/pykoi/tree/install#option-1-rag-cpu)
to set up the environment.
- Run the demo:
1. Enter your OpenAI API key in the `api_key` below.
    2. In a terminal, from the `~/pykoi` directory, run
```
python -m example.chatbot.demo_model_comparator_cpu_openai
```
"""

from pykoi import Application
from pykoi.chat import ModelFactory
from pykoi.component import Compare