polish readme for demo #87

Merged 1 commit on Nov 20, 2023
README.md (8 changes: 7 additions & 1 deletion)
@@ -136,6 +136,11 @@
```
pip3 uninstall -y torch
pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 # cu121 means cuda 12.1
```
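
To verify the install, one quick sanity check is to print the torch version and confirm CUDA is visible (this assumes a CUDA-capable GPU and driver are present):

```
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```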

Now that installation is complete, try modifying one of the examples and running the command below:

```
python -m example.retrieval_qa.retrieval_qa_huggingface_demo
```

### Frontend Dev Setup
```
@@ -148,7 +153,8 @@
npm run build
```
If you are on EC2, you can launch a GPU instance with the following config:
- EC2 `g5.2xlarge` (if you want to run a pretrained LLM with 7B parameters)
- Deep Learning AMI PyTorch GPU 2.0.1 (Ubuntu 20.04)
- <img src="example/image/readme_ec2_ami.jpg" alt="Alt text" width="50%" height="50%"/>
+ <img src="example/image/readme_ec2_ami.jpg" alt="Alt text" width="75%" height="75%"/>
- EBS: at least 100G

<img src="example/image/readme_ec2_storage.png" alt="Alt text" width="50%" height="50%"/>
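
If you prefer launching from the command line, here is a minimal AWS CLI sketch of the same config; the AMI ID and key pair name are placeholders you would substitute for your own region and account:

```
# Placeholder AMI ID: look up the Deep Learning AMI PyTorch GPU 2.0.1
# (Ubuntu 20.04) ID for your region, e.g. in the EC2 console.
aws ec2 run-instances \
    --instance-type g5.2xlarge \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --key-name my-key-pair \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100}}]'
```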

docker/README.md (2 changes: 2 additions & 0 deletions)
@@ -125,6 +125,8 @@
For example, here is a command to run `cambioml/pykoi` version `0.1_ec2_linux`:
```
docker run -d -e RETRIEVAL_MODEL=mistralai/Mistral-7B-v0.1 -p 5000:5000 --gpus all --name pykoi_test cambioml/pykoi:0.1_ec2_linux
```

***Note: this command may take a few minutes*** since it loads an LLM.
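
One way to tell when it has finished loading is to poll the mapped port until the server responds (this assumes the app serves HTTP on port 5000, as mapped above):

```
# Poll until the server responds (assumes HTTP on the mapped port 5000)
until curl -sf http://localhost:5000 > /dev/null; do echo "still loading..."; sleep 10; done
```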

If you are running it in the background with the `-d` flag, you can check the logs using the following command:
```
docker logs [CONTAINER_NAME]
```
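
For example, with the container started above, you can follow the log stream while the model loads (`-f` streams new output):

```
docker logs -f pykoi_test
```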