[v0.6] Add Ollama support and update docs.
Showing 11 changed files with 498 additions and 26 deletions.

**README-EN.md**

@@ -1,5 +1,13 @@
# GitHub Sentinel

![GitHub stars](https://img.shields.io/github/stars/DjangoPeng/GitHubSentinel?style=social)
![GitHub forks](https://img.shields.io/github/forks/DjangoPeng/GitHubSentinel?style=social)
![GitHub watchers](https://img.shields.io/github/watchers/DjangoPeng/GitHubSentinel?style=social)
![GitHub repo size](https://img.shields.io/github/repo-size/DjangoPeng/GitHubSentinel)
![GitHub language count](https://img.shields.io/github/languages/count/DjangoPeng/GitHubSentinel)
![GitHub top language](https://img.shields.io/github/languages/top/DjangoPeng/GitHubSentinel)
![GitHub last commit](https://img.shields.io/github/last-commit/DjangoPeng/GitHubSentinel?color=red)

<p align="center">
    <br> English | <a href="README.md">中文</a>
</p>

@@ -24,7 +32,7 @@ pip install -r requirements.txt

### 2. Configure the Application

Edit the `config.json` file to set up your GitHub token, Email settings (e.g., Tencent Exmail), subscription file, update settings, and LLM settings (currently both the OpenAI GPT API and the Ollama REST API are supported):

```json
@@ -40,7 +48,13 @@
    "slack_webhook_url": "your_slack_webhook_url",
    "subscriptions_file": "subscriptions.json",
    "github_progress_frequency_days": 1,
    "github_progress_execution_time": "08:00",
    "llm": {
        "model_type": "openai",
        "openai_model_name": "gpt-4o-mini",
        "ollama_model_name": "llama3",
        "ollama_api_url": "http://localhost:11434/api/chat"
    }
}
```
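
Internally, the `model_type` field decides which backend receives each prompt. The sketch below shows one way such a dispatch could work; the `chat` helper is hypothetical (not the project's actual code) and assumes the `openai` and `requests` packages are installed:

```python
import requests
from openai import OpenAI


def chat(config: dict, prompt: str) -> str:
    """Hypothetical helper: route a prompt based on config["llm"]["model_type"]."""
    llm = config["llm"]
    if llm["model_type"] == "openai":
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model=llm["openai_model_name"],
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    # Otherwise call the Ollama REST API; "stream": False returns a single
    # JSON object instead of a stream of chunks.
    response = requests.post(llm["ollama_api_url"], json={
        "model": llm["ollama_model_name"],
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    })
    response.raise_for_status()
    return response.json()["message"]["content"]
```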

@@ -53,6 +67,10 @@

```shell
export GITHUB_TOKEN="github_pat_xxx"
export EMAIL_PASSWORD="password"
```
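
A config loader can then let the environment override the file, so credentials never have to live in `config.json` at all. A minimal sketch, assuming the email password sits under an `email` key (the project's real loader and schema may differ):

```python
import json
import os


def load_config(path: str = "config.json") -> dict:
    with open(path) as f:
        config = json.load(f)
    # Environment variables take precedence over values stored in the file.
    config["github_token"] = os.getenv("GITHUB_TOKEN", config.get("github_token"))
    if "email" in config:  # assumed layout; adjust to the actual schema
        config["email"]["password"] = os.getenv(
            "EMAIL_PASSWORD", config["email"].get("password")
        )
    return config
```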

#### Ollama: Installation and Deployment

[Ollama Installation and Deployment](docs/ollama.md)

### 3. How to Run

GitHub Sentinel supports the following three modes of operation:

@@ -117,5 +135,23 @@ To run the application with a Gradio interface, allowing users to interact with

```shell
python src/gradio_server.py
```

![gradio_demo](images/gradio_demo.png)

- This will start a web server on your machine, allowing you to manage subscriptions and generate reports through a user-friendly interface, as sketched below.
- By default, the Gradio server is accessible at `http://localhost:7860`, but you can share it publicly if needed.
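
At its core, such a server needs only a handful of Gradio calls. A simplified sketch (not the actual `src/gradio_server.py`; `generate_report` is a stand-in for the real fetch-and-summarize pipeline):

```python
import gradio as gr


def generate_report(repo: str) -> str:
    # Stand-in for the real flow: fetch GitHub progress, summarize via the LLM.
    return f"Progress report for {repo}"


demo = gr.Interface(
    fn=generate_report,
    inputs=gr.Textbox(label="Repository (owner/name)"),
    outputs=gr.Markdown(label="Report"),
)
demo.launch(server_port=7860)  # pass share=True to expose a public link
```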

## Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. If you have any suggestions or feature requests, please open an issue first to discuss what you would like to change.

<a href='https://github.com/repo-reviews/repo-reviews.github.io/blob/main/create.md' target="_blank"><img alt='Github' src='https://img.shields.io/badge/review_me-100000?style=flat&logo=Github&logoColor=white&labelColor=888888&color=555555'/></a>

## License

This project is licensed under the terms of the Apache-2.0 License. See the [LICENSE](LICENSE) file for details.

## Contact

Django Peng - [email protected]

Project Link: https://github.com/DjangoPeng/GitHubSentinel

**README.md**

@@ -1,5 +1,13 @@
# GitHub Sentinel

![GitHub stars](https://img.shields.io/github/stars/DjangoPeng/GitHubSentinel?style=social)
![GitHub forks](https://img.shields.io/github/forks/DjangoPeng/GitHubSentinel?style=social)
![GitHub watchers](https://img.shields.io/github/watchers/DjangoPeng/GitHubSentinel?style=social)
![GitHub repo size](https://img.shields.io/github/repo-size/DjangoPeng/GitHubSentinel)
![GitHub language count](https://img.shields.io/github/languages/count/DjangoPeng/GitHubSentinel)
![GitHub top language](https://img.shields.io/github/languages/top/DjangoPeng/GitHubSentinel)
![GitHub last commit](https://img.shields.io/github/last-commit/DjangoPeng/GitHubSentinel?color=red)

<p align="center">
    <br> <a href="README-EN.md">English</a> | 中文
</p>

@@ -24,7 +32,7 @@ pip install -r requirements.txt

### 2. Configure the Application

Edit the `config.json` file to set up your GitHub Token, Email settings (using Tencent Exmail as an example), subscription file, update settings, and LLM service settings (both the OpenAI GPT API and self-hosted Ollama model serving are supported):

```json
{
@@ -39,9 +47,16 @@
    "slack_webhook_url": "your_slack_webhook_url",
    "subscriptions_file": "subscriptions.json",
    "github_progress_frequency_days": 1,
    "github_progress_execution_time": "08:00",
    "llm": {
        "model_type": "openai",
        "openai_model_name": "gpt-4o-mini",
        "ollama_model_name": "llama3",
        "ollama_api_url": "http://localhost:11434/api/chat"
    }
}
```

**For security reasons:** both the GitHub Token and the Email Password can be supplied via environment variables, so that sensitive values are not stored in plain text, as shown below:

@@ -51,6 +66,12 @@

```shell
export GITHUB_TOKEN="github_pat_xxx"
export EMAIL_PASSWORD="password"
```

#### Ollama: Installation and Configuration

[Ollama Installation, Deployment, and Service Publishing](docs/ollama.md)

### 3. How to Run

GitHub Sentinel supports the following three modes of operation:

@@ -115,5 +136,24 @@ python src/command_tool.py

```shell
python src/gradio_server.py
```

![gradio_demo](images/gradio_demo.png)

- This will start a web server on your machine, allowing you to manage subscriptions and generate reports through a user-friendly interface.
- By default, the Gradio server is accessible at `http://localhost:7860`, but you can share it publicly if needed.

## Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. If you have any suggestions or feature requests, please open an issue first to discuss what you would like to change.

<a href='https://github.com/repo-reviews/repo-reviews.github.io/blob/main/create.md' target="_blank"><img alt='Github' src='https://img.shields.io/badge/review_me-100000?style=flat&logo=Github&logoColor=white&labelColor=888888&color=555555'/></a>

## License

This project is licensed under the terms of the Apache-2.0 License. See the [LICENSE](LICENSE) file for details.

## Contact

Django Peng - [email protected]

Project Link: https://github.com/DjangoPeng/GitHubSentinel

**docs/ollama.md**

@@ -0,0 +1,195 @@
## Ollama: Installation, Deployment, and Service Publishing

### Linux

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

[Manual installation instructions](https://github.com/ollama/ollama/blob/main/docs/linux.md)

### macOS

[Download](https://ollama.com/download/Ollama-darwin.zip)

### Windows (Preview)

[Download](https://ollama.com/download/OllamaSetup.exe)

---

## Quickstart

To run and chat with [Llama 3.1](https://ollama.com/library/llama3.1):

```bash
ollama run llama3.1
```

---

## Model Library

Ollama supports the range of models available at [ollama.com/library](https://ollama.com/library).

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download command               |
| ------------------ | ---------- | ----- | ------------------------------ |
| Llama 3.1          | 8B         | 4.7GB | `ollama run llama3.1`          |
| Llama 3.1          | 70B        | 40GB  | `ollama run llama3.1:70b`      |
| Llama 3.1          | 405B       | 231GB | `ollama run llama3.1:405b`     |
| Phi 3 Mini         | 3.8B       | 2.3GB | `ollama run phi3`              |
| Phi 3 Medium       | 14B        | 7.9GB | `ollama run phi3:medium`       |
| Gemma 2            | 2B         | 1.6GB | `ollama run gemma2:2b`         |
| Gemma 2            | 9B         | 5.5GB | `ollama run gemma2`            |
| Gemma 2            | 27B        | 16GB  | `ollama run gemma2:27b`        |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Moondream 2        | 1.4B       | 829MB | `ollama run moondream`         |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`             |
| Solar              | 10.7B      | 6.1GB | `ollama run solar`             |

---

### Command-Line Tool

#### Create a model

`ollama create` creates a model from a Modelfile.

```bash
ollama create mymodel -f ./Modelfile
```
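
A Modelfile is a plain-text recipe for the model. As an illustration, a minimal one might look like the following (the base model, parameter value, and system prompt are hypothetical examples, not values used by this project):

```
# Hypothetical Modelfile: base model plus a sampling parameter and a system prompt
FROM llama3.1
PARAMETER temperature 0.7
SYSTEM You are a concise assistant that summarizes GitHub project activity.
```

After `ollama create mymodel -f ./Modelfile`, the customized model can be served with `ollama run mymodel`.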

#### Pull a model

```bash
ollama pull llama3.1
```

> This command can also be used to update a local model; only the diff will be pulled.

#### Remove a model

```bash
ollama rm llama3.1
```

#### Copy a model

```bash
ollama cp llama3.1 my-model
```

#### Multiline input

For multiline input, wrap the text in `"""`:

```bash
>>> """Hello,
... world!
... """
```

This will output a simple program containing a "Hello, world!" message.

#### Multimodal models

```bash
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
```

The image shows a yellow smiley face, which is likely the central focus of the picture.

#### Pass the prompt as an argument

```bash
$ ollama run llama3.1 "Summarize this file: $(cat README.md)"
```

Ollama is a lightweight, extensible framework for building and running language models on the local machine.

---

### REST API

Ollama provides a REST API for running and managing models.

#### Generate a response

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?"
}'
```

#### Chat with a model

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ]
}'
```

See the [API documentation](./docs/api.md) for details on all endpoints.

---

### Docker Support

Ollama provides an official Docker image, `ollama/ollama`, available on Docker Hub.

#### Run with CPU

```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

#### Run with an Nvidia GPU

To use an Nvidia GPU, first install the NVIDIA Container Toolkit:

```bash
# Configure the repository
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update

# Install the NVIDIA Container Toolkit package
sudo apt-get install -y nvidia-container-toolkit

# Configure Docker to use the Nvidia driver
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Start the container:

```bash
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

#### Run with an AMD GPU

To run Ollama on an AMD GPU, use the `rocm` tag and run the following command:

```bash
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
```

### Run a Model Locally

Now you can run a model:

```bash
docker exec -it ollama ollama run llama3
```

---

Follow the steps above to install and configure Ollama, and use the CLI tools and Docker images to manage and run a variety of models. For more information, visit the [Ollama GitHub repository](https://github.com/ollama/ollama).