Commit

Simplify readme and vectordb installation steps

radare committed May 17, 2024
1 parent a7ca4f6 commit 10c3edd
Showing 4 changed files with 27 additions and 39 deletions.
20 changes: 14 additions & 6 deletions Makefile
@@ -36,21 +36,29 @@ venv:
if [ -z "`find venv | grep llama_cpp`" ]; then . venv/bin/activate ; pip install -r requirements.txt ; fi

deps: venv
test -n "${VIRTUAL_ENV}" || (echo "Run: . venv/bin/activate" ; exit 1)
export CMAKE_ARGS="-DLLAMA_METAL=on -DLLAMA_METAL_EMBED_LIBRARY=ON" && \
#test -n "${VIRTUAL_ENV}" || (echo "Run: . venv/bin/activate" ; exit 1)
. venv/bin/activate && export CMAKE_ARGS="-DLLAMA_METAL=on -DLLAMA_METAL_EMBED_LIBRARY=ON" && \
pip install --force-reinstall -U -r requirements.txt --no-cache-dir
+$(MAKE) vectordb

clean:
-rm -rf venv
+rm -rf venv vectordb vdb

mrproper:
$(MAKE) clean

deps-global:
export CMAKE_ARGS="-DLLAMA_METAL=on -DLLAMA_METAL_EMBED_LIBRARY=ON" && \
$(PIP) install --force-reinstall -U -r requirements.txt --break-system-packages --no-cache-dir

-vectordb:
-git clone https://github.com/kagisearch/vectordb
-cd vectordb && python setup.py build
+vdb vectordb:
+git clone https://github.com/kagisearch/vectordb vdb
+cat vdb/setup.py | grep -v tensorflow_text > .x && mv .x vdb/setup.py
+. venv/bin/activate \
+&& cd vdb \
+&& $(PIP) install setuptools tensorflow_hub \
+&& $(PYTHON) setup.py build \
+&& $(PYTHON) setup.py install

install user-install:
ln -fs $(PWD)/main.py $(R2_USER_PLUGINS)/r2ai.py
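
For anyone who prefers to run the steps by hand, the new `vdb` target amounts to roughly the following shell session (a sketch; it assumes the `./venv` directory from the `venv` target already exists and that the Makefile's `$(PIP)`/`$(PYTHON)` resolve to the venv's pip and python):

```bash
# rough manual equivalent of `make vdb` (sketch; run from the repository root)
git clone https://github.com/kagisearch/vectordb vdb
# drop the tensorflow_text dependency from setup.py before building
grep -v tensorflow_text vdb/setup.py > .x && mv .x vdb/setup.py
. venv/bin/activate
cd vdb
pip install setuptools tensorflow_hub
python setup.py build
python setup.py install
```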
44 changes: 12 additions & 32 deletions README.md
@@ -19,8 +19,9 @@ Run a language model in local, without internet, to entertain you or help answer
## Features

* Prompt the language model without internet requirements
* Use local GGUF or remote language models (via http)
+* Index large codebases or markdown books using a vector database
-* Slurp file contents and make actions on that
+* Slurp files and perform actions on them
* Embed the output of an r2 command and resolve questions on the given data
* Define different system-level assistant role
* Set environment variables to provide context to the language model
@@ -32,29 +33,11 @@ Run a language model in local, without internet, to entertain you or help answer

## Installation

-This is optional and system dependent, but on recent Debian/Ubuntu systems the `pip` tool no longer works because it conflicts with the system packages. The best way to do this is with `venv`:
+To use `r2ai`, you need to set up a venv and install all the Python dependencies inside it. Just running `make` or `r2pm -ci r2ai` should be enough.

-```bash
-python -m venv venv
-. venv/bin/activate
-pip install -r requirements.txt
-```

-Optionally if you want a better indexer for the data, install vectordb.
+You can optionally install vectordb to index local data by typing `make vdb`. If the process fails, run `rm -rf vdb` and try again.
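
In practice that boils down to (a sketch; `vdb` is the checkout directory the Makefile clones into):

```bash
make vdb            # clones kagisearch/vectordb into ./vdb and installs it into the venv
# if the build fails halfway, wipe the partial checkout and retry
rm -rf vdb && make vdb
```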

-```bash
-# on Linux
-pip install vectordb2
-
-# on macOS
-pip install vectordb2 spacy
-python -m spacy download en_core_web_sm
-brew install llvm
-export PATH=/opt/homebrew/Cellar/llvm/17.0.5/bin/:$PATH
-CC=clang CXX=clang++ pip install git+https://github.com/teemupitkanen/mrpt/
-```

-## r2pm installation
+## Running

When installed via r2pm, you can execute it like this:

@@ -69,17 +52,18 @@ r2pm -i rlang-python
make user-install
```

+After this you should get the `r2ai` command inside the radare2 shell. If it is not there, set the `R2_DEBUG=1` environment variable to see why the plugin was not loaded.
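
For example, a quick way to see the loader errors (a sketch; `/bin/ls` is just a sample binary to open):

```bash
# start radare2 with plugin debugging enabled to see why r2ai.py did not load
R2_DEBUG=1 r2 /bin/ls
```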

## Windows

-On native Windows follow these instructions (no need to install radare2 or use r2pm); note that you need Python 3.8 or higher:
+On Windows you may follow the same instructions; just ensure you have the right Python environment ready and create the venv to use.

```cmd
git clone https://github.com/radareorg/r2ai
cd r2ai
-set PATH=C:\Users\YOURUSERNAME\Local\Programs\Python\Python39\;%PATH%
-python -m pip -r requirements.txt
-python -m pip install pyreadline3
-python main.py
+python3 -m pip install -r requirements.txt
+python3 main.py
```

## Usage
@@ -117,14 +101,10 @@ You can interact with r2ai from standalone python, from r2pipe via r2 keeping a
### Development/Testing
-Just run `make` .. or well `python main.py`
+Just run `make` .. or well `python3 main.py`
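
A minimal sketch of that loop, assuming the venv from the installation step:

```bash
# run r2ai from source inside the venv
. venv/bin/activate
python3 main.py
```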
### TODO
* add "undo" command to drop the last message
* dump / restore conversational states (see -L command)
-* Implement `~`, `|` and `>`
-### Kudos
-The original code of r2ai is based on OpenInterpreter. I want to thank all the contributors to this project, as they made it possible to build r2ai taking their code as the source for this. Kudos to Killian and all the contributors.
+* Implement `~`, `|` and `>` and other r2shell features
1 change: 0 additions & 1 deletion r2ai/repl.py
@@ -2,7 +2,6 @@
from r2ai.models import set_default_model
from .utils import slurp
import traceback
-have_readline = False
from .const import R2AI_HISTFILE, R2AI_HOMEDIR, R2AI_RCFILE, R2AI_USERDIR
from .web import start_http_server
import r2ai
1 change: 1 addition & 0 deletions requirements.txt
@@ -1,4 +1,5 @@
rich
+pyreadline3
r2pipe
inquirer
llama-cpp-python
