Welcome to the Offline Wikipedia Text API! This project provides a simple way to search and retrieve Wikipedia articles from an offline dataset using the txtai
library. The API offers endpoints to get full articles by title, full articles by search prompt, summary snippets of articles by search prompt, and the single top-scoring article for a prompt.
- Offline Access: All Wikipedia article texts are stored offline, allowing for fast and private access.
- Search Functionality: Uses the powerful txtai library to search for articles by prompt.
- This project requires a minimum of 60GB of hard disk space to store the related datasets
- This project utilizes Git to pull down the needed datasets (https://git-scm.com/downloads)
- This can be skipped by downloading the datasets into their respective folders in the project directory.
- "wiki-dataset" folder: https://huggingface.co/datasets/NeuML/wikipedia-20240101
- "txtai-wikipedia" folder: https://huggingface.co/NeuML/txtai-wikipedia
- If both dataset folders already exist, the git calls are skipped entirely (a sketch of that check is shown below).
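As a rough illustration of that skip logic, here is a minimal Python sketch (not the project's actual scripts; the folder names and URLs are taken from the list above):

```python
import os
import subprocess

# Clone each dataset only if its folder is not already present.
datasets = [
    ("wiki-dataset", "https://huggingface.co/datasets/NeuML/wikipedia-20240101"),
    ("txtai-wikipedia", "https://huggingface.co/NeuML/txtai-wikipedia"),
]

for folder, repo in datasets:
    if os.path.isdir(folder):
        print(f"{folder} already exists; skipping git clone")
    else:
        subprocess.run(["git", "clone", repo, folder], check=True)
```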
- This is a Python project and requires Python to run.
There ARE scripts for Mac and Linux, but they are in the "Untested" folder for two reasons:
- A) On Mac, I ran into an issue where the Xcode-supplied git doesn't handle large files well, so that script can't cleanly download the Wikipedia datasets. Once the datasets are in their respective locations, the script works great. You can find more in the "Untested" folder README.
- B) I don't have a Linux machine to test with. A couple of people have told me it works fine, so I expect it will.
During first run, the app will first download about 60GB worth of datasets (see above), and then will take about 10-15 minutes to do some indexing. This will only occur on first run; just let it do its thing. If, for any reason, you kill the process halfway through and need to redo it, you can simply delete the "title_to_index.json" file and it will be recreated. You can also delete the "wiki-dataset" and "txtai-wikipedia" folders to redownload.
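For the curious, that indexing step amounts to building a title-to-row lookup over the offline dataset. A rough sketch of the idea (not the project's exact code; it assumes the Hugging Face datasets package and a "title" column in wiki-dataset):

```python
import json
import os

from datasets import load_from_disk

# Rebuild the cached title -> row-index map if it is missing.
if not os.path.exists("title_to_index.json"):
    wiki = load_from_disk("wiki-dataset")["train"]
    title_to_index = {row["title"]: i for i, row in enumerate(wiki)}
    with open("title_to_index.json", "w", encoding="utf-8") as f:
        json.dump(title_to_index, f)
```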
If you're dataset savvy and want to make new, more up-to-date datasets to use with this, NeuML's Hugging Face repos give instructions on how.
This project relies heavily on txtai, which uses various libraries to download and utilize small models itself for searching. Please see that project for an understanding of what gets downloaded and where.
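For a sense of what that search looks like, here is a minimal txtai sketch (not the project's code; it assumes the txtai-wikipedia index from the installation steps below, and that the index exposes a popularity percentile field, which the API's percentile parameter suggests):

```python
from txtai import Embeddings

# Load the prebuilt Wikipedia embeddings index from the local folder.
embeddings = Embeddings()
embeddings.load("txtai-wikipedia")

# SQL-style search: articles similar to the prompt, restricted to pages
# at or above the 50th popularity percentile.
results = embeddings.search(
    "SELECT id, text, score FROM txtai "
    "WHERE similar('Quantum Physics') AND percentile >= 0.5",
    5,
)
for result in results:
    print(result["id"], round(result["score"], 3))
```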
- Clone the Repository
git clone https://github.com/SomeOddCodeGuy/OfflineWikipediaTextApi
cd OfflineWikipediaTextApi
- Run the API
- For Windows:
run_windows.bat
- For Linux or MacOS:
There are currently scripts within "Untested", though there is a known issue for MacOS related to git. A workaround is presented in the README for that folder.
- Pull down the code from https://github.com/SomeOddCodeGuy/OfflineWikipediaTextApi
git clone https://github.com/SomeOddCodeGuy/OfflineWikipediaTextApi
- Open command prompt and navigate to the folder containing the code
cd OfflineWikipediaTextApi
- Optional: create a python virtual environment.
- Windows:
python -m venv venv
- MacOS:
python3 -m venv venv
- Linux:
python -m venv venv
- Optional: activate the Python virtual environment.
- Windows:
venv\Scripts\activate
- MacOS/Linux:
source venv/bin/activate
- Fish shell:
source venv/bin/activate.fish
- Pip install the requirements from requirements.txt
- Windows:
python -m pip install -r requirements.txt
- MacOS:
python3 -m pip install -r requirements.txt
- Linux:
python -m pip install -r requirements.txt
- Pull down the two needed datasets into the following folders within the project folder:
- "wiki-dataset" folder: https://huggingface.co/datasets/NeuML/wikipedia-20240901
You will need git-lfs installed to clone it:
- Windows: https://git-lfs.com/
- Mac: https://git-lfs.com/ or brew install git-lfs
- Linux (Ubuntu/Debian): sudo apt install git-lfs
Then run:
git lfs install
git clone https://huggingface.co/datasets/NeuML/wikipedia-20240901
The dataset folder must be called wiki-dataset, so rename it:
mv wikipedia-20240901 wiki-dataset
- "txtai-wikipedia" folder: https://huggingface.co/NeuML/txtai-wikipedia
git clone https://huggingface.co/NeuML/txtai-wikipedia
- See project structure below to make sure you did it right
- Run start_api.py
- Windows: python start_api.py
- MacOS/Linux: python3 start_api.py
Step 7 will take 10-15 minutes on the first run only; this is to index some things for future runs. After that it should be fast.
Your project should look like this:
- OfflineWikipediaTextApi/
- wiki-dataset/
- train/
- data-00000-of-00044.arrow
- data-00001-of-00044.arrow
- ...
- pageviews.sqlite
- README.md
- txtai-wikipedia/
- config.json
- documents
- embeddings
- README.md
- start_api.py
- ...
The API configuration is managed through the config.json file:
{
"host": "0.0.0.0",
"port": 5728,
"verbose": false
}
The "verbose" is for changing whether the API library uvicorn outputs all logs vs just warning logs. Set to warning by default.
Endpoint: /top_article
curl -G "http://localhost:5728/top_article" --data-urlencode "prompt=Quantum Physics" --data-urlencode "percentile=0.5" --data-urlencode "num_results=10"
NOTE: The num_results for top_article is the number of results to compare to find the top article. This endpoint always returns a single result, but the higher your num_results, the more articles it will compare in an attempt to find the top-scoring one.
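The same call from Python, as a sketch using the third-party requests package (the /summaries and /articles endpoints below accept the same query parameters):

```python
import requests

# num_results widens the candidate pool that is compared; the endpoint
# still returns only the single top-scoring article.
response = requests.get(
    "http://localhost:5728/top_article",
    params={"prompt": "Quantum Physics", "percentile": 0.5, "num_results": 10},
)
print(response.json())
```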
Endpoint: /articles/{title}
curl -X GET "http://localhost:5728/articles/Applications%20of%20quantum%20mechanics"
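From Python, the title needs the same percent-encoding the curl example uses (again a sketch with the requests package):

```python
import requests
from urllib.parse import quote

# Percent-encode the title so spaces become %20 in the URL path.
title = "Applications of quantum mechanics"
response = requests.get(f"http://localhost:5728/articles/{quote(title)}")
print(response.json())
```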
Endpoint: /summaries
curl -G "http://localhost:5728/summaries" --data-urlencode "prompt=Quantum Physics" --data-urlencode "percentile=0.5" --data-urlencode "num_results=1"
Endpoint: /articles
curl -G "http://localhost:5728/articles" --data-urlencode "prompt=Artificial Intelligence" --data-urlencode "percentile=0.5" --data-urlencode "num_results=1"
This project is licensed under the Apache 2.0 License. See the LICENSE file for more details.
This project imports the dependencies listed in requirements.txt. Please see the ThirdParty-Licenses directory for details on their licenses.
OfflineWikipediaTextApi
Copyright (C) 2024 Christopher Smith