A sophisticated data compression and similarity search system that uses quantum-inspired techniques for efficient data storage and retrieval. The name "Parvus" comes from the Latin for "small", reflecting the system's primary purpose: reducing data size while preserving semantic meaning.
- Quantum-Inspired Compression: Utilizes advanced dimensionality reduction techniques
- Semantic Search: Performs similarity searches on compressed data
- GPU Acceleration: Supports GPU-accelerated processing for improved performance
- Interactive GUI: Streamlit-based UI for easy data manipulation
- Flexible Input: Supports both JSON and CSV file formats
```bash
# Basic installation
pip install parvus

# With GPU support
pip install parvus[gpu]

# For development
pip install parvus[dev]
```
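To confirm the installation, import the package and print its version (a minimal check, assuming the conventional `__version__` attribute, which the release checklist below says lives in `parvus/__init__.py`):

```python
import parvus

# Should print the installed version, e.g. 0.1.x
print(parvus.__version__)
```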
- Clone the repository:

```bash
git clone https://github.com/yourusername/parvus.git
cd parvus
```

- Install dependencies:

```bash
pip install -r requirements.txt
```
For GPU acceleration (recommended):

- Install the CUDA toolkit (if using an NVIDIA GPU)
- Install the GPU-enabled packages:

```bash
conda install -c pytorch faiss-gpu
```
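A quick way to verify that the GPU build of FAISS is active is to ask it how many GPUs it can see (a minimal sketch; `get_num_gpus` ships with the faiss-gpu build):

```python
import faiss

# faiss-gpu reports the number of visible GPUs;
# 0 means FAISS will fall back to CPU execution.
print(faiss.get_num_gpus())
```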
The package provides a command-line interface for basic operations:
```bash
python -m parvus --input data.json --output compressed_output
```
The input JSON file should have the following format:
```json
{
  "messages": [
    {
      "id": "1",
      "content": "Your text content here"
    }
  ]
}
```
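For example, a minimal sketch that writes a compliant input file (the `messages`, `id`, and `content` fields match the schema above; the sample texts are illustrative):

```python
import json

# Build a few records in the expected {"messages": [...]} shape.
messages = [
    {"id": str(i), "content": text}
    for i, text in enumerate(["First document", "Second document"], start=1)
]

with open("data.json", "w") as f:
    json.dump({"messages": messages}, f, indent=2)
```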
Query previously compressed data:

```bash
python -m parvus --load compressed_output --query "your search query"
```

Run the server:

```bash
python -m parvus --server
```

Launch the interactive GUI:

```bash
python -m parvus --gui
# or
python -m streamlit run gui.py
```
```python
from parvus import ParvusCompressor

# Initialize the system
compressor = ParvusCompressor()

# Load and compress data
embeddings = compressor.load_data_from_json('your_data.json')
compressor.compress(embeddings)

# Perform queries
results, distances = compressor.query("Your search query")
```
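Since `query` returns matches alongside their distances, a natural follow-up is to print them in ranked order. A sketch, assuming `results` is a sequence of matched items and `distances` aligns with it element by element:

```python
# Pair each match with its distance and print the closest first.
ranked = sorted(zip(results, distances), key=lambda pair: pair[1])
for rank, (item, dist) in enumerate(ranked, start=1):
    print(f"{rank}. (distance={dist:.4f}) {item}")
```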
- Python 3.8+
- RAM: 16GB recommended
- GPU: NVIDIA GPU with CUDA support (optional)
- Storage: Depends on dataset size
```
parvus/
├── parvus.py            # Core compression engine
├── gui.py               # Streamlit-based interface
├── server.py            # Server endpoints and API
├── requirements.txt     # Project dependencies
├── README.md            # Project documentation
├── CONTRIBUTING.md      # Contribution guidelines
├── data/                # Sample and test data
│   ├── large_chat_history.json
│   ├── sample_data.csv
│   ├── sample_data.npy
│   └── test_data.json
├── models/              # Saved model states
│   ├── compressed_data.pkl
│   └── faiss_index.bin
├── tests/               # Test files and artifacts
│   ├── test_compressed.pkl
│   └── test_index.bin
└── archive/             # Archived/deprecated files
```
- Core Files:
  - `parvus.py`: Main compression engine implementing quantum-inspired algorithms
  - `gui.py`: Interactive web interface built with Streamlit
  - `server.py`: Server endpoints and API for integration
  - `requirements.txt`: Project dependencies
- Data Directory:
  - Contains sample data files
  - Test datasets
  - JSON and CSV examples
- Models Directory:
  - Saved compression states
  - FAISS indices
  - Serialized model data
- Tests Directory:
  - Test artifacts
  - Compressed test data
  - Test indices
The system consists of three main components:
- Core Compression Engine (`parvus.py`)
  - Handles data compression and decompression
  - Manages similarity search operations (see the sketch after this list)
  - Provides GPU acceleration when available
- Interactive Interface (`gui.py`)
  - Web-based user interface
  - File upload and management
  - Query interface
  - Results visualization
- Server API (`server.py`)
  - RESTful API endpoints
  - Data management
  - Remote compression operations
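For intuition, here is a minimal sketch of the general pattern the engine description implies: reduce embedding dimensionality, then index the reduced vectors for similarity search. PCA and a flat FAISS index are stand-ins here; this is not Parvus's actual algorithm, and all sizes are illustrative:

```python
import numpy as np
import faiss

# Toy embeddings: 1000 vectors of dimension 384 (illustrative sizes).
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 384)).astype("float32")

# Dimensionality reduction (PCA as a stand-in for the
# quantum-inspired reduction): 384 -> 64 dimensions.
pca = faiss.PCAMatrix(384, 64)
pca.train(embeddings)
reduced = pca.apply(embeddings)

# Index the reduced vectors for exact L2 similarity search.
index = faiss.IndexFlatL2(64)
index.add(reduced)

# Querying: reduce the query the same way, then search.
query = pca.apply(rng.standard_normal((1, 384)).astype("float32"))
distances, ids = index.search(query, 5)
print(ids[0], distances[0])
```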
Performance metrics on a typical dataset:
- Compression Ratio: ~5x (varies by data)
- Query Time: <100ms (with GPU)
- Memory Usage: Proportional to dataset size
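To reproduce the query-time figure on your own data, a simple timing loop around `query` is enough. A sketch, assuming `compressor` has already been initialized and fed data as in the Python usage example above:

```python
import time

# Average the latency over many runs to smooth out noise.
n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    compressor.query("Your search query")
mean_ms = (time.perf_counter() - start) * 1000 / n_runs
print(f"mean query latency: {mean_ms:.1f} ms")
```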
To publish a new version to PyPI:
- Update the version number in:
  - `setup.py`
  - `parvus/__init__.py`
- Create and push a new tag:

```bash
git tag v0.1.x
git push origin v0.1.x
```

- Go to GitHub and create a new release using the tag. The GitHub Action will automatically build and publish to PyPI.
This project is licensed under the MIT License - see the LICENSE file for details.
Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.
For support, please open an issue in the GitHub repository or contact the maintainers.