Assistant is a modular, AI-powered desktop voice assistant that uses speech recognition and natural language processing to execute commands and interact with your computer.
- Voice-activated commands
- AI-powered speech recognition using Whisper
- Natural language processing for command interpretation
- Modular architecture for easy extension and customization
- Desktop integration for various actions (typing, speaking, window information)
- `trigger-script.py`: Listens for BLE signals to start/stop recording
- `ear.py`: Handles audio recording based on trigger signals
- `whisper-server.py`: Transcribes audio using the Whisper model
- `brain.py`: Processes transcribed commands and executes actions
- `voice.py`: Converts text to speech using espeak
- Clone the repository:

  ```bash
  git clone https://github.com/ruapotato/assistant.git
  cd assistant
  ```

- Create a virtual environment:

  ```bash
  python3 -m venv pyenv
  ```

- Activate the virtual environment:

  ```bash
  source pyenv/bin/activate
  ```

- Install the required packages:

  ```bash
  pip install numpy torch transformers sounddevice bleak flask ollama
  pip install 'accelerate>=0.26.0'
  ```
- Install system dependencies:
  - Install `espeak` for text-to-speech functionality:

    ```bash
    sudo apt-get install espeak
    ```

  - Install `xdotool` for simulating keyboard input:

    ```bash
    sudo apt-get install xdotool
    ```

  - Ensure you have the necessary Bluetooth libraries installed for BLE functionality.
- Start each component in a separate terminal window:

  ```bash
  python trigger-script.py
  python ear.py
  python whisper-server.py
  python brain.py
  python voice.py
  ```
- The system will now listen for voice commands through your configured BLE device.
- When a command is detected, it will be processed, and the appropriate action will be taken.
- The assistant will respond verbally using the espeak text-to-speech engine.
- `trigger-script.py` listens for BLE signals and writes "START" or "STOP" to the trigger file.
- `ear.py` monitors the trigger file, starts/stops recording based on its content, and saves the audio.
- `whisper-server.py` transcribes the recorded audio using the Whisper model.
- `brain.py` processes the transcribed commands, interacts with the LLM, and generates responses.
- `voice.py` monitors the voice directory for new text files, converts them to speech using espeak, and removes processed files.
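The components communicate through plain files on disk rather than sockets or queues. Below is a minimal sketch of the polling pattern they share, assuming a simple read-and-compare loop (the actual scripts may structure this differently):

```python
import time
from pathlib import Path

TRIGGER = Path("./trigger")

def watch_trigger(poll_interval: float = 0.1):
    """Yield the trigger file's content each time it changes."""
    last = None
    while True:
        if TRIGGER.exists():
            state = TRIGGER.read_text().strip()
            if state != last:
                last = state
                yield state  # "START" or "STOP"
        time.sleep(poll_interval)

# ear.py-style consumer: react to each state change.
for state in watch_trigger():
    print("trigger is now", state)
```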
Here's a detailed breakdown of how a command flows through the system:
- **Button Press:**
  - A button is pressed on the BLE device.
  - The BLE device sends a signal to start recording.
- **Trigger Detection:**
  - `trigger-script.py` detects the BLE signal.
  - It writes "START" to the `./trigger` file.
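As an illustration of this step, BLE button notifications with the `bleak` library might be handled as below; the device address, characteristic UUID, and payload convention are assumptions, not values from the repository:

```python
import asyncio
from pathlib import Path
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"  # hypothetical device address
BUTTON_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"  # hypothetical characteristic
TRIGGER = Path("./trigger")

def on_notify(_sender, data: bytearray):
    # Assumed convention: nonzero first byte = pressed, zero = released.
    TRIGGER.write_text("START" if data and data[0] else "STOP")

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(BUTTON_UUID, on_notify)
        await asyncio.Event().wait()  # keep listening until interrupted

asyncio.run(main())
```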
- **Recording Initiation:**
  - `ear.py` continuously monitors the `./trigger` file.
  - When it sees "START", it begins recording audio.
- **Audio Capture:**
  - The user speaks a command.
  - `ear.py` captures the audio and saves it as `./audio.raw`.
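A minimal capture loop along these lines is possible with `sounddevice`; the 16 kHz mono int16 format is an assumption about what `ear.py` records (Whisper models are typically fed 16 kHz audio):

```python
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16000  # assumed sample rate
chunks = []

def callback(indata, frames, time, status):
    chunks.append(indata.copy())  # accumulate samples while recording

with sd.InputStream(samplerate=SAMPLE_RATE, channels=1,
                    dtype="int16", callback=callback):
    input("Recording... press Enter to stop (stand-in for the STOP trigger)")

audio = np.concatenate(chunks)
audio.tofile("./audio.raw")  # headerless PCM, matching the file name
```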
- **Recording Termination:**
  - The button is released, sending a stop signal.
  - `trigger-script.py` writes "STOP" to the `./trigger` file.
  - `ear.py` stops recording and writes the path `./audio.raw` to `./whisper_input`.
- **Speech-to-Text:**
  - `whisper-server.py` monitors the `./whisper_input` file.
  - When a new input is detected, it reads the audio file.
  - It uses the Whisper model to transcribe the audio to text.
  - The transcription is written to the `./heard` file.
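Given that `transformers` and `torch` are in the requirements, transcription plausibly runs through the Hugging Face ASR pipeline; here is a sketch, with the checkpoint name and raw-PCM format as assumptions:

```python
import numpy as np
from transformers import pipeline

SAMPLE_RATE = 16000  # must match the rate ear.py recorded at
asr = pipeline("automatic-speech-recognition",
               model="openai/whisper-base")  # hypothetical checkpoint

# ./audio.raw is headerless PCM, so load and scale it manually.
pcm = np.fromfile("./audio.raw", dtype=np.int16)
audio = pcm.astype(np.float32) / 32768.0

result = asr({"sampling_rate": SAMPLE_RATE, "raw": audio})
with open("./heard", "w") as f:
    f.write(result["text"].strip())
```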
- **Command Processing:**
  - `brain.py` monitors the `./heard` file for new content.
  - When new text appears, it reads and processes the command.
  - It interacts with the LLM to generate an appropriate response.
  - The response is formatted with `<say>` and `<type>` tags as needed.
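Since the `ollama` Python client is among the requirements, the LLM call and tag parsing might look like this sketch; the model name and prompt wording are assumptions, not the repository's actual values:

```python
import re
import ollama

SYSTEM_PROMPT = (  # hypothetical; the real prompt lives in brain.py
    "You are a desktop assistant. Wrap anything to speak aloud in "
    "<say>...</say> and anything to type in <type>...</type>."
)

def process(command: str):
    reply = ollama.chat(model="llama3",  # assumed model name
                        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                                  {"role": "user", "content": command}])
    content = reply["message"]["content"]
    to_say = re.findall(r"<say>(.*?)</say>", content, re.DOTALL)
    to_type = re.findall(r"<type>(.*?)</type>", content, re.DOTALL)
    return to_say, to_type
```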
- **Action Execution:**
  - `brain.py` executes any actions specified in the response (e.g., typing text).
  - It writes any speech content to a new text file in the `./voice/` directory.
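For example, typing can be simulated with `xdotool` and speech queued as a file; a sketch, with the timestamped file naming as an assumption:

```python
import subprocess
import time
from pathlib import Path

def type_text(text: str):
    # xdotool sends keystrokes to the focused window (X11 only).
    subprocess.run(["xdotool", "type", "--delay", "50", text], check=True)

def queue_speech(text: str):
    # Unique, sortable name so voice.py can speak utterances in order.
    out = Path("./voice") / f"{time.time_ns()}.txt"
    out.parent.mkdir(exist_ok=True)
    out.write_text(text)
```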
- **Text-to-Speech:**
  - `voice.py` continuously monitors the `./voice/` directory.
  - When a new text file appears, it reads its content.
  - It uses espeak to convert the text to speech, speaking the response.
  - After speaking, it removes the processed text file.
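The whole watcher reduces to a few lines; here is a minimal sketch of that loop (the polling interval is an assumption):

```python
import subprocess
import time
from pathlib import Path

VOICE_DIR = Path("./voice")

while True:
    for txt in sorted(VOICE_DIR.glob("*.txt")):
        subprocess.run(["espeak", txt.read_text()], check=True)
        txt.unlink()  # delete once spoken so it is not repeated
    time.sleep(0.2)
```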
- **Cycle Completion:**
  - The system returns to its idle state, ready for the next command.
This process repeats for each new command, allowing for continuous interaction with the assistant.
To add new functionalities:
- Modify `brain.py` to include new actions or tools.
- Update the LLM prompts in `brain.py` to handle new types of commands.
- If necessary, create new modules and integrate them into the existing architecture.
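As an illustration, a new action could reuse the existing tag convention; everything below (the `<open>` tag, the helper, the prompt line) is hypothetical and not part of the current codebase:

```python
import re
import subprocess

def handle_open_tags(content: str):
    # Hypothetical <open>application</open> tag that launches a program.
    for app in re.findall(r"<open>(.*?)</open>", content):
        subprocess.Popen([app.strip()])  # e.g. <open>firefox</open>

# The system prompt in brain.py would also need a matching instruction, e.g.:
# "Wrap the name of an application to launch in <open>...</open>."
```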
This project is licensed under the GNU General Public License v3.0 (GPL-3.0).
Copyright (C) 2024 David Hamner
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.
Contributions are welcome! Please feel free to submit a Pull Request.