Releases: Finity-Alpha/OpenVoiceChat
v0.2.4
Twilio and better docs
What's Changed
- Main cpu by @fakhirali in #44
- Twilio support by @fakhirali in #45
- Update docs slightly by @fakhirali in #48
- melo integration by @mohammadOsama141 in #50
New Contributors
- @mohammadOsama141 made their first contribution in #50
Full Changelog: v0.2.0...v0.2.4
v0.2.0
Function calling, better sentence splits and more
- You can now add a starting message for the bot to open the conversation with (see the sketch after this list).
- xtts improved
- Logging can now be directed to a path of your choice
- Bug fixes
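As a rough illustration of the first and third points, here is a minimal sketch based on the run_chat example shown further down in these notes. The `logging_path` keyword mentioned in the comment is a hypothetical name used only for illustration, not a confirmed v0.2.0 parameter.

```python
# Sketch only, based on the run_chat example later in these notes.
from openvoicechat.utils import run_chat

def demo(mouth, ear, chatbot):
    # The bot speaks first: its starting message opens the conversation.
    mouth.say_text('Good morning! How can I help you?')
    # Hypothetical: direct the log to a chosen path,
    # e.g. run_chat(..., logging_path='logs/session.log')
    run_chat(mouth, ear, chatbot, verbose=True)
```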
What's Changed
- Add Ollama LLM by @Shehryar718 in #40
- Sentence detections by @fakhirali in #41
- Function calling by @fakhirali in #39
Full Changelog: v0.1.7...v0.2
v0.1.7
More stability
OVC is now used internally at FinityAlpha for a product, which means that we'll be discovering and fixing bugs regularly. v1 soon.
What's Changed
- Fix base file import bugs in openvoicechat/openvoicechat/* by @Shehryar718 in #33
- Logging file being created by @fakhirali in #34
- Documentation by @fakhirali in #37
New Contributors
- @Shehryar718 made their first contribution in #33
Full Changelog: v0.1.6...v0.1.7
v0.1.6
Timings of each model, Web visualizer, Bug fixes
You can now easily measure the wait time for each model. Here are the times for some of the models (a generic timing sketch follows the table):
| Model | Avg Time (s) |
|---|---|
| Deepgram | 0.3 |
| GPT-3.5-turbo | 0.85 |
| Elevenlabs (stream_latency=4) | 1.4 |
| llama2-7b-Q4 (gpu) | 1.8 |
| Whisper-small (gpu) | 0.13 |
| piper-tts (gpu) | 0.55 |
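The timing interface added in the Time Tracking PR isn't shown here; purely as a generic illustration, per-model wait times like those in the table can be collected by wrapping each model call with a high-resolution timer:

```python
import time
from collections import defaultdict

# Generic latency-measurement sketch (not OVC's actual timing API).
timings = defaultdict(list)

def timed(name, fn, *args, **kwargs):
    # Record how long one call to a model component takes, in seconds.
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    timings[name].append(time.perf_counter() - start)
    return result

# Usage idea (component/method names are placeholders):
#   text = timed('stt', transcribe_fn, audio_chunk)
#   reply = timed('llm', generate_fn, text)
# Average per model after a session:
#   for name, values in timings.items():
#       print(name, round(sum(values) / len(values), 2), 's')
```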
Good-looking web visualizer from here
What's Changed
- new logo in the house by @wisalkhanmv in #26
- Time Tracking by @fakhirali in #29
- Implement fully functional audio visualizer by @haiderrzaidi in #30
New Contributors
- @wisalkhanmv made their first contribution in #26
- @haiderrzaidi made their first contribution in #30
Full Changelog: v0.1.5...v0.1.6
v0.1.5
Websockets and better dev UI
Websockets are finally here!
Websockets are now integrated into the pipeline, run_chat supports stopping criteria, and an example has been added showing how to use OVC with LangChain RAG.
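The stopping-criteria interface isn't spelled out in these notes; purely as a sketch, one plausible shape is a callable that inspects the latest user utterance. The `stopping_criteria` keyword below is an assumed name, not the confirmed run_chat signature.

```python
# Hypothetical sketch: end the chat loop once the caller says goodbye.
def said_goodbye(user_text: str) -> bool:
    return 'goodbye' in user_text.lower()

# Assumed usage (parameter name not confirmed):
#   run_chat(mouth, ear, chatbot, verbose=True, stopping_criteria=said_goodbye)
```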
What's Changed
- Added vectorstore and vectorizer functionality by @Harras3 in #17
- Web api by @fakhirali in #23
New Contributors
- @Harras3 made their first contribution in #17
Full Changelog: v0.1.3...v0.1.5
Fewer dependencies and easier setup
torch and llama-cpp-python have been removed from the dependencies. They now have to be installed separately according to your hardware; see INSTALL.
The starting code is now the following; there is no need to create queues and threads or run the chat loop yourself.
```python
from openvoicechat.tts.tts_piper import Mouth_piper as Mouth
from openvoicechat.llm.llm_gpt import Chatbot_gpt as Chatbot
from openvoicechat.stt.stt_hf import Ear_hf as Ear
from openvoicechat.utils import run_chat
from openvoicechat.llm.prompts import llama_sales

if __name__ == "__main__":
    device = 'cuda'
    print('loading models... ', device)

    # Speech-to-text, chatbot, and text-to-speech components
    ear = Ear(silence_seconds=2, device=device)
    john = Chatbot(sys_prompt=llama_sales)
    mouth = Mouth(device=device)

    # Bot speaks first, then the chat loop takes over
    mouth.say_text('Good morning!')
    run_chat(mouth, ear, john, verbose=True)
```
v0.1
First Release!
OpenVoiceChat with all its features is available on pip:
pip install openvoicechat
What's Changed
- Silero vad seems good by @fakhirali in #2
- Interruptions by @fakhirali in #3
- Better voice by @fakhirali in #4
- Better abstractions by @fakhirali in #7
- Timing everything. by @fakhirali in #8
- Streaming llm by @fakhirali in #9
- Gpt support by @fakhirali in #10
- Elevenlabs support by @fakhirali in #11
- tts is parallel and no timing by @fakhirali in #12
- Parler TTS integrated (and tested) by @SalehAhmad1 in #13
- Parler tts faster by @fakhirali in #14
- Created the new webpage and uses the tinygrad webpage as reference for it by @SyedAddan in #19
- Bounty - Pip installations by @SalehAhmad1 in #18
- Interruption information to the LLM. by @fakhirali in #16
New Contributors
- @SalehAhmad1 made their first contribution in #13
- @SyedAddan made their first contribution in #19
Full Changelog: https://github.com/fakhirali/OpenVoiceChat/commits/v0.1