Installation: Install and run the Ollama application from ollama.ai, then:

git clone https://github.com/MikeyBeez/SimpleAgent.git
cd SimpleAgent
conda create -n sa python=3.10.9
conda activate sa
pip install langchain
In the realm of conversational AI, crafting intelligent agents often presents a daunting task for beginners. Existing projects, such as autogen, while impressive, can overwhelm novices with their sheer complexity. This is where SimpleAgent emerges as a beacon of clarity, providing a simplified learning environment for aspiring agentsmiths.
At the heart of SimpleAgent lies the philosophy of minimalism. By stripping away unnecessary layers of code, we unveil the fundamental principles of conversational reasoning, making them more accessible to those embarking on their AI odyssey.
Conversation5.py serves as a testament to this minimalist approach, showcasing the essence of agent-agent dialogue. Its simplicity belies its potential, empowering learners to grasp the intricacies of conversational AI.
For those seeking a more structured approach, Conversation.py stands as a paragon of simplicity in agent programming. Utilizing a local model, it lays the groundwork for understanding the intricacies of conversational AI.
To further facilitate the learning process, SimpleAgent presents a gradual progression of complexity. Conversation3.py builds upon the foundations established in Conversation2.py, offering a deeper dive into agent interactions.
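To give a concrete picture of what these scripts are driving at, here is a minimal sketch of an agent-agent loop. It is not the repo's actual code: it assumes a LangChain release that still exposes the Ollama wrapper at langchain.llms, and a model already pulled with ollama pull mistral.

```python
# Minimal two-agent conversation sketch (not the repo's code).
# Assumes `from langchain.llms import Ollama` is available in your
# LangChain version and that `ollama pull mistral` has been run.
from langchain.llms import Ollama

llm = Ollama(model="mistral")

# Each "agent" is just a persona prepended to the prompt.
agents = {
    "Alice": "You are Alice, a curious optimist. Reply in one or two sentences.",
    "Bob": "You are Bob, a cautious skeptic. Reply in one or two sentences.",
}

message = "Minimal code makes agents easier to understand."
for turn in range(4):
    speaker = "Alice" if turn % 2 == 0 else "Bob"
    prompt = f"{agents[speaker]}\nThe other agent said: {message}\nYour reply:"
    message = llm.invoke(prompt).strip()  # use llm(prompt) on older LangChain releases
    print(f"{speaker}: {message}\n")
```

The Conversation scripts add more structure around this loop, but the core idea is the same: two personas passing one message back and forth through a local model.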
To orchestrate these conversations, SimpleAgent embraces Ollama, a tool meticulously crafted for novice AI enthusiasts. Its intuitive interface and straightforward setup streamline the learning process, eliminating unnecessary hurdles.
While Ollama serves admirably, SimpleAgent also explores LM Studio for running local models with finer control. With LM Studio, users can adjust inference parameters such as temperature and context length, unlocking a new dimension of conversational AI exploration.
SimpleAgent, however, recognizes the imperfections inherent in any learning journey. It acknowledges the presence of errors, encouraging learners to embrace them as opportunities for growth. By analyzing these imperfections, aspiring agentsmiths can refine their understanding and improve their creations.
Initially, SimpleAgent embarked on a quest to integrate with LM Studio, but the ever-evolving nature of the OpenAI API thwarted these efforts. Unfazed, SimpleAgent returned to the familiar embrace of Ollama, ensuring a smooth learning experience.
While SimpleAgent currently leans on Ollama for simplicity's sake, its ultimate aspiration lies in harnessing the power of Huggingface models directly through PyTorch. However, acknowledging the complexity of this endeavor, SimpleAgent prioritizes the needs of novice learners, guiding them towards a seamless entry into the world of conversational AI.
In conclusion, SimpleAgent emerges as a beacon of clarity in the often-murky landscape of conversational AI. Its commitment to simplicity, coupled with its emphasis on gradual learning, empowers novice agentsmiths to embark on their AI journeys with confidence and enthusiasm.
The new program conversation6.py is a working version that talks to Ollama directly through the requests library. This solves the latency problem; at this speed, I should be able to use something like the say command to speak the text.
Here's how to run it:

python3 conversation6.py --duration 30 --initial_prompt "have a short conversation about the weather."
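If you want to see the shape of the requests-based approach before opening the file, here is a rough sketch, not the actual conversation6.py. It assumes the standard Ollama REST endpoint on localhost:11434, a pulled mistral model, and that --duration is measured in seconds.

```python
# Rough sketch of a requests-based loop (not the actual conversation6.py).
# Assumes Ollama's REST API on localhost:11434 and `ollama pull mistral`.
import argparse
import time

import requests

def generate(prompt, model="mistral"):
    # Non-streaming call to Ollama's /api/generate endpoint.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--duration", type=int, default=30)  # assumed to be seconds
    parser.add_argument("--initial_prompt", default="Say hello.")
    args = parser.parse_args()

    message = args.initial_prompt
    deadline = time.time() + args.duration
    while time.time() < deadline:
        message = generate(f"Continue the conversation: {message}")
        print(message, "\n")
        # On macOS the reply could also be spoken aloud, e.g.
        # subprocess.run(["say", message])  (requires `import subprocess`)
```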
When you are done, try all the Ollama examples: https://github.com/jmorganca/ollama/tree/main/examples
I've added two files in the ollama directory for retrieval-augmented generation: testRAG1.py and RAGtool.py. RAGtool.py was written entirely by Microsoft Copilot. Enjoy!
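To show the moving parts involved, here is a stripped-down RAG sketch. It is not testRAG1.py or RAGtool.py; it assumes an embedding model such as nomic-embed-text has been pulled into Ollama alongside mistral, and it uses Ollama's /api/embeddings endpoint.

```python
# Stripped-down RAG sketch (not testRAG1.py or RAGtool.py).
# Assumes `ollama pull nomic-embed-text` and `ollama pull mistral`.
import requests

OLLAMA = "http://localhost:11434"

def embed(text, model="nomic-embed-text"):
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": model, "prompt": text})
    r.raise_for_status()
    return r.json()["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

documents = [
    "Ollama serves local models over a REST API on port 11434.",
    "SimpleAgent keeps its example scripts deliberately small.",
    "Streamlit builds simple web UIs from plain Python scripts.",
]
doc_vectors = [embed(d) for d in documents]

question = "How does SimpleAgent keep things easy for beginners?"
q_vec = embed(question)

# Retrieve the most similar document and stuff it into the prompt.
best = max(range(len(documents)), key=lambda i: cosine(q_vec, doc_vectors[i]))
prompt = f"Context: {documents[best]}\n\nQuestion: {question}\nAnswer:"

resp = requests.post(f"{OLLAMA}/api/generate",
                     json={"model": "mistral", "prompt": prompt, "stream": False})
resp.raise_for_status()
print(resp.json()["response"])
```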
I've switched from Gradio to Streamlit, which seems more fully featured. Look at RAGtool6.py.
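For a sense of how little Streamlit asks of you, here is a tiny front end in the spirit of RAGtool6.py. It is a sketch rather than the actual file, and it assumes the same local Ollama endpoint and mistral model as above.

```python
# Tiny Streamlit front end in the spirit of RAGtool6.py (a sketch,
# not the actual file).  Run with: streamlit run <this file>.py
import requests
import streamlit as st

st.title("SimpleAgent chat")

question = st.text_input("Ask the local model something:")
if question:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": question, "stream": False},
    )
    st.write(resp.json()["response"])
```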
I've just added face.py, which can be modified to give an agent emotion and personality.
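One way the idea behind face.py could look, as a sketch rather than the file itself, is a persona dictionary folded into every prompt. The field names below are made up for illustration and are not the file's real schema.

```python
# Sketch of a personality layer (not the actual face.py).
# The persona fields below are illustrative, not the file's real schema.
PERSONA = {
    "name": "Ada",
    "mood": "cheerful",
    "traits": ["curious", "encouraging", "a little sarcastic"],
}

def personality_prompt(user_message, persona=PERSONA):
    # Describe the persona in plain language so any local model can follow it.
    traits = ", ".join(persona["traits"])
    return (
        f"You are {persona['name']}. Your current mood is {persona['mood']} "
        f"and your personality is {traits}. Stay in character.\n"
        f"User: {user_message}\nReply:"
    )

print(personality_prompt("What should I learn first?"))
```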