This is the github repo for the Sensemakers meetups on ollama. You can find information, code examples and tutorials here.
First, download & install ollama from www.ollama.com.
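If you want a quick check that everything works from Python, a minimal sketch could look like the one below. It assumes the `ollama` Python package (`pip install ollama`) and a running ollama server; the model name `llama3` is only an example.

```python
# Minimal check that ollama works from Python.
# Assumes `pip install ollama` and that the ollama server/app is running.
# The model name "llama3" is only an example; any pulled model works.
import ollama

ollama.pull("llama3")  # download the model if you don't have it yet

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["message"]["content"])
```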
Over three meetups we'll build a personalised chatbot that can answer questions based on your own documents and that you can further customise.
We have planned three meetups, each with a separate topic:
- 17 April: Starting with ollama and Python
- 15 May: Chat with your documents using LangChain
- 19 June: Personalise your chatbot / use your own model
They will be hosted at the OBA library at Oosterdokseiland from 19 - 21h. See www.oba.nl for location & info. We will meet at the OBA Junior Lab in the basement.
On the first Wednesday of the month there will be a 'Do it yourself together' meetup, where you can work on your own code. There is some support, but it's mostly DIY. The dates are May 1st and June 5th, also at the OBA at 19h.
- ollama.ipynb - Chat with an LLM, stream the response, use vision models and create a front end! (A rough sketch of streaming and vision follows this list.)
- ollama_llava_challenge.ipynb - use LLaVA to describe images
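As a taste of what these notebooks cover, here is a rough sketch of streaming a chat response and asking LLaVA about an image. It assumes the `ollama` Python package; the model names and the image path are placeholders.

```python
# Rough sketch of two things the notebooks cover: streaming and vision.
# Assumes `pip install ollama`; model names and the image path are placeholders.
import ollama

# 1) Stream a chat response chunk by chunk instead of waiting for the full answer.
stream = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Explain what ollama is in two sentences."}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()

# 2) Ask a vision model (LLaVA) to describe a local image.
vision = ollama.chat(
    model="llava",
    messages=[{
        "role": "user",
        "content": "Describe this image.",
        "images": ["example.jpg"],  # placeholder path to a local image
    }],
)
print(vision["message"]["content"])
```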
Sam Witteveen's playlist on ollama: six videos, from introductory to advanced.
Gradio tutorial - this tutorial helps you create a chat interface, including audio and images, using Gradio. (A bare-bones sketch follows this list.)
- Ollama cheat sheet (see repo)
- Slides presentation (see repo)
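For the front-end part, a bare-bones sketch that wires ollama into Gradio's `ChatInterface` might look like this. It assumes `pip install gradio ollama`; the model name is a placeholder.

```python
# Bare-bones chat front end: Gradio's ChatInterface on top of ollama.
# Assumes `pip install gradio ollama`; the model name is a placeholder.
import gradio as gr
import ollama

def respond(message, history):
    # Gradio passes the new message plus the chat history; this sketch ignores the history.
    result = ollama.chat(
        model="llama3",
        messages=[{"role": "user", "content": message}],
    )
    return result["message"]["content"]

gr.ChatInterface(respond).launch()
```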
Topics:
- Recap of first meetup
- Downloading models from Hugging Face (see the sketch after this list)
- A deeper look at Vision Language Models
- Retrieval Augmented Generation from scratch
- Show and tell
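One way the Hugging Face step can work with ollama is to download a GGUF file and register it with a Modelfile. A hedged sketch, assuming `pip install huggingface_hub`; the repo id and filename are placeholders, not a specific model recommendation.

```python
# Sketch: fetch a GGUF model file from Hugging Face so ollama can use it locally.
# Assumes `pip install huggingface_hub`; repo_id and filename are placeholders.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="TheBloke/some-model-GGUF",   # placeholder repository
    filename="some-model.Q4_K_M.gguf",    # placeholder file name
    local_dir=".",
)
print("Downloaded to", gguf_path)

# Then point a Modelfile at the downloaded file and register it with the ollama CLI:
#   echo "FROM ./some-model.Q4_K_M.gguf" > Modelfile
#   ollama create my-hf-model -f Modelfile
```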
All notebooks from the first meetup.
- Use ollama.ipynb from the first meetup, but now with a different model
- ollama_llava_self_driving_car.ipynb - see how good LLaVA is at recognizing traffic situations. It does not make the same mistakes as some self-driving cars. Progress!
- RAG_basics_from_scratch.ipynb (a minimal sketch of the idea follows this list)
- Bonus: ollama_RAG_exercises.ipynb
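The core idea behind RAG_basics_from_scratch.ipynb can be sketched in a few lines: embed your documents, find the chunk most similar to the question, and pass it to the model as context. A rough sketch, assuming the `ollama` package and that the named models (placeholders such as `nomic-embed-text` and `llama3`) have been pulled.

```python
# Rough RAG-from-scratch sketch: embed, retrieve by cosine similarity, then answer.
# Assumes `pip install ollama`; the model names are placeholders and must be pulled first.
import math
import ollama

docs = [
    "The meetups are hosted at the OBA library from 19 to 21h.",
    "Ollama lets you run large language models locally.",
]

def embed(text):
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

question = "Where are the meetups?"
q_vec = embed(question)

# Retrieve the document chunk most similar to the question.
best = max(docs, key=lambda d: cosine(q_vec, embed(d)))

# Answer using the retrieved chunk as context.
answer = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": f"Context: {best}\n\nQuestion: {question}"}],
)
print(answer["message"]["content"])
```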
- [Intro to Hugging Face](https://www.youtube.com/watch?v=jBFFUwL0TyY)
- [Intro to RAG with ollama](https://www.youtube.com/watch?v=V1Mz8gMBDMo)
Wouldn't it be cool to talk to your device and listen to the answer from ollama? You can do this with speech-to-text (STT) and text-to-speech (TTS).
Here are some recommended Python libraries:
- STT: use OpenAI's Whisper. See https://pypi.org/project/openai-whisper/ for the Python package. distil-whisper is a smaller and faster alternative, available on Hugging Face.
- TTS: a new model, Parler-TTS, is making the rounds. I've not yet tested it, but it's available on Hugging Face. An alternative is pyttsx3, available on PyPI.
If you build this, it would be cool to share your results & code with others so they can use them! A rough sketch to get you started follows below.
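Here is a rough sketch of the round trip: transcribe a recorded question with Whisper, ask ollama, and speak the answer with pyttsx3. It assumes `pip install openai-whisper pyttsx3 ollama`; the audio file name and model names are placeholders.

```python
# Rough sketch: speech in, ollama answer out loud.
# Assumes `pip install openai-whisper pyttsx3 ollama`; file and model names are placeholders.
import whisper
import pyttsx3
import ollama

# 1) Speech-to-text: transcribe a recorded question.
stt_model = whisper.load_model("base")
question = stt_model.transcribe("question.wav")["text"]  # placeholder audio file

# 2) Ask the local LLM.
reply = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": question}],
)["message"]["content"]

# 3) Text-to-speech: read the answer aloud.
engine = pyttsx3.init()
engine.say(reply)
engine.runAndWait()
```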
You can find more information and discussion here:
- www.sensemakersams.org - Sensemakers website
- www.meetup.com - link to our Meetup group
- Sensemakers Slack channel for questions