Prem Labs aims to foster an ecosystem of privacy-preserving applications. We are excited to announce this challenge to incentivize the developer community to build innovative applications, services, and solutions that put user privacy first.
Teams participating in the challenge will leverage Prem to deploy AI models and services on-premise. An important requirement is that submitted applications must not depend on centralized APIs at any point of their stack.
🛠 Teams participating in the challenge will develop mobile, desktop, or web applications using one or more Prem AI services. Self-hosted AI models provide several benefits, including efficiency, cost optimization, and privacy.
To learn more about the challenge and review some reference implementations, head over to our basic tutorial to get up to speed with Prem.
Prem provides a unified environment to develop AI applications and deploy AI models on your own infrastructure. It abstracts away the technical complexities of AI deployment and ushers in a new era of privacy-centric AI applications in which users retain control and ownership of their models.
📆 26th of June - 10th of July
🌍 Anywhere, Virtual
🚶🚶🚶 Both solo participants and teams accepted
Judges will shortlist and reward the three best applications submitted. 💰 Up to $10,000 will be awarded to the final selected projects.
To qualify for the challenge, your application should:
- Open-source GitHub repository: Prem has been open source from the start, and apps and services MUST be open source in all of their parts.
- Use Prem services: from your local machine in development to remote servers in production, with the same interface, API, and services.
- Don't log user data on your servers or with third parties: no centralized API at any point of the stack! Your users want you to keep their data private!
- Prompt utilization: open-source models are not yet on par with those of OpenAI, which has powerful GPUs at its disposal. Constraints push humans to leverage their creativity and produce unexpected results: be smart at prompting.
- Commodity hardware: if we want to bring the benefits of AI to billions of people in the Global South without putting them under Big Brother's watch, we must make sure anyone can afford the required computing resources. Run your apps on a cheap VPS without a GPU, or with a very inexpensive one.
- Composability: combine all Prem services, such as LLMs, Diffusers, Embeddings, and Vector stores.
- Production status: how polished is your application? Can my grandma use it?
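To illustrate the prompt-utilization criterion: smaller open-source models often behave better when the prompt spells out role, format, and length constraints explicitly. Here is a minimal, hypothetical sketch of such a constrained prompt template (the template text and function are illustrative, not part of any Prem API):

```python
# Hypothetical constrained prompt template: explicit role, length limit, and
# fallback instruction tend to keep smaller open-source models on task.
TEMPLATE = """You are a concise assistant.
Answer the question below in at most {max_sentences} sentences.
If you do not know the answer, reply exactly: "I don't know."

Question: {question}
Answer:"""

def build_prompt(question: str, max_sentences: int = 3) -> str:
    """Fill the template with the user's question and a sentence budget."""
    return TEMPLATE.format(question=question, max_sentences=max_sentences)
```

The resulting string can be sent to any Prem LLM service as the user message.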
10th of July - BEFORE 6 PM UTC (Google Form link)
Show some love to the Prem App on GitHub
Follow us on Twitter - https://twitter.com/premai_io
Join us on Discord
You can run Prem in two different ways:
- macOS: go to https://premai.io and download the Prem App.
- Server: run the installer script:
wget -q https://get.prem.ninja/install.sh -O install.sh; sudo bash ./install.sh
When the UI is up and running, you can see all the available services. With just one click you can download the service you are interested in. In the background, the Docker image associated with the service will be downloaded based on your hardware requirements.
While waiting for the download to complete, you can read more about the service in the detail view: click on the card and you will be redirected to the service page. Each service page includes some general info as well as complete documentation with more details about the exposed model. When the download has completed, just click Open and the service will start. You can interact with the service from the playground or through its APIs.
You can check the port on which the service is running from the service detail view.
Now that the service is running, you can connect to it at http://localhost:8111 and start building. Here is a simple snippet using LangChain to connect to the service.
import os

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# The local service ignores the API key, but the OpenAI client requires one to be set.
os.environ["OPENAI_API_KEY"] = "random-string"

# Point the client at the local Prem service instead of the OpenAI API.
chat = ChatOpenAI(openai_api_base="http://localhost:8111/v1", max_tokens=128)

messages = [HumanMessage(content="Can you explain what is a large language model?")]
chat(messages)

messages = [HumanMessage(content="Write me a story about a superstar.")]
chat(messages)
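Because the service exposes an OpenAI-compatible endpoint, you can also call it without LangChain. Below is a minimal sketch using only the Python standard library; the /chat/completions path and the response shape are assumptions based on the OpenAI chat-completions schema, and the port (8111) is the one shown in the service detail view above:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8111/v1"  # port shown in the service detail view

def build_payload(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat_completion(prompt: str) -> str:
    """Send one chat request to the local service and return the reply text."""
    # The local service ignores the API key, but the header must be present.
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer random-string",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

This is handy for languages and environments where a LangChain client is not available.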
You can find more examples at: https://github.com/premAI-io/prem-daemon/blob/main/resources/notebooks/