This project was built for the 2023 Flow Hackathon, with special thanks to the teams at Next.js, Flow Blockchain, Dapper Wallet, Niftory, and Riffusion.
- Web Framework: Next.js
- Auth Framework: NextAuth
- GraphQL Client: graphql-request
- React Data Fetching: urql and SWR
- GraphQL Code Generation: graphql-code-generator
- UI Framework: Chakra UI
- Model: Stable Diffusion
- Programming Language: Python
- Python Virtual Environment: Conda
- Local Model Hosting: Flask
- Cloud Computing: Google Colaboratory
- Cloud Deployment: Hugging Face
- Demo Hosting: Gradio
- Audio Processing: FFmpeg
- Local Dev Environment: PyTorch
- Local Dev Environment: CUDA
cd to the root of the project and create a .env file:
touch .env
Generate a random secret to use for NEXTAUTH_SECRET:
openssl rand -base64 32
Get your API key and client credentials from Niftory (https://docs.niftory.com/) and Dapper Wallet (https://docs.meetdapper.com/developing-for-the-platform), then add the following to your .env file:
NEXT_PUBLIC_API_KEY="insert_your_api_key_here"
NEXT_PUBLIC_CLIENT_ID="insert_your_client_id_here"
CLIENT_SECRET="insert_your_super_secret_client_secret_that_is_unique_to_your_app_DO_NOT_SHARE_OR_COMMIT"
NIFTORY_AUTH_ISSUER="https://auth.staging.niftory.com"
NEXT_PUBLIC_API_PATH="https://graphql.api.staging.niftory.com"
NEXTAUTH_URL="http://localhost:insert_your_port"
NEXTAUTH_SECRET="insert_your_secret_here"
NEXT_PUBLIC_FLOW_ACCESS_API="https://access-testnet.onflow.org"
NEXT_PUBLIC_WALLET_API="https://staging.accounts.meetdapper.com"
NEXT_PUBLIC_FLOW_SCAN_URL="https://testnet.flowscan.org"
NEXT_PUBLIC_NFT_ADDRESS="0x631e88ae7f1d7c20"
NEXT_PUBLIC_NIFTORY_ADDRESS="0x04f74f0252479aed"
NEXT_PUBLIC_REGISTRY_ADDRESS="0x6085ae87e78e1433"
NEXT_PUBLIC_MERCHANT_ACCOUNT_ADDRESS="insert_your_dapper_organization_address_here"
NEXT_PUBLIC_ENV="development"
NEXT_PUBLIC_VERCEL_URL="insert.your.public.application.url"
FLASK_URL="http://127.0.0.1:3013/run_inference" (only needed if you are running the AI server locally)
Install dependencies and start the development server:
yarn install
yarn dev
Follow the instructions in my Google Colab notebook here: Meet the Music
Deploy your model to Hugging Face with Gradio. View my deployment here: https://huggingface.co/spaces/goudastudios/text-to-music
Go to Settings, then under "Space hardware" select "T4 medium" or above to run the model. Switch back to the free CPU basic hardware when not demoing your model to save on costs. The T4 hardware costs about $1 an hour to run.
Note: this model will not run on the free plan. Please reach out to me on Twitter at MarkObonna and I will switch my model from the free plan to the T4 plan so you can use the demo.
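For reference, a minimal Gradio app for a text-to-music Space could look like the sketch below. The generate function here is a placeholder (an assumption, not the code behind the Space linked above); wire it up to whatever inference call your deployment actually makes.

import gradio as gr

def generate(prompt):
    # Placeholder: call your Stable Diffusion / Riffusion inference here and
    # return the path to the rendered audio file.
    raise NotImplementedError("connect this to your model")

demo = gr.Interface(
    fn=generate,
    inputs=gr.Textbox(label="Prompt"),
    outputs=gr.Audio(label="Generated audio"),
    title="Text to Music",
)

demo.launch()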
Use this option if your computer has a GPU capable of running Stable Diffusion locally.
Create and activate a Python virtual environment:
conda create --name riffusion python=3.9
conda activate riffusion
Install the Python dependencies:
python -m pip install -r requirements.txt
Install FFmpeg:
brew install ffmpeg
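As a quick sanity check (not part of the original setup), you can confirm FFmpeg is on your PATH from Python:

import shutil

# Prints the path to the ffmpeg binary if the install succeeded, otherwise None.
print(shutil.which("ffmpeg"))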
Test CUDA availability in PyTorch:
import torch
torch.cuda.is_available()
If using MPS on a Mac with an Apple silicon chip, test MPS availability:
import torch
torch.backends.mps.is_available()
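As a small sketch, the two checks above can be combined into a single device-selection snippet that prefers CUDA, then MPS, then falls back to CPU:

import torch

# Pick the best available device: CUDA GPU, Apple silicon MPS, or CPU fallback.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"Using device: {device}")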
Run the model server:
python -m riffusion.server --host 127.0.0.1 --port 3013
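Once the server is running, the app reaches it through FLASK_URL (http://127.0.0.1:3013/run_inference). For a quick smoke test from Python, you can POST a JSON body to that endpoint; the field names below are illustrative assumptions, so check the riffusion server's expected inference input and adjust accordingly:

import requests

# Hypothetical payload -- the field names and values are assumptions, not the
# server's documented schema; adjust to match what riffusion.server expects.
payload = {
    "alpha": 0.75,
    "num_inference_steps": 50,
    "seed_image_id": "og_beat",
    "start": {"prompt": "church bells on sunday", "seed": 42, "denoising": 0.75, "guidance": 7.0},
    "end": {"prompt": "jazz with piano", "seed": 123, "denoising": 0.75, "guidance": 7.0},
}

response = requests.post("http://127.0.0.1:3013/run_inference", json=payload)
print(response.status_code)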