Goal: To enable folks to rapidly build complex workflows with LLMs
☠️ This is experimental and not recommended to be used in production environments. Have fun playing around!
- Supports OpenAI and Anthropic models
- Supports Image inputs for Multi-Modal LLMs
- Supports Jinja Templates with Input Variables for building complex prompting
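To illustrate the templating feature above, here is a minimal sketch of how a Jinja template with input variables can assemble a prompt. It uses the `jinja2` library directly; the template text and variable names are illustrative, not the node pack's actual inputs.

```python
from jinja2 import Template

# Hypothetical prompt template with two input variables,
# similar in spirit to what a Jinja template node would accept.
PROMPT = Template(
    "You are a {{ role }}. Answer the question concisely:\n{{ question }}"
)

# Render the template by filling in the input variables.
prompt_text = PROMPT.render(role="helpful assistant", question="What is ComfyUI?")
print(prompt_text)
```

Rendering at graph-execution time like this lets one template node feed many different downstream LLM calls.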
- Clone ComfyUI: `git clone https://github.com/comfyanonymous/ComfyUI.git`
- Follow the ComfyUI setup instructions
- Go to the `custom_nodes` directory
- Clone ComfyUI-LLMs inside `ComfyUI/custom_nodes`: `git clone https://github.com/adityathiru/ComfyUI-LLMs.git`
- Install the requirements: `pip install -r requirements.txt`
- Go back to the root of ComfyUI and start the ComfyUI server: `python main.py`
- Go to the ComfyUI interface and add a new flow
- Right-click anywhere in the flow and select "Add Node"
- Select "LLM" from the list of options to find the LLM nodes
- Increased LLM Support and Validations
- Stateful LLMs: For maintaining message history
- Agentic LLMs: ReACT-like architecture nodes
- ComfyUI + Airflow for Job Execution
- Customized ComfyUI for LLMs
- ComfyUI-Documents: For loading PDFs, converting them into images, and injecting them into ComfyUI-LLMs
- ComfyUI-Crystools: For various utilities like displaying text, displaying images, string manipulation, etc.