diff --git a/README.md b/README.md
index 7c44670..e910362 100644
--- a/README.md
+++ b/README.md
@@ -2,385 +2,386 @@
Here we will keep track of the AI Game Development Tools, including programming, animation, effects, modeling, audio, music and more. 🔥
-* [Tool (AI LLM)](#tool)
-* [Game](#game)
-* [Code](#code)
-* [Framework](#framework)
-* [Writer](#writer)
-* [Image](#image)
-* [Texture](#texture)
-* [Shader](#shader)
-* [3D Model](#model)
-* [Avatar](#avatar)
-* [Animation](#animation)
-* [Video](#video)
-* [Audio](#audio)
-* [Music](#music)
-* [Voice](#voice)
-* [Speech](#speech)
-* [Analytics](#analytics)
+- [Tool (AI LLM)](#tool)
+- [Game](#game)
+- [Code](#code)
+- [Framework](#framework)
+- [Writer](#writer)
+- [Image](#image)
+- [Texture](#texture)
+- [Shader](#shader)
+- [3D Model](#model)
+- [Avatar](#avatar)
+- [Animation](#animation)
+- [Video](#video)
+- [Audio](#audio)
+- [Music](#music)
+- [Voice](#voice)
+- [Speech](#speech)
+- [Analytics](#analytics)
## Tool (AI LLM)
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [AICommand](https://github.com/keijiro/AICommand) | ChatGPT integration with Unity Editor. | Unity | Tool |
-| [Hugging Face API Unity Integration](https://github.com/huggingface/unity-api) | This Unity package provides an easy-to-use integration for the Hugging Face Inference API, allowing developers to access and use Hugging Face AI models within their Unity projects. | Unity | Tool |
-| [Sanity AI Engine](https://github.com/tosos/SanityEngine) | Sanity AI Engine for the Unity Game Development Tool. | Unity | Tool |
-| [Unity ChatGPT](https://github.com/dilmerv/UnityChatGPT) | Unity ChatGPT Experiments. | Unity | Tool |
-| [ChatGPT-API-unity](https://github.com/mochi-neko/ChatGPT-API-unity) | Binds ChatGPT chat completion API to pure C# on Unity. | Unity | Tool |
-| [ChatGPTForUnity](https://github.com/sunsvip/ChatGPTForUnity) | ChatGPT for unity. | Unity | Tool |
-| [Unity OpenAI-API Integration](https://github.com/himanshuskyrockets/Unity_OpenAI) | Integrate openai GPT-3 language model and ChatGPT API into a Unity project. | Unity | Tool |
-| [InteractML-Unity](https://github.com/Interactml/iml-unity) | InteractML, an Interactive Machine Learning Visual Scripting framework for Unity3D. | Unity | Tool |
-| [InteractML-Unreal Engine](https://github.com/Interactml/iml-ue4) | Bringing Machine Learning to Unreal Engine. | Unreal Engine | Tool |
-| [UnrealGPT](https://github.com/TREE-Ind/UnrealGPT) | A collection of Unreal Engine 5 Editor Utility widgets powered by GPT3/4. | Unreal Engine | Tool |
-| [Unreal Engine 5 Llama LoRA](https://github.com/bublint/ue5-llama-lora) | A proof-of-concept project that showcases the potential for using small, locally trainable LLMs to create next-generation documentation tools. | Unreal Engine | Tool |
-| [HuggingChat](https://huggingface.co/chat/) | Making the community's best AI chat models available to everyone. | | Tool |
-| [Open-Assistant](https://github.com/LAION-AI/Open-Assistant) | OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so. | | Tool |
-| [GPT4All](https://github.com/nomic-ai/gpt4all) | A chatbot trained on a massive collection of clean assistant data including code, stories and dialogue. | | Tool |
-| [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | An Instruction-following LLaMA Model. | | Tool |
-| [Text generation web UI](https://github.com/oobabooga/text-generation-webui) | A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, OPT, and GALACTICA. | | Tool |
-| [MOSS](https://github.com/OpenLMLab/MOSS) | An open-source tool-augmented conversational language model from Fudan University. | | Tool |
-| [baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | A large-scale 7B pretraining language model developed by Baichuan. | | Tool |
-| [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | A 13B large language model developed by Baichuan Intelligent Technology. | | Tool |
-| [Baichuan 2](https://github.com/baichuan-inc/Baichuan2) | A series of large language models developed by Baichuan Intelligent Technology. | | Tool |
-| [Qwen-7B](https://github.com/QwenLM/Qwen-7B) | The official repo of Qwen-7B (通义千问-7B) chat & pretrained large language model proposed by Alibaba Cloud. | | Tool |
-| [InternLM](https://github.com/InternLM/InternLM) | InternLM has open-sourced a 7 billion parameter base model, a chat model tailored for practical scenarios and the training system. | | Tool |
-| [Yi](https://github.com/01-ai/Yi) | A series of large language models trained from scratch by developers. | | Tool |
-| [Skywork](https://github.com/SkyworkAI/Skywork) | Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. | | Tool |
-| [ChatYuan](https://github.com/clue-ai/ChatYuan) | Large Language Model for Dialogue in Chinese and English. | | Tool |
-| [ChatRWKV](https://github.com/BlinkDL/ChatRWKV) | ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source. | | Tool |
-| [Panda](https://github.com/dandelionsllm/pandallm) | Overseas Chinese open source large language model, based on Llama-7B, -13B, -33B, -65B for continuous pre-training in the Chinese field. | | Tool |
-| [LangChain](https://github.com/hwchase17/langchain) | LangChain is a framework for developing applications powered by language models. | | Tool |
-| [LangFlow](https://github.com/logspace-ai/langflow) | ⛓️ LangFlow is a UI for LangChain, designed with react-flow to provide an effortless way to experiment and prototype flows. | | Tool |
-| [BabyAGI](https://github.com/yoheinakajima/babyagi) | This Python script is an example of an AI-powered task management system. | | Tool |
-| [Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT) | An experimental open-source attempt to make GPT-4 fully autonomous. | | Tool |
-| [AgentGPT](https://github.com/reworkd/AgentGPT) | 🤖 Assemble, configure, and deploy autonomous AI Agents in your browser. | | Tool |
-| [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4) | Enhancing Vision-language Understanding with Advanced Large Language Models. | | Tool |
-| [StableLM](https://github.com/Stability-AI/StableLM) | Stability AI Language Models. | | Tool |
-| [WebGPT](https://github.com/0hq/WebGPT) | Run GPT model on the browser with WebGPU. | | Tool |
-| [LaMini-LM](https://github.com/mbzuai-nlp/LaMini-LM) | LaMini-LM is a collection of small-sized, efficient language models distilled from ChatGPT and trained on a large-scale dataset of 2.58M instructions. | | Tool |
-| [MLC LLM](https://github.com/mlc-ai/mlc-llm) | Enable everyone to develop, optimize and deploy AI models natively on everyone's devices. | | Tool |
-| [Lamini](https://github.com/lamini-ai/lamini) | Lamini allows any engineering team to outperform general purpose LLMs through RLHF and fine- tuning on their own data. | | Tool |
-| [Lit-LLaMA](https://github.com/Lightning-AI/lit-llama) | Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. | | Tool |
-| [llama2-webui](https://github.com/liltom-eth/llama2-webui) | Run Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). | | Tool |
-| [mPLUG-Owl🦉](https://github.com/X-PLUG/mPLUG-Owl) | Modularization Empowers Large Language Models with Multimodality. | | Tool |
-| [👶🤖🖥️ BabyAGI UI](https://github.com/miurla/babyagi-ui) | BabyAGI UI is designed to make it easier to run and develop with babyagi in a web app, like a ChatGPT. | | Tool |
-| [Pi](https://heypi.com/talk) | AI chatbot designed for personal assistance and emotional support. | | Tool |
-| [Flowise](https://github.com/FlowiseAI/Flowise) | Drag & drop UI to build your customized LLM flow using LangchainJS. | | Tool |
-| [Chrome-GPT](https://github.com/richardyc/Chrome-GPT) | An AutoGPT agent that controls Chrome on your desktop. | | Tool |
-| [Assistant CLI](https://github.com/diciaup/assistant-cli) | A comfortable CLI tool to use ChatGPT service🔥 | | Tool |
-| [SearchGPT](https://github.com/tobiasbueschel/search-gpt) | 🌳 Connecting ChatGPT with the Internet | | Tool |
-| [WordGPT](https://github.com/filippofinke/WordGPT) | 🤖 Bring the power of ChatGPT to Microsoft Word | | Tool |
-| [DemoGPT](https://github.com/melih-unsal/DemoGPT) | Auto Gen-AI App Generator with the Power of Llama 2 | | Tool |
-| [Web3-GPT](https://github.com/Markeljan/Web3GPT) | Deploy smart contracts with AI | | Tool |
-| [MetaGPT](https://github.com/geekan/MetaGPT) | The Multi-Agent Framework | | Tool |
-| [ChatDev](https://github.com/OpenBMB/ChatDev) | Communicative Agents for Software Development. | | Tool |
-| [Dora](https://www.dora.run/ai) | Generating powerful websites, one prompt at a time. | | Tool |
-| [ImageBind](https://github.com/facebookresearch/ImageBind) | ImageBind One Embedding Space to Bind Them All. | | Tool |
-| [ToolBench](https://github.com/openbmb/toolbench) | An open platform for training, serving, and evaluating large language model for tool learning. | | Tool |
-| [LogicGamesSolver](https://github.com/fabridigua/LogicGamesSolver) | A Python tool to solve logic games with AI, Deep Learning and Computer Vision. | | Tool |
-| [NExT-GPT](https://github.com/NExT-GPT/NExT-GPT) | Any-to-Any Multimodal Large Language Model. | | Tool |
-| [MiniGPT-5](https://github.com/eric-ai-lab/MiniGPT-5) | Interleaved Vision-and-Language Generation via Generative Vokens. | | Tool |
-| [Lemur](https://github.com/OpenLemur/Lemur) | Open Foundation Models for Language Agents. | | Tool |
-| [XAgent](https://github.com/OpenBMB/XAgent) | An Autonomous LLM Agent for Complex Task Solving. | | Tool |
-| [Character-LLM](https://github.com/choosewhatulike/trainable-agents) | A Trainable Agent for Role-Playing. | | Tool |
+| Source | Description | Game Engine | Type |
+| :--------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :--: |
+| [AICommand](https://github.com/keijiro/AICommand) | ChatGPT integration with Unity Editor. | Unity | Tool |
+| [Hugging Face API Unity Integration](https://github.com/huggingface/unity-api) | This Unity package provides an easy-to-use integration for the Hugging Face Inference API, allowing developers to access and use Hugging Face AI models within their Unity projects. | Unity | Tool |
+| [Sanity AI Engine](https://github.com/tosos/SanityEngine) | Sanity AI Engine for the Unity Game Development Tool. | Unity | Tool |
+| [Unity ChatGPT](https://github.com/dilmerv/UnityChatGPT) | Unity ChatGPT Experiments. | Unity | Tool |
+| [ChatGPT-API-unity](https://github.com/mochi-neko/ChatGPT-API-unity) | Binds ChatGPT chat completion API to pure C# on Unity. | Unity | Tool |
+| [ChatGPTForUnity](https://github.com/sunsvip/ChatGPTForUnity)                      | ChatGPT for Unity.                                                                                                                                                                     | Unity         | Tool |
+| [Unity OpenAI-API Integration](https://github.com/himanshuskyrockets/Unity_OpenAI) | Integrates the OpenAI GPT-3 language model and the ChatGPT API into a Unity project.                                                                                                   | Unity         | Tool |
+| [InteractML-Unity](https://github.com/Interactml/iml-unity) | InteractML, an Interactive Machine Learning Visual Scripting framework for Unity3D. | Unity | Tool |
+| [InteractML-Unreal Engine](https://github.com/Interactml/iml-ue4) | Bringing Machine Learning to Unreal Engine. | Unreal Engine | Tool |
+| [UnrealGPT](https://github.com/TREE-Ind/UnrealGPT) | A collection of Unreal Engine 5 Editor Utility widgets powered by GPT3/4. | Unreal Engine | Tool |
+| [Unreal Engine 5 Llama LoRA](https://github.com/bublint/ue5-llama-lora) | A proof-of-concept project that showcases the potential for using small, locally trainable LLMs to create next-generation documentation tools. | Unreal Engine | Tool |
+| [HuggingChat](https://huggingface.co/chat/) | Making the community's best AI chat models available to everyone. | | Tool |
+| [Open-Assistant](https://github.com/LAION-AI/Open-Assistant) | OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so. | | Tool |
+| [GPT4All](https://github.com/nomic-ai/gpt4all) | A chatbot trained on a massive collection of clean assistant data including code, stories and dialogue. | | Tool |
+| [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | An Instruction-following LLaMA Model. | | Tool |
+| [Text generation web UI](https://github.com/oobabooga/text-generation-webui) | A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, OPT, and GALACTICA. | | Tool |
+| [MOSS](https://github.com/OpenLMLab/MOSS) | An open-source tool-augmented conversational language model from Fudan University. | | Tool |
+| [baichuan-7B](https://github.com/baichuan-inc/baichuan-7B)                          | A large-scale 7B pretrained language model developed by Baichuan.                                                                                                                      |               | Tool |
+| [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | A 13B large language model developed by Baichuan Intelligent Technology. | | Tool |
+| [Baichuan 2](https://github.com/baichuan-inc/Baichuan2) | A series of large language models developed by Baichuan Intelligent Technology. | | Tool |
+| [Qwen-7B](https://github.com/QwenLM/Qwen-7B) | The official repo of Qwen-7B (通义千问-7B) chat & pretrained large language model proposed by Alibaba Cloud. | | Tool |
+| [InternLM](https://github.com/InternLM/InternLM)                                    | InternLM has open-sourced a 7-billion-parameter base model, a chat model tailored for practical scenarios, and the training system.                                                    |               | Tool |
+| [Yi](https://github.com/01-ai/Yi) | A series of large language models trained from scratch by developers. | | Tool |
+| [Skywork](https://github.com/SkyworkAI/Skywork) | Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. | | Tool |
+| [ChatYuan](https://github.com/clue-ai/ChatYuan) | Large Language Model for Dialogue in Chinese and English. | | Tool |
+| [ChatRWKV](https://github.com/BlinkDL/ChatRWKV)                                     | ChatRWKV is like ChatGPT but powered by the open-source RWKV (100% RNN) language model.                                                                                                |               | Tool |
+| [Panda](https://github.com/dandelionsllm/pandallm)                                  | An overseas-developed open-source Chinese large language model, based on LLaMA-7B/-13B/-33B/-65B with continued pre-training on Chinese-domain data.                                   |               | Tool |
+| [LangChain](https://github.com/hwchase17/langchain) | LangChain is a framework for developing applications powered by language models. | | Tool |
+| [LangFlow](https://github.com/logspace-ai/langflow) | ⛓️ LangFlow is a UI for LangChain, designed with react-flow to provide an effortless way to experiment and prototype flows. | | Tool |
+| [BabyAGI](https://github.com/yoheinakajima/babyagi) | This Python script is an example of an AI-powered task management system. | | Tool |
+| [Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT) | An experimental open-source attempt to make GPT-4 fully autonomous. | | Tool |
+| [AgentGPT](https://github.com/reworkd/AgentGPT) | 🤖 Assemble, configure, and deploy autonomous AI Agents in your browser. | | Tool |
+| [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4) | Enhancing Vision-language Understanding with Advanced Large Language Models. | | Tool |
+| [StableLM](https://github.com/Stability-AI/StableLM) | Stability AI Language Models. | | Tool |
+| [WebGPT](https://github.com/0hq/WebGPT)                                             | Run GPT models in the browser with WebGPU.                                                                                                                                             |               | Tool |
+| [LaMini-LM](https://github.com/mbzuai-nlp/LaMini-LM) | LaMini-LM is a collection of small-sized, efficient language models distilled from ChatGPT and trained on a large-scale dataset of 2.58M instructions. | | Tool |
+| [MLC LLM](https://github.com/mlc-ai/mlc-llm) | Enable everyone to develop, optimize and deploy AI models natively on everyone's devices. | | Tool |
+| [Lamini](https://github.com/lamini-ai/lamini)                                       | Lamini allows any engineering team to outperform general-purpose LLMs through RLHF and fine-tuning on their own data.                                                                  |               | Tool |
+| [Lit-LLaMA](https://github.com/Lightning-AI/lit-llama) | Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. | | Tool |
+| [llama2-webui](https://github.com/liltom-eth/llama2-webui) | Run Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). | | Tool |
+| [mPLUG-Owl🦉](https://github.com/X-PLUG/mPLUG-Owl) | Modularization Empowers Large Language Models with Multimodality. | | Tool |
+| [👶🤖🖥️ BabyAGI UI](https://github.com/miurla/babyagi-ui)                              | BabyAGI UI is designed to make it easier to run and develop with babyagi in a web app, like ChatGPT.                                                                                   |               | Tool |
+| [Pi](https://heypi.com/talk) | AI chatbot designed for personal assistance and emotional support. | | Tool |
+| [Flowise](https://github.com/FlowiseAI/Flowise) | Drag & drop UI to build your customized LLM flow using LangchainJS. | | Tool |
+| [Chrome-GPT](https://github.com/richardyc/Chrome-GPT) | An AutoGPT agent that controls Chrome on your desktop. | | Tool |
+| [Assistant CLI](https://github.com/diciaup/assistant-cli)                           | A comfortable CLI tool for using the ChatGPT service. 🔥                                                                                                                               |               | Tool |
+| [SearchGPT](https://github.com/tobiasbueschel/search-gpt)                           | 🌳 Connects ChatGPT to the internet.                                                                                                                                                   |               | Tool |
+| [WordGPT](https://github.com/filippofinke/WordGPT)                                  | 🤖 Brings the power of ChatGPT to Microsoft Word.                                                                                                                                      |               | Tool |
+| [DemoGPT](https://github.com/melih-unsal/DemoGPT)                                   | Automatic generative-AI app generator powered by Llama 2.                                                                                                                              |               | Tool |
+| [Web3-GPT](https://github.com/Markeljan/Web3GPT)                                    | Deploy smart contracts with AI.                                                                                                                                                        |               | Tool |
+| [MetaGPT](https://github.com/geekan/MetaGPT)                                        | The multi-agent framework.                                                                                                                                                             |               | Tool |
+| [ChatDev](https://github.com/OpenBMB/ChatDev) | Communicative Agents for Software Development. | | Tool |
+| [Dora](https://www.dora.run/ai) | Generating powerful websites, one prompt at a time. | | Tool |
+| [ImageBind](https://github.com/facebookresearch/ImageBind) | ImageBind One Embedding Space to Bind Them All. | | Tool |
+| [ToolBench](https://github.com/openbmb/toolbench)                                   | An open platform for training, serving, and evaluating large language models for tool learning.                                                                                        |               | Tool |
+| [LogicGamesSolver](https://github.com/fabridigua/LogicGamesSolver) | A Python tool to solve logic games with AI, Deep Learning and Computer Vision. | | Tool |
+| [NExT-GPT](https://github.com/NExT-GPT/NExT-GPT) | Any-to-Any Multimodal Large Language Model. | | Tool |
+| [MiniGPT-5](https://github.com/eric-ai-lab/MiniGPT-5) | Interleaved Vision-and-Language Generation via Generative Vokens. | | Tool |
+| [Lemur](https://github.com/OpenLemur/Lemur) | Open Foundation Models for Language Agents. | | Tool |
+| [XAgent](https://github.com/OpenBMB/XAgent) | An Autonomous LLM Agent for Complex Task Solving. | | Tool |
+| [Character-LLM](https://github.com/choosewhatulike/trainable-agents) | A Trainable Agent for Role-Playing. | | Tool |
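Many of the engine integrations listed above (AICommand, ChatGPT-API-unity, UnrealGPT, and similar) are thin wrappers around the same OpenAI chat-completions REST endpoint. As a rough sketch of what such a wrapper sends (the helper names and the system prompt here are illustrative assumptions, not any listed project's actual API):

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt, model="gpt-3.5-turbo"):
    """Build the JSON payload for a chat-completions call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a game-engine scripting assistant."},
            {"role": "user", "content": prompt},
        ],
    }

def send(payload, api_key):
    """POST the payload to the API; requires a valid API key."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # The reply text lives in choices[0].message.content.
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

The engine-specific part of each tool is then just turning the returned text into a script asset or an editor action.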
## Game
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [Opus](https://opus.ai/) | An AI app that turns text into a video game. | | Game |
-| [AgentBench](https://github.com/thudm/agentbench) | A Comprehensive Benchmark to Evaluate LLMs as Agents. | | Agent |
-| [Generative Agents](https://github.com/joonspk-research/generative_agents) | Interactive Simulacra of Human Behavior. | | Agent |
-| [AI Town](https://github.com/a16z-infra/ai-town) | AI Town is a virtual town where AI characters live, chat and socialize. | | Agent |
-| [Cat Town](https://github.com/ykhli/cat-town) | A C(h)atGPT-powered simulation with cats. | | Agent |
-| [AgentSims](https://github.com/py499372727/AgentSims/) | An Open-Source Sandbox for Large Language Model Evaluation. | | Agent |
-| [Biomes](https://github.com/ill-inc/biomes-game) | Biomes is an open source sandbox MMORPG built for the web using web technologies such as Next.js, Typescript, React and WebAssembly. | | Game |
-| [MuG Diffusion](https://github.com/Keytoyze/Mug-Diffusion) | MuG Diffusion is a charting AI for rhythm games based on Stable Diffusion (one of the most powerful AIGC models) with a large modification to incorporate audio waves. | | Game |
-| [StoryGames.ai](https://storygames.buildbox.com/) | AI for Dreamers Make Games. | | Game |
-| [ChatDev](https://github.com/OpenBMB/ChatDev) | Communicative Agents for Software Development. | | Agent |
-| [AutoGen](https://github.com/microsoft/autogen) | Enable Next-Gen Large Language Model Applications. | | Agent |
+| Source | Description | Game Engine | Type |
+| :------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------: | :---: |
+| [Opus](https://opus.ai/) | An AI app that turns text into a video game. | | Game |
+| [AgentBench](https://github.com/thudm/agentbench) | A Comprehensive Benchmark to Evaluate LLMs as Agents. | | Agent |
+| [Generative Agents](https://github.com/joonspk-research/generative_agents) | Interactive Simulacra of Human Behavior. | | Agent |
+| [AI Town](https://github.com/a16z-infra/ai-town) | AI Town is a virtual town where AI characters live, chat and socialize. | | Agent |
+| [Cat Town](https://github.com/ykhli/cat-town) | A C(h)atGPT-powered simulation with cats. | | Agent |
+| [AgentSims](https://github.com/py499372727/AgentSims/) | An Open-Source Sandbox for Large Language Model Evaluation. | | Agent |
+| [Biomes](https://github.com/ill-inc/biomes-game)                            | Biomes is an open-source sandbox MMORPG built for the web using technologies such as Next.js, TypeScript, React, and WebAssembly.                                        |             | Game  |
+| [MuG Diffusion](https://github.com/Keytoyze/Mug-Diffusion) | MuG Diffusion is a charting AI for rhythm games based on Stable Diffusion (one of the most powerful AIGC models) with a large modification to incorporate audio waves. | | Game |
+| [StoryGames.ai](https://storygames.buildbox.com/)                           | AI for dreamers to make games.                                                                                                                                           |             | Game  |
+| [ChatDev](https://github.com/OpenBMB/ChatDev) | Communicative Agents for Software Development. | | Agent |
+| [AutoGen](https://github.com/microsoft/autogen) | Enable Next-Gen Large Language Model Applications. | | Agent |
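The agent entries above (Generative Agents, AI Town, AgentSims, and the rest) share a common loop: each simulated character observes events, consults its memory, and acts. A toy sketch of that loop with the LLM call stubbed out (all names here are illustrative, not any listed project's API):

```python
class Agent:
    """Toy simulacrum: keeps a memory of observed events and reacts."""

    def __init__(self, name):
        self.name = name
        self.memory = []

    def observe(self, event):
        # Record the event for later retrieval.
        self.memory.append(event)

    def act(self):
        # A real agent would prompt an LLM with retrieved memories;
        # here we just react to the most recent observation.
        if self.memory and "greets" in self.memory[-1]:
            return f"{self.name} greets back"
        return f"{self.name} wanders"

alice, bob = Agent("Alice"), Agent("Bob")
bob.observe("Alice greets Bob")
bob.act()  # "Bob greets back"
```

The projects listed differ mainly in how memories are stored and retrieved (e.g. embedding search) and in how the action step is rendered into a game world.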
## Code
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [CodeGeeX](https://github.com/THUDM/CodeGeeX) | An Open Multilingual Code Generation Model. | | Code |
-| [CodeGeeX2](https://github.com/THUDM/CodeGeeX2) | A More Powerful Multilingual Code Generation Model. | | Code |
-| [Cursor](https://www.cursor.so/) | Write, edit, and chat about your code with GPT-4 in a new type of editor. | | Code |
-| [OpenAI Codex](https://openai.com/blog/openai-codex) | OpenAI Codex is a descendant of GPT-3. | | Code |
-| [CodeGen](https://github.com/salesforce/CodeGen) | CodeGen is an open-source model for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex. | | Code |
-| [CodeGen2](https://github.com/salesforce/CodeGen2) | CodeGen2 models for program synthesis. | | Code |
-| [UnityGen AI](https://github.com/himanshuskyrockets/UnityGen-AI) | UnityGen AI is an AI-powered code generation plugin for Unity. | Unity | Code |
-| [bloop](https://github.com/BloopAI/bloop) | bloop is a fast code search engine written in Rust. | | Code |
-| [AI Code Translator](https://github.com/mckaywrigley/ai-code-translator) | Use AI to translate code from one language to another. | | Code |
-| [CodeT5](https://github.com/salesforce/codet5) | Open Code LLMs for Code Understanding and Generation. | | Code |
-| [StarCoder](https://github.com/bigcode-project/starcoder) | 💫 StarCoder is a language model (LM) trained on source code and natural language text. | | Code |
-| [CodeTF](https://github.com/salesforce/codetf) | One-stop Transformer Library for State-of-the-art Code LLM. | | Code |
-| [RobloxScripterAI](https://www.haddock.ai/search?platform=Roblox) | RobloxScripterAI is an AI-powered code generation tool for Roblox. | Roblox | Code |
-| [PandasAI](https://github.com/gventuri/pandas-ai) | Pandas AI is a Python library that integrates generative artificial intelligence capabilities into Pandas, making dataframes conversational. | | Code |
-| [Scikit-LLM](https://github.com/iryna-kondr/scikit-llm) | Seamlessly integrate powerful language models like ChatGPT into scikit-learn for enhanced text analysis tasks. | | Code |
-| [Chapyter](https://github.com/chapyter/chapyter) | ChatGPT Code Interpreter in Jupyter Notebooks. | | Code |
-| [Code Llama](https://github.com/facebookresearch/codellama) | Code Llama is a large language models for code based on Llama 2. | | Code |
-| [SoTaNa](https://github.com/DeepSoftwareAnalytics/SoTaNa) | The Open-Source Software Development Assistant. | | Code |
+| Source | Description | Game Engine | Type |
+| :----------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------- | :---------: | :--: |
+| [CodeGeeX](https://github.com/THUDM/CodeGeeX) | An Open Multilingual Code Generation Model. | | Code |
+| [CodeGeeX2](https://github.com/THUDM/CodeGeeX2) | A More Powerful Multilingual Code Generation Model. | | Code |
+| [Cursor](https://www.cursor.so/) | Write, edit, and chat about your code with GPT-4 in a new type of editor. | | Code |
+| [OpenAI Codex](https://openai.com/blog/openai-codex) | OpenAI Codex is a descendant of GPT-3. | | Code |
+| [CodeGen](https://github.com/salesforce/CodeGen) | CodeGen is an open-source model for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex. | | Code |
+| [CodeGen2](https://github.com/salesforce/CodeGen2) | CodeGen2 models for program synthesis. | | Code |
+| [UnityGen AI](https://github.com/himanshuskyrockets/UnityGen-AI) | UnityGen AI is an AI-powered code generation plugin for Unity. | Unity | Code |
+| [bloop](https://github.com/BloopAI/bloop) | bloop is a fast code search engine written in Rust. | | Code |
+| [AI Code Translator](https://github.com/mckaywrigley/ai-code-translator) | Use AI to translate code from one language to another. | | Code |
+| [CodeT5](https://github.com/salesforce/codet5) | Open Code LLMs for Code Understanding and Generation. | | Code |
+| [StarCoder](https://github.com/bigcode-project/starcoder) | 💫 StarCoder is a language model (LM) trained on source code and natural language text. | | Code |
+| [CodeTF](https://github.com/salesforce/codetf) | One-stop Transformer Library for State-of-the-art Code LLM. | | Code |
+| [RobloxScripterAI](https://www.haddock.ai/search?platform=Roblox) | RobloxScripterAI is an AI-powered code generation tool for Roblox. | Roblox | Code |
+| [PandasAI](https://github.com/gventuri/pandas-ai) | Pandas AI is a Python library that integrates generative artificial intelligence capabilities into Pandas, making dataframes conversational. | | Code |
+| [Scikit-LLM](https://github.com/iryna-kondr/scikit-llm) | Seamlessly integrate powerful language models like ChatGPT into scikit-learn for enhanced text analysis tasks. | | Code |
+| [Chapyter](https://github.com/chapyter/chapyter) | ChatGPT Code Interpreter in Jupyter Notebooks. | | Code |
+| [Code Llama](https://github.com/facebookresearch/codellama)               | Code Llama is a family of large language models for code, built on Llama 2.                                                                   |             | Code |
+| [SoTaNa](https://github.com/DeepSoftwareAnalytics/SoTaNa) | The Open-Source Software Development Assistant. | | Code |
## Framework
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [Moonlander.ai](https://www.moonlander.ai/) | Start building 3D games without any coding using generative AI. | | Framework |
-| [GameAISDK](https://github.com/Tencent/GameAISDK) | Image-based game AI automation framework. | | Framework |
-| [behaviac](https://github.com/Tencent/behaviac) | Behaviac is a framework of the game AI development. | | Framework |
+| Source | Description | Game Engine | Type |
+| :------------------------------------------------ | :-------------------------------------------------------------- | :---------: | :-------: |
+| [Sprite Fusion](https://www.spritefusion.com/) | A free, web-based 2D tilemap editor. | | Framework |
+| [Moonlander.ai](https://www.moonlander.ai/) | Start building 3D games without any coding using generative AI. | | Framework |
+| [GameAISDK](https://github.com/Tencent/GameAISDK) | Image-based game AI automation framework. | | Framework |
+| [behaviac](https://github.com/Tencent/behaviac)   | Behaviac is a framework for game AI development.                 |             | Framework |
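behaviac centers on behavior trees, the standard structure for scripted game AI. A minimal sketch of the behavior-tree idea itself, in plain Python (illustrative only, not behaviac's actual API):

```python
SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node: runs a callable that returns SUCCESS or FAILURE."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, blackboard):
        return self.fn(blackboard)

class Sequence:
    """Runs children in order; fails fast on the first failure."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Runs children in order; succeeds on the first success."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == SUCCESS:
                return SUCCESS
        return FAILURE

# Example: a guard attacks if an enemy is visible, otherwise patrols.
tree = Selector(
    Sequence(
        Action(lambda bb: SUCCESS if bb.get("enemy_visible") else FAILURE),
        Action(lambda bb: bb.setdefault("did", []).append("attack") or SUCCESS),
    ),
    Action(lambda bb: bb.setdefault("did", []).append("patrol") or SUCCESS),
)

bb = {"enemy_visible": True}
tree.tick(bb)  # bb["did"] == ["attack"]
```

Frameworks like behaviac add on top of this core an editor for authoring trees, serialization, and per-node debugging.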
## Writer
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [Notebook.ai](https://github.com/indentlabs/notebook) | Notebook.ai is a set of tools for writers, game designers, and roleplayers to create magnificent universes – and everything within them. | | Writer |
-| [NovelAI](https://novelai.net/) | Driven by AI, painlessly construct unique stories, thrilling tales, seductive romances, or just fool around. | | Writer |
-| [AI-Writer](https://github.com/BlinkDL/AI-Writer) | AI writes novels, generates fantasy and romance web articles, etc. Chinese pre-trained generative model. | | Writer |
-| [Novel](https://github.com/steven-tey/novel) | Notion-style WYSIWYG editor with AI-powered autocompletions. | | Writer |
+| Source | Description | Game Engine | Type |
+| :---------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------- | :---------: | :----: |
+| [Notebook.ai](https://github.com/indentlabs/notebook) | Notebook.ai is a set of tools for writers, game designers, and roleplayers to create magnificent universes – and everything within them. | | Writer |
+| [NovelAI](https://novelai.net/) | Driven by AI, painlessly construct unique stories, thrilling tales, seductive romances, or just fool around. | | Writer |
+| [AI-Writer](https://github.com/BlinkDL/AI-Writer) | AI writes novels, generates fantasy and romance web articles, etc. Chinese pre-trained generative model. | | Writer |
+| [Novel](https://github.com/steven-tey/novel) | Notion-style WYSIWYG editor with AI-powered autocompletions. | | Writer |
## Image
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [DALL·E 2](https://openai.com/product/dall-e-2) | DALL·E 2 is an AI system that can create realistic images and art from a description in natural language. | | Image |
-| [Stable Diffusion](https://github.com/CompVis/stable-diffusion) | A latent text-to-image diffusion model. | | Image |
-| [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) | A browser interface based on Gradio library for Stable Diffusion. | | Image |
-| [Stable Diffusion web UI](https://github.com/Sygil-Dev/sygil-webui) | Web-based UI for Stable Diffusion. | | Image |
-| [Stable Diffusion WebUI Chinese](https://github.com/VinsonLaro/stable-diffusion-webui-chinese) | Chinese version of stable-diffusion-webui. | | Image |
-| [Stable Diffusion XL](https://clipdrop.co/stable-diffusion) | Generate images from text. | | Image |
-| [stable-diffusion.cpp](https://github.com/leejet/stable-diffusion.cpp) | Stable Diffusion in pure C/C++. | | Image |
-| [Stable Doodle](https://clipdrop.co/stable-doodle) | Stable Doodle is a sketch-to-image tool that converts a simple drawing into a dynamic image. | | Image |
-| [Stable.art](https://github.com/isekaidev/stable.art) | Photoshop plugin for Stable Diffusion with Automatic1111 as backend (locally or with Google Colab). | | Image |
-| [StableStudio](https://github.com/Stability-AI/StableStudio) | StableStudio by Stability AI | | Image |
-| [Unity ML Stable Diffusion](https://github.com/keijiro/UnityMLStableDiffusion) | Core ML Stable Diffusion on Unity. | Unity | Image |
-| [Disco Diffusion](https://github.com/alembics/disco-diffusion) | A frankensteinian amalgamation of notebooks, models and techniques for the generation of AI Art and Animations. | | Image |
-| [ControlNet](https://github.com/lllyasviel/ControlNet) | ControlNet is a neural network structure to control diffusion models by adding extra conditions. | | Image |
-| [Blender-ControlNet](https://github.com/coolzilj/Blender-ControlNet) | Using ControlNet right in Blender. | Blender | Image |
-| [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) | WebUI extension for ControlNet. | | Image |
-| [Openpose Editor](https://github.com/fkunn1326/openpose-editor) | Openpose Editor for AUTOMATIC1111's stable-diffusion-webui. | | Image |
-| [Midjourney](https://www.midjourney.com/) | Midjourney is an independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species. | | Image |
-| [Imagen](https://imagen.research.google/) | Imagen is an AI system that creates photorealistic images from input text. | | Image |
-| [Lexica](https://lexica.art/) | A Stable Diffusion prompts search engine. | | Image |
-| [Img2Prompt](https://www.img2prompt.io/) | Get prompts from stable diffusion generated images. | | Image |
-| [Prompt.Art](https://prompt.art/) | The Generators Hub. | | Image |
-| [Hua](https://github.com/BlinkDL/Hua) | Hua is an AI image editor with Stable Diffusion (and more). | | Image |
-| [CLIPasso](https://github.com/yael-vinker/CLIPasso) | A method for converting an image of an object to a sketch, allowing for varying levels of abstraction. | | Image |
-| [DeepAI](https://deepai.org/) | DeepAI offers a suite of tools that use AI to enhance your creativity. | | Image |
-| [Photoroom](https://www.photoroom.com/backgrounds) | AI Background Generator. | | Image |
-| [Plask](https://plask.ai/) | AI image generation in the cloud. | | Image |
-| [MetaShoot](https://metashoot.vinzi.xyz/) | MetaShoot is a digital twin of a photo studio, developed as a plugin for Unreal Engine that gives any creator the ability to produce highly realistic renders in the easiest and quickest way. | Unreal Engine | Image |
-| [Rich-Text-to-Image](https://github.com/SongweiGe/rich-text-to-image) | Expressive Text-to-Image Generation with Rich Text. | | Image |
-| [ClipDrop](https://clipdrop.co/) | Create stunning visuals in seconds. | | Image |
-| [ComfyUI](https://github.com/comfyanonymous/ComfyUI) | A powerful and modular stable diffusion GUI with a graph/nodes interface. | | Image |
-| [Draw Things](https://drawthings.ai/) | AI- assisted image generation in Your Pocket. | | Image |
-| [DeepFloyd IF](https://github.com/deep-floyd/IF) | IF by DeepFloyd Lab at StabilityAI. | | Image |
-| [StyleDrop](https://styledrop.github.io/) | Text-To-Image Generation in Any Style. | | Image |
-| [Depth map library and poser](https://github.com/jexom/sd-webui-depth-lib) | Depth map library for use with the Control Net extension for Automatic1111/stable-diffusion-webui. | | Image |
-| [Segment Anything](https://segment-anything.com/) | Segment Anything Model (SAM): a new AI model from Meta AI that can "cut out" any object , in any image , with a single click. | | Image |
-| [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything) | Automatically Detect , Segment and Generate Anything with Image, Text, and Audio Inputs. | | Image |
-| [DragGAN](https://github.com/XingangPan/DragGAN) | Interactive Point-based Manipulation on the Generative Image Manifold. | | Image |
-| [BriVL](https://github.com/BAAI-WuDao/BriVL) | Bridging Vision and Language Model. | | Image |
-| [ConceptLab](https://github.com/kfirgoldberg/ConceptLab) | Creative Generation using Diffusion Prior Constraints. | | Image |
-| [DWPose](https://github.com/idea-research/dwpose) | Effective Whole-body Pose Estimation with Two-stages Distillation. | | Image |
-| [GIFfusion](https://github.com/DN6/giffusion) | Create GIFs and Videos using Stable Diffusion. | | Image |
-| [Fooocus](https://github.com/lllyasviel/Fooocus) | Focus on prompting and generating. | | Image |
-| [Ideogram](https://ideogram.ai/login) | Helping people become more creative. | | Image |
-| [SyncDreamer](https://github.com/liuyuan-pal/SyncDreamer) | Generating Multiview-consistent Images from a Single-view Image. | | Image |
-| [EasyPhoto](https://github.com/aigc-apps/sd-webui-EasyPhoto) | Your Smart AI Photo Generator. | | Image |
-| [KREA](https://www.krea.ai/) | Generate images and videos with a delightful AI-powered design tool. | | Image |
-| [StreamDiffusion](https://github.com/cumulo-autumn/StreamDiffusion) | A Pipeline-Level Solution for Real-Time Interactive Generation. | | Image |
+| Source | Description | Game Engine | Type |
+| :--------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :---: |
+| [DALL·E 2](https://openai.com/product/dall-e-2) | DALL·E 2 is an AI system that can create realistic images and art from a description in natural language. | | Image |
+| [Stable Diffusion](https://github.com/CompVis/stable-diffusion) | A latent text-to-image diffusion model. | | Image |
+| [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) | A browser interface based on Gradio library for Stable Diffusion. | | Image |
+| [Stable Diffusion web UI](https://github.com/Sygil-Dev/sygil-webui) | Web-based UI for Stable Diffusion. | | Image |
+| [Stable Diffusion WebUI Chinese](https://github.com/VinsonLaro/stable-diffusion-webui-chinese) | Chinese version of stable-diffusion-webui. | | Image |
+| [Stable Diffusion XL](https://clipdrop.co/stable-diffusion) | Generate images from text. | | Image |
+| [stable-diffusion.cpp](https://github.com/leejet/stable-diffusion.cpp) | Stable Diffusion in pure C/C++. | | Image |
+| [Stable Doodle](https://clipdrop.co/stable-doodle) | Stable Doodle is a sketch-to-image tool that converts a simple drawing into a dynamic image. | | Image |
+| [Stable.art](https://github.com/isekaidev/stable.art) | Photoshop plugin for Stable Diffusion with Automatic1111 as backend (locally or with Google Colab). | | Image |
+| [StableStudio](https://github.com/Stability-AI/StableStudio)                                   | StableStudio by Stability AI.                                                                                                                                                                   |               | Image |
+| [Unity ML Stable Diffusion](https://github.com/keijiro/UnityMLStableDiffusion) | Core ML Stable Diffusion on Unity. | Unity | Image |
+| [Disco Diffusion](https://github.com/alembics/disco-diffusion)                                 | A Frankensteinian amalgamation of notebooks, models and techniques for the generation of AI Art and Animations.                                                                                 |               | Image |
+| [ControlNet](https://github.com/lllyasviel/ControlNet) | ControlNet is a neural network structure to control diffusion models by adding extra conditions. | | Image |
+| [Blender-ControlNet](https://github.com/coolzilj/Blender-ControlNet) | Using ControlNet right in Blender. | Blender | Image |
+| [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) | WebUI extension for ControlNet. | | Image |
+| [Openpose Editor](https://github.com/fkunn1326/openpose-editor) | Openpose Editor for AUTOMATIC1111's stable-diffusion-webui. | | Image |
+| [Midjourney](https://www.midjourney.com/) | Midjourney is an independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species. | | Image |
+| [Imagen](https://imagen.research.google/) | Imagen is an AI system that creates photorealistic images from input text. | | Image |
+| [Lexica](https://lexica.art/) | A Stable Diffusion prompts search engine. | | Image |
+| [Img2Prompt](https://www.img2prompt.io/) | Get prompts from stable diffusion generated images. | | Image |
+| [Prompt.Art](https://prompt.art/) | The Generators Hub. | | Image |
+| [Hua](https://github.com/BlinkDL/Hua) | Hua is an AI image editor with Stable Diffusion (and more). | | Image |
+| [CLIPasso](https://github.com/yael-vinker/CLIPasso) | A method for converting an image of an object to a sketch, allowing for varying levels of abstraction. | | Image |
+| [DeepAI](https://deepai.org/) | DeepAI offers a suite of tools that use AI to enhance your creativity. | | Image |
+| [Photoroom](https://www.photoroom.com/backgrounds) | AI Background Generator. | | Image |
+| [Plask](https://plask.ai/) | AI image generation in the cloud. | | Image |
+| [MetaShoot](https://metashoot.vinzi.xyz/) | MetaShoot is a digital twin of a photo studio, developed as a plugin for Unreal Engine that gives any creator the ability to produce highly realistic renders in the easiest and quickest way. | Unreal Engine | Image |
+| [Rich-Text-to-Image](https://github.com/SongweiGe/rich-text-to-image) | Expressive Text-to-Image Generation with Rich Text. | | Image |
+| [ClipDrop](https://clipdrop.co/) | Create stunning visuals in seconds. | | Image |
+| [ComfyUI](https://github.com/comfyanonymous/ComfyUI) | A powerful and modular stable diffusion GUI with a graph/nodes interface. | | Image |
+| [Draw Things](https://drawthings.ai/)                                                          | AI-assisted image generation in your pocket.                                                                                                                                                    |               | Image |
+| [DeepFloyd IF](https://github.com/deep-floyd/IF) | IF by DeepFloyd Lab at StabilityAI. | | Image |
+| [StyleDrop](https://styledrop.github.io/) | Text-To-Image Generation in Any Style. | | Image |
+| [Depth map library and poser](https://github.com/jexom/sd-webui-depth-lib) | Depth map library for use with the Control Net extension for Automatic1111/stable-diffusion-webui. | | Image |
+| [Segment Anything](https://segment-anything.com/)                                              | Segment Anything Model (SAM): a new AI model from Meta AI that can "cut out" any object, in any image, with a single click.                                                                     |               | Image |
+| [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything)        | Automatically Detect, Segment and Generate Anything with Image, Text, and Audio Inputs.                                                                                                         |               | Image |
+| [DragGAN](https://github.com/XingangPan/DragGAN) | Interactive Point-based Manipulation on the Generative Image Manifold. | | Image |
+| [BriVL](https://github.com/BAAI-WuDao/BriVL) | Bridging Vision and Language Model. | | Image |
+| [ConceptLab](https://github.com/kfirgoldberg/ConceptLab) | Creative Generation using Diffusion Prior Constraints. | | Image |
+| [DWPose](https://github.com/idea-research/dwpose) | Effective Whole-body Pose Estimation with Two-stages Distillation. | | Image |
+| [GIFfusion](https://github.com/DN6/giffusion) | Create GIFs and Videos using Stable Diffusion. | | Image |
+| [Fooocus](https://github.com/lllyasviel/Fooocus) | Focus on prompting and generating. | | Image |
+| [Ideogram](https://ideogram.ai/login) | Helping people become more creative. | | Image |
+| [SyncDreamer](https://github.com/liuyuan-pal/SyncDreamer) | Generating Multiview-consistent Images from a Single-view Image. | | Image |
+| [EasyPhoto](https://github.com/aigc-apps/sd-webui-EasyPhoto) | Your Smart AI Photo Generator. | | Image |
+| [KREA](https://www.krea.ai/) | Generate images and videos with a delightful AI-powered design tool. | | Image |
+| [StreamDiffusion](https://github.com/cumulo-autumn/StreamDiffusion) | A Pipeline-Level Solution for Real-Time Interactive Generation. | | Image |
## Texture
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [Polycam](https://poly.cam/material-generator) | Create your own 3D textures just by typing. | | Texture |
-| [With Poly](https://withpoly.com/browse/textures) | Create Textures With Poly. Generate 3D materials with AI in a free online editor, or search our growing community library. | | Texture |
-| [Dream Textures](https://github.com/carson-katri/dream-textures) | Stable Diffusion built-in to Blender. Create textures, concept art, background assets, and more with a simple text prompt. | Blender | Texture |
-| [Text2Tex](https://daveredrum.github.io/Text2Tex/) | Text-driven texture Synthesis via Diffusion Models. | | Texture |
-| [Texture Lab](https://www.texturelab.xyz/) | AI-generated texures. You can generate your own with a text prompt. | | Texture |
-| [Neuralangelo](https://github.com/NVlabs/neuralangelo) | High-Fidelity Neural Surface Reconstruction. | | Texture |
-| [TexFusion](https://research.nvidia.com/labs/toronto-ai/texfusion/) | Synthesizing 3D Textures with Text-Guided Image Diffusion Models. | | Texture |
-| [DreamSpace](https://github.com/ybbbbt/dreamspace) | Dreaming Your Room Space with Text-Driven Panoramic Texture Propagation. | | Texture |
+| Source | Description | Game Engine | Type |
+| :------------------------------------------------------------------ | :------------------------------------------------------------------------------------------------------------------------- | :---------: | :-----: |
+| [Polycam](https://poly.cam/material-generator) | Create your own 3D textures just by typing. | | Texture |
+| [With Poly](https://withpoly.com/browse/textures) | Create Textures With Poly. Generate 3D materials with AI in a free online editor, or search our growing community library. | | Texture |
+| [Dream Textures](https://github.com/carson-katri/dream-textures) | Stable Diffusion built-in to Blender. Create textures, concept art, background assets, and more with a simple text prompt. | Blender | Texture |
+| [Text2Tex](https://daveredrum.github.io/Text2Tex/)                   | Text-driven Texture Synthesis via Diffusion Models.                                                                         |             | Texture |
+| [Texture Lab](https://www.texturelab.xyz/)                           | AI-generated textures. You can generate your own with a text prompt.                                                        |             | Texture |
+| [Neuralangelo](https://github.com/NVlabs/neuralangelo) | High-Fidelity Neural Surface Reconstruction. | | Texture |
+| [TexFusion](https://research.nvidia.com/labs/toronto-ai/texfusion/) | Synthesizing 3D Textures with Text-Guided Image Diffusion Models. | | Texture |
+| [DreamSpace](https://github.com/ybbbbt/dreamspace) | Dreaming Your Room Space with Text-Driven Panoramic Texture Propagation. | | Texture |
## Shader
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [AI Shader](https://github.com/keijiro/AIShader) | ChatGPT-powered shader generator for Unity. | Unity | Shader |
+| Source | Description | Game Engine | Type |
+| :----------------------------------------------- | :------------------------------------------ | :---------: | :----: |
+| [AI Shader](https://github.com/keijiro/AIShader) | ChatGPT-powered shader generator for Unity. | Unity | Shader |
## 3D Model
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [BlenderGPT](https://github.com/gd3kr/BlenderGPT) | Use commands in English to control Blender with OpenAI's GPT-4. | Blender | Model |
-| [Blender-GPT](https://github.com/TREE-Ind/Blender-GPT) | An all-in-one Blender assistant powered by GPT3/4 + Whisper integration. | Blender | Model |
-| [chatGPT-maya](https://github.com/LouisRossouw/chatGPT-maya) | Simple Maya tool that utilizes open AI to perform basic tasks based on descriptive instructions. | Maya | Model |
-| [Sloyd](https://www.sloyd.ai/) | 3D modelling has never been easier. | | Model |
-| [Point·E](https://github.com/openai/point-e) | Point cloud diffusion for 3D model synthesis. | | Model |
-| [Shap-E](https://github.com/openai/shap-e) | Generate 3D objects conditioned on text or images. | | Model |
-| [Stable Dreamfusion](https://github.com/ashawkey/stable-dreamfusion) | A pytorch implementation of the text-to-3D model Dreamfusion, powered by the Stable Diffusion text-to-2D model. | | Model |
-| [LION](https://github.com/nv-tlabs/LION) | Latent Point Diffusion Models for 3D Shape Generation. | | Model |
-| [NVIDIA Instant NeRF](https://github.com/NVlabs/instant-ngp) | Instant neural graphics primitives: lightning fast NeRF and more. | | Model |
-| [Instruct-NeRF2NeRF](https://instruct-nerf2nerf.github.io/) | Editing 3D Scenes with Instructions. | | Model |
-| [Zero-1-to-3](https://github.com/cvlab-columbia/zero123) | Zero-shot One Image to 3D Object. | | Model |
-| [PAniC-3D](https://github.com/shuhongchen/panic3d-anime-reconstruction) | Stylized Single-view 3D Reconstruction from Portraits of Anime Characters. | | Model |
-| [Make-It-3D](https://github.com/junshutang/Make-It-3D) | High-Fidelity 3D Creation from A Single Image with Diffusion Prior. | | Model |
-| [Blockade Labs](https://www.blockadelabs.com/) | Digital alchemy is real with Skybox Lab - the ultimate AI-powered solution for generating incredible 360° skybox experiences from text prompts. | | Model |
-| [Spline AI](https://spline.design/ai) | The power of AI is coming to the 3rd dimension. Generate objects, animations, and textures using prompts. | | Model |
-| [Luma AI](https://lumalabs.ai/) | Capture in lifelike 3D. Unmatched photorealism, reflections, and details. The future of VFX is now, for everyone! | | Model |
-| [Anything-3D](https://github.com/Anything-of-anything/Anything-3D) | Segment-Anything + 3D. Let's lift the anything to 3D. | | Model |
-| [threestudio](https://github.com/threestudio-project/threestudio) | A unified framework for 3D content generation. | | Model |
-| [3Dpresso](https://3dpresso.ai/) | Extract a 3D model of an object, captured on a video. | | Model |
-| [ProlificDreamer](https://ml.cs.tsinghua.edu.cn/prolificdreamer/) | High-Fidelity and diverse Text-to-3D generation with Variational score Distillation. | | Model |
-| [HiFA](https://hifa-team.github.io/HiFA-site/) | High-fidelity Text-to-3D with advance Diffusion guidance. | | Model |
-| [Tafi](https://maketafi.com/ai) | AI text to 3D character engine. | | Model |
-| [Infinigen](https://github.com/princeton-vl/infinigen) | Infinite Photorealistic Worlds using Procedural Generation. | | 3D |
-| [One-2-3-45](https://one-2-3-45.github.io/) | Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization. | | Model |
-| [3D-LLM](https://github.com/UMass-Foundation-Model/3D-LLM) | Injecting the 3D World into Large Language Models. | | 3D |
-| [Dash](https://www.polygonflow.io/) | Your Copilot for World Building in Unreal Engine. | Unreal Engine | 3D |
-| [MVDream](https://github.com/MV-Dream/MVDream) | Multi-view Diffusion for 3D Generation. | | 3D |
-| [CityDreamer](https://github.com/hzxie/city-dreamer) | Compositional Generative Model of Unbounded 3D Cities. | | 3D |
-| [Meshy](https://www.meshy.ai/) | Create Stunning 3D Game Assets with AI. | | 3D |
-| [CSM](https://www.csm.ai/) | Generate 3D worlds from images and videos. | | 3D |
-| [GenieLabs](https://www.genielabs.tech/) | Empower your game with AI-UGC. | | 3D |
-| [GaussianDreamer](https://github.com/hustvl/GaussianDreamer) | Fast Generation from Text to 3D Gaussian Splatting with Point Cloud Priors. | | 3D |
-| [3D-GPT](https://chuny1.github.io/3DGPT/3dgpt.html) | Procedural 3D MODELING WITH LARGE LANGUAGE MODELS. | | 3D |
-| [Wonder3D](https://github.com/xxlong0/Wonder3D) | Single Image to 3D using Cross-Domain Diffusion. | | 3D |
-| [lumine AI](https://ilumine.ai/) | AI-Powered Creativity. | | 3D |
-| [UnityGaussianSplatting](https://github.com/aras-p/UnityGaussianSplatting) | Toy Gaussian Splatting visualization in Unity. | Unity | 3D |
+| Source | Description | Game Engine | Type |
+| :------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :---: |
+| [BlenderGPT](https://github.com/gd3kr/BlenderGPT) | Use commands in English to control Blender with OpenAI's GPT-4. | Blender | Model |
+| [Blender-GPT](https://github.com/TREE-Ind/Blender-GPT) | An all-in-one Blender assistant powered by GPT3/4 + Whisper integration. | Blender | Model |
+| [chatGPT-maya](https://github.com/LouisRossouw/chatGPT-maya)                | Simple Maya tool that uses OpenAI to perform basic tasks based on descriptive instructions.                                                      | Maya          | Model |
+| [Sloyd](https://www.sloyd.ai/) | 3D modelling has never been easier. | | Model |
+| [Point·E](https://github.com/openai/point-e) | Point cloud diffusion for 3D model synthesis. | | Model |
+| [Shap-E](https://github.com/openai/shap-e) | Generate 3D objects conditioned on text or images. | | Model |
+| [Stable Dreamfusion](https://github.com/ashawkey/stable-dreamfusion)        | A PyTorch implementation of the text-to-3D model DreamFusion, powered by the Stable Diffusion text-to-2D model.                                  |               | Model |
+| [LION](https://github.com/nv-tlabs/LION) | Latent Point Diffusion Models for 3D Shape Generation. | | Model |
+| [NVIDIA Instant NeRF](https://github.com/NVlabs/instant-ngp) | Instant neural graphics primitives: lightning fast NeRF and more. | | Model |
+| [Instruct-NeRF2NeRF](https://instruct-nerf2nerf.github.io/) | Editing 3D Scenes with Instructions. | | Model |
+| [Zero-1-to-3](https://github.com/cvlab-columbia/zero123) | Zero-shot One Image to 3D Object. | | Model |
+| [PAniC-3D](https://github.com/shuhongchen/panic3d-anime-reconstruction) | Stylized Single-view 3D Reconstruction from Portraits of Anime Characters. | | Model |
+| [Make-It-3D](https://github.com/junshutang/Make-It-3D) | High-Fidelity 3D Creation from A Single Image with Diffusion Prior. | | Model |
+| [Blockade Labs](https://www.blockadelabs.com/) | Digital alchemy is real with Skybox Lab - the ultimate AI-powered solution for generating incredible 360° skybox experiences from text prompts. | | Model |
+| [Spline AI](https://spline.design/ai) | The power of AI is coming to the 3rd dimension. Generate objects, animations, and textures using prompts. | | Model |
+| [Luma AI](https://lumalabs.ai/) | Capture in lifelike 3D. Unmatched photorealism, reflections, and details. The future of VFX is now, for everyone! | | Model |
+| [Anything-3D](https://github.com/Anything-of-anything/Anything-3D)          | Segment-Anything + 3D. Let's lift anything to 3D.                                                                                                |               | Model |
+| [threestudio](https://github.com/threestudio-project/threestudio) | A unified framework for 3D content generation. | | Model |
+| [3Dpresso](https://3dpresso.ai/) | Extract a 3D model of an object, captured on a video. | | Model |
+| [ProlificDreamer](https://ml.cs.tsinghua.edu.cn/prolificdreamer/) | High-Fidelity and diverse Text-to-3D generation with Variational score Distillation. | | Model |
+| [HiFA](https://hifa-team.github.io/HiFA-site/)                              | High-fidelity Text-to-3D with advanced diffusion guidance.                                                                                       |               | Model |
+| [Tafi](https://maketafi.com/ai) | AI text to 3D character engine. | | Model |
+| [Infinigen](https://github.com/princeton-vl/infinigen) | Infinite Photorealistic Worlds using Procedural Generation. | | 3D |
+| [One-2-3-45](https://one-2-3-45.github.io/) | Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization. | | Model |
+| [3D-LLM](https://github.com/UMass-Foundation-Model/3D-LLM) | Injecting the 3D World into Large Language Models. | | 3D |
+| [Dash](https://www.polygonflow.io/) | Your Copilot for World Building in Unreal Engine. | Unreal Engine | 3D |
+| [MVDream](https://github.com/MV-Dream/MVDream) | Multi-view Diffusion for 3D Generation. | | 3D |
+| [CityDreamer](https://github.com/hzxie/city-dreamer) | Compositional Generative Model of Unbounded 3D Cities. | | 3D |
+| [Meshy](https://www.meshy.ai/) | Create Stunning 3D Game Assets with AI. | | 3D |
+| [CSM](https://www.csm.ai/) | Generate 3D worlds from images and videos. | | 3D |
+| [GenieLabs](https://www.genielabs.tech/) | Empower your game with AI-UGC. | | 3D |
+| [GaussianDreamer](https://github.com/hustvl/GaussianDreamer) | Fast Generation from Text to 3D Gaussian Splatting with Point Cloud Priors. | | 3D |
+| [3D-GPT](https://chuny1.github.io/3DGPT/3dgpt.html)                         | Procedural 3D Modeling with Large Language Models.                                                                                               |               | 3D    |
+| [Wonder3D](https://github.com/xxlong0/Wonder3D) | Single Image to 3D using Cross-Domain Diffusion. | | 3D |
+| [lumine AI](https://ilumine.ai/) | AI-Powered Creativity. | | 3D |
+| [UnityGaussianSplatting](https://github.com/aras-p/UnityGaussianSplatting) | Toy Gaussian Splatting visualization in Unity. | Unity | 3D |
## Avatar
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [ChatdollKit](https://github.com/uezo/ChatdollKit) | ChatdollKit enables you to make your 3D model into a chatbot. | Unity | Avatar |
-| [UnityAIWithChatGPT](https://github.com/haili1234/UnityAIWithChatGPT) | Based on Unity, ChatGPT+UnityChan voice interactive display is realized. | Unity | Avatar |
-| [ChatAvatar](https://hyperhuman.deemos.com/chatavatar) | Progressive generation Of Animatable 3D Faces Under Text guidance. | | Avatar |
-| [StyleAvatar3D](https://github.com/icoz69/StyleAvatar3D) | Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation. | | Avatar |
-| [HeadSculpt](https://brandonhan.uk/HeadSculpt/) | Crafting 3D Head Avatars with Text. | | Avatar |
-| [MotionGPT](https://github.com/OpenMotionLab/MotionGPT) | Human Motion as a Foreign Language, a unified motion-language generation model using LLMs. | | Avatar |
-| [CALM](https://github.com/NVlabs/CALM) | Conditional Adversarial Latent Models for Directable Virtual Characters. | | Avatar |
-| [Text2Control3D](https://text2control3d.github.io/) | Controllable 3D Avatar Generation in Neural Radiance Fields using Geometry-Guided Text-to-Image Diffusion Model. | | Avatar |
+| Source | Description | Game Engine | Type |
+| :-------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------- | :---------: | :----: |
+| [ChatdollKit](https://github.com/uezo/ChatdollKit) | ChatdollKit enables you to make your 3D model into a chatbot. | Unity | Avatar |
+| [UnityAIWithChatGPT](https://github.com/haili1234/UnityAIWithChatGPT)  | A Unity-based interactive voice demo combining ChatGPT and UnityChan.                                             | Unity       | Avatar |
+| [ChatAvatar](https://hyperhuman.deemos.com/chatavatar)                 | Progressive Generation of Animatable 3D Faces under Text Guidance.                                                |             | Avatar |
+| [StyleAvatar3D](https://github.com/icoz69/StyleAvatar3D) | Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation. | | Avatar |
+| [HeadSculpt](https://brandonhan.uk/HeadSculpt/) | Crafting 3D Head Avatars with Text. | | Avatar |
+| [MotionGPT](https://github.com/OpenMotionLab/MotionGPT) | Human Motion as a Foreign Language, a unified motion-language generation model using LLMs. | | Avatar |
+| [CALM](https://github.com/NVlabs/CALM) | Conditional Adversarial Latent Models for Directable Virtual Characters. | | Avatar |
+| [Text2Control3D](https://text2control3d.github.io/) | Controllable 3D Avatar Generation in Neural Radiance Fields using Geometry-Guided Text-to-Image Diffusion Model. | | Avatar |
## Animation
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [Wonder Studio](https://wonderdynamics.com/) | An AI tool that automatically animates, lights and composes CG characters into a live-action scene. | | Animation |
-| [NUWA-XL](https://msra-nuwa.azurewebsites.net/#/) | A novel Diffusion over Diffusion architecture for eXtremely Long video generation. | | Animation |
-| [SadTalker](https://github.com/Winfredy/SadTalker) | Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation. | | Animation |
-| [SadTalker-Video-Lip-Sync](https://github.com/Zz-ww/SadTalker-Video-Lip-Sync) | This project is based on SadTalkers Wav2lip for video lip synthesis. | | Animation |
-| [GeneFace](https://github.com/yerfor/GeneFace) | Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis. | | Animation |
-| [TaleCrafter](https://github.com/VideoCrafter/TaleCrafter) | An interactive story visualization tool that support multiple characters. | | Animation |
-| [AnimateDiff](https://github.com/guoyww/animatediff/) | Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning. | | Animation |
-| [Omni Animation](https://omnianimation.ai/) | AI Generated High Fidelity Animations. | | Animation |
-| [Animate Anyone](https://github.com/HumanAIGC/AnimateAnyone) | Consistent and Controllable Image-to-Video Synthesis for Character Animation. | | Animation |
+| Source | Description | Game Engine | Type |
+| :---------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------- | :---------: | :-------: |
+| [Wonder Studio](https://wonderdynamics.com/) | An AI tool that automatically animates, lights and composes CG characters into a live-action scene. | | Animation |
+| [NUWA-XL](https://msra-nuwa.azurewebsites.net/#/) | A novel Diffusion over Diffusion architecture for eXtremely Long video generation. | | Animation |
+| [SadTalker](https://github.com/Winfredy/SadTalker) | Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation. | | Animation |
+| [SadTalker-Video-Lip-Sync](https://github.com/Zz-ww/SadTalker-Video-Lip-Sync) | This project is based on SadTalker's Wav2Lip for video lip synthesis.                                     |             | Animation |
+| [GeneFace](https://github.com/yerfor/GeneFace) | Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis. | | Animation |
+| [TaleCrafter](https://github.com/VideoCrafter/TaleCrafter)                     | An interactive story visualization tool that supports multiple characters.                                |             | Animation |
+| [AnimateDiff](https://github.com/guoyww/animatediff/) | Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning. | | Animation |
+| [Omni Animation](https://omnianimation.ai/) | AI Generated High Fidelity Animations. | | Animation |
+| [Animate Anyone](https://github.com/HumanAIGC/AnimateAnyone) | Consistent and Controllable Image-to-Video Synthesis for Character Animation. | | Animation |
## Video
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [CoNR](https://github.com/megvii-research/CoNR) | Genarate vivid dancing videos from hand-drawn anime character sheets(ACS). | | Video |
-| [Gen-2](https://research.runwayml.com/gen2) | A multi-modal AI system that can generate novel videos with text, images, or video clips. | | Video |
-| [CogVideo](https://models.aminer.cn/cogvideo/) | Generate Videos from Text Descriptions. | | Video |
-| [Imagen Video](https://imagen.research.google/video/) | Given a text prompt, Imagen Video generates high definition videos using a base video generation model and a sequence of interleaved spatial and temporal video super-resolution models. | | Video |
-| [Make-A-Video](https://makeavideo.studio/) | Make-A-Video is a state-of-the-art AI system that generates videos from text. | | Video |
-| [Text2Video-Zero](https://github.com/Picsart-AI-Research/Text2Video-Zero) | Text-to-Image Diffusion Models are Zero-Shot Video Generators. | | Video |
-| [Tune-A-Video](https://github.com/showlab/Tune-A-Video) | One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation. | | Video |
-| [Video LDMs](https://research.nvidia.com/labs/toronto-ai/VideoLDM/) | Align your Latents: High- resolution Video Synthesis with Latent Diffusion Models. | | Video |
-| [EDGE](https://github.com/Stanford-TML/EDGE) | We introduce EDGE, a powerful method for editable dance generation that is capable of creating realistic, physically-plausible dances while remaining faithful to arbitrary input music. | | Video |
-| [Pollinations](https://pollinations.ai/c/Video) | Creating gets easy, fast, and fun. | | Video |
-| [Mov2mov](https://github.com/Scholar01/sd-webui-mov2mov) | Mov2mov plugin for Automatic1111/stable-diffusion-webui. | | Video |
-| [Track-Anything](https://github.com/gaomingqi/Track-Anything) | Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything and XMem. | | Video |
-| [BackgroundRemover](https://github.com/nadermx/backgroundremover) | Background Remover lets you Remove Background from images and video using AI with a simple command line interface that is free and open source. | | Video |
-| [dolphin](https://github.com/kaleido-lab/dolphin) | General video interaction platform based on LLMs. | | Video |
-| [Descript](https://www.descript.com/) | Descript is the simple, powerful , and fun way to edit. | | Video |
-| [Zeroscope](https://huggingface.co/spaces/fffiloni/zeroscope) | Zeroscope Text-to-Video. | | Video |
-| [Pika Labs](https://www.pika.art/) | Pika Labs is revolutionizing video-making experience with AI. | | Video |
-| [Animate-A-Story](https://github.com/VideoCrafter/Animate-A-Story) | Retrieval-Augmented Video Generation for Telling a Story. | | Video |
-| [ShortGPT](https://github.com/RayVentura/ShortGPT) | An experimental AI framework for automated short/video content creation. | | Video |
-| [StableVideo](https://github.com/rese1f/stablevideo) | Text-driven Consistency-aware Diffusion Video Editing. | | Video |
-| [CoDeF](https://github.com/qiuyu96/codef) | Content Deformation Fields for Temporally Consistent Video Processing. | | Video |
-| [Reuse and Diffuse](https://anonymous0x233.github.io/ReuseAndDiffuse/) | Iterative Denoising for Text-to-Video Generation. | | Video |
-| [MotionDirector](https://github.com/showlab/MotionDirector) | Motion Customization of Text-to-Video Diffusion Models. | | Video |
-| [Moonvalley](https://moonvalley.ai/) | Moonvalley is a groundbreaking new text-to-video generative AI model. | | Video |
-| [Stable Video Diffusion](https://github.com/Stability-AI/generative-models) | Stable Video Diffusion (SVD) Image-to-Video. | | Video |
-| [MotionCtrl](https://wzhouxiff.github.io/projects/MotionCtrl/) | A Unified and Flexible Motion Controller for Video Generation. | | Video |
-| [W.A.L.T](https://walt-video-diffusion.github.io/) | Photorealistic Video Generation with Diffusion Models. | | Video |
-| [I2VGen-XL](https://i2vgen-xl.github.io/) | High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models. | | Video |
+| Source | Description | Game Engine | Type |
+| :-------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------: | :---: |
+| [CoNR](https://github.com/megvii-research/CoNR)                              | Generate vivid dancing videos from hand-drawn anime character sheets (ACS).                                                                                                               |             | Video |
+| [Gen-2](https://research.runwayml.com/gen2) | A multi-modal AI system that can generate novel videos with text, images, or video clips. | | Video |
+| [CogVideo](https://models.aminer.cn/cogvideo/) | Generate Videos from Text Descriptions. | | Video |
+| [Imagen Video](https://imagen.research.google/video/) | Given a text prompt, Imagen Video generates high definition videos using a base video generation model and a sequence of interleaved spatial and temporal video super-resolution models. | | Video |
+| [Make-A-Video](https://makeavideo.studio/) | Make-A-Video is a state-of-the-art AI system that generates videos from text. | | Video |
+| [Text2Video-Zero](https://github.com/Picsart-AI-Research/Text2Video-Zero) | Text-to-Image Diffusion Models are Zero-Shot Video Generators. | | Video |
+| [Tune-A-Video](https://github.com/showlab/Tune-A-Video) | One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation. | | Video |
+| [Video LDMs](https://research.nvidia.com/labs/toronto-ai/VideoLDM/)          | Align Your Latents: High-Resolution Video Synthesis with Latent Diffusion Models.                                                                                                         |             | Video |
+| [EDGE](https://github.com/Stanford-TML/EDGE) | We introduce EDGE, a powerful method for editable dance generation that is capable of creating realistic, physically-plausible dances while remaining faithful to arbitrary input music. | | Video |
+| [Pollinations](https://pollinations.ai/c/Video) | Creating gets easy, fast, and fun. | | Video |
+| [Mov2mov](https://github.com/Scholar01/sd-webui-mov2mov) | Mov2mov plugin for Automatic1111/stable-diffusion-webui. | | Video |
+| [Track-Anything](https://github.com/gaomingqi/Track-Anything) | Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything and XMem. | | Video |
+| [BackgroundRemover](https://github.com/nadermx/backgroundremover)            | Background Remover lets you remove the background from images and video using AI with a simple command line interface that is free and open source.                                       |             | Video |
+| [dolphin](https://github.com/kaleido-lab/dolphin) | General video interaction platform based on LLMs. | | Video |
+| [Descript](https://www.descript.com/)                                        | Descript is the simple, powerful, and fun way to edit.                                                                                                                                    |             | Video |
+| [Zeroscope](https://huggingface.co/spaces/fffiloni/zeroscope) | Zeroscope Text-to-Video. | | Video |
+| [Pika Labs](https://www.pika.art/)                                           | Pika Labs is revolutionizing the video-making experience with AI.                                                                                                                         |             | Video |
+| [Animate-A-Story](https://github.com/VideoCrafter/Animate-A-Story) | Retrieval-Augmented Video Generation for Telling a Story. | | Video |
+| [ShortGPT](https://github.com/RayVentura/ShortGPT) | An experimental AI framework for automated short/video content creation. | | Video |
+| [StableVideo](https://github.com/rese1f/stablevideo) | Text-driven Consistency-aware Diffusion Video Editing. | | Video |
+| [CoDeF](https://github.com/qiuyu96/codef) | Content Deformation Fields for Temporally Consistent Video Processing. | | Video |
+| [Reuse and Diffuse](https://anonymous0x233.github.io/ReuseAndDiffuse/) | Iterative Denoising for Text-to-Video Generation. | | Video |
+| [MotionDirector](https://github.com/showlab/MotionDirector) | Motion Customization of Text-to-Video Diffusion Models. | | Video |
+| [Moonvalley](https://moonvalley.ai/) | Moonvalley is a groundbreaking new text-to-video generative AI model. | | Video |
+| [Stable Video Diffusion](https://github.com/Stability-AI/generative-models) | Stable Video Diffusion (SVD) Image-to-Video. | | Video |
+| [MotionCtrl](https://wzhouxiff.github.io/projects/MotionCtrl/) | A Unified and Flexible Motion Controller for Video Generation. | | Video |
+| [W.A.L.T](https://walt-video-diffusion.github.io/) | Photorealistic Video Generation with Diffusion Models. | | Video |
+| [I2VGen-XL](https://i2vgen-xl.github.io/) | High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models. | | Video |
## Audio
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [NeuralSound](https://github.com/hellojxt/NeuralSound) | Learning-based Modal Sound Synthesis with Acoustic Transfer. | | Audio |
-| [ArchiSound](https://github.com/archinetai/audio-diffusion-pytorch) | Audio generation using diffusion models, in PyTorch. | | Audio |
-| [Make-An-Audio](https://text-to-audio.github.io/) | Text-To-Audio Generation with Prompt-Enhanced Diffusion Models. | | Audio |
-| [AudioLDM](https://audioldm.github.io/) | Text-to-Audio Generation with Latent Diffusion Models. | | Audio |
-| [AudioLDM 2](https://github.com/haoheliu/audioldm2) | Learning Holistic Audio Generation with Self-supervised Pretraining. | | Audio |
-| [AudioGPT](https://github.com/AIGC-Audio/AudioGPT) | Understanding and Generating Speech, Music, Sound, and Talking Head. | | Audio |
-| [AcademiCodec](https://github.com/yangdongchao/AcademiCodec) | An Open Source Audio Codec Model for Academic Research. | | Audio |
-| [SoundStorm](https://google-research.github.io/seanet/soundstorm/examples/) | Efficient Parallel Audio Generation. | | Audio |
-| [TANGO](https://github.com/declare-lab/tango) | Text-to-Audio Generation using Instruction Tuned LLM and Latent Diffusion Model. | | Audio |
-| [WavJourney](https://github.com/Audio-AGI/WavJourney) | Compositional Audio Creation with Large Language Models. | | Audio |
-| [Stable Audio](https://www.stableaudio.com/) | Fast Timing-Conditioned Latent Audio Diffusion. | | Audio |
-| [Amphion](https://github.com/open-mmlab/Amphion) | An Open-Source Audio, Music, and Speech Generation Toolkit. | | Audio |
-| [Audiobox](https://audiobox.metademolab.com/) | Unified Audio Generation with Natural Language Prompts. | | Audio |
+| Source | Description | Game Engine | Type |
+| :-------------------------------------------------------------------------- | :------------------------------------------------------------------------------- | :---------: | :---: |
+| [NeuralSound](https://github.com/hellojxt/NeuralSound) | Learning-based Modal Sound Synthesis with Acoustic Transfer. | | Audio |
+| [ArchiSound](https://github.com/archinetai/audio-diffusion-pytorch) | Audio generation using diffusion models, in PyTorch. | | Audio |
+| [Make-An-Audio](https://text-to-audio.github.io/) | Text-To-Audio Generation with Prompt-Enhanced Diffusion Models. | | Audio |
+| [AudioLDM](https://audioldm.github.io/) | Text-to-Audio Generation with Latent Diffusion Models. | | Audio |
+| [AudioLDM 2](https://github.com/haoheliu/audioldm2) | Learning Holistic Audio Generation with Self-supervised Pretraining. | | Audio |
+| [AudioGPT](https://github.com/AIGC-Audio/AudioGPT) | Understanding and Generating Speech, Music, Sound, and Talking Head. | | Audio |
+| [AcademiCodec](https://github.com/yangdongchao/AcademiCodec) | An Open Source Audio Codec Model for Academic Research. | | Audio |
+| [SoundStorm](https://google-research.github.io/seanet/soundstorm/examples/) | Efficient Parallel Audio Generation. | | Audio |
+| [TANGO](https://github.com/declare-lab/tango) | Text-to-Audio Generation using Instruction Tuned LLM and Latent Diffusion Model. | | Audio |
+| [WavJourney](https://github.com/Audio-AGI/WavJourney) | Compositional Audio Creation with Large Language Models. | | Audio |
+| [Stable Audio](https://www.stableaudio.com/) | Fast Timing-Conditioned Latent Audio Diffusion. | | Audio |
+| [Amphion](https://github.com/open-mmlab/Amphion) | An Open-Source Audio, Music, and Speech Generation Toolkit. | | Audio |
+| [Audiobox](https://audiobox.metademolab.com/) | Unified Audio Generation with Natural Language Prompts. | | Audio |
## Music
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [MeLoDy](https://efficient-melody.github.io/) | Efficient Neural Music Generation | | Music |
-| [GPTAbleton](https://github.com/BurnedGuitarist/GPTAbleton) | Draft script for processing GPT response and sending the MIDI notes into the Ableton clips with AbletonOSC and python-osc. | | Music |
-| [Chord2Melody](https://github.com/tanreinama/chord2melody) | Automatic Music Generation AI. | | Music |
-| [MusicLM](https://google-research.github.io/seanet/musiclm/examples/) | Generating Music From Text. | | Music |
-| [Riffusion App](https://github.com/riffusion/riffusion-app) | Riffusion is an app for real-time music generation with stable diffusion. | | Music |
-| [Boomy](https://boomy.com/) | Create generative music. Share it with the world. | | Music |
-| [Jukebox](https://github.com/openai/jukebox) | A Generative Model for Music. | | Music |
-| [MuseNet](https://openai.com/research/musenet) | A deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles. | | Music |
-| [Magenta](https://github.com/magenta/magenta) | Magenta is a research project exploring the role of machine learning in the process of creating art and music. | | Music |
-| [AIVA](https://www.aiva.ai/) | The Artificial Intelligence composing emotional soundtrack music. | | Music |
-| [Mubert](https://mubert.com/) | AI Generative Music. | | Music |
-| [Amper Music](https://www.shutterstock.com/discover/ampermusic) | Custom music generation technology powered by Amper. | | Music |
-| [SoundRaw](https://soundraw.io/) | AI music generator for creators. | | Music |
-| [MusicGen](https://github.com/facebookresearch/audiocraft) | Simple and Controllable Music Generation. | | Music |
-| [MusicLDM](https://musicldm.github.io/) | Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies. | | Music |
-| [JEN-1](https://www.futureverse.com/research/jen/demos/jen1) | Text-Guided Universal Music Generation with Omnidirectional Diffusion Models. | | Music |
+| Source | Description | Game Engine | Type |
+| :-------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------: | :---: |
+| [MeLoDy](https://efficient-melody.github.io/)                          | Efficient Neural Music Generation.                                                                                                                                                  |             | Music |
+| [GPTAbleton](https://github.com/BurnedGuitarist/GPTAbleton) | Draft script for processing GPT response and sending the MIDI notes into the Ableton clips with AbletonOSC and python-osc. | | Music |
+| [Chord2Melody](https://github.com/tanreinama/chord2melody) | Automatic Music Generation AI. | | Music |
+| [MusicLM](https://google-research.github.io/seanet/musiclm/examples/) | Generating Music From Text. | | Music |
+| [Riffusion App](https://github.com/riffusion/riffusion-app) | Riffusion is an app for real-time music generation with stable diffusion. | | Music |
+| [Boomy](https://boomy.com/) | Create generative music. Share it with the world. | | Music |
+| [Jukebox](https://github.com/openai/jukebox) | A Generative Model for Music. | | Music |
+| [MuseNet](https://openai.com/research/musenet) | A deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles. | | Music |
+| [Magenta](https://github.com/magenta/magenta) | Magenta is a research project exploring the role of machine learning in the process of creating art and music. | | Music |
+| [AIVA](https://www.aiva.ai/) | The Artificial Intelligence composing emotional soundtrack music. | | Music |
+| [Mubert](https://mubert.com/) | AI Generative Music. | | Music |
+| [Amper Music](https://www.shutterstock.com/discover/ampermusic) | Custom music generation technology powered by Amper. | | Music |
+| [SoundRaw](https://soundraw.io/) | AI music generator for creators. | | Music |
+| [MusicGen](https://github.com/facebookresearch/audiocraft) | Simple and Controllable Music Generation. | | Music |
+| [MusicLDM](https://musicldm.github.io/) | Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies. | | Music |
+| [JEN-1](https://www.futureverse.com/research/jen/demos/jen1) | Text-Guided Universal Music Generation with Omnidirectional Diffusion Models. | | Music |
## Voice
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | SoftVC VITS Singing Voice Conversion. | | Voice |
-| [VI-SVS](https://github.com/PlayVoice/VI-SVS) | Use VITS and Opencpop to develop singing voice synthesis; Different from VISinger. | | Voice |
-| [DiffSinger](https://github.com/MoonInTheRiver/DiffSinger) | Singing Voice Synthesis via Shallow Diffusion Mechanism. | | Voice |
-| [Retrieval-based-Voice-Conversion-WebUI](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI) | An easy-to-use SVC framework based on VITS. | | Voice |
+| Source | Description | Game Engine | Type |
+| :------------------------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------- | :---------: | :---: |
+| [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) | SoftVC VITS Singing Voice Conversion. | | Voice |
+| [VI-SVS](https://github.com/PlayVoice/VI-SVS)                                                                  | Use VITS and Opencpop to develop singing voice synthesis; different from VISinger.  |             | Voice |
+| [DiffSinger](https://github.com/MoonInTheRiver/DiffSinger) | Singing Voice Synthesis via Shallow Diffusion Mechanism. | | Voice |
+| [Retrieval-based-Voice-Conversion-WebUI](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI) | An easy-to-use SVC framework based on VITS. | | Voice |
## Speech
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [VALL-E](https://valle-demo.github.io/) | Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers. | | Speech |
-| [VALL-E X](https://vallex-demo.github.io/) | Speak Foreign Languages with Your Own Voice: Cross-Lingual Neural Codec Language Modeling | | Speech |
-| [Whisper](https://github.com/openai/whisper) | Whisper is a general-purpose speech recognition model. | | Speech |
-| [Vocode](https://docs.vocode.dev/) | Vocode is an open-source library for building voice-based LLM applications. | | Speech |
-| [LOVO](https://lovo.ai/) | LOVO is the go-to AI Voice Generator & Text to Speech platform for thousands of creators. | | Speech |
-| [Fliki](https://fliki.ai/) | Turn text into videos with AI voices. | | Speech |
-| [Audyo](https://www.audyo.ai/) | Text in. Audio out. | | Speech |
-| [speech-to-text-gpt3-unity](https://github.com/dr-iskandar/speech-to-text-gpt3-unity) | This is the repo I use Whisper and ChatGPT API from OpenAI in Unity. | Unity | Speech |
-| [CLAPSpeech](https://clapspeech.github.io/) | Learning Prosody from Text Context with Contrastive Language-Audio Pre-Training. | | Speech |
-| [SpeechGPT](https://github.com/0nutation/SpeechGPT) | Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities. | | Speech |
-| [Voicebox](https://github.com/SpeechifyInc/Meta-voicebox) | Text-Guided Multilingual Universal Speech Generation at Scale. | | Speech |
-| [TTS Generation WebUI](https://github.com/rsxdalv/tts-generation-webui) | TTS Generation WebUI (Bark, MusicGen, Tortoise, RVC, Vocos, Demucs). | | Speech |
+| Source | Description | Game Engine | Type |
+| :------------------------------------------------------------------------------------ | :---------------------------------------------------------------------------------------- | :---------: | :----: |
+| [VALL-E](https://valle-demo.github.io/) | Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers. | | Speech |
+| [VALL-E X](https://vallex-demo.github.io/)                                             | Speak Foreign Languages with Your Own Voice: Cross-Lingual Neural Codec Language Modeling. |             | Speech |
+| [Whisper](https://github.com/openai/whisper) | Whisper is a general-purpose speech recognition model. | | Speech |
+| [Vocode](https://docs.vocode.dev/) | Vocode is an open-source library for building voice-based LLM applications. | | Speech |
+| [LOVO](https://lovo.ai/) | LOVO is the go-to AI Voice Generator & Text to Speech platform for thousands of creators. | | Speech |
+| [Fliki](https://fliki.ai/) | Turn text into videos with AI voices. | | Speech |
+| [Audyo](https://www.audyo.ai/) | Text in. Audio out. | | Speech |
+| [speech-to-text-gpt3-unity](https://github.com/dr-iskandar/speech-to-text-gpt3-unity) | A repo that uses Whisper and the ChatGPT API from OpenAI in Unity.                         | Unity       | Speech |
+| [CLAPSpeech](https://clapspeech.github.io/) | Learning Prosody from Text Context with Contrastive Language-Audio Pre-Training. | | Speech |
+| [SpeechGPT](https://github.com/0nutation/SpeechGPT) | Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities. | | Speech |
+| [Voicebox](https://github.com/SpeechifyInc/Meta-voicebox) | Text-Guided Multilingual Universal Speech Generation at Scale. | | Speech |
+| [TTS Generation WebUI](https://github.com/rsxdalv/tts-generation-webui) | TTS Generation WebUI (Bark, MusicGen, Tortoise, RVC, Vocos, Demucs). | | Speech |
## Analytics
-| Source | Description | Game Engine | Type |
-| :------------------------------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :-------: |
-| [Ludo.ai](https://ludo.ai/) | Assistant for game research and design. | | Analytics |
+| Source | Description | Game Engine | Type |
+| :-------------------------- | :-------------------------------------- | :---------: | :-------: |
+| [Ludo.ai](https://ludo.ai/) | Assistant for game research and design. | | Analytics |