- Table of Contents
- 📖 Model Introduction
- 📊 Evaluation Performance 🥇🥇🔥🔥
- 🎥 Visualization
- 🛠️ Quick Start
- 📜 License and Citation
- 📖 Model Introduction
- We have open-sourced the Kolors large model, which is a large-scale text-to-image generation model based on latent diffusion. The current open-source model has 2.7 billion parameters. Kolors is trained on billions of text-image pairs and demonstrates significant advantages in visual quality, complex semantic understanding, and even text generation (Chinese and English characters) compared to both open-source and closed-source models.
+ We have open-sourced the Kolors large model, which is a large-scale text-to-image generation model based on latent diffusion. The current open-source model has 2.7 billion parameters. Kolors is trained on billions of text-image pairs and demonstrates significant advantages in visual quality, complex semantic understanding, and even text generation (Chinese and English characters) compared to both open-source and closed-source models. Additionally, Kolors supports both Chinese and English, making it more competitive in understanding Chinese-specific content. The model link is provided in the table below; please click to learn more.
@@ -170,12 +179,11 @@
- Authors Sixian Zhang, Bohan Wang, Junqiang Wu*, Yan Li‡, Tingting Gao, Di Zhang, Zhongyuan Wang
-Links Paper Code arXiv
-Abstract
-Current metrics for text-to-image models typically rely on statistical metrics which inadequately represent the real preference of humans. Although recent works attempt to learn these preferences via human annotated images, they reduce the rich tapestry of human preference to a single overall score. However, the preference results vary when humans evaluate images with different aspects.
+ To learn the multi-dimensional human preferences, we propose the Multi-dimensional Preference Score (MPS), the first multi-dimensional preference scoring model for the evaluation of text-to-image models. The MPS introduces a preference condition module on top of the CLIP model to learn these diverse preferences. It is trained on our Multi-dimensional Human Preference (MHP) Dataset, which comprises 918,315 human preference choices across 4 dimensions on 607,541 images. The images are generated by a wide range of recent text-to-image models. The MPS outperforms existing scoring methods across 3 datasets in 4 dimensions, making it a promising metric for evaluating and improving text-to-image generation.
@@ -200,6 +208,15 @@