-
This is a question about the tag API. When I upload an image, Civitai automatically adds tags to it, and if any of the tags are NSFW the image is filtered out based on user options. I think this is a much better rule than the black-box rule in https://huggingface.co/CompVis/stable-diffusion-safety-checker. I want to implement a similar API that can add tags to my own images. Do I need to retrain on my own data? (I know the project https://github.com/LAION-AI/CLIP-based-NSFW-Detector.) Or is there an existing model for this? I know CLIP can be used for zero-shot classification, but the text prompts are quite hard to design, and the classification results are not as good as the stable-diffusion-safety-checker.
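For reference, here is a minimal sketch of the zero-shot CLIP approach mentioned above, using the Hugging Face `transformers` library. The prompt wording, the 0.5 threshold, and the helper names are my own assumptions for illustration, not anything Civitai or the safety checker actually uses:

```python
# Zero-shot NSFW scoring with CLIP -- a sketch, not a production filter.
import numpy as np

# Candidate prompts; full sentences usually work better than bare keywords.
# These exact prompts are assumptions for illustration.
SAFE_PROMPTS = ["a photo of a landscape", "a safe-for-work illustration"]
NSFW_PROMPTS = ["an explicit nsfw image", "a pornographic photo"]

def nsfw_probability(logits_per_image: np.ndarray) -> float:
    """Softmax over all prompts, then sum the mass on the NSFW prompts."""
    logits = logits_per_image.reshape(-1)
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    return float(probs[len(SAFE_PROMPTS):].sum())

def is_nsfw(image_path: str, threshold: float = 0.5) -> bool:
    """Classify one image; threshold is an assumed cutoff, tune it yourself."""
    # Heavy deps imported lazily so the pure scoring helper stays lightweight.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    inputs = processor(
        text=SAFE_PROMPTS + NSFW_PROMPTS,
        images=Image.open(image_path),
        return_tensors="pt",
        padding=True,
    )
    logits = model(**inputs).logits_per_image.detach().numpy()
    return nsfw_probability(logits) >= threshold
```

As the question notes, prompt design is the hard part here; adding more prompts per class and averaging tends to be more stable than a single pair.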
-
Can anyone give some help? Thanks a lot.
-
Can anyone give some tips about this?
-
@aisensiy
-
Thanks so much for these replies.
-
I think currently you can only create tags for Models and then use them to tag images. I created the tag "visualnovel".
We'd like to do it this way. If anyone builds this in an open-source way, let me know; we'd love to use it.
In reality, we're currently using Amazon Rekognition.
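For anyone curious about the Rekognition route, a minimal sketch using `boto3` and its `detect_moderation_labels` API is below. The tag-flattening helper, its name, and the confidence cutoffs are my assumptions; they are not how Civitai actually maps labels to tags:

```python
# Sketch: turning Amazon Rekognition moderation labels into simple tags.
# The mapping and the 80% default confidence floor are assumptions.
def moderation_tags(response: dict, min_confidence: float = 80.0) -> list:
    """Flatten Rekognition's moderation-label list into lowercase tags."""
    tags = []
    for label in response.get("ModerationLabels", []):
        if label["Confidence"] >= min_confidence:
            tags.append(label["Name"].lower().replace(" ", "_"))
    return tags

def tag_image(path: str) -> list:
    """Call Rekognition on a local file; needs AWS credentials configured."""
    import boto3  # imported lazily so the pure helper has no AWS dependency

    client = boto3.client("rekognition")
    with open(path, "rb") as f:
        resp = client.detect_moderation_labels(
            Image={"Bytes": f.read()}, MinConfidence=60
        )
    return moderation_tags(resp)
```

Rekognition returns a two-level label hierarchy (e.g. "Explicit Nudity" with child labels), so in practice you may want to keep `ParentName` too and filter on top-level categories only.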