[FEATURE] Support multi-modal semantic search #318
Is your feature request related to a problem?
This feature aims to enhance semantic search use cases by enabling multimodal support in the Neural Search plugin. Multimodal search improves the relevancy of results by combining text with other forms of input such as images, audio, and video.
Today, OpenSearch supports semantic search purely through text-based embedding models. While this works for the majority of use cases, it does not scale to applications that need to embed other forms of input. For example, consider the query “Give me bright colored blue shoes”: the results can be more relevant if image properties such as color and intensity are captured along with the text. Multimodal search solves this problem.
What solution would you like?
Comments
@vamshin For 2.11 we can only provide limited support due to the tight deadline. I see that it is added to the roadmap. Do you want to keep it as is while we create a new feature request for an extended solution, or should we remove the 2.11 label and keep using this issue?
@martin-gaievski I think we should create a new one for tracking the extended solution and tag this issue as the parent issue. This way the community knows this feature has already shipped.
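For context, below is a minimal sketch of what a multimodal flow can look like with the support that shipped in 2.11, using the `text_image_embedding` ingest processor and the `query_image` parameter of the `neural` query as documented by OpenSearch. The host, index name, field names, model ID, vector dimension, and image file are placeholder assumptions, not details from this issue; a real setup also requires a deployed text+image embedding model via ml-commons.

```python
# Sketch of a multimodal ingest + search flow against a local OpenSearch
# cluster. Assumes the neural-search plugin is installed and a text+image
# embedding model is already deployed; MODEL_ID and all names below are
# placeholders.
import base64
import requests

HOST = "http://localhost:9200"
MODEL_ID = "<your-multimodal-model-id>"  # placeholder, from ml-commons deploy

# 1. Ingest pipeline that embeds text and image into one vector field.
requests.put(
    f"{HOST}/_ingest/pipeline/multimodal-pipeline",
    json={
        "description": "Embed product text and image together",
        "processors": [
            {
                "text_image_embedding": {
                    "model_id": MODEL_ID,
                    "embedding": "vector_embedding",
                    "field_map": {"text": "description", "image": "image_binary"},
                }
            }
        ],
    },
)

# 2. k-NN index wired to the pipeline (dimension must match the model).
requests.put(
    f"{HOST}/products",
    json={
        "settings": {"index.knn": True, "default_pipeline": "multimodal-pipeline"},
        "mappings": {
            "properties": {
                "vector_embedding": {"type": "knn_vector", "dimension": 1024},
                "description": {"type": "text"},
                "image_binary": {"type": "binary"},
            }
        },
    },
)

# 3. Index a document; the pipeline computes the combined embedding.
with open("blue_shoes.jpg", "rb") as f:  # placeholder image
    image_b64 = base64.b64encode(f.read()).decode()
requests.post(
    f"{HOST}/products/_doc?refresh=true",
    json={"description": "bright blue running shoes", "image_binary": image_b64},
)

# 4. Multimodal neural query: text and image are embedded together, so the
#    match reflects visual properties (color, intensity) as well as text.
resp = requests.post(
    f"{HOST}/products/_search",
    json={
        "query": {
            "neural": {
                "vector_embedding": {
                    "query_text": "Give me bright colored blue shoes",
                    "query_image": image_b64,
                    "model_id": MODEL_ID,
                    "k": 5,
                }
            }
        }
    },
)
print(resp.json())
```

The design choice worth noting is that text and image are fused into a single vector at both ingest and query time, so one k-NN lookup covers both modalities instead of requiring separate text and image indexes with result merging.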