Bug report. If you’ve found a bug, please provide a code snippet or test to reproduce it below.
The easier it is to track down the bug, the faster it is solved.
Feature Request. Start by telling us what problem you’re trying to solve.
Often a solution already exists! Don’t send pull requests to implement new features without
first getting our support. Sometimes we leave features out on purpose to keep the project small.
Feature description
Setting weights for your suggester fields allows you to prioritize the suggestions returned from a query (note that they can't be sorted). With the standard API, you set the weights by including a weight parameter in each document, but to my understanding this is not possible when using es-hadoop and Spark DataFrames. In Databricks I would do something like this to upload to an index with a completion-field mapping.
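For reference, this is roughly the standard-API shape the feature would need to mirror — a minimal sketch, assuming a completion field named `suggest` (the original Databricks snippet and mapping from this report were not preserved, so the names and values here are illustrative):

```python
# A minimal sketch of the standard Elasticsearch completion-suggester setup.
# Field names and values are illustrative assumptions, not the reporter's
# original (lost) snippet.

# Index mapping declaring a completion field:
mapping = {
    "mappings": {
        "properties": {
            "suggest": {"type": "completion"}
        }
    }
}

# Document body using the per-document weight parameter that ranks suggestions:
doc = {
    "suggest": {
        "input": ["Nevermind", "Nirvana"],
        "weight": 34,
    }
}
```

With the REST API these would go to the index-creation and document-index endpoints respectively; the feature request is for es-hadoop to accept an equivalent of the `weight` parameter when writing a DataFrame.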
I tried adding an int column called weight, but that didn't do it. I tried searching for a parameter in the documentation, but I couldn't find one. I haven't tried a nested field, but I doubt it works.
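To make the two shapes concrete — a hedged illustration using plain dicts standing in for DataFrame rows, since the actual Spark schema is an assumption here:

```python
# Illustrative row shapes only; neither is confirmed to work with es-hadoop.

# Attempt 1: a flat integer column named "weight" next to the suggest field
# (reported above to have no effect):
flat_row = {"suggest": "Nirvana", "weight": 34}

# Attempt 2 (untried): a struct column mirroring the standard API's
# input/weight document layout for the completion field:
nested_row = {"suggest": {"input": ["Nirvana"], "weight": 34}}
```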
I guess it's also not obvious what the syntax should be when you have more than one completion field, but you'll figure something out :)