diff --git a/_search-plugins/neural-sparse-with-pipelines.md b/_search-plugins/neural-sparse-with-pipelines.md
index 6db3714a73..daee9fc9a9 100644
--- a/_search-plugins/neural-sparse-with-pipelines.md
+++ b/_search-plugins/neural-sparse-with-pipelines.md
@@ -279,7 +279,7 @@ PUT /my-nlp-index
     "default_pipeline": "nlp-ingest-pipeline-sparse"
   },
   "mappings": {
-    "_source": {
+    "_source": {
       "excludes": [
         "passage_embedding"
       ]
@@ -424,6 +424,28 @@ The response contains the matching documents:
 }
 ```
 
+To reduce the disk and network I/O latency associated with returning the sparse embedding source, you can exclude the embedding vector source in the query as follows:
+
+```json
+GET my-nlp-index/_search
+{
+  "_source": {
+    "excludes": [
+      "passage_embedding"
+    ]
+  },
+  "query": {
+    "neural_sparse": {
+      "passage_embedding": {
+        "query_text": "Hi world",
+        "model_id": ""
+      }
+    }
+  }
+}
+```
+{% include copy-curl.html %}
+
 ## Accelerating neural sparse search
 
 To learn more about improving retrieval time for neural sparse search, see [Accelerating neural sparse search]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/#accelerating-neural-sparse-search).