Introduce new setting search.concurrent.max_slice to control the slice computation for concurrent segment search #8847

Closed
wants to merge 1 commit
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -76,6 +76,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
### Added
- Add server version as REST response header [#6583](https://github.com/opensearch-project/OpenSearch/issues/6583)
- Start replication checkpointTimers on primary before segments upload to remote store. ([#8221](https://github.com/opensearch-project/OpenSearch/pull/8221))
- Introduce new static cluster setting to control slice computation for concurrent segment search. ([#8847](https://github.com/opensearch-project/OpenSearch/pull/8847))

### Dependencies
- Bump `org.apache.logging.log4j:log4j-core` from 2.17.1 to 2.20.0 ([#8307](https://github.com/opensearch-project/OpenSearch/pull/8307))
@@ -44,6 +44,7 @@
import org.opensearch.index.ShardIndexingPressureMemoryManager;
import org.opensearch.index.ShardIndexingPressureSettings;
import org.opensearch.index.ShardIndexingPressureStore;
import org.opensearch.search.SearchBootstrapSettings;
import org.opensearch.search.backpressure.settings.NodeDuressSettings;
import org.opensearch.search.backpressure.settings.SearchBackpressureSettings;
import org.opensearch.search.backpressure.settings.SearchShardTaskSettings;
@@ -493,6 +494,7 @@ public void apply(Settings value, Settings current, Settings previous) {
SearchService.MAX_OPEN_SCROLL_CONTEXT,
SearchService.MAX_OPEN_PIT_CONTEXT,
SearchService.MAX_PIT_KEEPALIVE_SETTING,
SearchBootstrapSettings.CONCURRENT_SEGMENT_SEARCH_TARGET_MAX_SLICE_COUNT_SETTING,
CreatePitController.PIT_INIT_KEEP_ALIVE,
Node.WRITE_PORTS_FILE_SETTING,
Node.NODE_NAME_SETTING,
2 changes: 2 additions & 0 deletions server/src/main/java/org/opensearch/node/Node.java
@@ -56,6 +56,7 @@
import org.opensearch.monitor.fs.FsProbe;
import org.opensearch.plugins.ExtensionAwarePlugin;
import org.opensearch.plugins.SearchPipelinePlugin;
import org.opensearch.search.SearchBootstrapSettings;
import org.opensearch.telemetry.tracing.NoopTracerFactory;
import org.opensearch.telemetry.tracing.Tracer;
import org.opensearch.telemetry.tracing.TracerFactory;
@@ -466,6 +467,7 @@ protected Node(

// Ensure to initialize Feature Flags via the settings from opensearch.yml
FeatureFlags.initializeFeatureFlags(settings);
SearchBootstrapSettings.initialize(settings);

final List<IdentityPlugin> identityPlugins = new ArrayList<>();
if (FeatureFlags.isEnabled(FeatureFlags.IDENTITY)) {
@@ -0,0 +1,38 @@
/*
* SPDX-License-Identifier: Apache-2.0
*
* The OpenSearch Contributors require contributions made to
* this file be licensed under the Apache-2.0 license or a
* compatible open source license.
*/

package org.opensearch.search;

import org.opensearch.common.settings.Setting;
import org.opensearch.common.settings.Settings;

/**
 * Keeps track of all the search related node level settings which can be accessed via static methods
 */

Review comment (Member): Please add @opensearch.internal
public class SearchBootstrapSettings {
Review comment (Member): Is there any question as to whether we will switch away from the static method to the dynamic method when the next release of Lucene is available? If not, I'd go ahead and create a GitHub issue to track it and link the issue in a comment in the code where appropriate.

Review comment (@reta, Collaborator, Jul 25, 2023): @sohami to this point, why is this setting static? As far as I can tell, it is used in a non-static context and could be implemented as a regular search-related setting.

Review comment (@sohami, Collaborator, Author): @reta The slices method here is called from the constructor of IndexSearcher in 9.7, so it cannot be made configurable through a member variable of ContextIndexSearcher. This is changed in the Lucene PR here and will be available in 9.8, which is when we can move to a dynamic setting.

Review comment (@reta, Collaborator): Thanks @sohami, main is moving to 9.8: #8668

Review comment (@sohami, Collaborator, Author): That's interesting, do we usually update main with the Lucene snapshot builds as well? I was wondering: if we take a dependency on a new change in an unreleased Lucene version and that change is altered in the released version, then our change will break.

But nonetheless, we can keep this change as is and backport it to 2.x until 9.8 is officially released, then add a follow-up commit to move it to a dynamic setting as part of #8870.

Review comment (Collaborator):

> Since now main is already moved to the 9.8.0-snapshot version of Lucene, I will add a follow-up PR to clean this up in main and have a dynamic mechanism, but not backport that to 2.x.

I am trying to understand why you want to get this change into main. We know this is not the way forward, and we do have the solution; you will have to redo this work in two branches instead of just cherry-picking one simple change into 2.x.

Review comment (@sohami, Collaborator, Author, Jul 25, 2023): @reta Let me try explaining :) I do get your solution and can use it, but let me first explain why I am trying to merge this into main as well.

As an example, in this PR we have the new class SearchBootstrapSettings, which provides static access to the new node setting. If I were to create a separate PR to make this setting dynamic, this class would not be needed.

  • With approach 1, if I merge this PR (say pr_1) only into 2.x, this class will be present in 2.x only. The new PR with the dynamic setting (say pr_2) will be built independently of pr_1, will not know anything about this class, and will go to main. When we move pr_2 from main to 2.x, we will need to ensure that cleanup (such as removing SearchBootstrapSettings) and other conflicts are handled properly and not missed, which I think can get messy since these two PRs are not built on top of each other.
  • With approach 2, we merge pr_1 into both 2.x and main, then add pr_2 in main on top of pr_1. All the cleanup is done as part of pr_2 itself, and it can be merged into 2.x the same way. I was thinking this backport to 2.x would be cleaner than approach 1, hence this approach. I may be missing something here, but thanks for the discussion!

Review comment (@reta, Collaborator, Jul 25, 2023):

> When we move pr_2 from main to 2.x, we will need to ensure that cleanup (such as for SearchBootstrapSettings) and other conflicts are handled properly and not missed.

This is inevitable in any case: we will be backporting all changes related to Lucene 9.8.0 (as we did for all previous Apache Lucene versions). The argument here is to keep main clean by using the new Apache Lucene snapshots (this is why we have always done that, giving the feature a ride before the release, since bugs do happen) and do the backport as a best effort.

Review comment (@sohami, Collaborator, Author): I will close this PR and open a new one against 2.x, and make the dynamic setting change a separate PR for the main branch. I will also create a tracking backport issue for merging the dynamic setting change into 2.x when Lucene 9.8 is released. As part of that backport we can revert the commit for the static setting in 2.x and then apply the commit from main.

Review comment (@sohami, Collaborator, Author): @reta @andrross @jed326 Created PR #8884 for the 2.x branch. Will be working on the dynamic setting change for the main branch.

// settings to configure maximum slice created per search request using OS custom slice computation mechanism. Default lucene
// mechanism will not be used if this setting is set with value > 0
public static final String CONCURRENT_SEGMENT_SEARCH_TARGET_MAX_SLICE_COUNT_KEY = "search.concurrent.max_slice";
public static final int CONCURRENT_SEGMENT_SEARCH_TARGET_MAX_SLICE_COUNT_DEFAULT_VALUE = -1;

// value <= 0 means lucene slice computation will be used
public static final Setting<Integer> CONCURRENT_SEGMENT_SEARCH_TARGET_MAX_SLICE_COUNT_SETTING = Setting.intSetting(
CONCURRENT_SEGMENT_SEARCH_TARGET_MAX_SLICE_COUNT_KEY,
CONCURRENT_SEGMENT_SEARCH_TARGET_MAX_SLICE_COUNT_DEFAULT_VALUE,
Setting.Property.NodeScope
);
Review comment (Collaborator) on lines +24 to +28: I think we should use the intSetting overload with minValue (and maybe maxValue):

public static Setting<Integer> intSetting(String key, int defaultValue, int minValue, int maxValue, Property... properties) {
    return intSetting(key, defaultValue, minValue, maxValue, v -> {}, properties);
}

For max value, it is naturally bounded by the number of segments, which shouldn't grow unbounded due to Lucene merges, so I'm inclined to say it is not necessary.

Review comment (@sohami, Collaborator, Author): I don't see this as a must-have, since there are no true bounds we are enforcing for now. Anything < 0 is treated as "use the Lucene mechanism" rather than the custom mechanism. And if a large positive value is set, it will be tuned down to the segment count in the slice-computation logic.

Review comment (Collaborator): Thinking about this some more, I agree this is not a must-have from a functional perspective, since the < 0 case is handled in slicesInternal. But since there's no valid use case for anyone to set, for example, -200 for this setting (and I don't foresee this becoming a valid use case in the future either), it's better to just disallow it entirely. Agree that a max value is not necessary, though.

Review comment (@sohami, Collaborator, Author):

> but since there's no valid use case for anyone to set, for example -200 for this setting

This setting is used in 2 ways. If the value is > 0, the custom slice mechanism is used with the value as the target max slice count; if the value is <= 0, the Lucene slice mechanism is used. So the actual negative value (or 0) is not relevant beyond meaning "fall back to Lucene behavior". It is used for enabling/disabling the feature (custom slicer vs. Lucene slicer). I prefer min/max ranges for settings where there is a clearly defined range; here <= 0 is used as a boolean flag to choose one behavior over the other. I also didn't want to add a new setting to control that, as I expect that based on testing we will default to one behavior.
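The reviewer's suggestion amounts to rejecting meaningless values at parse time rather than silently treating them as "disabled". A standalone sketch of that bounded-parse behavior (plain Java, no OpenSearch types; BoundedIntSetting is a hypothetical name, not part of this PR):

```java
// Hypothetical sketch of a lower-bounded integer setting parse: values below
// the minimum are rejected up front, so a typo like -200 fails fast instead of
// quietly meaning "use the Lucene slice mechanism".
public class BoundedIntSetting {
    public static int parse(String key, int value, int minValue) {
        if (value < minValue) {
            throw new IllegalArgumentException(
                "Failed to parse value [" + value + "] for setting [" + key + "] must be >= " + minValue
            );
        }
        return value;
    }

    public static void main(String[] args) {
        // -1 (the default sentinel) and positive values pass; -200 would throw.
        System.out.println(parse("search.concurrent.max_slice", -1, -1));
        System.out.println(parse("search.concurrent.max_slice", 4, -1));
    }
}
```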

private static Settings settings;

public static void initialize(Settings openSearchSettings) {
settings = openSearchSettings;
}

public static int getValueAsInt(String settingName, int defaultValue) {
Review comment (Member): This is a bit weird because any setting can be passed in here, even ones unrelated to search. I'd probably implement this as SearchBootstrapSettings.getTargetMaxSliceCount(). I'd also consider using Optional<Integer> or a nullable Integer as the return type, as opposed to using the magic value of -1.

Review comment (@sohami, Collaborator, Author):
  1. I thought about adding a getTargetMaxSliceCount method, but then this class would no longer be generic and reusable for other search settings; each new setting would need its own method. I do see your point about being able to access arbitrary settings, but that is also true for any access to ClusterSettings from different components. Let me know if you still prefer an explicit getTargetMaxSliceCount method here.
  2. -1 is the default value for this setting, as the setting doesn't allow null. When the setting is converted to a dynamic one, clusterSettings.get(CONCURRENT_SEGMENT_SEARCH_TARGET_MAX_SLICE_COUNT_SETTING) will return the default value when the setting is not explicitly set; it will not return null, so the caller needs to handle the default of -1 and fall back to Lucene behavior in that case. Hence I am keeping it as is, without a nullability check, for now. The cases are:
  • Not set explicitly --> default value will be returned, so use Lucene slice computation
  • Set explicitly to -1/0 --> use Lucene slice computation
  • Set explicitly to >0 --> use custom slice computation
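The reviewer's Optional-based alternative can be sketched as a typed accessor that hides the -1 sentinel from callers (hypothetical method and class name, not part of this PR):

```java
import java.util.OptionalInt;

// Hypothetical sketch: map both "not set" (the -1 default) and "explicitly
// <= 0" to empty, meaning "fall back to Lucene's default slice computation",
// so callers never compare against the magic value themselves.
public class SliceCountAccessor {
    public static OptionalInt targetMaxSliceCount(int rawSettingValue) {
        return rawSettingValue > 0 ? OptionalInt.of(rawSettingValue) : OptionalInt.empty();
    }

    public static void main(String[] args) {
        System.out.println(targetMaxSliceCount(-1)); // empty: use Lucene slicing
        System.out.println(targetMaxSliceCount(4));  // present: custom slicing, target 4
    }
}
```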

return (settings != null) ? settings.getAsInt(settingName, defaultValue) : defaultValue;
}
}
@@ -32,6 +32,8 @@

package org.opensearch.search.internal;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
@@ -66,6 +68,7 @@
import org.opensearch.common.lucene.search.TopDocsAndMaxScore;
import org.opensearch.common.lease.Releasable;
import org.opensearch.search.DocValueFormat;
import org.opensearch.search.SearchBootstrapSettings;
import org.opensearch.search.SearchService;
import org.opensearch.search.dfs.AggregatedDfs;
import org.opensearch.search.profile.ContextualProfileBreakdown;
@@ -93,11 +96,13 @@
* @opensearch.internal
*/
public class ContextIndexSearcher extends IndexSearcher implements Releasable {

private static final Logger logger = LogManager.getLogger(ContextIndexSearcher.class);
/**
* The interval at which we check for search cancellation when we cannot use
* a {@link CancellableBulkScorer}. See {@link #intersectScorerAndBitSet}.
*/
private static int CHECK_CANCELLED_SCORER_INTERVAL = 1 << 11;
private static final int CHECK_CANCELLED_SCORER_INTERVAL = 1 << 11;

private AggregatedDfs aggregatedDfs;
private QueryProfiler profiler;
@@ -439,6 +444,20 @@ public CollectionStatistics collectionStatistics(String field) throws IOException
return collectionStatistics;
}

/**
* Compute the leaf slices that will be used by concurrent segment search to spread work across threads
* @param leaves all the segments
* @return leafSlice group to be executed by different threads
*/
@Override
public LeafSlice[] slices(List<LeafReaderContext> leaves) {
final int target_max_slices = SearchBootstrapSettings.getValueAsInt(
SearchBootstrapSettings.CONCURRENT_SEGMENT_SEARCH_TARGET_MAX_SLICE_COUNT_KEY,
SearchBootstrapSettings.CONCURRENT_SEGMENT_SEARCH_TARGET_MAX_SLICE_COUNT_DEFAULT_VALUE
);
return slicesInternal(leaves, target_max_slices);
}

public DirectoryReader getDirectoryReader() {
final IndexReader reader = getIndexReader();
assert reader instanceof DirectoryReader : "expected an instance of DirectoryReader, got " + reader.getClass();
@@ -518,4 +537,19 @@ private boolean shouldReverseLeafReaderContexts() {
}
return false;
}

// package-private for testing
LeafSlice[] slicesInternal(List<LeafReaderContext> leaves, int target_max_slices) {
Review comment (Member): targetMaxSlices to follow convention

LeafSlice[] leafSlices;
if (target_max_slices <= 0) {
// use the default lucene slice calculation
leafSlices = super.slices(leaves);
logger.debug("Slice count using lucene default [{}]", leafSlices.length);
} else {
// use the custom slice calculation based on target_max_slices. It will sort
Review comment (Collaborator): This comment looks cut off

leafSlices = MaxTargetSliceSupplier.getSlices(leaves, target_max_slices);
logger.debug("Slice count using max target slice supplier [{}]", leafSlices.length);
}
return leafSlices;
}
}
@@ -0,0 +1,61 @@
/*
* SPDX-License-Identifier: Apache-2.0
*
* The OpenSearch Contributors require contributions made to
* this file be licensed under the Apache-2.0 license or a
* compatible open source license.
*/

package org.opensearch.search.internal;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.IndexSearcher;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

/**
 * Supplier to compute leaf slices based on the passed-in leaves and a max target slice count to limit the number of computed slices. It sorts
 * all the leaves based on document count and then assigns each leaf in round-robin fashion to the target number of slices. Based on
 * experiment results shared in <a href=https://github.com/opensearch-project/OpenSearch/issues/7358>issue-7358</a>,
 * this mechanism helps achieve better tail/median latency than the default lucene slice computation.
 */
public class MaxTargetSliceSupplier {
Review comment (Member): Add @opensearch.internal

public static IndexSearcher.LeafSlice[] getSlices(List<LeafReaderContext> leaves, int target_max_slice) {
Review comment (Member): targetMaxSlice

if (target_max_slice <= 0) {
throw new IllegalArgumentException("MaxTargetSliceSupplier called with unexpected slice count of " + target_max_slice);
}

// slice count should not exceed the segment count
int target_slice_count = Math.min(target_max_slice, leaves.size());
Review comment (Member): targetSliceCount


// Make a copy so we can sort:
List<LeafReaderContext> sortedLeaves = new ArrayList<>(leaves);

// Sort by maxDoc, descending:
sortedLeaves.sort(Collections.reverseOrder(Comparator.comparingInt(l -> l.reader().maxDoc())));

final List<List<LeafReaderContext>> groupedLeaves = new ArrayList<>();
for (int i = 0; i < target_slice_count; ++i) {
groupedLeaves.add(new ArrayList<>());
}
// distribute the slices in round-robin fashion
List<LeafReaderContext> group;
for (int idx = 0; idx < sortedLeaves.size(); ++idx) {
int currentGroup = idx % target_slice_count;
group = groupedLeaves.get(currentGroup);
group.add(sortedLeaves.get(idx));
Review comment (Member) on lines +49 to +50: The group local variable confused me when first reading this. Any reason not to just do groupedLeaves.get(currentGroup).add(sortedLeaves.get(idx)) instead?

}

IndexSearcher.LeafSlice[] slices = new IndexSearcher.LeafSlice[target_slice_count];
int upto = 0;
for (List<LeafReaderContext> currentLeaf : groupedLeaves) {
slices[upto] = new IndexSearcher.LeafSlice(currentLeaf);
++upto;
}
return slices;
Review comment (Member) on lines +53 to +59: Can you replace this with the following?

return groupedLeaves.stream()
    .map(IndexSearcher.LeafSlice::new)
    .toArray(IndexSearcher.LeafSlice[]::new);

}
}
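The grouping logic above (sort descending by document count, then deal leaves round-robin into at most min(target, leafCount) groups) can be exercised without Lucene types. A standalone sketch using plain doc counts in place of LeafReaderContext:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Standalone sketch of MaxTargetSliceSupplier's distribution, with integer
// doc counts standing in for segments (no Lucene dependency).
public class SliceGroupingSketch {
    public static List<List<Integer>> group(List<Integer> docCounts, int targetMaxSlice) {
        // slice count should not exceed the segment count
        int sliceCount = Math.min(targetMaxSlice, docCounts.size());
        // copy, then sort by doc count descending
        List<Integer> sorted = new ArrayList<>(docCounts);
        sorted.sort(Comparator.reverseOrder());
        List<List<Integer>> groups = new ArrayList<>();
        for (int i = 0; i < sliceCount; i++) {
            groups.add(new ArrayList<>());
        }
        // distribute the sorted leaves round-robin across the groups
        for (int i = 0; i < sorted.size(); i++) {
            groups.get(i % sliceCount).add(sorted.get(i));
        }
        return groups;
    }

    public static void main(String[] args) {
        // 5 segments with skewed sizes, target of 2 slices: the largest and
        // smallest segments end up together, balancing total work per slice.
        System.out.println(group(List.of(100, 10, 80, 20, 50), 2));
        // prints [[100, 50, 10], [80, 20]]
    }
}
```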
@@ -90,6 +90,7 @@
import java.io.UncheckedIOException;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

import static org.mockito.Mockito.mock;
@@ -100,6 +101,7 @@
import static org.opensearch.search.internal.ExitableDirectoryReader.ExitableTerms;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.instanceOf;
import static org.opensearch.search.internal.IndexReaderUtils.getLeaves;

public class ContextIndexSearcherTests extends OpenSearchTestCase {
public void testIntersectScorerAndRoleBits() throws Exception {
@@ -304,6 +306,56 @@ public void onRemoval(ShardId shardId, Accountable accountable) {
IOUtils.close(reader, w, dir);
}

public void testSlicesInternal() throws Exception {
final List<LeafReaderContext> leaves = getLeaves(10);

final Directory directory = newDirectory();
IndexWriter iw = new IndexWriter(directory, new IndexWriterConfig(new StandardAnalyzer()).setMergePolicy(NoMergePolicy.INSTANCE));
Document document = new Document();
document.add(new StringField("field1", "value1", Field.Store.NO));
document.add(new StringField("field2", "value1", Field.Store.NO));
iw.addDocument(document);
iw.commit();
DirectoryReader directoryReader = DirectoryReader.open(directory);

SearchContext searchContext = mock(SearchContext.class);
IndexShard indexShard = mock(IndexShard.class);
when(searchContext.indexShard()).thenReturn(indexShard);
when(searchContext.bucketCollectorProcessor()).thenReturn(SearchContext.NO_OP_BUCKET_COLLECTOR_PROCESSOR);
ContextIndexSearcher searcher = new ContextIndexSearcher(
directoryReader,
IndexSearcher.getDefaultSimilarity(),
IndexSearcher.getDefaultQueryCache(),
IndexSearcher.getDefaultQueryCachingPolicy(),
true,
null,
searchContext
);
// Case 1: Verify the slice count when lucene default slice computation is used
IndexSearcher.LeafSlice[] slices = searcher.slicesInternal(leaves, -1);
int expectedSliceCount = 2;
// 2 slices will be created since max segment per slice of 5 will be reached
assertEquals(expectedSliceCount, slices.length);
for (int i = 0; i < expectedSliceCount; ++i) {
assertEquals(5, slices[i].leaves.length);
}

// Case 2: Verify the slice count when custom max slice computation is used
expectedSliceCount = 4;
slices = searcher.slicesInternal(leaves, expectedSliceCount);

// 4 slices will be created with 3 leaves in first 2 slices and 2 leaves in other slices
assertEquals(expectedSliceCount, slices.length);
for (int i = 0; i < expectedSliceCount; ++i) {
if (i < 2) {
assertEquals(3, slices[i].leaves.length);
} else {
assertEquals(2, slices[i].leaves.length);
}
}
IOUtils.close(directoryReader, iw, directory);
}
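The 3/3/2/2 expectation in Case 2 follows directly from round-robin assignment of 10 leaves into 4 slices; a quick standalone check of that arithmetic:

```java
import java.util.Arrays;

// Round-robin slice sizes: leaf i lands in slice (i % sliceCount), so the
// first (leafCount % sliceCount) slices get one extra leaf.
public class RoundRobinCounts {
    public static int[] counts(int leafCount, int sliceCount) {
        int[] counts = new int[sliceCount];
        for (int i = 0; i < leafCount; i++) {
            counts[i % sliceCount]++;
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(counts(10, 4))); // [3, 3, 2, 2]
    }
}
```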

private SparseFixedBitSet query(LeafReaderContext leaf, String field, String value) throws IOException {
SparseFixedBitSet sparseFixedBitSet = new SparseFixedBitSet(leaf.reader().maxDoc());
TermsEnum tenum = leaf.reader().terms(field).iterator();
@@ -0,0 +1,51 @@
/*
* SPDX-License-Identifier: Apache-2.0
*
* The OpenSearch Contributors require contributions made to
* this file be licensed under the Apache-2.0 license or a
* compatible open source license.
*/

package org.opensearch.search.internal;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.NoMergePolicy;
import org.apache.lucene.store.Directory;

import java.util.List;

import static org.apache.lucene.tests.util.LuceneTestCase.newDirectory;

public class IndexReaderUtils {

/**
* Utility to create leafCount number of {@link LeafReaderContext}
* @param leafCount count of leaves to create
* @return created leaves
*/
public static List<LeafReaderContext> getLeaves(int leafCount) throws Exception {
final Directory directory = newDirectory();
IndexWriter iw = new IndexWriter(directory, new IndexWriterConfig(new StandardAnalyzer()).setMergePolicy(NoMergePolicy.INSTANCE));
for (int i = 0; i < leafCount; ++i) {
Document document = new Document();
final String fieldValue = "value" + i;
document.add(new StringField("field1", fieldValue, Field.Store.NO));
document.add(new StringField("field2", fieldValue, Field.Store.NO));
iw.addDocument(document);
iw.commit();
}
iw.close();
DirectoryReader directoryReader = DirectoryReader.open(directory);
List<LeafReaderContext> leaves = directoryReader.leaves();
directoryReader.close();
directory.close();
return leaves;
}
}