Merge branch 'main' into with_query_recompute_moved_out
Signed-off-by: Sagar <[email protected]>
sgup432 authored Jun 21, 2024
2 parents 7fb3893 + f8e8865 commit acdc6e7
Showing 143 changed files with 5,694 additions and 1,577 deletions.
4 changes: 2 additions & 2 deletions .github/CODEOWNERS
@@ -11,7 +11,7 @@
# 3. Use the command palette to run the CODEOWNERS: Show owners of current file command, which will display all code owners for the current file.

# Default ownership for all repo files
* @anasalkouz @andrross @Bukhtawar @CEHENKLE @dblock @dbwiddis @dreamer-89 @gbbafna @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @tlfeng @VachaShah
* @anasalkouz @andrross @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah

/modules/transport-netty4/ @peternied

@@ -24,4 +24,4 @@

/.github/ @peternied

/MAINTAINERS.md @anasalkouz @andrross @Bukhtawar @CEHENKLE @dblock @dbwiddis @dreamer-89 @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @peternied @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @tlfeng @VachaShah
/MAINTAINERS.md @anasalkouz @andrross @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @peternied @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah
11 changes: 11 additions & 0 deletions CHANGELOG.md
@@ -7,6 +7,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
### Added
- Add fingerprint ingest processor ([#13724](https://github.com/opensearch-project/OpenSearch/pull/13724))
- [Remote Store] Rate limiter for remote store low priority uploads ([#14374](https://github.com/opensearch-project/OpenSearch/pull/14374/))
- Apply the date histogram rewrite optimization to range aggregation ([#13865](https://github.com/opensearch-project/OpenSearch/pull/13865))
- [Writable Warm] Add composite directory implementation and integrate it with FileCache ([#12782](https://github.com/opensearch-project/OpenSearch/pull/12782))

### Dependencies
- Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442))
@@ -19,18 +21,27 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
- Bump `com.nimbusds:nimbus-jose-jwt` from 9.37.3 to 9.40 ([#14398](https://github.com/opensearch-project/OpenSearch/pull/14398))
- Bump `org.apache.commons:commons-configuration2` from 2.10.1 to 2.11.0 ([#14399](https://github.com/opensearch-project/OpenSearch/pull/14399))
- Bump `com.gradle.develocity` from 3.17.4 to 3.17.5 ([#14397](https://github.com/opensearch-project/OpenSearch/pull/14397))
- Bump `opentelemetry` from 1.36.0 to 1.39.0 ([#14457](https://github.com/opensearch-project/OpenSearch/pull/14457))

### Changed
- [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187))
- unsignedLongRangeQuery now returns MatchNoDocsQuery if the lower bound is greater than the upper bound ([#14416](https://github.com/opensearch-project/OpenSearch/pull/14416))
- Updated the `indices.query.bool.max_clause_count` setting from being static to dynamically updateable ([#13568](https://github.com/opensearch-project/OpenSearch/pull/13568))
- Make the class CommunityIdProcessor final ([#14448](https://github.com/opensearch-project/OpenSearch/pull/14448))

### Deprecated

### Removed

### Fixed
- Fix bug in SBP cancellation logic ([#13259](https://github.com/opensearch-project/OpenSearch/pull/13474))
- Fix handling of Short and Byte data types in ScriptProcessor ingest pipeline ([#14379](https://github.com/opensearch-project/OpenSearch/issues/14379))
- Switch to iterative version of WKT format parser ([#14086](https://github.com/opensearch-project/OpenSearch/pull/14086))
- Fix the computed max shards of cluster to avoid int overflow ([#14155](https://github.com/opensearch-project/OpenSearch/pull/14155))
- Fixed rest-high-level client searchTemplate & mtermVectors endpoints to have a leading slash ([#14465](https://github.com/opensearch-project/OpenSearch/pull/14465))
- Write shard level metadata blob when snapshotting searchable snapshot indexes ([#13190](https://github.com/opensearch-project/OpenSearch/pull/13190))
- Fix aggs result of NestedAggregator with sub NestedAggregator ([#13324](https://github.com/opensearch-project/OpenSearch/pull/13324))
- Add ListPitInfo::getKeepAlive() getter ([#14495](https://github.com/opensearch-project/OpenSearch/pull/14495))

### Security

22 changes: 11 additions & 11 deletions MAINTAINERS.md
@@ -26,18 +26,18 @@ This document contains a list of maintainers in this repo. See [opensearch-proje
| Sarat Vemulapalli | [saratvemulapalli](https://github.com/saratvemulapalli) | Amazon |
| Shweta Thareja | [shwetathareja](https://github.com/shwetathareja) | Amazon |
| Sorabh Hamirwasia | [sohami](https://github.com/sohami) | Amazon |
| Suraj Singh | [dreamer-89](https://github.com/dreamer-89) | Amazon |
| Tianli Feng | [tlfeng](https://github.com/tlfeng) | Amazon |
| Vacha Shah | [VachaShah](https://github.com/VachaShah) | Amazon |

## Emeritus

| Maintainer | GitHub ID | Affiliation |
| --------------------- | ----------------------------------------- | ----------- |
| Megha Sai Kavikondala | [meghasaik](https://github.com/meghasaik) | Amazon |
| Xue Zhou | [xuezhou25](https://github.com/xuezhou25) | Amazon |
| Kartik Ganesh | [kartg](https://github.com/kartg) | Amazon |
| Abbas Hussain | [abbashus](https://github.com/abbashus) | Meta |
| Himanshu Setia | [setiah](https://github.com/setiah) | Amazon |
| Ryan Bogan | [ryanbogan](https://github.com/ryanbogan) | Amazon |
| Rabi Panda | [adnapibar](https://github.com/adnapibar) | Independent |
| Maintainer | GitHub ID | Affiliation |
| ---------------------- |-------------------------------------------- | ----------- |
| Megha Sai Kavikondala | [meghasaik](https://github.com/meghasaik) | Amazon |
| Xue Zhou | [xuezhou25](https://github.com/xuezhou25) | Amazon |
| Kartik Ganesh | [kartg](https://github.com/kartg) | Amazon |
| Abbas Hussain | [abbashus](https://github.com/abbashus) | Meta |
| Himanshu Setia | [setiah](https://github.com/setiah) | Amazon |
| Ryan Bogan | [ryanbogan](https://github.com/ryanbogan) | Amazon |
| Rabi Panda | [adnapibar](https://github.com/adnapibar) | Independent |
| Tianli Feng | [tlfeng](https://github.com/tlfeng) | Amazon |
| Suraj Singh | [dreamer-89](https://github.com/dreamer-89) | Amazon |
40 changes: 33 additions & 7 deletions TESTING.md
@@ -33,6 +33,9 @@ OpenSearch uses [jUnit](https://junit.org/junit5/) for testing, it also uses ran
- [Bad practices](#bad-practices)
- [Use randomized-testing for coverage](#use-randomized-testing-for-coverage)
- [Abuse randomization in multi-threaded tests](#abuse-randomization-in-multi-threaded-tests)
- [Use `Thread.sleep`](#use-threadsleep)
- [Expect a specific segment topology](#expect-a-specific-segment-topology)
- [Leave environment in an unstable state after test](#leave-environment-in-an-unstable-state-after-test)
- [Test coverage analysis](#test-coverage-analysis)
- [Building with extra plugins](#building-with-extra-plugins)
- [Environment misc](#environment-misc)
@@ -88,21 +91,23 @@ This will instruct all JVMs (including any that run cli tools such as creating t

## Test case filtering

- `tests.class` is a class-filtering shell-like glob pattern
- `tests.method` is a method-filtering glob pattern.
To be able to run a single test you need to specify the module where you're running the tests from.

Example: `./gradlew server:test --tests "*.ReplicaShardBatchAllocatorTests.testNoAsyncFetchData"`

Run a single test case (variants)

./gradlew test -Dtests.class=org.opensearch.package.ClassName
./gradlew test "-Dtests.class=*.ClassName"
./gradlew module:test --tests org.opensearch.package.ClassName
./gradlew module:test --tests org.opensearch.package.ClassName.testName
./gradlew module:test --tests "*.ClassName"

Run all tests in a package and its sub-packages

./gradlew test "-Dtests.class=org.opensearch.package.*"
./gradlew module:test --tests "org.opensearch.package.*"

Run any test methods that contain *esi* (e.g.: .r*esi*ze.)

./gradlew test "-Dtests.method=*esi*"
./gradlew module:test --tests "*esi*"

Run all tests that are waiting for a bugfix (disabled by default)

@@ -455,7 +460,7 @@ Unit tests are the preferred way to test some functionality: most of the time th

The reason why `OpenSearchSingleNodeTestCase` exists is that all our components used to be very hard to set up in isolation, which had led us to having a number of integration tests but close to no unit tests. `OpenSearchSingleNodeTestCase` is a workaround for this issue which provides an easy way to spin up a node and get access to components that are hard to instantiate like `IndicesService`. Whenever practical, you should prefer unit tests.
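In its simplest form, such a test looks something like the following sketch (the class name and assertion are illustrative, not taken from the codebase):

```java
import org.opensearch.indices.IndicesService;
import org.opensearch.test.OpenSearchSingleNodeTestCase;

public class SingleNodeComponentTests extends OpenSearchSingleNodeTestCase {
    public void testIndicesServiceIsAvailable() {
        // The single started node provides real components that would otherwise be hard to instantiate.
        IndicesService indicesService = getInstanceFromNode(IndicesService.class);
        assertNotNull(indicesService);
    }
}
```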

Finally, if the the functionality under test needs to be run in a cluster, there are two test classes to consider:
Finally, if the functionality under test needs to be run in a cluster, there are two test classes to consider:
* `OpenSearchRestTestCase` will connect to an external cluster. This is a good option if the tests cases don't rely on a specific configuration of the test cluster. A test cluster is set up as part of the Gradle task running integration tests, and test cases using this class can connect to it. The configuration of the cluster is provided in the Gradle files.
* `OpenSearchIntegTestCase` will create a local cluster as part of each test case. The configuration of the cluster is controlled by the test class. This is a good option if different tests cases depend on different cluster configurations, as it would be impractical (and limit parallelization) to keep re-configuring (and re-starting) the external cluster for each test case. A good example of when this class might come in handy is for testing security features, where different cluster configurations are needed to fully test each one.
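For example, a minimal REST-layer test can extend `OpenSearchRestTestCase` and talk to the externally provisioned cluster over the low-level client (a simplified sketch; the class name and endpoint are illustrative):

```java
import org.opensearch.client.Request;
import org.opensearch.client.Response;
import org.opensearch.test.rest.OpenSearchRestTestCase;

public class ClusterHealthRestIT extends OpenSearchRestTestCase {
    public void testClusterHealthEndpointResponds() throws Exception {
        // client() talks to the test cluster started by the Gradle integration test task.
        Response response = client().performRequest(new Request("GET", "/_cluster/health"));
        assertEquals(200, response.getStatusLine().getStatusCode());
    }
}
```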

@@ -477,6 +482,27 @@ However, it should not be used for coverage. For instance if you are testing a p

Multi-threaded tests are often not reproducible due to the fact that there is no guarantee on the order in which operations occur across threads. Adding randomization to the mix usually makes things worse and should be done with care.

### Use `Thread.sleep`

`Thread.sleep()` is almost always a bad idea because it is very difficult to know that you've waited long enough. Primitives like `waitUntil` or `assertBusy`, which use `Thread.sleep` internally, are acceptable for waiting on a specific condition. However, it is almost always better to instrument your code with concurrency primitives like a `CountDownLatch`, which lets you deterministically wait for a specific condition without the extra waiting that the polling approach used by `assertBusy` incurs.

Example:
- [PrimaryShardAllocatorIT](https://github.com/opensearch-project/OpenSearch/blob/7ffcd6500e0bd5956cef5c289ee66d9f99d533fc/server/src/internalClusterTest/java/org/opensearch/gateway/ReplicaShardAllocatorIT.java#L208-L235): This test is using two latches: one to wait for a recovery to start and one to block that recovery so that it can deterministically test things that happen during a recovery.
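
In stripped-down form, a latch-based wait looks something like this sketch (the worker, timeout, and class name are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchWaitSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch started = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);

        Thread worker = new Thread(() -> {
            started.countDown();     // signal that the interesting work has begun
            try {
                release.await();     // stay blocked until the caller releases us
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // Deterministic wait for the worker to start, instead of Thread.sleep(someGuess).
        if (!started.await(30, TimeUnit.SECONDS)) {
            throw new AssertionError("worker never started");
        }
        // ... assert on the in-flight state here ...

        release.countDown();         // let the worker finish
        worker.join();
    }
}
```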

### Expect a specific segment topology

By design, OpenSearch integration tests will vary how the merge policy works because in almost all scenarios you should not depend on a specific segment topology (in the real world your code will see a huge diversity of indexing workloads with OpenSearch merging things in the background all the time!). If you do in fact need to care about the segment topology (e.g. for testing statistics that might vary slightly depending on number of segments), then you must take care to ensure that segment topology is deterministic by doing things like disabling background refreshes, force merging after indexing data, etc.

Example:
- [SegmentReplicationResizeRequestIT](https://github.com/opensearch-project/OpenSearch/blob/f715ee1a485e550802accc1c2e3d8101208d4f0b/server/src/internalClusterTest/java/org/opensearch/indices/replication/SegmentReplicationResizeRequestIT.java#L102-L109): This test disables refreshes to prevent interfering with the segment replication behavior under test.
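
A minimal sketch of forcing a deterministic topology in an `OpenSearchIntegTestCase` subclass might look like the following (index name, document count, and class name are illustrative):

```java
import org.opensearch.common.settings.Settings;
import org.opensearch.test.OpenSearchIntegTestCase;

public class DeterministicSegmentsIT extends OpenSearchIntegTestCase {
    public void testStatsWithStableSegmentTopology() {
        // Disable background refreshes so segments only appear when the test asks for them.
        createIndex("test", Settings.builder().put("index.refresh_interval", -1).build());

        for (int i = 0; i < 100; i++) {
            client().prepareIndex("test").setId(Integer.toString(i)).setSource("field", i).get();
        }

        // Make the documents searchable, then force merge down to a single segment so
        // that segment-dependent statistics are reproducible.
        refresh("test");
        client().admin().indices().prepareForceMerge("test").setMaxNumSegments(1).get();

        // ... assertions that depend on segment counts go here ...
    }
}
```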

### Leave environment in an unstable state after test

The default test case will ensure that no open file handles or running threads are left after tear down. You must ensure that all resources are cleaned up at the end of each test case, or else the cleanup may end up racing with the tear down logic in the base test class in a way that is very difficult to reproduce.

Example:
- [AwarenessAttributeDecommissionIT](https://github.com/opensearch-project/OpenSearch/blob/main/server/src/internalClusterTest/java/org/opensearch/cluster/coordination/AwarenessAttributeDecommissionIT.java#L951): Recommissions any decommissioned nodes at the end of the test to ensure the after-test checks succeed.
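
As a rough sketch, a test that owns its own threads can release them in an `@After` method before the base class leak checks run (the class, field, and method names here are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.junit.After;
import org.opensearch.test.OpenSearchTestCase;

public class ResourceCleanupTests extends OpenSearchTestCase {
    private ExecutorService executor;

    public void testSomethingThatSpawnsAThread() {
        executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> { /* ... work under test ... */ });
        // ... assertions ...
    }

    @After
    public void shutDownExecutor() throws InterruptedException {
        // Stop our own threads so the base class thread-leak checks don't race with them.
        if (executor != null) {
            executor.shutdownNow();
            executor.awaitTermination(10, TimeUnit.SECONDS);
        }
    }
}
```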

# Test coverage analysis

The code coverage report can be generated through Gradle with [JaCoCo plugin](https://docs.gradle.org/current/userguide/jacoco_plugin.html).
4 changes: 2 additions & 2 deletions buildSrc/version.properties
@@ -74,5 +74,5 @@ jzlib = 1.1.3
resteasy = 6.2.4.Final

# opentelemetry dependencies
opentelemetry = 1.36.0
opentelemetrysemconv = 1.23.1-alpha
opentelemetry = 1.39.0
opentelemetrysemconv = 1.25.0-alpha
@@ -540,7 +540,7 @@ static Request searchTemplate(SearchTemplateRequest searchTemplateRequest) throw
Request request;

if (searchTemplateRequest.isSimulate()) {
request = new Request(HttpGet.METHOD_NAME, "_render/template");
request = new Request(HttpGet.METHOD_NAME, "/_render/template");
} else {
SearchRequest searchRequest = searchTemplateRequest.getRequest();
String endpoint = endpoint(searchRequest.indices(), "_search/template");
@@ -803,8 +803,7 @@ static Request termVectors(TermVectorsRequest tvrequest) throws IOException {
}

static Request mtermVectors(MultiTermVectorsRequest mtvrequest) throws IOException {
String endpoint = "_mtermvectors";
Request request = new Request(HttpGet.METHOD_NAME, endpoint);
Request request = new Request(HttpGet.METHOD_NAME, "/_mtermvectors");
request.setEntity(createEntity(mtvrequest, REQUEST_BODY_CONTENT_TYPE));
return request;
}
@@ -701,7 +701,7 @@ public void testOpenExistingIndex() throws IOException {
closeIndex(index);
ResponseException exception = expectThrows(
ResponseException.class,
() -> client().performRequest(new Request(HttpGet.METHOD_NAME, index + "/_search"))
() -> client().performRequest(new Request(HttpGet.METHOD_NAME, "/" + index + "/_search"))
);
assertThat(exception.getResponse().getStatusLine().getStatusCode(), equalTo(RestStatus.BAD_REQUEST.getStatus()));
assertThat(exception.getMessage().contains(index), equalTo(true));
@@ -714,7 +714,7 @@ public void testOpenExistingIndex() throws IOException {
);
assertTrue(openIndexResponse.isAcknowledged());

Response response = client().performRequest(new Request(HttpGet.METHOD_NAME, index + "/_search"));
Response response = client().performRequest(new Request(HttpGet.METHOD_NAME, "/" + index + "/_search"));
assertThat(response.getStatusLine().getStatusCode(), equalTo(RestStatus.OK.getStatus()));
}

@@ -771,7 +771,7 @@ public void testCloseExistingIndex() throws IOException {

ResponseException exception = expectThrows(
ResponseException.class,
() -> client().performRequest(new Request(HttpGet.METHOD_NAME, indexResult.getIndex() + "/_search"))
() -> client().performRequest(new Request(HttpGet.METHOD_NAME, "/" + indexResult.getIndex() + "/_search"))
);
assertThat(exception.getResponse().getStatusLine().getStatusCode(), equalTo(RestStatus.BAD_REQUEST.getStatus()));
assertThat(exception.getMessage().contains(indexResult.getIndex()), equalTo(true));
@@ -1270,7 +1270,7 @@ public void testGetAliasesNonExistentIndexOrAlias() throws IOException {
assertThat(getAliasesResponse.getException(), nullValue());
}
createIndex(index, Settings.EMPTY);
client().performRequest(new Request(HttpPut.METHOD_NAME, index + "/_alias/" + alias));
client().performRequest(new Request(HttpPut.METHOD_NAME, "/" + index + "/_alias/" + alias));
{
GetAliasesRequest getAliasesRequest = new GetAliasesRequest().indices(index, "non_existent_index");
GetAliasesResponse getAliasesResponse = execute(
@@ -121,7 +121,7 @@ public void testRankEvalRequest() throws IOException {
}

// now try this when test2 is closed
client().performRequest(new Request("POST", "index2/_close"));
client().performRequest(new Request("POST", "/index2/_close"));
rankEvalRequest.indicesOptions(IndicesOptions.fromParameters(null, "true", null, "false", SearchRequest.DEFAULT_INDICES_OPTIONS));
response = execute(rankEvalRequest, highLevelClient()::rankEval, highLevelClient()::rankEvalAsync);
}
@@ -1399,7 +1399,7 @@ public void testRenderSearchTemplate() throws Exception {

// Verify that the resulting REST request looks as expected.
Request request = RequestConverters.searchTemplate(searchTemplateRequest);
String endpoint = "_render/template";
String endpoint = "/_render/template";

assertEquals(HttpGet.METHOD_NAME, request.getMethod());
assertEquals(endpoint, request.getEndpoint());
@@ -1565,7 +1565,7 @@ public void testMultiTermVectors() throws IOException {

Request request = RequestConverters.mtermVectors(mtvRequest);
assertEquals(HttpGet.METHOD_NAME, request.getMethod());
assertEquals("_mtermvectors", request.getEndpoint());
assertEquals("/_mtermvectors", request.getEndpoint());
assertToXContentBody(mtvRequest, request.getEntity());
}

@@ -1585,7 +1585,7 @@ public void testMultiTermVectorsWithType() throws IOException {

Request request = RequestConverters.mtermVectors(mtvRequest);
assertEquals(HttpGet.METHOD_NAME, request.getMethod());
assertEquals("_mtermvectors", request.getEndpoint());
assertEquals("/_mtermvectors", request.getEndpoint());
assertToXContentBody(mtvRequest, request.getEntity());
}

@@ -727,7 +727,7 @@ public void testSearchWithSuggest() throws IOException {
}

public void testSearchWithWeirdScriptFields() throws Exception {
Request doc = new Request("PUT", "test/_doc/1");
Request doc = new Request("PUT", "/test/_doc/1");
doc.setJsonEntity("{\"field\":\"value\"}");
client().performRequest(doc);
client().performRequest(new Request("POST", "/test/_refresh"));
@@ -774,7 +774,7 @@ public void testSearchWithDerivedFields() throws Exception {
public void testSearchWithDerivedFields() throws Exception {
// Just testing DerivedField definition from SearchSourceBuilder derivedField()
// We are not testing the full functionality here
Request doc = new Request("PUT", "test/_doc/1");
Request doc = new Request("PUT", "/test/_doc/1");
doc.setJsonEntity("{\"field\":\"value\"}");
client().performRequest(doc);
client().performRequest(new Request("POST", "/test/_refresh"));