
Add support to create empty local translog if remote translog is empty #10854

Closed

Conversation

@sachinpkale (Member) commented Oct 23, 2023

Description

  • In this change, we add support for creating an empty translog locally if there is no data in the remote translog.
  • Remote store restore fails if the remote translog is empty. Since most of the data is part of segments, the new query parameter added in this PR, force_empty_translog, ignores the remote translog and creates an empty translog locally (see the sketch after this list).
  • But this is not an ideal solution. We made sure that each time a primary shard is created, the remote translog is populated (Sync translog to remote on primary activate #10839), but that is not guaranteed to always succeed (e.g. if remote store calls are failing at the time of shard creation).
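
For illustration, here is a minimal, self-contained Java sketch of the behavior this PR proposes: if the remote translog contains no data and the caller passed force_empty_translog, bootstrap an empty local translog instead of failing the restore. The type and method names below (RemoteTranslogMetadata, LocalTranslog, restoreTranslog) are simplified stand-ins for illustration, not the actual OpenSearch classes modified in this change.

```java
// Sketch of the decision force_empty_translog introduces during restore.
// All types here are simplified stand-ins, not the real OpenSearch classes.
import java.util.List;

public class ForceEmptyTranslogSketch {

    /** Stand-in for remote translog metadata; an empty generation list means nothing was uploaded. */
    interface RemoteTranslogMetadata {
        List<Long> generations();
    }

    /** Stand-in for the local translog created during restore. */
    interface LocalTranslog {
        static LocalTranslog empty() {
            // Bootstrap an empty translog; the restored segments still carry most of the data.
            return new LocalTranslog() {};
        }

        static LocalTranslog downloadFrom(RemoteTranslogMetadata remote) {
            // Normal path: download and replay the remote translog generations.
            return new LocalTranslog() {};
        }
    }

    /** Hypothetical helper mirroring the behavior proposed in this PR. */
    static LocalTranslog restoreTranslog(RemoteTranslogMetadata remote, boolean forceEmptyTranslog) {
        if (remote.generations().isEmpty()) {
            if (forceEmptyTranslog) {
                return LocalTranslog.empty();
            }
            // Current behavior: restore fails when the remote translog has no data.
            throw new IllegalStateException("remote translog is empty");
        }
        return LocalTranslog.downloadFrom(remote);
    }

    public static void main(String[] args) {
        RemoteTranslogMetadata emptyRemote = List::of;
        // With force_empty_translog=true the restore proceeds with an empty local translog.
        restoreTranslog(emptyRemote, true);
    }
}
```

At the REST layer, the new flag is a query parameter on the remote store restore action, so usage would presumably look like POST _remotestore/_restore?force_empty_translog=true (exact wiring as per RestRestoreRemoteStoreAction in this PR).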

Check List

  • New functionality includes testing.
    • All tests pass
  • New functionality has been documented.
    • New functionality has javadoc added
  • Failing checks are inspected and point to the corresponding known issue(s) (See: Troubleshooting Failing Builds)
  • Commits are signed per the DCO using --signoff
  • Commit changes are listed out in CHANGELOG.md file (See: Changelog)
  • Public documentation issue/PR created

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

github-actions bot commented Oct 23, 2023

Compatibility status:

Checks if related components are compatible with change 2e3fbfd

Incompatible components

Incompatible components: [https://github.com/opensearch-project/cross-cluster-replication.git, https://github.com/opensearch-project/performance-analyzer.git]

Skipped components

Compatible components

Compatible components: [https://github.com/opensearch-project/security-analytics.git, https://github.com/opensearch-project/security.git, https://github.com/opensearch-project/custom-codecs.git, https://github.com/opensearch-project/opensearch-oci-object-storage.git, https://github.com/opensearch-project/index-management.git, https://github.com/opensearch-project/geospatial.git, https://github.com/opensearch-project/sql.git, https://github.com/opensearch-project/notifications.git, https://github.com/opensearch-project/job-scheduler.git, https://github.com/opensearch-project/observability.git, https://github.com/opensearch-project/neural-search.git, https://github.com/opensearch-project/k-nn.git, https://github.com/opensearch-project/alerting.git, https://github.com/opensearch-project/anomaly-detection.git, https://github.com/opensearch-project/performance-analyzer-rca.git, https://github.com/opensearch-project/asynchronous-search.git, https://github.com/opensearch-project/ml-commons.git, https://github.com/opensearch-project/common-utils.git, https://github.com/opensearch-project/reporting.git]


codecov bot commented Oct 23, 2023

Codecov Report

Attention: 31 lines in your changes are missing coverage. Please review.

Comparison is base (8f13dee) 71.25% compared to head (2e3fbfd) 71.32%.
Report is 348 commits behind head on main.

Files Patch % Lines
...org/opensearch/cluster/routing/RecoverySource.java 33.33% 7 Missing and 5 partials ⚠️
...java/org/opensearch/index/shard/StoreRecovery.java 11.11% 4 Missing and 4 partials ⚠️
...remotestore/restore/RestoreRemoteStoreRequest.java 44.44% 3 Missing and 2 partials ⚠️
...arch/index/recovery/RemoteStoreRestoreService.java 0.00% 5 Missing ⚠️
...on/admin/cluster/RestRestoreRemoteStoreAction.java 0.00% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##               main   #10854      +/-   ##
============================================
+ Coverage     71.25%   71.32%   +0.06%     
- Complexity    58684    58749      +65     
============================================
  Files          4869     4869              
  Lines        276512   276520       +8     
  Branches      40204    40212       +8     
============================================
+ Hits         197030   197228     +198     
+ Misses        63020    62813     -207     
- Partials      16462    16479      +17     


Signed-off-by: Sachin Kale <[email protected]>

@Bukhtawar (Collaborator) left a comment

What is our marker here to denote completeness, to ensure we aren't marking incomplete data as available? In the case you call out, like shard creation failure, can we not retry the same and fail fast if we aren't able to write to the remote store?
If that is something you think is hard, having a force_allocate in the remotestore/restore API sounds like a better option.


@sachinpkale (Member, Author) commented

> What is our marker here to denote completeness, to ensure we aren't marking incomplete data as available? In the case you call out, like shard creation failure, can we not retry the same and fail fast if we aren't able to write to the remote store?

We can retry on failure, but we decided to log the error and continue the shard creation in #10839. There are a couple of reasons:

  1. The issue occurs immediately after shard creation, when ingestion has not happened yet. If we index at least a single doc successfully, the remote translog will be populated and the issue will not occur. So, IMO, this is a rare scenario. Failing shard creation, and the recovery that triggered it (failover, primary relocation), may impact existing flows.
  2. We got to know about this issue during testing and made the change in Sync translog to remote on primary activate #10839. But there can be other scenarios where a shard in the remote translog will not have any data.

> If that is something you think is hard, having a force_allocate in the remotestore/restore API sounds like a better option.

  • Ideally, we should use the new parameter introduced with this change only when we are sure that the remote translog does not exist and there is no data loss. But, as you pointed out, we won't be able to guarantee that. For example, if the data in the remote translog is incomplete (for any reason; can't think of one as of now :) ), with this change, the user would at least be able to restore the data in segments.
  • I like the name force_allocate over the current force_empty_translog. Was your suggestion only about the parameter name, or were you suggesting something more with this parameter?

Signed-off-by: Sachin Kale <[email protected]>

@gbbafna (Collaborator) commented Oct 24, 2023

> Ideally, we should use the new parameter introduced with this change only when we are sure that the remote translog does not exist and there is no data loss. But, as you pointed out, we won't be able to guarantee that. For example, if the data in the remote translog is incomplete (for any reason; can't think of one as of now :) ), with this change, the user would at least be able to restore the data in segments.

This looks like putting a risky parameter in the hands of the user.

> The issue occurs immediately after shard creation, when ingestion has not happened yet. If we index at least a single doc successfully, the remote translog will be populated and the issue will not occur. So, IMO, this is a rare scenario. Failing shard creation, and the recovery that triggered it (failover, primary relocation), may impact existing flows.

This issue is rare: during failover, the primary is able to download all the segment files but is not able to upload the empty tlog/ckp files. We should rather let such failovers fail. In that case, the restore API can be retried, right?

github-actions bot commented

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.remotestore.RemoteStoreStatsIT.testDownloadStatsCorrectnessSinglePrimaryMultipleReplicaShards
      1 org.opensearch.http.SearchRestCancellationIT.testAutomaticCancellationDuringQueryPhase
      1 org.opensearch.client.PitIT.testDeleteAllAndListAllPits

opensearch-trigger-bot bot commented

This PR is stalled because it has been open for 30 days with no activity.

@opensearch-trigger-bot bot added the stalled (Issues that have stalled) label Nov 23, 2023
@ticheng-aws added the enhancement (Enhancement or improvement to existing feature or request) label Jan 5, 2024
@ticheng-aws (Contributor) commented

Hi @sachinpkale, is this being worked on? Feel free to reach out to the maintainers for further reviews.

@opensearch-trigger-bot bot removed the stalled (Issues that have stalled) label Jan 6, 2024
opensearch-trigger-bot bot commented

This PR is stalled because it has been open for 30 days with no activity.

@opensearch-trigger-bot bot added and removed the stalled (Issues that have stalled) label Feb 8, 2024
@sachinpkale (Member, Author) commented

Not going ahead with this change, as creating a shard without a translog can result in data loss.

Labels: enhancement (Enhancement or improvement to existing feature or request), skip-changelog