
implement prepareForDataSynchronization func #51

Merged
merged 1 commit into main on Oct 18, 2024

Conversation

@peng225 (Contributor) commented Oct 9, 2024

Signed-off-by: Shinya Hayashi [email protected]

@peng225 peng225 force-pushed the impl-data-sync-prepare branch 6 times, most recently from 5d1e66c to 331f0c4 Compare October 11, 2024 01:25
@peng225 peng225 marked this pull request as ready for review October 11, 2024 02:17
@peng225 peng225 marked this pull request as draft October 11, 2024 06:35
@peng225 peng225 marked this pull request as ready for review October 11, 2024 06:56
@satoru-takeuchi (Contributor) left a comment

@peng225 I've reviewed only the implementation code and have not reviewed the test code yet. I'll provide the remaining feedback later.

Resolved review threads (outdated): internal/controller/mantlebackup_controller.go (3), internal/controller/mantlebackup_controller_test.go (2)
func searchForDiffOriginMantleBackup(
backup *mantlev1.MantleBackup,
primaryBackups []mantlev1.MantleBackup,
secondaryBackupSet map[string]*mantlev1.MantleBackup,
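
For orientation, here is a minimal, self-contained sketch of what a diff-origin search with this signature could do. The types are simplified stand-ins for mantlev1.MantleBackup, and the SnapID field and the selection rule (pick the backup with the largest snapID below the target's that is already present on the secondary cluster) are assumptions inferred from the review discussion, not the actual implementation:

```go
package main

import "fmt"

// MantleBackup is a simplified stand-in for mantlev1.MantleBackup,
// keeping only the fields relevant to this sketch.
type MantleBackup struct {
	Name   string
	SnapID int
}

// searchForDiffOriginMantleBackup picks the backup to use as the diff
// origin: among the primary backups that also exist on the secondary
// cluster, it chooses the one with the largest SnapID that is still
// smaller than the target backup's SnapID. It returns nil if there is
// no candidate (a full transfer would then be needed).
func searchForDiffOriginMantleBackup(
	backup *MantleBackup,
	primaryBackups []MantleBackup,
	secondaryBackupMap map[string]*MantleBackup,
) *MantleBackup {
	var origin *MantleBackup
	for i := range primaryBackups {
		p := &primaryBackups[i]
		if p.SnapID >= backup.SnapID {
			continue // cannot diff from a newer or equal snapshot
		}
		if _, ok := secondaryBackupMap[p.Name]; !ok {
			continue // secondary cluster has no copy to diff against
		}
		if origin == nil || p.SnapID > origin.SnapID {
			origin = p
		}
	}
	return origin
}

func main() {
	primaries := []MantleBackup{
		{Name: "test1", SnapID: 1},
		{Name: "test3", SnapID: 2},
		{Name: "test0", SnapID: 4},
	}
	secondary := map[string]*MantleBackup{
		"test1": &primaries[0],
		"test3": &primaries[1],
	}
	target := &MantleBackup{Name: "test0", SnapID: 4}
	if origin := searchForDiffOriginMantleBackup(target, primaries, secondary); origin != nil {
		fmt.Println("diff origin:", origin.Name) // prints "diff origin: test3"
	}
}
```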
Contributor:

Suggested change:
-    secondaryBackupSet map[string]*mantlev1.MantleBackup,
+    secondaryBackupMap map[string]*mantlev1.MantleBackup,

@peng225 (Contributor, Author):

Fixed.
105292a

Resolved review thread (outdated): internal/controller/mantlebackup_controller.go
DescribeTable("Search for the MantleBackup which is used for the diff origin",
func(backup *mantlev1.MantleBackup,
primaryBackups []mantlev1.MantleBackup,
secondaryBackupSet map[string]*mantlev1.MantleBackup,
Contributor:

Suggested change:
-    secondaryBackupSet map[string]*mantlev1.MantleBackup,
+    secondaryBackupMap map[string]*mantlev1.MantleBackup,

@peng225 (Contributor, Author):

Fixed.
105292a

Resolved review thread (outdated): internal/controller/mantlebackup_controller_test.go

testSecondaryMantleBackups := map[string]*mantlev1.MantleBackup{
"test1": basePrimaryBackups[0].DeepCopy(),
"test2": basePrimaryBackups[1].DeepCopy(),
Contributor:

Could you tell me why testSecondaryMantleBackups includes "test2"? "test1", "test3", "test4", "test0", and "test2" are replicated in that order by design.

Contributor:

@peng225 Sorry, I should have explained what I meant more clearly. My point was that "test2" won't exist in the secondary k8s cluster because its snapID is higher than "test0"'s.
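
The reasoning above — backups replicate in snapID order, so anything with a snapID above the newest synced one cannot yet exist on the secondary cluster — can be illustrated with a self-contained sketch. The names and snapID values are assumptions chosen to match the replication order mentioned in the review, not the project's actual test data:

```go
package main

import "fmt"

// syncedSecondarySet returns the names of backups that can exist on the
// secondary cluster, assuming replication proceeds in snapID order and
// has completed up to (and including) lastSyncedSnapID.
func syncedSecondarySet(snapIDs map[string]int, lastSyncedSnapID int) map[string]bool {
	set := map[string]bool{}
	for name, id := range snapIDs {
		if id <= lastSyncedSnapID {
			set[name] = true
		}
	}
	return set
}

func main() {
	// Hypothetical snapIDs following the replication order from the
	// review: test1, test3, test4, test0, test2.
	snapIDs := map[string]int{"test1": 1, "test3": 2, "test4": 3, "test0": 4, "test2": 5}
	set := syncedSecondarySet(snapIDs, snapIDs["test0"])
	fmt.Println(set["test2"]) // prints "false": test2's snapID exceeds test0's
}
```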

@peng225 (Contributor, Author):

I understand the design is as you explained.

I just believe that we should check every possible condition in the unit tests, even if those conditions are unlikely to occur when everything is functioning correctly.

This is just my personal opinion. I can also remove such unlikely test cases. Which do you think is better?

Contributor:

Understood. It's not necessary to remove the test code. Instead, could you add a comment explaining the purpose of the existence of "test2" in the secondary cluster?

@peng225 (Contributor, Author) Oct 18, 2024:

@satoru-takeuchi
Sorry, but I reconsidered it and have changed my mind.
I misunderstood that the existence of "test2" on the secondary cluster contributed to higher code coverage, but I realised it did not.

Let me remove "test2" from the secondary cluster.
("test2" should stay in the primary cluster to check that we can skip a MantleBackup with a higher snapID.)

@peng225 (Contributor, Author):

> Let me remove "test2" from the secondary cluster.

I added a comment.
8b3b868

@satoru-takeuchi (Contributor) left a comment:

@peng225 LGTM. Could you squash commits once CI passes?

@peng225 (Contributor, Author) commented Oct 18, 2024

@satoru-takeuchi
I squashed commits. Could you check again?

@satoru-takeuchi satoru-takeuchi merged commit d9f469e into main Oct 18, 2024
3 checks passed
@satoru-takeuchi satoru-takeuchi deleted the impl-data-sync-prepare branch October 18, 2024 02:06