
[TH2-2212] merged with dev 5 #283

Merged: 60 commits into dev-version-5 from TH2-2212-merged-with-dev-5, Mar 5, 2024
Conversation

lumber1000 (Member)

No description provided.

Xanclry and others added 30 commits June 14, 2022 16:24
…' of github.com:th2-net/th2-common-j into TH2-2212-common-cannot-recover-channel-level-exceptions
@OptimumCode (Member)

@Nikita-Smirnov-Exactpro Could you please check the PR? If you don't have any objections, I think we can merge it.

README.md (review thread resolved)
README.md (review thread resolved)
README.md (outdated; review thread resolved)
@@ -189,6 +189,10 @@ private SubscriberMonitor basicConsume() {
        return connectionManager.basicConsume(queue, this::handle, this::canceled);
    } catch (IOException e) {
        throw new IllegalStateException("Can not subscribe to queue = " + queue, e);
    } catch (InterruptedException e) {

Reviewer: Why do we catch InterruptedException here, in a private method? I think an InterruptedException should be a signal to interrupt the retry or subscribe process.

lumber1000 (Member, Author): If we are in the catch block, the subscribe retry process has already been interrupted, because connectionManager.basicConsume is inside the try block.
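
For context, a minimal sketch of the pattern under discussion; the body of the new catch block is not shown in the diff above, so the interrupt handling here is an assumption, not the PR's actual code:

private SubscriberMonitor basicConsume() {
    try {
        // connectionManager retries the subscription internally; an
        // InterruptedException here means that retry loop has already stopped
        return connectionManager.basicConsume(queue, this::handle, this::canceled);
    } catch (IOException e) {
        throw new IllegalStateException("Can not subscribe to queue = " + queue, e);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag for callers
        throw new IllegalStateException("Subscription to queue = " + queue + " was interrupted", e);
    }
}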

@lumber1000 lumber1000 marked this pull request as draft November 16, 2023 15:39
Oleg Smelov added 2 commits November 17, 2023 04:01
filtering out recovering tasks for non subscribed channels
duplicated test removed
addShutdownListenerToChannel refactored
@lumber1000 lumber1000 marked this pull request as ready for review November 17, 2023 09:48
} finally {
    lock.unlock();
}

TimeUnit.MILLISECONDS.sleep(recoveryDelay);

Reviewer: Why was this instruction moved out of the catch block?

lumber1000 (Member, Author): To avoid holding the lock for a long time.

Reviewer: All operations on a channel are executed under the lock, so if one operation runs into a retry problem, the others will wait too. In this case we could block other operations until the retry problem is resolved.
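
To illustrate the trade-off under discussion, a minimal sketch of sleeping outside the critical section; identifiers such as recoverWithDelay, recoverChannel, lock, and recoveryDelay are assumptions for illustration, not the PR's actual code:

private void recoverWithDelay() throws InterruptedException {
    lock.lock();
    try {
        recoverChannel(); // channel work stays inside the critical section
    } finally {
        lock.unlock();
    }
    // the retry back-off runs after the lock is released, so other channel
    // operations are not blocked for the whole recoveryDelay
    TimeUnit.MILLISECONDS.sleep(recoveryDelay);
}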

lumber1000 (Member, Author): OK, I will add one more lock for processing callbacks.

Reviewer: No, please leave it as is.

Reviewer: Several publishers follow the replay schedule in your case.

lumber1000 (Member, Author): We need a second lock to read the channel and isSubscribed values, so we can filter out unnecessary callbacks without waiting for channel recovery. Otherwise all callbacks will accumulate in the executor service queue until the connection and channels are recovered (and every recovery attempt generates a new callback).

Reviewer: We can use a volatile field or an AtomicBoolean instead: fewer locks, simpler code, and less chance of introducing a deadlock in future changes.
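
A minimal sketch of that suggestion, assuming a hypothetical isSubscribed flag and recovery callback (not the PR's actual code):

import java.util.concurrent.atomic.AtomicBoolean;

private final AtomicBoolean isSubscribed = new AtomicBoolean(true);

private void onRecoveryCallback() {
    // check the flag without taking the channel lock, so stale callbacks for
    // unsubscribed channels are dropped instead of queuing up in the executor
    if (!isSubscribed.get()) {
        return;
    }
    lock.lock();
    try {
        // recover the channel under the lock
    } finally {
        lock.unlock();
    }
}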

@lumber1000 lumber1000 marked this pull request as draft November 17, 2023 14:15
@lumber1000 lumber1000 marked this pull request as ready for review November 20, 2023 07:51
@OptimumCode (Member)

@Nikita-Smirnov-Exactpro @lumber1000 Hi, what are we doing about this one? If we have tested it, let's pull the latest changes, and if the integration tests pass, merge the PR.

@Nikita-Smirnov-Exactpro (Member)

> @Nikita-Smirnov-Exactpro @lumber1000 Hi, what are we doing about this one? If we have tested it, let's pull the latest changes, and if the integration tests pass, merge the PR.

I agree with you, since we don't have any other tests.

Oleg Smelov added 4 commits March 4, 2024 15:54
@lumber1000 lumber1000 merged commit 1a76450 into dev-version-5 Mar 5, 2024
8 of 9 checks passed
@lumber1000 lumber1000 deleted the TH2-2212-merged-with-dev-5 branch March 5, 2024 10:06
4 participants