
fileCache not clear in distributed yorc servers #702

Open
trihoangvo opened this issue Oct 19, 2020 · 2 comments
Labels
bug Something isn't working

Comments

@trihoangvo
Contributor

trihoangvo commented Oct 19, 2020

Bug Report

Description

When a deployment is deleted, yorc deletes its local fileCache:
https://github.com/ystia/yorc/blob/develop/deployments/deployments.go#L271

However, when more than one yorc server is running, only the server that receives the deployment deletion request deletes its local file cache. The other yorc servers may still hold cached entries for that deployment.

When the same application is deployed or updated again, one of the yorc servers may read the old topology from its local cache:
https://github.com/ystia/yorc/blob/develop/storage/internal/file/store.go#L230

Actual behavior

As a result, a yorc server may reuse outdated information from the old topology in its local file cache, occasionally producing an inconsistent deployment.

Expected behavior

yorc should check whether it holds a local file cache for the given deployment and clear all related keys before processing the request.
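A minimal sketch of the proposed behavior: before processing a request for a deployment, each server drops any cached keys it still holds for that deployment ID. The class and method names below are illustrative, not yorc's actual API (yorc itself is written in Go; this is just to make the idea concrete).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical per-server file cache keyed by "deploymentID/path".
class FileCache {
    private final Map<String, byte[]> entries = new ConcurrentHashMap<>();

    void put(String key, byte[] value) { entries.put(key, value); }

    byte[] get(String key) { return entries.get(key); }

    // Remove every cached key belonging to the given deployment, so the
    // next read falls back to the shared store instead of stale data.
    void clearDeployment(String deploymentID) {
        entries.keySet().removeIf(k -> k.startsWith(deploymentID + "/"));
    }

    int size() { return entries.size(); }
}
```

In a distributed setup, every server would need to run this invalidation (or be notified to run it), not only the server that handled the deletion request.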

Steps to reproduce the issue

With 3 yorc servers running:

  1. Create an application with 1 compute, 1 software component (e.g., HelloWorld).
  2. Deploy it
  3. Un-deploy it
  4. Change the HelloWorld property to print something else.
  5. Deploy again.

Sometimes, the deployment log prints the old message from the old topology.

Additional information you deem important (e.g. issue happens only occasionally)

The issue happens only occasionally: when the yorc instance that deletes the deployment (and therefore clears its cache) is not the same instance that handles the new deployment (and still holds the old cache).

Additional environment details (infrastructure implied: AWS, OpenStack, etc.)

no

Output of yorc version

current develop

Yorc configuration file

Priority

This can be temporarily worked around by scaling the yorc instances down to one, so I set the priority to medium.

@trihoangvo trihoangvo added the bug Something isn't working label Oct 19, 2020
@loicalbertin
Member

Hi @trihoangvo thanks for reporting this.
We will investigate how to fix this.

@trihoangvo
Contributor Author

@loicalbertin We found a simple fix in a4c (Alien4Cloud). In the administration section, update the deploymentID naming policy to include a timestamp. For example:

Old value: (application.id + '-' + environment.name).replaceAll('[^\w\-_]', '_')

New value: (application.id + '-' + environment.name + '-' + new java.text.SimpleDateFormat("yyyyddMMHHmmss").format(new java.util.Date())).replaceAll('[^\w\-_]', '_')

This makes the deploymentID of the same topology different each time it is deployed. As a result, yorc builds a fresh file cache for each new deployment instead of reusing the old one. So I think we can close this issue, but it may be worth documenting this somewhere.
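For reference, the naming expression above can be reproduced in plain Java: append a `yyyyddMMHHmmss` timestamp, then replace every character outside `[A-Za-z0-9_-]` with `_`. The `applicationId`/`environmentName` values used here are illustrative.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Plain-Java equivalent of the a4c deployment naming expression quoted above.
class DeploymentId {
    static String build(String applicationId, String environmentName, Date now) {
        // Same pattern as in the expression: year, day, month, time.
        String stamp = new SimpleDateFormat("yyyyddMMHHmmss").format(now);
        // \w matches [A-Za-z0-9_]; anything else (spaces, dots, ...) becomes '_'.
        return (applicationId + "-" + environmentName + "-" + stamp)
                .replaceAll("[^\\w\\-_]", "_");
    }
}
```

Since the timestamp changes on every deployment, two deployments of the same application/environment pair never share a deploymentID, so they never share a cache directory either.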
