
Two different redis clusters in different namespaces interconnected. Production disturbed. #698

Closed
PrudhviApx7 opened this issue May 16, 2024 · 7 comments · May be fixed by #702 or freshworks/redis-operator#3


PrudhviApx7 commented May 16, 2024

Expected behaviour

Two Redis clusters must work seamlessly as long as they have different namespaces and names, according to this question.

What do you want to achieve?
Run multiple Redis clusters in different namespaces with a single operator.

Actual behaviour

The replicas from one cluster/replicated setup connected to the master of another cluster in a different namespace.

Describe step by step what you've done to get to this point

The clusters had been working fine for the last two months, but yesterday at 6 AM the GCP spot nodes running these replicas were taken down and new ones were spun up. During this transition, something happened that resulted in a sudden drop of 2000+ keys and the interconnection of replicas between the clusters.

Environment

kubernetes

  • Redis Operator version v1.2.4
  • Kubernetes version 1.26
  • Kubernetes configuration used (eg: Is RBAC active?)
PrudhviApx7 (Author) commented

My Redis failover clusters are named sync-redis and aurora-redis.
In the logs below, we can see that replicas of sync-redis have been made slaves of the aurora-redis master.
[Screenshots from 2024-05-16: sync-redis replicas reporting the aurora-redis master as their master]
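The same linkage can be confirmed from the shell. A rough sketch (the pod name, container name, and namespaces below are placeholders for the actual setup; add -a if requirepass is enabled):

```bash
# Ask a sync-redis replica which master it is following.
# rfr-sync-redis-0, the "redis" container name, and sync-ns are assumptions.
kubectl exec -n sync-ns rfr-sync-redis-0 -c redis -- \
  redis-cli info replication | grep -E '^(role|master_host|master_port)'

# Cross-check the reported master_host against the pod IPs of both clusters:
kubectl get pods -o wide -n sync-ns
kubectl get pods -o wide -n aurora-ns
```

If master_host matches a pod IP in the other namespace, the replica has crossed clusters.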


github-actions bot commented Jul 1, 2024

This issue is stale because it has been open for 45 days with no activity.

github-actions bot added the stale label Jul 1, 2024

samof76 (Contributor) commented Jul 2, 2024

This issue is still relevant. It happens because pods from one cluster inherit IPs that previously belonged to another cluster, and Sentinel is not flushed, so it keeps pointing at those reused IP addresses. Possible fixes:

  • Change the operator to not use mymaster to identify the cluster, so each cluster has a unique master name in Sentinel.
  • Change Sentinel to look up the Redis StatefulSet hostnames rather than IPs. This means the Sentinels must also run as a StatefulSet, so that every node is identifiable by hostname.
  • Apply NetworkPolicies with pod selectors so that only pods carrying the same labels can communicate with each other (no operator changes required); see the sketch below.
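A minimal sketch of the NetworkPolicy option, assuming the sync-redis pods run in a namespace sync-ns and carry a label like app.kubernetes.io/name: sync-redis (both are placeholders; check the labels the operator actually applies with kubectl get pods --show-labels):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sync-redis-isolate
  namespace: sync-ns                        # namespace of the sync-redis failover (assumed)
spec:
  # Select only the Redis and Sentinel pods of this cluster.
  podSelector:
    matchLabels:
      app.kubernetes.io/name: sync-redis    # label is an assumption; verify with --show-labels
  policyTypes:
    - Ingress
  ingress:
    # Allow ingress only from pods of the same cluster (same label, same namespace).
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: sync-redis
      ports:
        - protocol: TCP
          port: 6379    # redis
        - protocol: TCP
          port: 26379   # sentinel
```

With this in place, an aurora-redis pod that later inherits a recycled IP can no longer reach the sync-redis pods, so a misdirected SLAVEOF cannot take effect. Note that the CNI must actually enforce NetworkPolicies (e.g. Calico or Cilium), and the operator's own pod, plus any application clients, also need ingress, so extra from entries matching them will likely be required.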


samof76 (Contributor) commented Jul 2, 2024

@PrudhviApx7 Also, please avoid using the same Redis port for all tenants; see the sketch below.
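For example, each tenant can get its own port in the RedisFailover spec. A sketch, assuming an operator version whose CRD exposes spec.redis.port (verify against your installed CRD) and an illustrative port of 6380:

```yaml
apiVersion: databases.spotahome.com/v1
kind: RedisFailover
metadata:
  name: sync-redis
  namespace: sync-ns    # illustrative namespace
spec:
  sentinel:
    replicas: 3
  redis:
    replicas: 3
    port: 6380          # per-tenant port; assumes the operator version supports spec.redis.port
```

With distinct ports, a replica or client that lands on a recycled IP belonging to another tenant gets a refused connection instead of silently talking to a foreign Redis.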

github-actions bot removed the stale label Jul 3, 2024

github-actions bot commented

This issue is stale because it has been open for 45 days with no activity.


github-actions bot commented

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions bot closed this as not planned Aug 31, 2024
PrudhviApx7 (Author) commented

@samof76 thanks for the workarounds. Will implement them.
