Postal 3.x - Cluster #2856
Replies: 3 comments 5 replies
-
We run a load balancer + 4 Postal nodes (v2) + remote RabbitMQ + a Percona XtraDB cluster (3 nodes). It will be interesting to see how the workers decide which tasks to take without RabbitMQ.
-
Also very interested in this kind of setup.
-
That is correct. The workers will elect a leader, which handles the actions previously handled by the cron process. If the elected worker disappears, a new worker will take over that role within 5 minutes. All state and queueing is now handled entirely within the database. There was a lot of other information in your message, but this seemed to be the only actual question. Did you have any specific areas you wanted advice or feedback on?
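This is not Postal's actual code, but the pattern can be sketched as a lease row in the shared database that each worker periodically tries to claim or renew; whoever holds an unexpired lease is the leader. The table name, schema, lease length, and the use of sqlite3 as a stand-in for MariaDB are all illustrative:

```python
import sqlite3
import time

LEASE_SECONDS = 300  # matches the ~5 minute takeover window described above

def try_acquire_leadership(db, worker_id, now=None):
    """Claim or renew the single 'leader' lease; return True if we now hold it.

    Hypothetical sketch: table name and schema are made up, and sqlite3
    stands in for MariaDB -- Postal's real implementation may differ.
    """
    now = time.time() if now is None else now
    cur = db.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS leases "
        "(name TEXT PRIMARY KEY, holder TEXT, expires_at REAL)"
    )
    # Atomically take the lease if it is free, expired, or already ours.
    cur.execute(
        "INSERT INTO leases (name, holder, expires_at) VALUES ('leader', ?, ?) "
        "ON CONFLICT(name) DO UPDATE SET "
        "holder = excluded.holder, expires_at = excluded.expires_at "
        "WHERE leases.expires_at < ? OR leases.holder = excluded.holder",
        (worker_id, now + LEASE_SECONDS, now),
    )
    db.commit()
    cur.execute("SELECT holder FROM leases WHERE name = 'leader'")
    return cur.fetchone()[0] == worker_id
```

Each worker would call this on a timer; a worker that stops renewing loses the lease after `LEASE_SECONDS` and another worker takes over.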
-
I know this has been asked for previous versions of Postal, so I want to clarify how things work in 3.x.
My goal is to build a cluster of VPS nodes which each handle a portion of mail processing for a few key purposes.
== Load Balancing ==
We have a number of mailing list servers, among others, which rely on sending lots of mail (hundreds of thousands to millions of messages per day). Distributing the load across a cluster of servers allows more mail to be sent out faster. It also lets us perform maintenance on one server while its load shifts to another node, reducing the downtime seen by our customers.
== Rate Limiting ==
We acknowledge that too much mail coming from a single IP address, even when it originates from multiple lists belonging to different clients, is treated by many providers as spam. We therefore distribute mail across multiple nodes whenever possible, to increase the rate at which we can deliver to providers such as Gmail, Microsoft, etc.
== Blacklist Management ==
Because of the above rate-limit issue, along with other false positives, we would like to build a cluster that can route mail through whichever node is still able to reach the destination server when one of our IPs has been temporarily blacklisted, while we work with the provider to get whitelisted or delisted.
== Changes in 3.x ==
I noticed a few big changes in 3.x, in particular that it no longer relies on RabbitMQ or on the "cron" feature of the previous version.
Is it safe to assume that simply setting up a centralized MariaDB server, then pointing each node at it along with per-node configuration (primarily DNS related), is all that is needed to build the cluster I am after?
In the past I was instructed to set up only "one" node running the "cron" and "RabbitMQ" instance... How does 3.x handle these tasks now that the "cron" and "RabbitMQ" requirements have been removed?
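To make the question concrete, here is roughly what I imagine each node's postal.yml would contain, with only the shared database in common. All key names and hostnames below are illustrative rather than taken from the docs, so verify them against the Postal documentation for your exact version:

```yaml
# Hypothetical postal.yml fragment for one node -- key names may differ
# between Postal versions; check the version-specific documentation.
main_db:
  host: db.internal.example.com   # the shared MariaDB / Percona cluster
  username: postal
  password: change-me
  database: postal

dns:
  # Per-node values: each node advertises its own identity here.
  smtp_server_hostname: postal-node1.example.com
```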
== Internal Routing ==
From an internal routing perspective, "outbound" mail from any of our "Service" nodes to "Postal" would be handled by software on those nodes via a "transport map" that uses a custom script to check the health of each "Postal" node (deliverability per destination domain). Our system checks whether each "Postal" node can connect to a given destination provider and whitelists the successful ones, then relays outbound traffic from the "Service" node to one of those "Postal" nodes for delivery.
"Inbound" traffic would be done in a similar manner, whereby a message would get received at one of the "Postal" nodes by way of MX routing, then forwarded onto the appropriate "Service" node.
Anyway, sorry for the long post; we would appreciate your feedback on this!
*** We're also looking at deploying a "master-master" cluster of MariaDB servers to add further fault tolerance and performance improvements ***