Faktory Configuration within AWS Fargate #356
Cron data is kinda weird because it's app data and so is more mutable than typical Faktory config. You want your Faktory cron config to live with your app code. Assume you have an /opt/someapp Rails app. You can use Docker to mount a mutable directory at /etc/faktory, as noted here: https://github.com/contribsys/faktory/wiki/Docker#read-only-configuration
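That Docker setup might look like the following Compose-style sketch. This is illustrative only, not a tested configuration: the service layout, volume names, and host paths are assumptions, with the app repo's cron config mounted into `/etc/faktory` as the wiki page describes.

```yaml
# Sketch only: mount the app's mutable cron config into the directory
# Faktory reads. Volume names and host paths are placeholder assumptions.
services:
  faktory:
    image: contribsys/faktory:latest
    volumes:
      - faktory-data:/var/lib/faktory    # persistent server data
      - ./config/faktory:/etc/faktory    # cron config, versioned with the app
    ports:
      - "7419:7419"   # worker protocol
      - "7420:7420"   # web UI
volumes:
  faktory-data:
```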
I don't know if AWS has a nice way to mount an S3 bucket as a Docker volume, but those are my initial thoughts. Maybe an EBS volume?
Currently we have Faktory's data and config living in EFS, since that's the easiest way to attach persistent storage to a Faktory container. I guess we could configure a separate container that contains our app code and mounts the Faktory EFS, and have it copy the configuration into the EFS volume. We would want to do this per release, so we could probably combine it with the container that currently runs migrations. That would get the content into the proper directory; however, I still don't see a way to send a HUP to Faktory in ECS. Is there any way to signal Faktory via the API, instead of a process HUP, that it should reload config?
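As a sketch of that per-release copy step (both paths are placeholders; the real destination would be wherever the EFS volume is mounted in the Faktory task):

```shell
# Hypothetical per-release step: copy cron config from the app checkout
# into the config directory Faktory reads on the shared EFS volume.
# Both paths are placeholders for illustration.
sync_faktory_config() {
  src="$1"    # e.g. config/faktory in the app repo
  dest="$2"   # e.g. /mnt/faktory-efs/conf.d on the EFS mount
  mkdir -p "$dest"
  cp "$src"/*.toml "$dest"/
}
```

This could run from the same one-off container that handles migrations, as suggested above.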
There is no remote way to signal HUP. What I'm hearing is that putting things on the local filesystem is increasingly painful for customers using containers. I need to think about alternative ways to get configuration into Faktory. If you have any ideas, please speak up. Perhaps the right thing is to persist config in Redis and provide a way to push it over HTTP, something like:

```shell
for x in config/faktory/*.toml; do
  curl -d $(basename $x)=@$x https://faktory:7420/config/update
done
```
I don't plan to expose the Redis instance being used by Faktory directly. Not sure I'd want to be pushing data into the internal store directly anyway. Populating the files into the file system seems doable with EFS, although it's definitely a fair bit of gymnastics. At this point the blocker is having a way to signal to Faktory that it should check its config. I imagine even if config were pushed into Redis and we could change it directly, we'd still need a way to signal to Faktory that new config is ready.
I guess I didn't read the curl example closely enough. Yes, if there was an endpoint like that, it would work for us.
That's the point of the endpoint sketched above.
There's no easy way to do a transactional update though... How would you upload five files and have it HUP after the last...? Unless I added a separate endpoint for that...?
Again, I guess I didn't analyze the curl well enough; that would indeed solve our problem. You're right that this would likely result in multiple reloads. We could probably minimize this, knowing that the updates likely come in a loop as you demoed, by debouncing reloads for a second or two before triggering the HUP. That said, if we had a dedicated reload endpoint, that would be simpler still.
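The "one reload per batch" idea could be sketched like this. Neither HTTP endpoint exists today; `upload_cmd` and `reload_cmd` are stand-ins for whatever Faktory might eventually expose (e.g. the curl calls discussed above):

```shell
# Sketch only: push every config file, then trigger a single reload once
# the whole batch is in place, instead of one HUP per file.
# upload_cmd and reload_cmd are placeholders for hypothetical endpoints.
push_config_batch() {
  dir="$1"; upload_cmd="$2"; reload_cmd="$3"
  for f in "$dir"/*.toml; do
    "$upload_cmd" "$f"       # push each file individually
  done
  "$reload_cmd"              # single reload after the batch completes
}
```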
I hope that issue is sufficient and you can find a way to make the current setup work for now.
That issue definitely looks like it would address our needs.
We're trying to sort out a good pattern for configuring Faktory when running in AWS Fargate. (We're migrating from Sidekiq, hence the comparisons to that flow below.)
The primary configuration we're performing is periodic/cron jobs. With Sidekiq, updates to the schedule config flowed naturally with the code. Now we're looking at how to get configuration files from the GitHub repo, where we want them to live so we can adjust them in step with the code, onto the EFS file system that we mount into our Faktory container.
The current solution we're looking at building: a step in our build pipeline that publishes configuration files to S3, plus a script that pulls those config files from S3 and places them into Faktory's config directory. After pulling the new config we would need to send a HUP signal to the Faktory process. If we want to keep running inside AWS Fargate, we would also need to build a custom Docker image with a different entrypoint that runs our S3-pull script and then starts the Faktory server process.
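A minimal sketch of such a custom image, assuming an entrypoint script that pulls config before exec'ing the server. The bucket name, paths, and script contents are assumptions, and the stock image would additionally need the AWS CLI (or equivalent) installed for this to work:

```dockerfile
# Hypothetical custom image: the stock Faktory server, wrapped in an
# entrypoint that syncs config from S3 first. Bucket name is a placeholder.
FROM contribsys/faktory:latest

# docker-entrypoint.sh (sketch):
#   #!/bin/sh
#   aws s3 sync "s3://example-bucket/faktory-config/" /etc/faktory/conf.d/
#   exec /faktory -b :7419 -w :7420
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
```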
Are you aware of any examples / patterns of how users set something like this up?
Is there any way to manage config, specifically periodic jobs, via the Faktory API? That way a client could submit the cron jobs to Faktory directly and we could avoid syncing TOML config files between repos and sending HUP signals to Docker processes.
Thanks for any insights you can provide.