
Faktory Configuration within AWS Fargate #356

Closed
boushley opened this issue May 19, 2021 · 10 comments

Comments

@boushley

  • Which Faktory package and version? Faktory Enterprise via Docker version 1.5.1
  • Which Faktory worker package and version? Ruby version 1.1.0

We're trying to sort out a good pattern for configuring Faktory when running in AWS Fargate. (We're migrating from Sidekiq, hence the comparisons to that flow below.)

The primary configuration we're performing is periodic/cron jobs. With Sidekiq, updates to the schedule config flowed naturally with the code. Now we're looking at how we can get configuration files from the GitHub repo (where we want them to live, so we can adjust them in step with the code) onto the EFS file system that we're mounting to our Faktory container.

The current solution we're looking at building is a script in our build pipeline that publishes configuration files to S3. Then we would need to write a script that pulls those config files from S3 and places them into Faktory's config directory. After pulling the new config we would need to send a HUP signal to the Faktory process. If we want to continue running inside of AWS Fargate, we would need to create a custom Docker image with a different entrypoint that runs our S3 pull script and also runs the Faktory server process.
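The entrypoint part of that idea could be sketched roughly as below. This is only an illustration of the shape we have in mind, not a working setup: the bucket name is a placeholder, and `CONFIG_SYNC_CMD` / `FAKTORY_BIN` are hypothetical overridable variables (defaulting to the AWS CLI and the Faktory binary) so the logic can be exercised outside the container.

```shell
#!/bin/sh
# Sketch of a custom-image entrypoint: sync config from S3, then run
# Faktory and forward SIGHUP to it so a reload can still be triggered.
run_faktory_with_config() {
  # 1. Pull the latest TOML config before the server starts.
  #    (Placeholder default; bucket name is illustrative.)
  ${CONFIG_SYNC_CMD:-aws s3 sync s3://my-config-bucket/faktory/ /etc/faktory/} || return 1

  # 2. Start Faktory in the background and forward SIGHUP, so the
  #    wrapper can relay a config-reload signal to the server process.
  ${FAKTORY_BIN:-/faktory} &
  pid=$!
  trap 'kill -HUP "$pid"' HUP
  wait "$pid"
}
```

The open question below is how anything running in Fargate would deliver that HUP in the first place.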

Are you aware of any examples / patterns of how users set something like this up?

Is there any way to manage config, specifically periodic jobs, via the Faktory API? That way a client could submit the cron jobs to Faktory, and we could avoid the need for syncing TOML config files between repos and sending HUP signals to Docker processes.

Thanks for any insights you can provide.

@mperham
Collaborator

mperham commented May 19, 2021

Cron data is kinda weird because it's app data and so is more mutable than typical Faktory config. You want your Faktory cron config to live with your app code. Assume you have an /opt/someapp Rails app. You can use Docker to mount a mutable directory at /etc/faktory, as noted here:

https://github.com/contribsys/faktory/wiki/Docker#read-only-configuration

-v /opt/someapp/config/faktory:/etc/faktory

I don't know if AWS has a nice way to mount an S3 bucket as a Docker volume, but those are my initial thoughts. Maybe an EBS volume?
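For the plain-Docker case above, a full invocation might look something like the following. This is a sketch: the image tag, port bindings, and data-volume path are illustrative, so check the wiki page linked above for the authoritative paths.

```
docker run --rm -d \
  -v /opt/someapp/config/faktory:/etc/faktory \
  -v faktory-data:/var/lib/faktory \
  -p 127.0.0.1:7419:7419 \
  -p 127.0.0.1:7420:7420 \
  contribsys/faktory:1.5.1
```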

@boushley
Author

Currently we have Faktory data and config living in EFS, since it's the easiest way to attach persistent storage to a Faktory container.

I guess we could configure a separate container that contains our app code and mounts the Faktory EFS. We could then copy the configuration into the EFS volume. We would want to do this per release, so we could probably combine it with our container that currently runs migrations.

That would get the content into the proper directory; however, I still don't see a way to send a HUP to Faktory in ECS. Is there any way to signal Faktory via the API, instead of a process HUP, that it should reload config?
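The copy step in that per-release container could be as small as the helper below. It's a sketch with hypothetical paths: the source would be the app checkout's `config/faktory` directory, and the destination would be the EFS mount backing Faktory's `/etc/faktory`.

```shell
#!/bin/sh
# Hypothetical per-release deploy step (e.g. alongside migrations):
# copy the app repo's Faktory TOML files onto the mounted EFS volume.
sync_release_config() {
  src="$1"   # e.g. /opt/someapp/config/faktory in the app image
  dest="$2"  # e.g. the EFS mount that backs /etc/faktory
  mkdir -p "$dest"
  cp "$src"/*.toml "$dest"/
}
```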

@mperham
Collaborator

mperham commented May 20, 2021

There is no remote way to signal HUP.

What I'm hearing is that putting things on the local filesystem is increasingly painful for customers using containers. I need to think about alternative ways to get configuration into Faktory. If you have any ideas, please speak up. Perhaps the right thing is to persist config in Redis and provide a way to use curl to update the bytes for deploys, e.g.:

for x in config/faktory/*.toml; do
  curl -d "$(basename "$x")=@$x" https://faktory:7420/config/update
done

@boushley
Author

boushley commented May 20, 2021

I don't plan to expose the Redis instance being used by Faktory, and I'm not sure I'd want to be pushing data into the internal store directly.

Populating the files into the file system seems doable with EFS, although it's definitely a fair bit of gymnastics.

At this point the blocker is having a way to signal to Faktory that it should check its config. I imagine even if config was pushed into Redis and we could change it directly, we'd still need a way to signal to Faktory that new config is ready.

@boushley
Author

I guess I didn't read the curl example closely enough to fully understand what you meant at first.

Yes, if there was an endpoint like /config/update where I could POST config contents, that would certainly help. Although it looks like that would likely be append-only, unless we had some way of specifying the filename the contents are replacing.

@mperham
Collaborator

mperham commented May 20, 2021

That's the point of the basename: you POST cron.toml=<contents of cron.toml> and Faktory stores the bytes in Redis at the cron.toml key. Inside Redis it would use a config Hash which has a set of filename keys and byte values:

config = {
  "cron.toml" => <...bytes...>,
  "web.toml"  => <...bytes...>,
}

There's no easy way to do a transactional update though... How would you upload five files and have it HUP after the last...? Unless I added a /config/reload endpoint that is like HUP...

@boushley
Author

Again, I guess I didn't analyze the curl example well enough. That would indeed solve our problem.

You're right that this would likely result in multiple reloads. We could probably minimize this: since the updates would likely come in a loop as you demoed, we could debounce reloads for a second or two before triggering the HUP.

That said, if we have a /config/reload endpoint that triggers the same behavior as HUP that alone would allow us to proceed with the file system approach.

@mperham
Collaborator

mperham commented May 20, 2021

#357

@mperham
Collaborator

mperham commented May 20, 2021

I hope that issue is sufficient and you can find a way to make the current setup work for now.

@mperham mperham closed this as completed May 20, 2021
@boushley
Author

That issue definitely looks like it would address our needs.
