
Reload credentials before writing to avoid clashes with other processes #13

Open · wants to merge 1 commit into master from feature/be-nice-to-other-processes

Conversation

@mmrwoods (Contributor)

The credentials file is currently read once in main, and then those
credentials are stored in memory and used as the basis for writing to
the credentials file when update_credentials_file is eventually called.

This allows lots of time for other processes to write to the credentials
file, and awsmfa then obliterates those changes when it writes the file.

There is no easy way to ensure that multiple processes co-operate, but
we can at least reduce the timeframe for clashes, and this commit does
that by simply reloading the credentials before updating and writing.
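
(For readers skimming the thread, here is a minimal sketch of the reload-before-write idea described above — hypothetical helper and parameter names, not the actual awsmfa code: re-read the credentials file immediately before writing, touch only the target profile, and replace the file atomically.)

```python
import configparser
import os
import tempfile

def update_credentials_file(path, target_profile, new_values):
    # Re-read the credentials file immediately before writing so that
    # changes made by other processes since startup are preserved.
    config = configparser.ConfigParser()
    config.read(path)

    # Only the target profile is modified; every other profile is kept
    # exactly as it appears in the freshly read file.
    if not config.has_section(target_profile):
        config.add_section(target_profile)
    for key, value in new_values.items():
        config.set(target_profile, key, value)

    # Write to a temporary file in the same directory, then atomically
    # replace the original to avoid partial writes.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as tmp:
            config.write(tmp)
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise
```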

@mmrwoods (Contributor, Author)

Hi @dcoker,

I should have given some background on why I'm proposing this change...

I use a separate script to rotate my access keys for all aws profiles daily. I also use multiple computers, so I use a synchronisation service to copy changes to my aws credentials to those computers. And I use awsmfa to authenticate with MFA regularly.

This means I can have multiple processes making changes to the aws credentials file within a few minutes of me switching on a computer in the morning.

As awsmfa currently reads the entire credentials file on initialisation, and then writes the entire file after authenticating with the MFA token, this leaves plenty of scope for other processes to modify the credentials after awsmfa has read them; all of those changes then get wiped out when awsmfa writes the credentials file.

I can't entirely eliminate the possibility of multiple processes writing to the same file clobbering each other's changes, but modifying awsmfa to re-read the credentials before writing to the target profile is a big help. I've actually had no issues with invalid keys since making that change locally.

Thanks

Mark

@dcoker (Owner) commented Jun 17, 2020

Thanks for taking the time to contribute this patch. The solution you propose significantly reduces the risk of multiple processes racing to modify the file, but I think this situation calls for a stronger solution:

  1. Ensure only one awsmfa process is operating on the file at a time by using some form of advisory lock. There may be PyPI libraries that will do this in a cross-platform way.

  2. Serial execution of other tools can be accomplished by wrapping them in /usr/bin/flock (or whatever win32 equivalent might exist).

  3. The synchronization tool probably does not respect the locks, which may lead to races despite the other processes following a locking protocol. To minimize the risk of this, awsmfa should test the credentials file for changes immediately before the atomic rename, aborting if it has changed. Note that re-reading the file is not strong enough; the operations awsmfa performs to derive credentials from the first read may no longer be valid given the credentials from the second read, which would result in the profiles being inconsistent. It would be better to fail so that the user can try again.

Thanks!
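
(For context on point 1 above, a rough sketch of what an advisory lock around the credentials update could look like on POSIX systems. The helper is hypothetical and not part of awsmfa; `fcntl` is Unix-only, so a cross-platform implementation would need a PyPI library such as `portalocker` or `filelock`.)

```python
import fcntl

def with_credentials_lock(lock_path, action):
    # Hold an exclusive advisory lock while the update runs. Other
    # processes that honour the same lock file block here until the
    # lock is released; processes that ignore it are unaffected.
    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)
        try:
            return action()
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```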

@mmrwoods (Contributor, Author) commented Jun 17, 2020

Hi,

As the proposed change significantly reduces the risk of multiple processes racing to modify the file, could it not be merged to improve the situation, rather than waiting for other improvements?

I don't think either 1 or 2 is relevant to my situation, and I would guess they are unlikely to be relevant to other people running into this kind of problem. I rarely, if ever, run multiple awsmfa processes in parallel, and there is no practical way to ensure that every process that might modify the aws credentials file uses flock or similar to serialise write access.

For example, we use https://github.com/rhyeal/aws-rotate-iam-keys to rotate our aws access keys daily. On macOS, the Homebrew formula for this creates a launch agent which runs the key-rotation commands daily. It's not practical to ask people to modify the XML configuration for this launch agent so that it runs the script under the same file lock as invocations of awsmfa and any other processes that might write to the credentials file (the XML file is also generated automatically, so changes would be overwritten whenever the Homebrew package is updated).

As for 3, yes, you are correct, the synchronisation tool is a GUI app that would not be able to use the same file locks to serialise write access to the credentials file.

BTW, I did consider things like generating a checksum of the file on read and then verifying it before write, but I figured that as there is no practical way to serialise writes to the aws credentials file, the best we can really hope for is to reduce the timeframe for race conditions.

When you say that re-reading the file is not enough, because this could leave the profiles in an inconsistent state if the credentials have changed between the first read and the second, I'm not sure this is relevant... as long as the credentials are valid for the identity profile at the time the session token is acquired, isn't that enough? Does it matter if the access keys for the identity profile are then modified by another process? Won't the target profile credentials remain valid for awsmfa to write to the credentials file?
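
(A minimal sketch of the checksum idea mentioned above, with hypothetical helper names: record a digest of the file when it is first read and refuse to write if the digest has since changed.)

```python
import hashlib

def file_digest(path):
    # Hash the whole file so any concurrent modification is detected.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def write_if_unchanged(path, original_digest, write_credentials):
    # Abort rather than overwrite if another process has modified the
    # file since it was first read.
    if file_digest(path) != original_digest:
        raise RuntimeError("credentials file changed since it was read; aborting")
    write_credentials(path)
```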

@mmrwoods force-pushed the feature/be-nice-to-other-processes branch from e3c9da7 to 3fef374 on June 19, 2020 at 08:26.
@mmrwoods (Contributor, Author)

Hi @dcoker,

I'm just wondering if there is anything I can do to progress this?

This small change fixes the problem I've had with awsmfa clobbering changes made to the aws credentials file by other processes.

I realise you have suggested an alternative to simply re-reading before writing, namely to "test the credentials file for changes immediately before the atomic rename, aborting if it has changed", but I'm not sure how helpful that will be...

awsmfa reads credentials from an identity profile and writes temporary credentials to a target profile. If awsmfa aborts whenever there has been any change to the credentials file while it has been operating, that will result in unnecessary errors/aborts which need to be explained to users, and, in the case of key rotation, potentially invalid identity profile credentials left in the credentials file (awsmfa deletes the existing key before creating a new one, and then updates the credentials file).

What really matters here is that the target profile is valid at the time of writing, and that awsmfa does not clobber other changes to the credentials file. If another process has updated the identity profile, or indeed any other profile in the credentials file, that doesn't matter as long as the target profile is valid and awsmfa only updates the target profile.

As the target profile keys have just been updated by awsmfa, they are very likely to be valid, and re-reading the credentials file before writing massively reduces the chances of awsmfa clobbering other changes to the file. One small improvement over simply re-reading the credentials could be to verify the target profile before writing, but this probably only matters in the case of key rotation, where the identity and target profiles used for updating the credentials file are the same.
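
(A sketch of what verifying the target profile before writing might look like, assuming boto3 is available — awsmfa talks to STS, so some AWS SDK is already a dependency. The helper name and parameters are hypothetical.)

```python
import boto3
from botocore.exceptions import ClientError

def target_credentials_are_valid(access_key, secret_key, session_token=None):
    session = boto3.Session(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        aws_session_token=session_token,
        region_name="us-east-1",  # any region works for GetCallerIdentity
    )
    try:
        # GetCallerIdentity requires no permissions and succeeds for
        # any valid set of credentials, so it makes a cheap check.
        session.client("sts").get_caller_identity()
        return True
    except ClientError:
        return False
```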

Mark

@mmrwoods (Contributor, Author)

Hi @dcoker,

I wonder if you could let me know your thoughts on how to progress this. Since I introduced this change on my local machines two months ago I've had no issues with awsmfa clobbering changes from other processes. Before this, I'd run into problems probably once a week.

On my team we recommend all developers use https://github.com/rhyeal/aws-rotate-iam-keys to rotate access keys daily. We also use awsmfa from shell scripts and python apps to perform MFA as required.

As we expand the number of people who use both aws-rotate-iam-keys and awsmfa, we're going to run into more and more examples of awsmfa clobbering changes to the aws credentials file made by aws-rotate-iam-keys, so I really need to fix this issue before finding myself providing support to people less familiar with aws credentials files and access keys.

I would much prefer to come up with a solution that you are happy with than fork the project and end up with a custom version of awsmfa.

Mark

@mmrwoods (Contributor, Author) commented Nov 7, 2023

Hi @dcoker,

Just wondering if you would reconsider merging this minor change to reduce the timeframe for race conditions between awsmfa and other processes writing to the credentials file?

In your initial response you acknowledged that this change "significantly reduces the risk of multiple processes racing to modify the file" but also noted that you thought the "situation calls for a stronger solution".

I don't think any of the proposed stronger solutions are practical, given the user may have little control over the other processes that are writing to the credentials file (e.g. cloud file synchronisation apps like sync.com).

We have been using a fork at https://github.com/OUP/awsmfa which includes this small change, and in a couple of years of people using that fork we have had no reports of race-condition issues.

Maybe the existing solution is good enough for most users and could be merged?
