
Lock when generating encryption key #107

Merged: 10 commits merged into master from dblock on Aug 16, 2018
Conversation

@dpanayotov (Member) commented Aug 7, 2018

Fixes #106

Opened #116 for test

@coveralls commented Aug 7, 2018

Pull Request Test Coverage Report for Build 865

  • 29 of 33 (87.88%) changed or added relevant lines in 3 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.07%) to 90.71%

Changes missing coverage:

| File | Covered Lines | Changed/Added Lines | % |
|------|---------------|---------------------|------|
| pkg/sm/sm.go | 6 | 8 | 75.0% |
| storage/postgres/security.go | 20 | 22 | 90.91% |

Totals:
  • Change from base Build 856: -0.07%
  • Covered Lines: 1826
  • Relevant Lines: 2013

💛 - Coveralls

		return err
	}
	if len(encryptionKey) != 0 {
		return nil

Member:
I think you should return the key here instead of nil.

@dzahariev previously approved these changes Aug 7, 2018
@dpanayotov changed the title from "Only leader should set encryption key" to "Lock when generating encryption key" Aug 8, 2018
pkg/sm/sm.go Outdated
@@ -155,6 +155,9 @@ func (smb *ServiceManagerBuilder) Build() *ServiceManager {
}

func initializeSecureStorage(secureStorage storage.Security) error {
	if err := secureStorage.Lock(); err != nil {
		return err

Contributor:
We don't want to return an error if the database is locked. I think it should be (see the sketch below):

  1. Try to lock the database.
  2. If it is already locked, wait some time (2 seconds, for example).
  3. Try to lock again.
  4. Repeat several times (5, for example).
  5. If the lock still couldn't be acquired, panic.
  6. If the lock is acquired, check for the encryption key. If it is already there, do nothing; otherwise generate one.
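
A rough sketch of that flow, assuming an illustrative retry count and interval; getKey and generateKey are hypothetical placeholders for however the key is actually fetched and generated:

```go
// sketch only: assumes `import "time"` and the PR's storage.Security interface
func initializeSecureStorage(secureStorage storage.Security) error {
	const (
		maxAttempts = 5               // step 4: repeat several times
		interval    = 2 * time.Second // step 2: wait between attempts
	)
	locked := false
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err := secureStorage.Lock(); err == nil { // steps 1 and 3
			locked = true
			break
		}
		time.Sleep(interval)
	}
	if !locked {
		panic("could not acquire lock for encryption key initialization") // step 5
	}
	defer secureStorage.Unlock()

	// step 6: generate a key only if one is not already present
	key, err := getKey(secureStorage) // hypothetical fetch helper
	if err != nil {
		return err
	}
	if len(key) != 0 {
		return nil
	}
	return generateKey(secureStorage) // hypothetical generation helper
}
```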

Contributor:
It appears that advisory locks block until a lock can be acquired:
https://medium.com/@codeflows/application-level-locking-with-postgres-advisory-locks-c29759eeb7a7

// Lock acquires a database lock so that only one process can manipulate the encryption key.
// Returns an error if the process has already acquired the lock
func (s *securityStorage) Lock() error {
	if s.isLocked {

Member:
Would it make sense to not rely on a second in-memory lock, and instead rely directly on the response from the DB query?
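
For illustration, a minimal sketch of that idea, assuming pg_try_advisory_lock is acceptable here (it returns a boolean immediately instead of blocking); this is a suggestion sketch, not the PR's code:

```go
// Lock consults only the database: pg_try_advisory_lock reports whether
// the lock was acquired, so no in-memory isLocked flag is required.
func (s *securityStorage) Lock() error {
	var acquired bool
	if err := s.db.QueryRow("SELECT pg_try_advisory_lock($1)", securityLockIndex).Scan(&acquired); err != nil {
		return err
	}
	if !acquired {
		return fmt.Errorf("lock is held by another session")
	}
	return nil
}
```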

// Returns an error if the process has already acquired the lock
func (s *securityStorage) Lock() error {
	if s.isLocked {
		return fmt.Errorf("Lock is already acquired")

Member:
Why an error? The fact that the lock is locked means it's locked, and if we decided to do a blocking select, we should stick with that in all situations (even within the same node).

		return fmt.Errorf("Lock is already acquired")
	}
	_, err := s.db.Exec("SELECT pg_advisory_lock($1)", securityLockIndex)
	if err != nil {

Member:
.rre this

	return nil
}

_, err := s.db.Exec("SELECT pg_advisory_unlock($1)", securityLockIndex)

Member:
.rre


// Unlock releases the database lock.
func (s *securityStorage) Unlock() error {
	if !s.isLocked {

Member:
Again, I would prefer to drop the in-memory lock.


"github.com/DATA-DOG/go-sqlmock"
"github.com/Peripli/service-manager/security"
"github.com/jmoiron/sqlx"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func TestAPostgresStorage(t *testing.T) {

Member:
Probably it would make sense to add some tests in sm_test.go (talking to a real DB and validating basic scenarios):

  1. failure occurs if the first key is missing from application.yml/env/flags
  2. things happen on startup if the first key is present:
    2.1. a second key is generated
    2.2. locking works (lock twice, verify the first is OK, and the second returns the correct DB response or waits and eventually proceeds when the first unlocks); see the sketch below
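
For scenario 2.2, a loose Ginkgo/Gomega sketch; firstStorage and secondStorage are hypothetical fixtures backed by two separate database connections, and the Lock/Unlock signatures follow the revision under review:

```go
It("blocks the second Lock until the first Unlock", func() {
	Expect(firstStorage.Lock()).To(Succeed())

	done := make(chan error, 1)
	go func() { done <- secondStorage.Lock() }()

	Consistently(done).ShouldNot(Receive()) // the second caller is still waiting
	Expect(firstStorage.Unlock()).To(Succeed())
	Eventually(done).Should(Receive(BeNil())) // ...and proceeds once the first unlocks
})
```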

pkg/sm/sm.go Outdated
@@ -155,6 +155,9 @@ func (smb *ServiceManagerBuilder) Build() *ServiceManager {
}

func initializeSecureStorage(secureStorage storage.Security) error {

Member:
Just thinking out loud: should we explicitly fail if this method actually attempts to generate a new key (len(encryptionKey) == 0) while we already have data in the database? Is there a sane way to validate this? We would be in a very messed-up situation if SM started in such a state.

pkg/sm/sm.go Outdated
@@ -155,6 +155,9 @@ func (smb *ServiceManagerBuilder) Build() *ServiceManager {
}

func initializeSecureStorage(secureStorage storage.Security) error {

Member:
On the line

	return fmt.Errorf("Could not generate encryption key: %v", err)

wrapping the error with %v is fine, but in some places in SM we use %s and in others %v. Should we try to be consistent?
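
For error values the two verbs print identically, since fmt formats anything implementing the error interface via its Error() method for both %v and %s, so this is purely a style-consistency question. A quick check:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	err := errors.New("boom")
	fmt.Println(fmt.Errorf("Could not generate encryption key: %v", err))
	fmt.Println(fmt.Errorf("Could not generate encryption key: %s", err))
	// Both print: Could not generate encryption key: boom
}
```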

if os.Getenv("CLEAN_ENV") == "" {
	Skip("Environment is not clean to validate behavior of parallel encryption key generation")
} else {
	command := exec.Command(binaryPath)

Member:
Pass the port as a flag here so both processes can use different ports.
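
Something along these lines; the flag name is hypothetical, the point is only that each spawned process gets its own port:

```go
command := exec.Command(binaryPath, "--port=8080")
secondCommand := exec.Command(binaryPath, "--port=8081")
```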

// Lock acquires a database lock so that only one process can manipulate the encryption key.
// Returns an error if the process has already acquired the lock
func (s *securityStorage) Lock() error {
	if s.isLocked {

Member:
Add a mutex here to safeguard against possible future race conditions.


// Unlock releases the database lock.
func (s *securityStorage) Unlock() error {
	if !s.isLocked {

Member:
Add a mutex here to safeguard against possible future race conditions.

// Lock acquires a database lock so that only one process can manipulate the encryption key.
// Returns an error if the process has already acquired the lock
func (s *securityStorage) Lock(ctx context.Context) error {
	s.mutex.Lock()

Member:
Is this the correct lock here (i.e. this, or an RWMutex)?


Member (Author):
RWMutex provides RLock and RUnlock only for explicitly read-only situations; RWMutex.Lock/Unlock is the same as Mutex.Lock/Unlock.
Since this code both reads and writes the value, there's no difference between RWMutex.Lock and Mutex.Lock.
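
A minimal sketch of the guarded check-and-set under discussion (details beyond the visible snippet are assumed): since Lock both reads and then writes s.isLocked, it needs an exclusive lock, and an RLock would let two goroutines pass the check concurrently:

```go
func (s *securityStorage) Lock(ctx context.Context) error {
	s.mutex.Lock()         // exclusive: we read and then write s.isLocked
	defer s.mutex.Unlock() // an RLock would let two goroutines pass the check together

	if s.isLocked {
		return fmt.Errorf("lock is already acquired")
	}
	if _, err := s.db.ExecContext(ctx, "SELECT pg_advisory_lock($1)", securityLockIndex); err != nil {
		return err
	}
	s.isLocked = true
	return nil
}
```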

@dpanayotov merged commit f586159 into master on Aug 16, 2018
@dpanayotov deleted the dblock branch on August 25, 2018 at 13:39
Labels: bug (Something isn't working), 👋request review
5 participants