JWT claim-based authorization vault/bucket/role setup MinIO+AWS aka. Role Chaining #10

Closed
18 tasks done
chenkins opened this issue Jul 7, 2023 · 17 comments
Assignees: chenkins
Labels: design decision · enhancement (New feature or request) · katta-clientlib · katta-server (extension of Cryptomator Hub) · stop-starting-and-start-finishing (Get it done!)
Milestone: v1

Comments

@chenkins
Collaborator

chenkins commented Jul 7, 2023

Story

  • Persona: dev/ops/sec
  • Need: only create roles at hub setup time AWS-side
  • Purpose: prevent role escalation / complexity kills security

Acceptance Criteria

  • finalize bucket naming convention --> implementation in Configurable prefix for bucket names, roles and policies #15
  • where to store vaults in Keycloak: user attribute or roles...? If user attribute, use vault instead of vaults
  • cleanup dev-realm.json
    • re-use cryptomator client id for cipherduck, see discussion Finalize vault storage configuration API #6, minimal diff to upstream
    • apply same setup in staging Keycloak
    • more transparency in keycloak configs for integration tests and local runs with password grant - can we do without duplicating the config?
    • what about user001 in dev-realm.json? Add it on the fly for integration tests?
    • syncer should not have full admin role in dev-realm.json
  • Documentation aud etc.
  • proper solution to allow AssumeRoleWithWebIdentity if no vault is shared with us. -> not required if we go for shift7-ch/katta-clientlib#30
    • MinIO setting
    • AWS setting
  • Document design decision and considered alternatives
  • Custom mapper AWS
  • AWS/MinIO switch (which mapper -> already UMA?)
  • frontend implementation create bucket AWS + MinIO using AssumeRoleWithWebIdentity Hub frontend/client UI vault storage configuration for STS #3
  • implementation chained assume role in cyberduck/client (see the sketch after this list)
  • documentation OIDC + bucket creation role + role chaining roles Admin Documentation for setting up OIDC Provider at AWS/MinIO and testing vault creation. #23
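
A minimal sketch of the role chaining flow this issue is about, using the AWS SDK for JavaScript v3. The role ARNs are placeholders, and the VaultRequested session tag name is taken from the commits referenced later in this thread; this is an illustration of the intended flow, not the final implementation.

```ts
import { STSClient, AssumeRoleWithWebIdentityCommand, AssumeRoleCommand } from "@aws-sdk/client-sts";

// Sketch, assuming a first broadly-scoped role for all authenticated users and
// a second, vault-scoped role restricted via the VaultRequested session tag.
async function credentialsForVault(idToken: string, vaultId: string) {
  const sts = new STSClient({ region: "eu-west-1" });

  // Step 1: trade the OIDC token (vault IDs in the amr claim) for the first role.
  const first = await sts.send(new AssumeRoleWithWebIdentityCommand({
    RoleArn: "arn:aws:iam::123456789012:role/hub-web-identity", // placeholder
    RoleSessionName: `vault-${vaultId}`,
    WebIdentityToken: idToken,
  }));

  // Step 2: chain into the vault-scoped role, passing the requested vault as a
  // session tag so the second role's policy can be restricted to one bucket.
  const chained = new STSClient({
    region: "eu-west-1",
    credentials: {
      accessKeyId: first.Credentials!.AccessKeyId!,
      secretAccessKey: first.Credentials!.SecretAccessKey!,
      sessionToken: first.Credentials!.SessionToken!,
    },
  });
  const second = await chained.send(new AssumeRoleCommand({
    RoleArn: "arn:aws:iam::123456789012:role/hub-vault-access", // placeholder
    RoleSessionName: `vault-${vaultId}`,
    Tags: [{ Key: "VaultRequested", Value: vaultId }],
  }));
  return second.Credentials;
}
```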

Open Questions

Context

Implementation

@chenkins chenkins added enhancement New feature or request v1 katta-clientlib katta-clientlib katta-server katta-server (extension of Cryptomator Hub) labels Jul 7, 2023
@chenkins chenkins self-assigned this Jul 7, 2023
@chenkins chenkins changed the title from "Vault/Bucket/Role creation e2e testing MinIO+AWS" to "Vault/Bucket/Role setup MinIO+AWS" Jul 11, 2023
@chenkins
Collaborator Author

@ylangisc @overheadhunter @tobihagemann is it OK for you to use vaultId as bucketName when we create S3 buckets automatically for new vaults? This would keep things simple. Or do we need something fancier, like a hub-specific prefix, to make the buckets recognizable to AWS admins as coming from the same hub? I suggest we store the bucket name in the vault config (i.e. the vault JWE, see discussion in #6) when we create the vault - this would allow us to start with the vault UUID only and add a prefix or similar if the need arises, without "touching" existing vaults (see the sketch below).
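
A tiny sketch of this proposal; the prefix value is hypothetical and the point is only that the final name is stored, so the scheme can change later:

```ts
// Sketch: bucket name = optional hub-specific prefix + vault UUID. Starting
// with the bare UUID and persisting the resulting name in the vault JWE means
// a prefix can be introduced later without touching existing vaults.
function bucketName(vaultId: string, prefix = ""): string {
  return `${prefix}${vaultId}`.toLowerCase(); // S3 bucket names must be lowercase
}

bucketName("2ead052c-0f34-4f0d-92a4-bba24546c534");                // today
bucketName("2ead052c-0f34-4f0d-92a4-bba24546c534", "cipherduck-"); // later, if needed
```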

@tobihagemann
Collaborator

I'm also thinking about the S3 admins who might be wondering what these buckets are. I think a prefix with the product name would make sense and help with recognition. But since we don't have a product name yet, it's hard to define a prefix right now.

@overheadhunter
Collaborator

While the vault id is unique and constant, I agree that a prefix might help humans.

@chenkins
Collaborator Author

chenkins commented Jul 14, 2023

Summary of discussion with @ylangisc

Available keys for AWS web identity federation

You can use web identity federation to give temporary security credentials to users who have been authenticated through an OpenID Connect compliant OpenID Provider (OP) to an IAM OpenID Connect (OIDC) identity provider in your AWS account.
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html#condition-keys-wif

Likely interpretation: this list of keys=claims is exhaustive. See also:

Federated claims are not propagated into the AWS session and are not accessible in the trust policy.
Source: https://stackoverflow.com/questions/48492885/how-to-check-for-custom-openid-claim-in-an-iam-roles-trust-policy

And:

IAM and Cognito still does not allow you to use custom JWT claims in IAM permissions. This only works for a small subset of claims that Cognito sets by default like the Cognito user sub:
Source: https://repost.aws/questions/QUW1WibDWjQd2rOll4mDiPMA/federated-identity-authenticated-role-custom-claims

This rules out the possibility of (0a) adding separate custom claims <vaultId>: true or (0b) one custom claim "vaults": [<vaultId>, ...] to the token for AWS (the second approach (0b) would work in MinIO).

Furthermore, we found empirically that if a list/set is sent in the aud claim, trust policies are evaluated against only the first value in that list.

Currently, this leaves us with the following possibilities:

  1. "mis-use" the amr claim, which supports set operations (ForAnyValue:...). The amr claim is intended for

Authentication Methods References. JSON array of strings that are identifiers for authentication methods used in the authentication. For instance, values might indicate that both password and OTP authentication methods were used.
Source: https://openid.net/specs/openid-connect-core-1_0.html

  2. Concatenate the vault IDs into one aud claim (a string, not a set/array of strings) using a delimiter (e.g. |), and then test in the trust policy using the StringLike operator (e.g. |<vaultId>|).

The first option has the advantage that the same setup can be used for AWS and for MinIO. Furthermore, for the second option we'd probably have to write a custom mapper and deploy it to Keycloak: https://stackoverflow.com/questions/60767085/keycloak-map-multiple-user-attributes

Can we run into problems with the first approach?

  • Keycloak not under our control, different use of the amr claim?
  • AWS no longer supporting the claim, or evaluating the claims differently? -> risk seems minimal
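
For illustration, a hedged sketch of what the trust-policy condition could look like for each option, written as TypeScript constants; the OIDC provider name and the vault ID are placeholders:

```ts
// "keycloak.example.com/realms/hub" is an assumed OIDC provider name; AWS
// condition keys are formed as "<provider>:<claim>".
const oidc = "keycloak.example.com/realms/hub";

// Option 1: the multivalued amr claim, tested with a set operator.
const amrCondition = {
  "ForAnyValue:StringEquals": {
    [`${oidc}:amr`]: ["2ead052c-0f34-4f0d-92a4-bba24546c534"], // one entry per shared vault
  },
};

// Option 2: a single delimiter-concatenated aud string, tested with StringLike.
const audCondition = {
  StringLike: {
    [`${oidc}:aud`]: "*|2ead052c-0f34-4f0d-92a4-bba24546c534|*",
  },
};
```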

@chenkins
Collaborator Author

chenkins commented Jul 14, 2023

Complement: MinIO has no concept of trust policies and roles as in AWS; it has only two modes for OIDC STS:

  • claim-based: you specify a claim, and all the values in that claim are mapped to a set of policies. MinIO is more powerful in this regard, in that you can activate multiple policies in one STS call, whereas in AWS you can assume only one role (multiple policies can be attached to that role, but this attaching is static and cannot be done during trust policy evaluation).
  • role-based: the OIDC token entitles you to assume the policy specified in the role_policy of the OIDC provider configuration (we could probably register a separate OIDC provider in MinIO for each policy=vault, all pointing to the same Keycloak URL; to be tested).
    See: https://min.io/docs/minio/linux/reference/minio-mc/mc-idp-openid.html#mc.idp.openid.add

This rules out evaluating a string claim in MinIO in a StringLike manner; only verbatim matching of claim values is possible.

-> use amr instead of aud for now.
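
For reference, a minimal sketch of how the same AssumeRoleWithWebIdentity call can be pointed at a MinIO endpoint; the endpoint is a placeholder, and in claim-based mode MinIO maps the configured claim's values verbatim to policy names:

```ts
import { STSClient, AssumeRoleWithWebIdentityCommand } from "@aws-sdk/client-sts";

// Sketch, assuming MinIO at https://minio.example.com with claim-based OIDC STS.
async function minioCredentials(idToken: string) {
  const sts = new STSClient({
    region: "us-east-1",                   // required by the SDK, ignored by MinIO
    endpoint: "https://minio.example.com", // assumed MinIO endpoint
  });
  const res = await sts.send(new AssumeRoleWithWebIdentityCommand({
    WebIdentityToken: idToken, // token whose configured claim (e.g. amr) names the policies
    DurationSeconds: 3600,
    // No trust policy is evaluated in claim-based mode; a role ARN, where an
    // SDK insists on one, is not interpreted by MinIO.
  }));
  return res.Credentials;
}
```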

@chenkins
Collaborator Author

Complement for further reference:

aud for OAuth 2.0 Google client IDs of your application, when the azp field is not set. When the azp field is set, the aud field matches the accounts.google.com:oaud condition key.
Source https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html

-> if the azp claim is set, then the aud condition key is populated from the azp claim, and the values from the aud claim go into oaud.

@chenkins
Collaborator Author

https://auth0.com/docs/get-started/authentication-and-authorization-flow/which-oauth-2-0-flow-should-i-use

I have an application that needs to talk to different resource servers

If a single application needs access tokens for different resource servers, then multiple calls to /authorize (that is, multiple executions of the same or different Authorization Flow) needs to be performed. Each authorization will use a different value for audience, which will result in a different access token at the end of the flow.
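
Translated into code, the pattern looks roughly like this; the client id, redirect URI and audience values are illustrative, and whether Keycloak honours an audience parameter this way would still need a mapper or token exchange:

```ts
// Sketch: one authorization request per resource server, each with its own
// audience, yielding one access token per audience at the end of each flow.
function authorizeUrl(audience: string): string {
  const params = new URLSearchParams({
    client_id: "cryptomator",                       // assumed client id
    response_type: "code",
    redirect_uri: "http://127.0.0.1:8080/callback", // placeholder
    scope: "openid",
    audience,
  });
  return `https://keycloak.example.com/realms/hub/protocol/openid-connect/auth?${params}`;
}

const hubTokenUrl = authorizeUrl("cryptomatorhub");       // token for the hub API
const vaultsTokenUrl = authorizeUrl("cryptomatorvaults"); // token for vault storage
```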

@chenkins
Collaborator Author

Can we do more with https://www.keycloak.org/docs/latest/authorization_services/#_resource_overview? Not sure yet.

@chenkins
Collaborator Author

chenkins commented Aug 2, 2023

Another thread (thx @overheadhunter ): https://community.auth0.com/t/token-exchange-multiple-audiences/7188

https://auth0.com/docs/get-started/authentication-and-authorization-flow/which-oauth-2-0-flow-should-i-use

I have an application that needs to talk to different resource servers
If a single application needs access tokens for different resource servers, then multiple calls to /authorize (that is, multiple executions of the same or different Authorization Flow) needs to be performed. Each authorization will use a different value for audience, which will result in a different access token at the end of the flow.

@overheadhunter
Collaborator

Complement for further reference:

aud for OAuth 2.0 Google client IDs of your application, when the azp field is not set. When the azp field is set, the aud field matches the accounts.google.com:oaud condition key.
Source https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html

-> if the azp claim is set, then the aud condition key is populated from the azp claim, and the values from the aud claim go into oaud.

Did using the azp claim make any difference?

@chenkins
Collaborator Author

chenkins commented Aug 2, 2023

@overheadhunter I'll need to check empirically whether azp could be used with multiple values.

Strangely, the documentation says on the one hand

Define condition keys using the name of the OIDC provider followed by the claim (:aud, :azp, :amr, :sub). For roles used by Amazon Cognito, keys are defined using cognito-identity.amazonaws.com followed by the claim.

On the other hand, azp does not appear in the list of available keys for AWS web identity federation...

amr is the only claim for which they explicitly say it's multivalued:

The key is multivalued, meaning that you test it in a policy using condition set operators

Reading this, I would guess it will only pass the first value into evaluation.

chenkins added a commit that referenced this issue Aug 4, 2023
chenkins added a commit that referenced this issue Aug 4, 2023
@chenkins chenkins removed the v1 label Aug 24, 2023
@chenkins chenkins added this to the v1 milestone Aug 24, 2023
@chenkins
Collaborator Author

chenkins commented Aug 31, 2023

Adding vault UUIDs to an ID token quickly hits size restrictions (e.g. "Serialized token too large for session") at approx. 10-15 UUIDs. See also https://stackoverflow.com/questions/686217/maximum-on-http-header-values for header limits; in our case, however, the error is returned in the body:

<ErrorResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
  <Error>
    <Type>Sender</Type>
    <Code>PackedPolicyTooLarge</Code>
    <Message>Serialized token too large for session</Message>
  </Error>
  <RequestId>72a8a854-4a74-47f0-8e1f-e5f9cf145fb8</RequestId>
</ErrorResponse>

Approaches, aka. Auslegeordnung (laying out the options)

  • live with this restriction
    • if we have vaults top-level, we might also have a restriction at the UI
    • in most cases probably not a problem....
  • shorten the UUIDs, at the risk of collisions (some people could then have read/write access to the bucket, but they could not decrypt the data, and they would only see the buckets if they got access to the ID token via dev tools and listed buckets with it) - see the sketch at the end of this comment
    • still restrictions on the number of vaults per user, possibly hard-to-predict and hard-to-balance ones.
  • We find a way of using https://www.keycloak.org/docs/latest/authorization_services/#_resource_overview.
    • Highly unlikely to get an ID token this way, though.
  • Instead of adding vault IDs to tokens, edit/create/delete roles on the S3 side when users are added/removed to/from a vault
    • vault owners would need to get (temporary) access to create roles;
    • users would need to be given (temporary) access to create buckets and roles (maybe restricted to some prefix);
    • in the end we would duplicate the roles in the hub tables on the AWS side (vault owners would need to be able to create and delete roles corresponding to granting access to the vault).
    • needs to be checked whether it is easy to design the roles so the users cannot do harmful actions
      • only create buckets, never delete should be feasible;
      • however, how do we check that only compliant roles are added;
      • still, users could intercept tokens and create buckets/roles outside of hub as long as they comply with the defined restrictions.
    • organizations are usually reluctant to give away such permissions to users without central control/monitoring (e.g. through self service portal)
  • ...?
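
A hedged sketch of the shortening option above; the hash function and token length are assumptions, not a decision:

```ts
import { createHash } from "node:crypto";

// Sketch: derive a short per-vault token from the vault UUID. Shorter tokens
// fit more vaults into the claim but raise the collision probability.
function shortVaultToken(vaultId: string, chars = 8): string {
  return createHash("sha256").update(vaultId).digest("base64url").slice(0, chars);
}

// 8 base64url chars = 48 bits: a ~50% collision chance is only reached around
// 2^24 (~16.7M) vaults, and a collision leaks bucket visibility, not plaintext.
console.log(shortVaultToken("2ead052c-0f34-4f0d-92a4-bba24546c534"));
```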

@ylangisc request for comment: corrections, complements?

@chenkins chenkins added the help wanted Extra attention is needed label Aug 31, 2023
@chenkins
Collaborator Author

chenkins commented Aug 31, 2023

Role limit: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html -> max. 5000 roles per account. Not a feasible approach in larger organizations (10'000 users, each having access to more than one vault).

Discussion with @ylangisc

  • having AWS calls on every vault sharing is not KISS
  • reduce ID token size by removing roles etc. and using an alternative to UUIDs (system millis, enumerating vaults starting from 0, ...) -> check how far we get with that, i.e. how many vaults we can squeeze in (see the estimate at the end of this comment). The non-achievable upper limit when using UUIDs is 2048 / 32 = 64. This would also leave us with one role per bucket and the corresponding IAM limit.

Interestingly, the following also uses the amr claim and discusses similar issues:
mozilla-iam/mozilla-aws-cli#26 -> https://github.com/mozilla-iam/auth0-deploy/blob/master/rules/AWS-Federated-AMR.js
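
The estimate from the discussion above, spelled out; the 2048-character budget is an assumption derived from the observed token size limit:

```ts
// Back-of-the-envelope: claim budget divided by per-vault cost.
const claimBudget = 2048; // assumed character budget before PackedPolicyTooLarge
const perUuid = 32;       // undashed UUID, excluding delimiters
console.log(Math.floor(claimBudget / perUuid)); // 64 -- the non-achievable upper bound
```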

chenkins added a commit that referenced this issue Nov 20, 2023
… in cryptomator and cryptomator hub clients, but not client roles. Separate roles in MinIO: bucket creation (cryptomator and cryptomatorhub clients) and bucket access (for cryptomatorvaults client) (#10 #41)
chenkins added a commit that referenced this issue Dec 14, 2023
chenkins added a commit that referenced this issue Dec 14, 2023
chenkins added a commit that referenced this issue Dec 14, 2023
…y credentials to backend: get rid of policy upload and use only AWS client, admin documentation (#3, #23, #10).
chenkins added a commit that referenced this issue Dec 14, 2023
…y credentials to backend: get rid of policy upload and use only AWS client, admin documentation (#3, #23, #10).
chenkins added a commit that referenced this issue Dec 14, 2023
chenkins added a commit that referenced this issue Dec 14, 2023
… in cryptomator and cryptomator hub clients, but not client roles. Separate roles in MinIO: bucket creation (cryptomator and cryptomatorhub clients) and bucket access (for cryptomatorvaults client) (#10 #41)
chenkins added a commit that referenced this issue Jan 10, 2024
chenkins added a commit that referenced this issue Jan 10, 2024
chenkins added a commit that referenced this issue Jan 10, 2024
…y credentials to backend: get rid of policy upload and use only AWS client, admin documentation (#3, #23, #10).
chenkins added a commit that referenced this issue Jan 10, 2024
…y credentials to backend: get rid of policy upload and use only AWS client, admin documentation (#3, #23, #10).
chenkins added a commit that referenced this issue Jan 10, 2024
chenkins added a commit that referenced this issue Jan 10, 2024
… in cryptomator and cryptomator hub clients, but not client roles. Separate roles in MinIO: bucket creation (cryptomator and cryptomatorhub clients) and bucket access (for cryptomatorvaults client) (#10 #41)
chenkins added a commit that referenced this issue Mar 19, 2024
Use AWS SDK compatible with Quarkus native images (#47).

Build docker image on every build as latest (#47).

Remove UUID from vault JWE (#4/#6).

Add storage class, bucket acceleration and bucket encryption options to storage profiles (#44).

Improved error handling/logging and openapi response documentation (#6).

Improved error handling/logging and openapi response documentation (#6).

Improved error handling/logging for storage profiles and bucket creation (#6/#3/17).

Decouple the client protocol identifier (s3-hub/s3-hub-sts) and the discriminator values (S3/S3STS) in the backend DB tables through openapi-generated client code (#6).

Fix linting (#6).

Hierarchical DB schema for storage profiles based on DiscriminatorColumn (#6).

Use discriminatorProperty openapi scheme annotations for use with openapi-generator's oneOf discriminator lookup (#6).

Use optional chaining to make linter happy again (TS18048) (#17/#3).

Formatting.

Improved error messages (#3/#17).

Slim storage profile for what can be fetched from /api/config (#6).

Fix openapi/markdown documentation for openapi-generator (#6).

Do not show region/GetBucketLocation in permanent case. Update openapi/markdown documentation for storage configurations (#6/#17).

Update openapi/markdown documentation for storage configurations (#6)

Pull up automatic access grant top-level in vault JWE along key and backend (#13).

Create bucket as first call in vault creation (#3).

Bugfix GET for individual storage profile: allow all users (not only admins) (#17).

Show aws cli command for setting CORS (#17).

Improve validation message vault template upload frontend permanent (#17)

Implement GET for individual storage profile and DELETE WiP (#17)

Bugfix vault template upload frontend permanent (#17)

Bugfix vault template upload frontend permanent (#17).

Fix linting (#17).

Validate permanent credentials before uploading vault template (#17).

Bugfix vault template upload frontend permanent (#17)

Fix missing aud claim required for MinIO.

Update documentation uploading storage profiles with admin role (#6).

Fix base uri to open in Cipherduck desktop.

Enforce authentication for storage profile api.

Fix enable S3 Versioning upon bucket creation (#44).

Formatting.

Enable S3 Versioning upon bucket creation (#44).

Enable S3 Versioning upon bucket creation (#44).

Implement migration path automatic access grant with WoT (#13 / #43).

Document decision remove access to vault.

Rename KeycloakCryptomatorVaultsHelper.

Rename S3StorageHelper.

Fix linting.

Fix upload vault template to bucket heading.

Use *.cyberduckprofile for hub, s3-hub, s3-hub-sts.

Fix cipherduck start/end extension markers.

Refactoring (R3) storage profile service persistence (#4 #6).

Refactoring (R3) storage profile service WiP (#4 #6).

Refactoring (R3) storage profile service WiP (#4 #6).

Set container image group and name in application.properties instead of pom.xml (#47).

Update setup documentation(#47).

Set container image group and name (#47)

Re-enable docker image build and pushing to registry.

Remove cipherduckhubbookmark endpoint, add hub UUID to config endpoint to allow client-side hub-specific profile and bookmark generation (#6).

Use better name  VaultRequested instead of VaultR for Tag Key in assuming second role in chain for AWS (review dko).

Get full region list from AWS SDK instead of hard-coding (code review overheadhunter).

Extract global constant axiosUnAuth in backend.ts (code review overheadhunter).

Inline hubbookmark.duck in order to avoid potential special handling when using GraalVM to build a native image.

Apply suggestions from code review

More idiomatic usage of Java stream API.

Co-authored-by: Sebastian Stenzel <[email protected]>

Update backend/src/main/java/org/cryptomator/hub/api/cipherduck/BackendsConfigResource.java

Co-authored-by: Sebastian Stenzel <[email protected]>

Remove obsolete added into line in diff to upstream.

Comply with vue-tsc (Vue 3 Type-Checking).

Update README.md

Co-authored-by: Sebastian Stenzel <[email protected]>

Moving S3 policies away from src/main/resources.

Remove CreateVaultS3.vue in order to rebase changes in CreateVault.vue from upstream. Bugfix description displayed as false when vaults created in hub introduced through forking CreateVaultS3.vue from CreateVault.vue and then missing breaking API change.

By default, in dev-realm.json, map only realm roles into access token in cryptomator and cryptomator hub clients, but not client roles. Separate roles in MinIO: bucket creation (cryptomator and cryptomatorhub clients) and bucket access (for cryptomatorvaults client) (#10 #41)

Implement template upload for permanent shared credentials (#17).

Tentative implementation clean-up sync deleting dangling roles in cryptomatorvaults and corresponding client scopes (#41).

Extract profiles and simplify vault jwe (#28, #6).

Variable cleanup in CreateVaultS3.vue

Comply with pre-release API change in granting access to newly created vault (cryptomator/hub@1c2133d).

Post-rebase fix: remove manage-realm from syncer role in dev-realm.json (#41).

Move staging/testing properties into custom application.properties (#41).

Distinguish stsRoleArn for client and hub when creating bucket, update documentation (#12 #23).

Bugfix download template in CreateVaultS3.vue

Use cipherduck profiles to simplify hub application.properties, add AWS permanent credentials to backend configurations application.properties (#28).

Get AWS-STS back to work again, update documentation (#10 #23).

Add developer flag for showing vaultId in VaultDetails and VaultList.

Add missing import  ArrowDownTrayIcon in CreateVaultS3.vue.

BackendsConfigDto instead of Any in backend.ts

Remove unnecessary manage-realm role for syncer (#41).

Automatic Access Grant Flag upon vault creation (#13).

Extract hard-coded cryptomatorvaults client to application.properties (#41).

Get hubId from backends config service (#10 #41).

Implement sharing vaults with groups and unsharing with users/groups; token-exchange into separate client (#10 #41).

Cleanup application.properties

Cleanup application.properties

Remove proxyman stuff again as not used.

Complete region list.

Remove obsolete dependencies in pom.xml.

Refactoring protocol serialization (#4).

Remove obsolete CipherduckBookmark.vue (#16).

Remove obsolete CipherduckBookmark.vue (#16). Localization DE (#31).

Mark cipherduck extensions in vues.

Shared long-living credentials: ask for bucket name and offer vault template download after vault creation (#17).

Shared long-living credentials (#17)

Use inline policy to restrict credentials passed to Hub backend (#3).

Allow for choosing region upon vault creation (#3).

Cleanup and documentation VaultJWEBackend (#23 #6).

Button "Open in Cipherduck" not necessary in vault details, as it is confusing (does not open single vault) and on top of the vault list is still visible (#16).

Cleanup and documentation VaultJWEBackend (#23 #6).

Cleanup and documentation VaultJWEBackend (#23 #6).

Cleanup and documentation VaultJWEBackend (#15 #23 #6).

Bugfix backend/storage configuration not re-encoded upon granting access (#13).

Cleanup bucket prefix and documentation (#15 #23 #6).

Implement token-exchange to get scoped token for AWS with testing.hub.cryptomator.org (#41 #10 #23 #3).

Gitignore local backend/config/application.properties.

Updated top-level README.md for Cipherduck.

Show Vault ID in VaultDetails for debugging.

Implement token-exchange to get scoped token for MinIO (#41 #10 #23 #3).

AssumeRoleWithWebIdentity (MinIO + AWS) in frontend and pass temporary credentials to backend: get rid of policy upload and use only AWS client, admin documentation (#3, #23, #10).

AssumeRoleWithWebIdentity (MinIO + AWS) in frontend and pass temporary credentials to backend: get rid of policy upload and use only AWS client, admin documentation (#3, #23, #10).

Add hub frontend vault storage configuration for STS (MinIO + AWS) (#3).

Cipherduckhubbookmark end point for 1 vault = 1 storage (#4).

Use StorageConfig service in frontend to get values (#3).

Add configuration for hub frontend vault storage configuration for STS (MinIO + AWS) (#3).

 Add admin Documentation for setting up OIDC Provider at AWS/MinIO and testing vault creation (#23).

Add hub frontend vault storage configuration for STS (MinIO + AWS) (#3).

Add protocol field to StorageDto (#6).

Refactor StorageDto into record instead of POJO.

Update frontend/src/common/backend.ts

Co-authored-by: Sebastian Stenzel <[email protected]>

Update backend/src/main/java/org/cryptomator/hub/api/StorageResource.java

Co-authored-by: Sebastian Stenzel <[email protected]>

Fix failing tests as Keycloak is not available at quarkus test time.

Comment out sonarcloud in github action.

Update issue templates (#33)

added "open bookmark" button in vault details (just in case), hid "download vault template" button

Use 'x-cipherduck-action' instead of 'io.mountainduck' for OAuth custom scheme handling (#28).

added "open bookmark" button in vault list

Add Hub Id as UUID in hub bookmark to prevent adding the same bookmark multiple times (#29).

renamed most obvious instances of Cryptomator Hub to Cipherduck

Bugfix missing description in openapi.json for vault storage shared long-living credentials API (#17).

cleaned up frontend

Implement cipherduck hub bookmark download from browser (#16).

Implement cipherduck hub bookmark download frontend page (#16).

Bugfix missing constructor for first version hub frontend vault storage shared long-living credentials (#17).

Implement first version hub frontend vault storage shared long-living credentials (#17).

Implement cipherduck hub bookmark endpoint (#16).

Use amr claim instead of aud claim for now (#10).

Use cryptomator client id in staging keycloak as well (#10). Use vault instead of vault user attribute (#10).

Remove admin role for syncer (#10).

Remove minio client id.

Switch /api/config/cipherduckprofile to local MinIO configuration to fix HubIntegration test in client project.

Update TODOs.

Bugfix empty attributes in keycloak.

Config cipherduck-staging (one role for all buckets).

Set directAccessGrantsEnabled to false.

Simplify concat

Add top-level .gitignore (ignoring top-level .idea folder).

Add /api/config/cipherduckprofile v0.

Remove obsolete dependencies to commons-io and qute.

Move GeneratePolicy back to duck again. Dev-realm with minio client_id.

Upload bucket policy (aws cli call in backend for now) upon vault creation and add vaultId to keycloak upon vault JWE upload. TODO: create bucket upon vault creation.

Update application.properties: comment out proxyman.local

Improve local dev setup description in README.  Add user-001 to dev-realm.json. Add configuration with alternative host proxyman.local instead of localhost name as requests to localhost are bypassing configured proxies.
chenkins added a commit that referenced this issue Mar 19, 2024
chenkins added a commit that referenced this issue Jun 7, 2024
Allow for bucket acceleration to be nullable (#44).

Link to admin setup documentation in github (#44).

Install JDK before running mvn (#44).

Run compile before generating openapi.json in github (#44).

Debug openapi.json github (#44).

Debug openapi.json github (#44).

Fix type safety for storage profile details (#44).

Storage profile details with annotation from openapi.json (#44).

Storage profiles in admin area (#44).

Add missing http client libraries for S3 (#47).

Post-rebase fixes

Fix formatting.

chenkins added a commit that referenced this issue Jun 7, 2024
Allow for bucket acceleration to be nullable (#44).

Link to admin setup documentation in github (#44).

Install JDK before running mvn (#44).

Run compile before generating openapi.json in github (#44).

Debug openapi.json github (#44).

Debug openapi.json github (#44).

Fix type safety for storage profile details (#44).

Storage profile details with annotation from openapi.json (#44).

Storage profiles in admin area (#44).

Add missing http client libraries for S3 (#47).

Post-rebase fixes

Fix formatting.

Use AWS SDK compatible with Quarkus native images (#47).

Build docker image on every build as latest (#47).

Remove UUID from vault JWE (#4/#6).

Add storage class, bucket acceleration and bucket encryption options to storage profiles (#44).

Improved error handling/logging and openapi response documentation (#6).

Improved error handling/logging and openapi response documentation (#6).

Improved error handling/logging for storage profiles and bucket creation (#6/#3/17).

Decouple the client protocol identifier (s3-hub/s3-hub-sts) and the discriminator values (S3/S3STS) in the backend DB tables through openapi-generated client code (#6).

Fix linting (#6).

Hierarchical DB schema for storage profiles based on DiscriminatorColumn (#6).

Use discriminatorProperty openapi scheme annotations for use with openapi-generator's oneOf discriminator lookup (#6).

Use optional chaining to make linter happy again (TS18048) (#17/#3).

Formatting.

Improved error messages (#3/#17).

Slim storage profile for what can be fetched from /api/config (#6).

Fix openapi/markdown documentation for openapi-generator (#6).

Do not show region/GetBucketLocation in permanent case. Update openapi/markdown documentation for storage configurations (#6/#17).

Update openapi/markdown documentation for storage configurations (#6)

Pull up automatic access grant top-level in vault JWE along key and backend (#13).

Create bucket as first call in vault creation (#3).

Bufgix GET for individual storage profile allow all users (not only admins) (#17).

Show aws cli command for setting CORS (#17).

Improve validation message vault template upload frontend permanent (#17)

Implement GET for individual storage profile and DELETE WiP (#17)

Bugfix vault template upload frontend permanent (#17)

Bugfix vault template upload frontend permanent (#17).

Fix linting (#17).

Validate permanent credentials before uploading vault template (#17).

Bugfix vault template upload frontend permanent (#17)

Fix missing aud claim required for MinIO.

Update documentation uploading storage profiles with admin role (#6).

Fix base uri to open in Cipherduck desktop.

Enforce authentication for storage profile api.

Fix enable S3 Versioning upon bucket creation (#44).

Formatting.

Enable S3 Versioning upon bucket creation (#44).

Enable S3 Versioning upon bucket creation (#44).

Implement migration path automatic access grant with WoT (#13 / #43).

Document decision remove access to vault.

Rename KeycloakCryptomatorVaultsHelper.

Rename S3StorageHelper.

Fix linting.

Fix upload vault template to bucket heading.

Use *.cyberduckprofile for hub, s3-hub, s3-hub-sts.

Fix cipherduck start/end extension markers.

Refactoring (R3) storage profile service persistence (#4 #6).

Refactoring (R3) storage profile service WiP (#4 #6).

Refactoring (R3) storage profile service WiP (#4 #6).

Set container image group and name in application.properties instead of pom.xml (#47).

Update setup documentation(#47).

Set container image group and name (#47)

Re-enable docker image build and pushing to registry.

Remove cipherduckhubbookmark endpoint, add hub UUID to config endpoint to allow client-side hub-specific profile and bookmark generation (#6).

Use better name  VaultRequested instead of VaultR for Tag Key in assuming second role in chain for AWS (review dko).

Get full region list from AWS SDK instead of hard-coding (code review overheadhunter).

Extract global constant axiosUnAuth in backend.ts (code review overheadhunter).

Inline hubbookmark.duck in order to avoit poentital special handling when using GraalVM to build a native image.

Apply suggestions from code review

More idiomatic usage of Java stream API.

Co-authored-by: Sebastian Stenzel <[email protected]>

Update backend/src/main/java/org/cryptomator/hub/api/cipherduck/BackendsConfigResource.java

Co-authored-by: Sebastian Stenzel <[email protected]>

Remove obsolete added into line in diff to upstream.

Comply with vue-tsc (Vue 3 Type-Checking).

Update README.md

Co-authored-by: Sebastian Stenzel <[email protected]>

Moving S3 policies away from src/main/resources.

Remove CreateVaultS3.vue in order to rebase changes in CreateVault.vue from upstream. Bugfix description displayed as false when vaults created in hub introduced through forking CreateVaultS3.vue from CreateVault.vue and then missing breaking API change.

By default, in dev-realm.json, map only realm roles into access token in cryptomator and cryptomator hub clients, but not client roles. Separate roles in MinIO: bucket creation (cryptomator and cryptomatorhub cliients) and bucket access (for cryptomatorvaults client)  (#10 #41)

Implement template upload for permanent shared credentials (#17).

Tentative implementation clean-up sync deleting dangling roles in cryptomatorvaults and corresonding client scopes (#41).

Extract profiles and simplify vault jwe (#28, #6).

Variable cleanup in CreateVaultS3.vue

Comply with pre-release API change in granting access to newly created vault (cryptomator/hub@1c2133d).

Post-rebase fix: remove manage-realm from syncer role in dev-realm.json (#41).

Move staging/testing properties into custom application.properties (#41).

Distinguish stsRoleArn for client and hub when creating bucket, update documentation (#12 #23).

Bugfix download template in CreateVaultS3.vue

User cipherduck profiles to simplify hub application.properties, add AWS permanent credentials to backend configurations application.properties (#28).

Get AWS-STS back to work again, update documentation (#10 #23).

Add developer flag for showing vaultId in VaultDetails and VaultList.

Add missing import  ArrowDownTrayIcon in CreateVaultS3.vue.

BackendsConfigDto instead of Any in backend.ts

Remove unnecessary manage-realm role for syncer (#41).

Automatic Access Grant Flag upon vault creation (#13).

Extract hard-coded cryptomatorvaults client to application.properties (#41).

Get hubId from backends config service (#10 #41).

Implement sharing vaults with groups and unsharing with users/groups; token-exchange into separate client (#10 #41).

Cleanup application.properties

Cleanup application.properties

Remove proxyman stuff again as not used.

Complete region list.

Remove obsolete dependencies in pom.xml.

Refactoring protocol serialization (#4).

Remove obsolete CipherduckBookmark.vue (#16).

Remove obsolete CipherduckBookmark.vue (#16). Localization DE (#31).

Mark cipherduck extensions in vues.

Shared long-living credentials: ask for bucket name and offer vault template download after vault creation (#17).

Shared long-living credentials (#17)

Use inline policy to restrict credentials passed to Hub backend (#3).
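
A hedged sketch of the inline-policy idea: STS accepts a session policy whose effect is intersected with the role's permissions, so the credentials handed to the Hub backend can do no more than touch the one bucket. The names and exact actions below are assumptions:

```java
import software.amazon.awssdk.services.sts.model.AssumeRoleWithWebIdentityRequest;

public class InlinePolicySketch {
    // The temporary credentials are the intersection of the role's policy
    // and this inline session policy (bucket name is a placeholder).
    public static AssumeRoleWithWebIdentityRequest restrictedRequest(String roleArn, String oidcToken, String bucket) {
        String sessionPolicy = """
            {
              "Version": "2012-10-17",
              "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:CreateBucket", "s3:PutObject"],
                "Resource": ["arn:aws:s3:::%1$s", "arn:aws:s3:::%1$s/*"]
              }]
            }
            """.formatted(bucket);
        return AssumeRoleWithWebIdentityRequest.builder()
                .roleArn(roleArn)
                .roleSessionName("restricted-bucket-creation")
                .webIdentityToken(oidcToken)
                .policy(sessionPolicy)
                .build();
    }
}
```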

Allow for choosing region upon vault creation (#3).

Cleanup and documentation VaultJWEBackend (#23 #6).

Button "Open in Cipherduck" not necessary in vault details, as it is confusing (does not open single vault) and on top of the vault list is still visible (#16).

Cleanup and documentation VaultJWEBackend (#23 #6).

Cleanup and documentation VaultJWEBackend (#23 #6).

Cleanup and documentation VaultJWEBackend (#15 #23 #6).

Bugfix backend/storage configuration not re-encoded upon granting access (#13).

Cleanup bucket prefix and documentation (#15 #23 #6).

Implement token-exchange to get scoped token for AWS with testing.hub.cryptomator.org (#41 #10 #23 #3).
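
For context, a minimal token-exchange sketch using plain RFC 8693 form parameters against a Keycloak token endpoint; the endpoint URL is a placeholder and the client id is taken from the surrounding commits, not verified configuration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenExchangeSketch {
    // Exchange the hub access token for a token scoped to the vaults client.
    public static String exchange(String keycloakTokenUrl, String accessToken) throws Exception {
        String form = "grant_type=urn:ietf:params:oauth:grant-type:token-exchange"
                + "&client_id=cryptomatorvaults"
                + "&subject_token=" + accessToken
                + "&requested_token_type=urn:ietf:params:oauth:token-type:access_token";
        HttpRequest request = HttpRequest.newBuilder(URI.create(keycloakTokenUrl))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body(); // JSON containing the exchanged access_token
    }
}
```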

Gitignore local backend/config/application.properties.

Updated top-level README.md for Cipherduck.

Show Vault ID in VaultDetails for debugging.

Implement token-exchange to get scoped token for MinIO (#41 #10 #23 #3).

AssumeRoleWithWebIdentity (MinIO + AWS) in frontend and pass temporary credentials to backend: get rid of policy upload and use only AWS client, admin documentation (#3, #23, #10).
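
The frontend code is TypeScript; as a language-neutral illustration, the same STS call in AWS SDK for Java v2 (region and session name are assumptions):

```java
import software.amazon.awssdk.auth.credentials.AnonymousCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sts.StsClient;
import software.amazon.awssdk.services.sts.model.AssumeRoleWithWebIdentityRequest;
import software.amazon.awssdk.services.sts.model.Credentials;

public class WebIdentitySketch {
    // AssumeRoleWithWebIdentity is an unsigned call, so anonymous credentials suffice;
    // the OIDC access token alone authorizes the request.
    public static Credentials temporaryCredentials(String roleArn, String oidcToken) {
        try (StsClient sts = StsClient.builder()
                .region(Region.US_EAST_1) // hypothetical region
                .credentialsProvider(AnonymousCredentialsProvider.create())
                .build()) {
            return sts.assumeRoleWithWebIdentity(AssumeRoleWithWebIdentityRequest.builder()
                    .roleArn(roleArn)
                    .roleSessionName("bucket-creation")
                    .webIdentityToken(oidcToken)
                    .build()).credentials();
        }
    }
}
```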

AssumeRoleWithWebIdentity (MinIO + AWS) in frontend and pass temporary credentials to backend: get rid of policy upload and use only AWS client, admin documentation (#3, #23, #10).

Add hub frontend vault storage configuration for STS (MinIO + AWS) (#3).

Cipherduckhubbookmark endpoint for 1 vault = 1 storage (#4).

Use StorageConfig service in frontend to get values (#3).

Add configuration for hub frontend vault storage configuration for STS (MinIO + AWS) (#3).

Add admin documentation for setting up OIDC Provider at AWS/MinIO and testing vault creation (#23).

Add hub frontend vault storage configuration for STS (MinIO + AWS) (#3).

Add protocol field to StorageDto (#6).

Refactor StorageDto into record instead of POJO.
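
Conceptually, the refactoring replaces a mutable POJO with an immutable record, e.g. (the field set here is illustrative only):

```java
// Records give final fields, a canonical constructor, and accessors for free.
public record StorageDto(String vaultId, String protocol, String bucketName, String region) {}
```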

Update frontend/src/common/backend.ts

Co-authored-by: Sebastian Stenzel <[email protected]>

Update backend/src/main/java/org/cryptomator/hub/api/StorageResource.java

Co-authored-by: Sebastian Stenzel <[email protected]>

Fix failing tests, as Keycloak is not available at Quarkus test time.

Comment out sonarcloud in github action.

Update issue templates (#33)

added "open bookmark" button in vault details (just in case), hid "download vault template" button

Use 'x-cipherduck-action' instead of 'io.mountainduck' for OAuth custom scheme handling (#28).

added "open bookmark" button in vault list

Add Hub Id as UUID in hub bookmark to prevent adding the same bookmark multiple times (#29).

renamed most obvious instances of Cryptomator Hub to Cipherduck

Bugfix missing description in openapi.json for vault storage shared long-living credentials API (#17).

cleaned up frontend

Implement cipherduck hub bookmark download from browser (#16).

Implement cipherduck hub bookmark download frontend page (#16).

Bugfix missing constructor for first version hub frontend vault storage shared long-living credentials (#17).

Implement first version hub frontend vault storage shared long-living credentials (#17).

Implement cipherduck hub bookmark endpoint (#16).

Use amr claim instead of aud claim for now (#10).

Use cryptomator client id in staging Keycloak as well (#10). Use vault instead of vaults user attribute (#10).

Remove admin role for syncer (#10).

Remove minio client id.

Switch /api/config/cipherduckprofile to local MinIO configuration to fix HubIntegration test in client project.

Update TODOs.

Bugfix empty attributes in keycloak.

Config cipherduck-staging (one role for all buckets).

Set directAccessGrantsEnabled to false.

Simplify concat

Add top-level .gitignore (ignoring top-level .idea folder).

Add /api/config/cipherduckprofile v0.

Remove obsolete dependencies to commons-io and qute.

Move GeneratePolicy back to duck again. Dev-realm with minio client_id.

Upload bucket policy (aws cli call in backend for now) upon vault creation and add vaultId to keycloak upon vault JWE upload. TODO: create bucket upon vault creation.
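
The commit shells out to the aws cli; an SDK equivalent of that call would look roughly like this (names are placeholders):

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutBucketPolicyRequest;

public class BucketPolicySketch {
    // Attach a bucket policy (JSON document) to the freshly created vault bucket.
    public static void uploadPolicy(S3Client s3, String bucket, String policyJson) {
        s3.putBucketPolicy(PutBucketPolicyRequest.builder()
                .bucket(bucket)
                .policy(policyJson)
                .build());
    }
}
```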

Update application.properties: comment out proxyman.local

Improve local dev setup description in README. Add user-001 to dev-realm.json. Add configuration with alternative host proxyman.local instead of localhost, as requests to localhost bypass configured proxies.
chenkins added a commit that referenced this issue Jun 7, 2024
Post-uvf-rebase fix repository refactoring upstream.

Post-uvf-rebase fix ConfigResource.

Allow for bucket acceleration to be nullable (#44).

Link to admin setup documentation in github (#44).

Install JDK before running mvn (#44).

Run compile before generating openapi.json in github (#44).

Debug openapi.json github (#44).

Debug openapi.json github (#44).

Fix type safety for storage profile details (#44).

Storage profile details with annotation from openapi.json (#44).

Storage profiles in admin area (#44).

Add missing HTTP client libraries for S3 (#47).

Post-rebase fixes

Fix formatting.

Use AWS SDK compatible with Quarkus native images (#47).

Build docker image on every build as latest (#47).

Remove UUID from vault JWE (#4/#6).

Add storage class, bucket acceleration and bucket encryption options to storage profiles (#44).

Improved error handling/logging and openapi response documentation (#6).

Improved error handling/logging and openapi response documentation (#6).

Improved error handling/logging for storage profiles and bucket creation (#6/#3/#17).

Decouple the client protocol identifier (s3-hub/s3-hub-sts) and the discriminator values (S3/S3STS) in the backend DB tables through openapi-generated client code (#6).

Fix linting (#6).

Hierarchical DB schema for storage profiles based on DiscriminatorColumn (#6).
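
A sketch of the single-table inheritance pattern this refers to, with illustrative entity names; the discriminator values S3/S3STS are taken from the commits above:

```java
import jakarta.persistence.*;

// One table holds all profile variants; the "protocol" column tells them apart.
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "protocol")
public abstract class StorageProfile {
    @Id
    public java.util.UUID id;
}

@Entity
@DiscriminatorValue("S3")
class StorageProfileS3 extends StorageProfile {}

@Entity
@DiscriminatorValue("S3STS")
class StorageProfileS3Sts extends StorageProfile {
    public String stsRoleArn; // hypothetical STS-only field
}
```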

Use discriminatorProperty openapi scheme annotations for use with openapi-generator's oneOf discriminator lookup (#6).

Use optional chaining to make linter happy again (TS18048) (#17/#3).

Formatting.

Improved error messages (#3/#17).

Slim storage profile for what can be fetched from /api/config (#6).

Fix openapi/markdown documentation for openapi-generator (#6).

Do not show region/GetBucketLocation in permanent case. Update openapi/markdown documentation for storage configurations (#6/#17).

Update openapi/markdown documentation for storage configurations (#6)

Pull up automatic access grant to top level in vault JWE, alongside key and backend (#13).

Create bucket as first call in vault creation (#3).

Bugfix: allow GET for individual storage profile for all users (not only admins) (#17).

Show aws cli command for setting CORS (#17).
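
For orientation, the CORS rule such an aws-cli command would set, expressed via the SDK; the allowed origin and methods are assumptions:

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

public class CorsSketch {
    // Allow the hub frontend's origin to talk to the bucket directly.
    public static void allowHubOrigin(S3Client s3, String bucket, String hubOrigin) {
        CORSRule rule = CORSRule.builder()
                .allowedOrigins(hubOrigin)
                .allowedMethods("GET", "PUT")
                .allowedHeaders("*")
                .build();
        s3.putBucketCors(PutBucketCorsRequest.builder()
                .bucket(bucket)
                .corsConfiguration(CORSConfiguration.builder().corsRules(rule).build())
                .build());
    }
}
```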

Improve validation message for vault template upload in frontend (permanent credentials) (#17).

Implement GET for individual storage profile and DELETE (WiP) (#17).

Bugfix vault template upload frontend permanent (#17)

Bugfix vault template upload frontend permanent (#17).

Fix linting (#17).

Validate permanent credentials before uploading vault template (#17).
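
One way such a validation can be done, sketched in Java (the actual check lives in the TypeScript frontend; the region and probe call are assumptions):

```java
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.HeadBucketRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;

public class CredentialCheckSketch {
    // A cheap HeadBucket probe fails fast if the key pair or bucket is wrong,
    // so the user gets feedback before the vault template is uploaded.
    public static boolean canAccess(String accessKey, String secretKey, String bucket) {
        try (S3Client s3 = S3Client.builder()
                .region(Region.US_EAST_1) // hypothetical region
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(accessKey, secretKey)))
                .build()) {
            s3.headBucket(HeadBucketRequest.builder().bucket(bucket).build());
            return true;
        } catch (S3Exception e) {
            return false;
        }
    }
}
```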

Bugfix vault template upload frontend permanent (#17)

Fix missing aud claim required for MinIO.

Update documentation on uploading storage profiles with admin role (#6).

Fix base uri to open in Cipherduck desktop.

Enforce authentication for storage profile api.

Fix enable S3 Versioning upon bucket creation (#44).

Formatting.

Enable S3 Versioning upon bucket creation (#44).

Enable S3 Versioning upon bucket creation (#44).
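
A compact sketch of bucket creation with versioning switched on immediately afterwards (AWS SDK for Java v2; names illustrative):

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

public class BucketCreationSketch {
    // Create the vault bucket, then enable versioning before any objects land in it.
    public static void createVersionedBucket(S3Client s3, String bucket) {
        s3.createBucket(CreateBucketRequest.builder().bucket(bucket).build());
        s3.putBucketVersioning(PutBucketVersioningRequest.builder()
                .bucket(bucket)
                .versioningConfiguration(VersioningConfiguration.builder()
                        .status(BucketVersioningStatus.ENABLED)
                        .build())
                .build());
    }
}
```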

Implement migration path for automatic access grant with WoT (#13 / #43).

Document decision to remove access to vault.

Rename KeycloakCryptomatorVaultsHelper.

Rename S3StorageHelper.

Fix linting.

Fix "upload vault template to bucket" heading.

Use *.cyberduckprofile for hub, s3-hub, s3-hub-sts.

Fix cipherduck start/end extension markers.

chenkins added a commit that referenced this issue Aug 20, 2024
chenkins added a commit that referenced this issue Nov 5, 2024