Add state store cache proposal - first draft #7

Open · wants to merge 1 commit into main

Conversation

berndverst (Member)

Signed-off-by: Bernd Verst [email protected]

This is a proposal for using existing state stores with the TTL feature as a cache layer in front of any other state store.

For example, Redis Cache or the In-Memory state store could be used as a cache for any other state store (e.g. Azure Cosmos DB).
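
A minimal sketch of the shape this could take, assuming a deliberately simplified store interface (the real components-contrib `state.Store` interface and the runtime wiring are out of scope here; all names below are illustrative):

```go
package cache

import "context"

// Store is a simplified stand-in for a Dapr state store; the actual
// components-contrib interface has more methods and different signatures.
type Store interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Set(ctx context.Context, key string, value []byte, ttlSeconds int) error
	Delete(ctx context.Context, key string) error
}

// CachedStore puts a TTL-capable store (e.g. Redis or the in-memory store)
// in front of a primary store (e.g. Azure Cosmos DB).
type CachedStore struct {
	Primary    Store // source of truth
	Cache      Store // must support TTL so stale entries expire on their own
	DefaultTTL int   // seconds; used when a request carries no TTL metadata
}
```

The TTL support in the cache store is what keeps entries from living forever when a later write fails to refresh them.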

@ItalyPaleAle (Contributor)

I think this is a valid proposal, but I'm wondering if we can make the API generic so that these stores can be used as a general-purpose cache too? Just like secret stores, which can be used as general-purpose secret stores and for storing component secrets.

I also would recommend two more changes in components:

> 2. Perform Set Request against Cache, setting default TTL (or TTL from metadata)

- If a Set Request cannot be applied to the Cache, the cache should be invalidated for this key, but the request should succeed.
- If any Get or Set Request against the underlying state store fails, the whole operation should fail.

and the cache should be invalidated for this key
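
Continuing the simplified sketch above, the suggested error semantics could look roughly like this (the method name and TTL fallback are illustrative assumptions, not part of the proposal):

```go
package cache

import "context"

// Set writes to the primary store first; a primary failure fails the whole
// operation, while a cache failure only invalidates the cached key and the
// request still succeeds, as suggested above.
func (c *CachedStore) Set(ctx context.Context, key string, value []byte, ttlSeconds int) error {
	if err := c.Primary.Set(ctx, key, value, ttlSeconds); err != nil {
		return err // the primary store is the source of truth
	}

	ttl := ttlSeconds
	if ttl <= 0 {
		ttl = c.DefaultTTL // fall back to the component's default TTL
	}

	// Best effort: if the cache write fails, drop the key so a stale value
	// cannot be served; the TTL bounds the damage if the delete fails too.
	if err := c.Cache.Set(ctx, key, value, ttl); err != nil {
		_ = c.Cache.Delete(ctx, key)
	}
	return nil
}
```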

- Perform Get Request against Cache, return value if present
- If value is not present:
1. Perform Get request against original state store
2. Perform Set Request against Cache for the key and apply default TTL for expiry (or apply TTL read from original state store item)

are steps 2-3 serialized? if yes this means callers will have to pay more request latency than they otherwise should for cache misses. suggest step 2 be made concurrent with step 3 to avoid this.

there's also no description here of how you will avoid cache inconsistency on a single host in situations where there are multiple concurrent requests to the same key. e.g. suppose you have an active request to populate a cache entry on a read-miss but concurrent to that you have another active request to write to the same key.
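
One way to address both points, again only sketched against the simplified interface above: fill the cache off the request path so the caller does not wait on it, and coalesce concurrent misses for the same key with `singleflight` (both are illustrative choices, not something the proposal specifies).

```go
package cache

import (
	"context"

	"golang.org/x/sync/singleflight"
)

// fillGroup coalesces concurrent cache-miss lookups for the same key on a
// single host, so only one request hits the primary store at a time.
var fillGroup singleflight.Group

// Get returns the cached value if present, otherwise reads through to the
// primary store and repopulates the cache in the background.
func (c *CachedStore) Get(ctx context.Context, key string) ([]byte, error) {
	if v, err := c.Cache.Get(ctx, key); err == nil && v != nil {
		return v, nil // cache hit
	}

	v, err, _ := fillGroup.Do(key, func() (any, error) {
		val, err := c.Primary.Get(ctx, key)
		if err != nil {
			return nil, err
		}
		// Populate the cache off the request path so cache misses do not pay
		// for an extra round trip before the value is returned.
		go func() {
			_ = c.Cache.Set(context.Background(), key, val, c.DefaultTTL)
		}()
		return val, nil
	})
	if err != nil {
		return nil, err
	}
	return v.([]byte), nil
}
```

Note that `singleflight` only coalesces misses within one sidecar; it does not by itself resolve the second concern above (a write to the same key racing a read-miss fill), which still needs a defined consistency story.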

3. Return the value.
- State Store Set Request:
1. Perform Set Request against original State Store.
2. Perform Set Request against Cache, setting default TTL (or TTL from metadata)

similar latency & concurrency concerns here as above
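
An alternative sketch for the write path, purely illustrative and not what the proposal currently describes: invalidate the cached key instead of writing the new value through, so the cache is only ever filled from the read path.

```go
package cache

import "context"

// SetWithInvalidation is an alternative write path: after a successful write
// to the primary store, the cached key is dropped rather than overwritten,
// and the next read repopulates it. This narrows (but does not remove) the
// window in which a concurrent read-miss fill can install a stale value; the
// delete is still a round trip, so it could also be issued asynchronously if
// latency matters more than a brief staleness window.
func (c *CachedStore) SetWithInvalidation(ctx context.Context, key string, value []byte, ttlSeconds int) error {
	if err := c.Primary.Set(ctx, key, value, ttlSeconds); err != nil {
		return err // primary store remains the source of truth
	}
	_ = c.Cache.Delete(ctx, key) // best effort; the TTL bounds any staleness
	return nil
}
```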

@@ -0,0 +1,95 @@
# Caching: State Stores as Cache

Overall:

Suggest augmenting the proposal to make clear what the expected cache consistency semantics are supposed to be & how these are expected to interact w/ existing eventual/strong consistency and firstwrite/lastwrite concurrency models. (e.g. as written this proposal seems to break requests employing strong consistency semantics). I would recommend the discussion include single-host/single-thread, single-host/multi-thread, multi-host/multi-thread scenarios.

@omidb commented Dec 6, 2022

The Dapr sidecar has access to the lifecycle of a service (gRPC (proxy) or HTTP), and on each invoke we can cache the results for that specific request. This way we get a consistent cache for the service. We can invalidate the cache as soon as the sidecar is restarted.

@berndverst (Member, Author)

Someone else please continue this proposal as I will not be active on it.
