Merge branch 'v1.14' into pubsub-api-link
WhitWaldo authored Nov 30, 2024
2 parents d218a6f + b7b010a commit f530497
Showing 32 changed files with 199 additions and 22 deletions.
@@ -336,14 +336,13 @@ Status | Description
`RETRY` | Message to be retried by Dapr
`DROP` | Warning is logged and message is dropped

Please refer [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >}}) for further insights on response.
Refer to [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >}}) for more detail on the expected response format.

### Example

Please refer following code samples for how to use Bulk Subscribe:

{{< tabs "Java" "JavaScript" ".NET" >}}
The following code examples demonstrate how to use Bulk Subscribe.

{{< tabs "Java" "JavaScript" ".NET" "Python" >}}
{{% codetab %}}

```java
@@ -471,7 +470,50 @@ public class BulkMessageController : ControllerBase

{{% /codetab %}}

{{% codetab %}}
Currently, you can only bulk subscribe in Python using an HTTP client.

```python
import json

from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    # Define the bulk subscribe configuration
    subscriptions = [{
        "pubsubname": "pubsub",
        "topic": "TOPIC_A",
        "route": "/checkout",
        "bulkSubscribe": {
            "enabled": True,
            "maxMessagesCount": 3,
            "maxAwaitDurationMs": 40
        }
    }]
    print('Dapr pub/sub is subscribed to: ' + json.dumps(subscriptions))
    return jsonify(subscriptions)


# Define the endpoint to handle incoming messages
@app.route('/checkout', methods=['POST'])
def checkout():
    messages = request.json
    print(messages)
    for message in messages:
        print(f"Received message: {message}")
    return json.dumps({'success': True}), 200, {'ContentType': 'application/json'}


if __name__ == '__main__':
    app.run(port=5000)
```
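
If you want to acknowledge or retry entries individually instead of returning a single success flag, the handler can respond with per-entry statuses as described in the [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >}}) reference. The sketch below extends the Flask app above and assumes the bulk payload carries an `entries` list whose items include an `entryId`, as that reference describes; verify the exact payload shape against your Dapr version.

```python
# Sketch: report a status per entry instead of a single success flag.
# Assumes each entry in the bulk payload has an "entryId"; SUCCESS, RETRY,
# and DROP are the statuses listed in the table above.
@app.route('/checkout-per-entry', methods=['POST'])
def checkout_per_entry():
    bulk_message = request.json
    statuses = []
    for entry in bulk_message.get('entries', []):
        try:
            print(f"Processing entry {entry['entryId']}: {entry.get('event')}")
            statuses.append({'entryId': entry['entryId'], 'status': 'SUCCESS'})
        except Exception:
            # Ask Dapr to redeliver this entry
            statuses.append({'entryId': entry.get('entryId'), 'status': 'RETRY'})
    return jsonify({'statuses': statuses})
```

To use this handler, point the subscription's `route` at `/checkout-per-entry` instead of `/checkout`.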

{{% /codetab %}}

{{< /tabs >}}

## How components handle publishing and subscribing to bulk messages

For event publish/subscribe, two kinds of network transfers are involved.
@@ -18,7 +18,7 @@ Currently, you can experience this actors quickstart using the .NET SDK.
As a quick overview of the .NET actors quickstart:

1. Using a `SmartDevice.Service` microservice, you host:
- Two `SmartDectectorActor` smoke alarm objects
- Two `SmokeDetectorActor` smoke alarm objects
- A `ControllerActor` object that commands and controls the smart devices
1. Using a `SmartDevice.Client` console app, the client app interacts with each actor, or the controller, to perform actions in aggregate.
1. The `SmartDevice.Interfaces` contains the shared interfaces and data types used by both the service and client apps.
@@ -119,7 +119,7 @@ If you have Zipkin configured for Dapr locally on your machine, you can view the

When you ran the client app, a few things happened:

1. Two `SmartDetectorActor` actors were [created in the client application](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/client/Program.cs) and initialized with object state with:
1. Two `SmokeDetectorActor` actors were [created in the client application](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/client/Program.cs) and initialized with object state with:
- `ActorProxy.Create<ISmartDevice>(actorId, actorType)`
- `proxySmartDevice.SetDataAsync(data)`

@@ -177,7 +177,7 @@ When you ran the client app, a few things happened:
Console.WriteLine($"Device 2 state: {storedDeviceData2}");
```

1. The [`DetectSmokeAsync` method of `SmartDetectorActor 1` is called](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L70).
1. The [`DetectSmokeAsync` method of `SmokeDetectorActor 1` is called](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L70).

```csharp
public async Task DetectSmokeAsync()
@@ -216,7 +216,7 @@ When you ran the client app, a few things happened:
await proxySmartDevice1.DetectSmokeAsync();
```

1. The [`SoundAlarm` methods](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L78) of `SmartDetectorActor 1` and `2` are called.
1. The [`SoundAlarm` methods](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L78) of `SmokeDetectorActor 1` and `2` are called.
```csharp
storedDeviceData1 = await proxySmartDevice1.GetDataAsync();
@@ -234,9 +234,9 @@ When you ran the client app, a few things happened:

For full context of the sample, take a look at the following code:

- [`SmartDetectorActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs): Implements the smart device actors
- [`SmokeDetectorActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs): Implements the smart device actors
- [`ControllerActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/ControllerActor.cs): Implements the controller actor that manages all devices
- [`ISmartDevice`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/ISmartDevice.cs): The method definitions and shared data types for each `SmartDetectorActor`
- [`ISmartDevice`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/ISmartDevice.cs): The method definitions and shared data types for each `SmokeDetectorActor`
- [`IController`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/IController.cs): The method definitions and shared data types for the `ControllerActor`
{{% /codetab %}}
@@ -16,6 +16,7 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) cluster
- [AWS CLI](https://aws.amazon.com/cli/)
- [eksctl](https://eksctl.io/)
- [An existing VPC and subnets](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html)
- [Dapr CLI](https://docs.dapr.io/getting-started/install-dapr-cli/)

## Deploy an EKS cluster

@@ -25,20 +26,57 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) cluster
aws configure
```

1. Create an EKS cluster. To use a specific version of Kubernetes, use `--version` (1.13.x or newer version required).
1. Create a new file called `cluster-config.yaml` and add the content below to it, replacing `[your_cluster_name]`, `[your_cluster_region]`, and `[your_k8s_version]` with the appropriate values:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: [your_cluster_name]
  region: [your_cluster_region]
  version: [your_k8s_version]
  tags:
    karpenter.sh/discovery: [your_cluster_name]

iam:
  withOIDC: true

managedNodeGroups:
  - name: mng-od-4vcpu-8gb
    desiredCapacity: 2
    minSize: 1
    maxSize: 5
    instanceType: c5.xlarge
    privateNetworking: true

addons:
  - name: vpc-cni
    attachPolicyARNs:
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
  - name: coredns
    version: latest
  - name: kube-proxy
    version: latest
  - name: aws-ebs-csi-driver
    wellKnownPolicies:
      ebsCSIController: true
```
1. Create the cluster by running the following command:
```bash
eksctl create cluster --name [your_eks_cluster_name] --region [your_aws_region] --version [kubernetes_version] --vpc-private-subnets [subnet_list_seprated_by_comma] --without-nodegroup
eksctl create cluster -f cluster-config.yaml
```

Change the values for `vpc-private-subnets` to meet your requirements. You can also add additional IDs. You must specify at least two subnet IDs. If you'd rather specify public subnets, you can change `--vpc-private-subnets` to `--vpc-public-subnets`.

1. Verify kubectl context:

1. Verify the kubectl context:

```bash
kubectl config current-context
```

## Add Dapr requirements for sidecar access and default storage class:

1. Update the security group rule to allow the EKS cluster to communicate with the Dapr Sidecar by creating an inbound rule for port 4000.

```bash
@@ -49,11 +87,37 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) cluster
--source-group [your_security_group]
```

2. Add a default storage class if you don't have one:

```bash
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

## Install Dapr

Install Dapr on your cluster by running:

```bash
dapr init -k
```

You should see the following response:

```bash
⌛ Making the jump to hyperspace...
ℹ️ Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr-kubernetes/#install-with-helm-advanced

ℹ️ Container images will be pulled from Docker Hub
✅ Deploying the Dapr control plane with latest version to your cluster...
✅ Deploying the Dapr dashboard with latest version to your cluster...
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k' in your terminal. To get started, go here: https://docs.dapr.io/getting-started
```
## Troubleshooting
### Access permissions
If you face any access permissions, make sure you are using the same AWS profile that was used to create the cluster. If needed, update the kubectl configuration with the correct profile:
If you run into access permission issues, make sure you are using the same AWS profile that was used to create the cluster. If needed, update the kubectl configuration with the correct profile; see [this AWS article](https://repost.aws/knowledge-center/eks-api-server-unauthorized-error) for more information:
```bash
aws eks --region [your_aws_region] update-kubeconfig --name [your_eks_cluster_name] --profile [your_profile_name]
2 changes: 1 addition & 1 deletion daprdocs/content/en/operations/observability/_index.md
@@ -6,7 +6,7 @@ weight: 60
description: See and measure the message calls to components and between networked services
---

[The following overview video and demo](https://www.youtube.com/live/0y7ne6teHT4?si=3bmNSSyIEIVSF-Ej&t=9931) demonstrates how observability in Dapr works.
[The following overview video and demo](https://www.youtube.com/watch?v=0y7ne6teHT4&t=12652s) demonstrates how observability in Dapr works.

<iframe width="560" height="315" src="https://www.youtube.com/embed/0y7ne6teHT4?si=iURnLk57t2zN-7zP&amp;start=12653" title="YouTube video player" style="padding-bottom:25px;" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

@@ -63,6 +63,10 @@ This component supports **output binding** with the following operations:
- `delete` : [Delete blob](#delete-blob)
- `list`: [List blobs](#list-blobs)

The Blob storage component's **input binding** triggers and pushes events using [Azure Event Grid]({{< ref eventgrid.md >}}).

Refer to the [Reacting to Blob storage events](https://learn.microsoft.com/azure/storage/blobs/storage-blob-event-overview) guide for setup details and more information.
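
Dapr delivers input binding events to your application over HTTP, by default with a `POST` to a route matching the binding component's name. Below is a minimal sketch of such a handler, assuming a Flask app and a binding component named `blobstorage`; both the framework and the component name are illustrative, not prescribed by this page.

```python
# Sketch: receive Azure Blob Storage input binding events from Dapr.
# Assumes a binding component named "blobstorage"; Dapr POSTs each event
# to the application route that matches the component name.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/blobstorage', methods=['POST', 'OPTIONS'])
def handle_blob_event():
    if request.method == 'OPTIONS':
        # Dapr probes this route at startup to confirm the app wants the binding
        return '', 200
    event = request.get_json(silent=True)
    print(f"Received blob storage event: {event}")
    return jsonify({}), 200

if __name__ == '__main__':
    app.run(port=6002)
```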

### Create blob

To perform a create blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body:
@@ -90,6 +90,21 @@ This component supports **output binding** with the following operations:

- `create`: publishes a message on the Event Grid topic

## Receiving events

You can use the Event Grid binding to receive events from a variety of sources and actions. [Learn more about all of the available event sources and handlers that work with Event Grid.](https://learn.microsoft.com/azure/event-grid/overview)

In the following table, you can find the list of Dapr components that can raise events.

| Event sources | Dapr components |
| ------------- | --------------- |
| [Azure Blob Storage](https://learn.microsoft.com/azure/storage/blobs/) | [Azure Blob Storage binding]({{< ref blobstorage.md >}}) <br/>[Azure Blob Storage state store]({{< ref setup-azure-blobstorage.md >}}) |
| [Azure Cache for Redis](https://learn.microsoft.com/azure/azure-cache-for-redis/cache-overview) | [Redis binding]({{< ref redis.md >}}) <br/>[Redis pub/sub]({{< ref setup-redis-pubsub.md >}}) |
| [Azure Event Hubs](https://learn.microsoft.com/azure/event-hubs/event-hubs-about) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}}) <br/>[Azure Event Hubs binding]({{< ref eventhubs.md >}}) |
| [Azure IoT Hub](https://learn.microsoft.com/azure/iot-hub/iot-concepts-and-iot-hub) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}}) <br/>[Azure Event Hubs binding]({{< ref eventhubs.md >}}) |
| [Azure Service Bus](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-messaging-overview) | [Azure Service Bus binding]({{< ref servicebusqueues.md >}}) <br/>[Azure Service Bus pub/sub topics]({{< ref setup-azure-servicebus-topics.md >}}) and [queues]({{< ref setup-azure-servicebus-queues.md >}}) |
| [Azure SignalR Service](https://learn.microsoft.com/azure/azure-signalr/signalr-overview) | [SignalR binding]({{< ref signalr.md >}}) |

## Microsoft Entra ID credentials

The Azure Event Grid binding requires a Microsoft Entra ID application and service principal for two reasons:
@@ -142,7 +157,7 @@ Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"

> Note: if your directory does not have a Service Principal for the application "Microsoft.EventGrid", you may need to run the command `Connect-MgGraph` and sign in as an admin for the Microsoft Entra ID tenant (this is related to permissions on the Microsoft Entra ID directory, and not the Azure subscription). Otherwise, please ask your tenant's admin to sign in and run this PowerShell command: `New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7"` (the UUID is a constant)
### Testing locally
## Testing locally

- Install [ngrok](https://ngrok.com/download)
- Run locally using a custom port, for example `9000`, for handshakes
@@ -160,7 +175,7 @@ ngrok http --host-header=localhost 9000
dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run
```

### Testing on Kubernetes
## Testing on Kubernetes

Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren't accepted. In order to enable traffic from the public internet to your app's Dapr sidecar you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/).

@@ -53,6 +53,12 @@ spec:
    value: 2.0.0
  - name: disableTls # Optional. Disable TLS. This is not safe for production!! You should read the `Mutual TLS` section for how to use TLS.
    value: "true"
  - name: consumerFetchMin # Optional. Advanced setting. The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available.
    value: 1
  - name: consumerFetchDefault # Optional. Advanced setting. The default number of message bytes to fetch from the broker in each request.
    value: 2097152
  - name: channelBufferSize # Optional. Advanced setting. The number of events to buffer in internal and external channels.
    value: 512
  - name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
    value: http://localhost:8081
  - name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
@@ -111,7 +117,9 @@ spec:
| schemaLatestVersionCacheTTL | N | When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min | `5m` |
| clientConnectionTopicMetadataRefreshInterval | N | The interval for the client connection's topic metadata to be refreshed with the broker as a Go duration. Defaults to `9m`. | `"4m"` |
| clientConnectionKeepAliveInterval | N | The maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely. | `"4m"` |
| consumerFetchMin | N | The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available. The default is `1`, as `0` causes the consumer to spin when no messages are available. Equivalent to the JVM's `fetch.min.bytes`. | `"2"` |
| consumerFetchDefault | N | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` |
| channelBufferSize | N | The number of events to buffer in internal and external channels. This permits the producer and consumer to continue processing some messages in the background while user code is working, greatly improving throughput. Defaults to `256`. | `"512"` |
| heartbeatInterval | N | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to "3s". | `"5s"` |
| sessionTimeout | N | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s". | `"20s"` |
| escapeHeaders | N | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` |
@@ -460,7 +468,7 @@ Apache Kafka supports the following bulk metadata options:

When invoking the Kafka pub/sub, it's possible to provide an optional partition key by using the `metadata` query param in the request URL.

The param name is `partitionKey`.
The param name can be either `partitionKey` or `__key`.

Example:

@@ -476,7 +484,7 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.partiti

### Message headers

All other metadata key/value pairs (that are not `partitionKey`) are set as headers in the Kafka message. Here is an example setting a `correlationId` for the message.
All other metadata key/value pairs (that are not `partitionKey` or `__key`) are set as headers in the Kafka message. Here is an example setting a `correlationId` for the message.

```shell
curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correlationId=myCorrelationID&metadata.partitionKey=key1 \
@@ -487,7 +495,51 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correla
}
}'
```
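
The same publish call can be made from application code as well. Here is a minimal Python sketch using `requests` against the Dapr HTTP API; the sidecar port, pub/sub name, topic, and payload simply mirror the curl examples above.

```python
# Sketch: publish to the Kafka pub/sub through the Dapr HTTP API.
# partitionKey (or __key) selects the Kafka partition key; any other
# metadata pair, such as correlationId, becomes a Kafka message header.
import requests

DAPR_HTTP_PORT = 3500  # default sidecar HTTP port assumed

resp = requests.post(
    f"http://localhost:{DAPR_HTTP_PORT}/v1.0/publish/myKafka/myTopic",
    params={
        "metadata.partitionKey": "key1",
        "metadata.correlationId": "myCorrelationID",
    },
    json={"data": {"message": "Hi"}},
)
resp.raise_for_status()
```
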
### Kafka pub/sub special message headers received on the consumer side

When consuming messages, special message metadata is automatically passed as headers. These are:
- `__key`: the message key if available
- `__topic`: the topic for the message
- `__partition`: the partition number for the message
- `__offset`: the offset of the message in the partition
- `__timestamp`: the timestamp for the message

You can access them within the consumer endpoint as follows:
{{< tabs "Python (FastAPI)" >}}

{{% codetab %}}

```python
from typing import Annotated

from fastapi import APIRouter, Body, FastAPI, Header, Response, status

app = FastAPI()
router = APIRouter()


@router.get('/dapr/subscribe')
def subscribe():
    subscriptions = [{'pubsubname': 'pubsub',
                      'topic': 'my-topic',
                      'route': 'my_topic_subscriber',
                      }]
    return subscriptions


# The special Kafka metadata (__key, __offset, and so on) arrive as HTTP headers
@router.post('/my_topic_subscriber')
def my_topic_subscriber(
        key: Annotated[str, Header(alias="__key")],
        offset: Annotated[int, Header(alias="__offset")],
        event_data=Body()):
    print(f"key={key} - offset={offset} - data={event_data}", flush=True)
    return Response(status_code=status.HTTP_200_OK)


app.include_router(router)
```

{{% /codetab %}}

{{< /tabs >}}
## Receiving message headers with special characters

The consumer application may be required to receive message headers that include special characters, which may cause HTTP protocol validation errors.
Binary file modified daprdocs/static/images/actors-quickstart/actors-quickstart.png
Binary file modified daprdocs/static/images/building_blocks.png
Binary file modified daprdocs/static/images/concepts-components.png
Binary file modified daprdocs/static/images/crypto-quickstart.png
Binary file modified daprdocs/static/images/observability-sidecar.png
Binary file modified daprdocs/static/images/observability-tracing.png
Binary file modified daprdocs/static/images/overview-kubernetes.png
Binary file modified daprdocs/static/images/overview-sidecar-apis.png
Binary file modified daprdocs/static/images/overview-sidecar-model.png
Binary file modified daprdocs/static/images/overview-standalone.png
Binary file modified daprdocs/static/images/overview-vms-hosting.png
Binary file modified daprdocs/static/images/overview.png
Binary file modified daprdocs/static/images/pubsub-quickstart/pubsub-diagram.png
Binary file modified daprdocs/static/images/security-dapr-API-scoping.png
Binary file modified daprdocs/static/images/security-end-to-end-communication.png
Binary file modified daprdocs/static/images/security-mTLS-dapr-system-services.png
Binary file modified daprdocs/static/images/security-mTLS-sentry-kubernetes.png
Binary file modified daprdocs/static/images/security-mTLS-sentry-selfhosted.png
Binary file modified daprdocs/static/images/service-invocation-overview.png
Binary file modified daprdocs/static/images/state-management-quickstart.png
Binary file modified daprdocs/static/images/workflow-quickstart-overview.png
