[BD-3161] Feature flag experiments | Release Date TBD #7507

Draft · wants to merge 4 commits into base: develop

Changes from 2 commits
@@ -1,6 +1,6 @@
 ---
 nav_title: Feature Flags in Canvas
-page_order: 30
+page_order: 40
 noindex: true
 tool: Feature Flags
 platform:
55 changes: 39 additions & 16 deletions _docs/_developer_guide/platform_wide/feature_flags/experiments.md
@@ -1,7 +1,7 @@
 ---
 nav_title: Feature Flag Experiments
 article_title: Feature Flag Experiments
-page_order: 40
+page_order: 30
 description: "Feature flag experiments let you A/B test changes to your applications to optimize conversion rates."
 tool: Feature Flags
 platform:
@@ -11,7 +11,7 @@ platform:

 ---

-# Creating a feature flag experiment
+# Feature flag experiments

 > Feature flag experiments let you A/B test changes to your applications to optimize conversion rates. Marketers can use feature flags to determine whether a new feature positively or negatively impacts conversion rates, or which set of feature flag properties is most optimal.

@@ -32,46 +32,69 @@ if (featureFlag?.enabled) {

```

-## Step 1: Create an experiment
+## Creating a feature flag experiment
+
+### Step 1: Create an experiment

1. Go to **Messaging** > **Campaigns** and click **+ Create Campaign**.
2. Select **Feature Flag Experiment**.
3. Name your campaign something clear and meaningful.

-## Step 2: Add experiment variants
-
-Next, create variations. For each variant, choose the feature flag you want to turn on or off and review the assigned properties.
-
-To test the impact of your feature, use variants to split traffic into two or more groups. Name one group "My control group" and turn its feature flags off.
+### Step 2: Add experiment variants
+
+Next, create variations. For each variant, choose the feature flag you want to turn on or off and review the assigned properties. To test the impact of your feature, use variants to split traffic into two or more groups. Name one group "My control group" and turn its feature flags off.
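In application code, each variant only changes what the SDK returns for the flag, so a single gating check covers the control group and every treatment group. A minimal sketch, assuming the Braze Web SDK's `getFeatureFlag` (stubbed below so the snippet is self-contained; the flag ID `expanded-profile` is hypothetical):

```javascript
// Stub standing in for the Braze Web SDK. In a real app, getFeatureFlag
// returns the flag state for whichever experiment variant this user received.
const braze = {
  getFeatureFlag: (id) => ({ id, enabled: true }),
};

function renderProfile() {
  const flag = braze.getFeatureFlag("expanded-profile");
  // Control-group users have the flag turned off, so they fall through to
  // the existing experience, the same as users outside the experiment.
  return flag?.enabled ? "expanded profile" : "classic profile";
}
```

Because the control variant simply turns the flag off, no extra branching is needed for it.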

-### Overwriting properties
+#### Overwriting properties

-Though you specified default properties when you originally set up your feature flag, you can choose to overwrite those values for users who receive a specific campaign variant.
+Though you specified default properties when you originally set up your feature flag, you can choose to overwrite those values for users who receive a specific campaign variant. To edit, add, or remove additional default properties, edit the feature flag itself from **Messaging** > **Feature Flags**.
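Because variants can overwrite property values per user, read properties from the flag at render time rather than hard-coding them. A sketch, assuming the Braze Web SDK's typed property getters (stubbed here; the `banner_color` key and its value are hypothetical examples of a variant-level overwrite):

```javascript
// Stub standing in for the Braze Web SDK; a real FeatureFlag object also
// exposes getters for other property types (number, boolean, etc.).
const braze = {
  getFeatureFlag: (id) => ({
    id,
    enabled: true,
    getStringProperty: (key) => (key === "banner_color" ? "#8000FF" : undefined),
  }),
};

const flag = braze.getFeatureFlag("storefront-banner");
// Users in different variants can receive different values for the same key,
// with the flag's default properties as the fallback.
const bannerColor = flag.enabled ? flag.getStringProperty("banner_color") : null;
```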

![][image1]{: style="max-width:80%"}

-To edit, add, or remove additional default properties, edit the feature flag itself from **Messaging** > **Feature Flags**.

-## Step 3: Choose users to target
+### Step 3: Choose users to target

Next, you need to [target users][4] by choosing segments or filters to narrow down your audience. Segment membership is calculated when feature flags are refreshed for a given user.

{% alert note %}
Your target audience will be eligible for the feature flag as soon as you save a rollout greater than 0%. Changes are made available once your app refreshes feature flags, or when a new session is started.
{% endalert %}
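Since segment membership is evaluated when flags refresh, apps typically refresh after an event that can change membership (such as login) instead of waiting for a new session. A sketch, assuming the Braze Web SDK's `refreshFeatureFlags` with success and error callbacks (stubbed here so the snippet runs standalone):

```javascript
// Stub standing in for the Braze Web SDK refresh call. A real refresh hits
// the Braze backend and re-evaluates segment membership; the stub just
// reports success.
let lastRefreshSucceeded = null;
const braze = {
  refreshFeatureFlags: (onSuccess, onError) => {
    onSuccess();
  },
};

// After something that can change targeting (e.g. the user logs in), pull
// the latest flag and variant assignments.
braze.refreshFeatureFlags(
  () => { lastRefreshSucceeded = true; },
  () => { lastRefreshSucceeded = false; },
);
```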

-## Step 4: Distribute variants
+### Step 4: Distribute variants

Choose the percentage distribution for your experiment. As a best practice, you should not change the distribution once your experiment has been launched.

-## Step 5: Assign conversions
+### Step 5: Assign conversions

Braze lets you track how often users perform specific actions, called [conversion events][5], after receiving a campaign. Specify up to a 30-day window during which a conversion will be counted if the user takes the specified action.
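On the SDK side, a conversion is counted when the tracked action is logged within the window. A sketch, assuming the Braze Web SDK's custom event logger (stubbed here; the `completed_purchase` event name and its properties are hypothetical):

```javascript
// Stub standing in for the Braze Web SDK's custom event logger.
const loggedEvents = [];
const braze = {
  logCustomEvent: (name, properties) => loggedEvents.push({ name, properties }),
};

// If "completed_purchase" is the experiment's assigned conversion event,
// logging it within the conversion window counts this user as converted
// for whichever variant they received.
braze.logCustomEvent("completed_purchase", { cart_value: 42.5 });
```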

-## Step 6: Review and launch
+### Step 6: Review and launch

After you’ve finished building your experiment, review its details, then click **Launch Experiment**. When your experiment is finished, you can [analyze the results](#reviewing-the-results).

## Reviewing the results

After your feature flag experiment is finished, you can review your results for...
> **Review comment (Contributor, Author):** @bkasman95 qq: what are some of the metrics that a user can find here? just want to list a couple as an example here.

Go to **Messaging** > **Campaigns** and select the campaign with your feature flag experiment.

### Campaign analytics

**Campaign Analytics** offers a high-level overview of your experiment's performance, such as:

- The total number of impressions
- The number of unique impressions
- The primary conversion rate
- The total revenue generated by the message
- The estimated audience

You can also view the experiment's settings for delivery, audience, and conversion.

![ALT_TEXT]()

### Feature flag experiment performance

-After you’ve finished building the last of your experiment, review its details, then click **Launch Experiment**.
+**Feature Flags Experiments Performance** shows how well your message performed across various dimensions. The specific metrics you see will vary depending on your chosen messaging channel, and whether you're running a multivariate test. To see the feature flag values associated with each variant, select **Preview**.

![ALT_TEXT]()

[1]: {{site.baseurl}}/user_guide/administrative/manage_your_braze_users/teams/
[2]: {{site.baseurl}}/user_guide/administrative/app_settings/manage_app_group/tags/
4 changes: 4 additions & 0 deletions _docs/_user_guide/data_and_analytics/report_metrics.md
@@ -115,6 +115,10 @@ Total number of clicks on Button 2 of the message.

{% api %}

+### Campaign analytics
+
+The performance of the message across various dimensions. The metrics shown depend on the selected messaging channel, and whether the [Feature Flag experiment]({{site.baseurl}}/developer_guide/platform_wide/feature_flags/experiments/#campaign-analytics) is a multivariate test.
> **Review comment (Contributor, Author):** @bkasman95: what is meant by "dimensions"? is that a term that we widely use throughout the product/docs? or is there another/more specific term? I'm unfamiliar with marketing lingo, so i'm not sure either way.

### Choices Submitted

{% apitags %}
@@ -18,6 +18,9 @@ guide_featured_list:
- name: Email
link: /docs/user_guide/message_building_by_channel/email/reporting_and_analytics/email_reporting/
image: /assets/img/braze_icons/mail-01.svg
+- name: Feature Flags
+  link: /docs/developer_guide/platform_wide/feature_flags/experiments/
+  image: /assets/img/braze_icons/whatsapp.svg
- name: In-App Messages
link: /docs/user_guide/message_building_by_channel/in-app_messages/reporting/
image: /assets/img/braze_icons/message-text-circle-01.svg