25 changes: 13 additions & 12 deletions config/_default/menus/main.en.yaml
@@ -8789,7 +8789,7 @@ menu:
parent: rum
weight: 13
- name: Product Analytics
url: product_analytics
url: product_analytics/
pre: product-analytics
identifier: product_analytics
parent: digital_experience_heading
@@ -8840,30 +8840,31 @@ menu:
identifier: pa_profiles
weight: 4
- name: Experiments
url: product_analytics/experimentation/
parent: product_analytics
url: experiments/
pre: experiment-wui
parent: digital_experience_heading
identifier: pa_experiments
weight: 5
weight: 50000
- name: Define Metrics
url: product_analytics/experimentation/defining_metrics
url: experiments/defining_metrics
parent: pa_experiments
identifier: pa_experiments_metrics
weight: 501
weight: 1
- name: Reading Experiment Results
url: product_analytics/experimentation/reading_results
url: experiments/reading_results
parent: pa_experiments
identifier: pa_experiments_results
weight: 502
weight: 2
- name: Minimum Detectable Effects
url: product_analytics/experimentation/minimum_detectable_effect
url: experiments/minimum_detectable_effect
parent: pa_experiments
identifier: pa_experiments_mde
weight: 503
weight: 3
- name: Troubleshooting
url: product_analytics/experimentation/troubleshooting
url: experiments/troubleshooting
parent: pa_experiments
identifier: pa_experiments_troubleshooting
weight: 504
weight: 4
- name: Guides
url: product_analytics/guide/
parent: product_analytics
@@ -1,11 +1,13 @@
---
title: Planning and Launching Experiments
description: Experimentation allows you to measure the causal relationship new experiences or features have on user outcomes.
description: Use Datadog Experiments to measure the causal relationship that new experiences or features have on user outcomes.
aliases:
- /product_analytics/experimentation/
further_reading:
- link: "https://www.datadoghq.com/blog/datadog-product-analytics"
tag: "Blog"
text: "Make data-driven design decisions with Product Analytics"
- link: "/product_analytics/experimentation/defining_metrics"
- link: "/experiments/defining_metrics"
tag: "Documentation"
text: "Defining Experiment Metrics"
---
@@ -15,11 +17,10 @@ Datadog Experiments is in Preview. Complete the form to request access.
{{< /callout >}}

## Overview
Datadog Experimentation allows you to measure the causal relationship that new experiences and features have on user outcomes. To do this, Experimentation randomly allocates traffic between two or more variations, using one of the variations as a control group.
Datadog Experiments allows you to measure the causal relationship that new experiences and features have on user outcomes. Experiments uses [Feature Flags][4] to randomly allocate traffic between two or more variations, using one of the variations as a control group.

This page walks you through planning and launching your experiments.
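Randomized traffic allocation of this kind is commonly implemented with deterministic hash-based bucketing, so a given user always lands in the same variant. The sketch below is illustrative only and is not Datadog's implementation; the `assign_variant` helper and its 50/50 split are assumptions for the example.

```python
import hashlib

def assign_variant(experiment_key: str, user_id: str,
                   variants=("control", "treatment"),
                   weights=(0.5, 0.5)) -> str:
    """Deterministically bucket a user into a variant (illustrative sketch).

    Hashing (experiment_key, user_id) gives every user a stable position
    in [0, 1); cumulative weight boundaries carve that range into
    variants, so a 50/50 split sends half of the traffic to each arm.
    """
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    position = int(digest[:8], 16) / 0xFFFFFFFF  # stable float in [0, 1]
    cumulative = 0.0
    for variant, weight in zip(variants, weights):
        cumulative += weight
        if position < cumulative:
            return variant
    return variants[-1]

# The same user always sees the same variant across sessions:
assert assign_variant("checkout-redesign", "user-42") == assign_variant("checkout-redesign", "user-42")
```

Because assignment depends only on the experiment key and the user ID, no server-side state is needed to keep a user's experience consistent.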


## Setup
To create, configure, and launch your experiment, complete the following steps:

@@ -29,15 +30,13 @@ To create, configure, and launch your experiment, complete the following steps:
2. Click **+ Create Experiment**.
3. Enter your experiment name and hypothesis.

{{< img src="/product_analytics/experiment/exp_create_experiment.png" alt="create an experiment and add a hypothesis for the experiment." style="width:80%;" >}}

{{< img src="/product_analytics/experiment/exp_create_experiment.png" alt="The experiment creation form with fields for experiment name and hypothesis." style="width:80%;" >}}

### Step 2 - Add metrics

After you’ve created an experiment, add your primary metric and optional guardrails. See [Defining Metrics][2] for details on how to create metrics.

{{< img src="/product_analytics/experiment/exp_decision_metrics1.png" alt="create an experiment and add a hypothesis for the experiment." style="width:80%;" >}}

{{< img src="/product_analytics/experiment/exp_decision_metrics1.png" alt="The metrics configuration panel with options for primary metric and guardrails." style="width:80%;" >}}

#### Add a sample size calculation (optional)

@@ -48,44 +47,29 @@ After selecting your experiment’s metrics, use the optional sample size calcul

1. Click **Run calculation** to see the [Minimum Detectable Effects][3] (MDE) for your metrics. The MDE is the smallest difference between your experiment’s variants that you are able to detect.

{{< img src="/product_analytics/experiment/exp_sample_size.png" alt="Sleect an entrypoint event to run a sample size calculation" style="width:90%;" >}}
{{< img src="/product_analytics/experiment/exp_sample_size.png" alt="Select an entry point event to run a sample size calculation" style="width:90%;" >}}
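The relationship between traffic, metric variance, and the MDE can be approximated with the standard two-sample power calculation. This is a textbook sketch for intuition, not Datadog's exact engine; the `mde` helper and its defaults (5% significance, 80% power) are assumptions for illustration.

```python
from math import sqrt
from statistics import NormalDist

def mde(sample_size_per_variant: int, std_dev: float, baseline_mean: float,
        alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate the smallest relative lift detectable with the given traffic.

    Standard two-sample z-approximation: the absolute detectable effect is
    (z_{alpha/2} + z_power) * sqrt(2 * sigma^2 / n), expressed here relative
    to the baseline mean. Illustrative only, not Datadog's implementation.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    absolute_effect = (z_alpha + z_power) * sqrt(2 * std_dev**2 / sample_size_per_variant)
    return absolute_effect / baseline_mean

# Example: a 10% conversion rate (Bernoulli sigma ~0.3) with 10,000 users
# per arm yields a relative MDE of roughly 12%.
```

Note how the MDE shrinks with the square root of traffic: quadrupling the sample size only halves the smallest effect you can detect.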

### Step 3 - Launch your experiment

After specifying your metrics, you can launch your experiment.

1. Select a Feature Flag that captures the variants you want to test. If you have not yet created a feature flag, see the [Getting Started with Feature Flags][4] page.

1. Click **Set up experiment on feature flag** to specify how you want to roll out your experiment. You can either launch the experiment to all traffic, or schedule a gradual rollout.

1. Click **Set Up Experiment on Feature Flag** to specify how you want to roll out your experiment. You can either launch the experiment to all traffic, or schedule a gradual rollout.

{{< img src="/product_analytics/experiment/exp_feature_flag.png" alt="Set up an experiment on a Feature Flag." style="width:90%;" >}}
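A gradual rollout is essentially a schedule mapping dates to traffic percentages. The sketch below is purely illustrative; the `gradual_rollout` helper and its 5→20→50→100% ramp are assumptions for the example, not Datadog's rollout mechanism.

```python
from datetime import date, timedelta

def gradual_rollout(start: date, steps=(5, 20, 50, 100), days_per_step=2):
    """Yield (date, percent_of_traffic) pairs for a staged ramp-up.

    Hypothetical schedule: each step holds for `days_per_step` days
    before widening the experiment to the next traffic percentage.
    """
    for i, pct in enumerate(steps):
        yield start + timedelta(days=i * days_per_step), pct

for day, pct in gradual_rollout(date(2024, 1, 8)):
    print(f"{day}: {pct}% of traffic")
```

Ramping in stages like this limits the blast radius of a bad variant while still reaching full traffic within about a week.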


## Next steps
1. **[Defining metrics][2]**: Define the metrics you want to measure during your experimentation.
1. **[Reading Experiment Results][5]**: Review and explore your Experiment results.
1. Learn more about **[Minimum Detectable Effects][3]**: Choose an appropriately sized MDE.

1. **[Defining metrics][2]**: Define the metrics you want to measure during your experiments.
1. **[Reading Experiment Results][5]**: Review and explore your experiment results.
1. **[Minimum Detectable Effects][3]**: Choose an appropriately sized MDE.


## Further reading
{{< partial name="whats-next/whats-next.html" >}}

[1]: https://app.datadoghq.com/product-analytics/experiments
[2]: /product_analytics/experimentation/defining_metrics
[3]: /product_analytics/experimentation/minimum_detectable_effect
[2]: /experiments/defining_metrics
[3]: /experiments/minimum_detectable_effect
[4]: /getting_started/feature_flags/
[5]: /product_analytics/experimentation/reading_results
[5]: /experiments/reading_results
@@ -1,20 +1,22 @@
---
title: Defining Metrics
description: Define the metrics you want to measure during your experimentation.
description: Define the metrics you want to measure during your experiments.
aliases:
- /product_analytics/experimentation/defining_metrics/
further_reading:
- link: "https://www.datadoghq.com/blog/datadog-product-analytics/"
tag: "Blog"
text: "Make data-driven design decisions with Product Analytics"
- link: "/product_analytics/experimentation/reading_results"
- link: "/experiments/reading_results"
tag: "Documentation"
text: "Reading Experiment Results"
---

## Overview

Define the metrics you want to measure during your experimentation. Metrics can be built using Product Analytics and Real User Monitoring (RUM) data.
Define the metrics you want to measure during your experiments. Metrics can be built using Product Analytics and Real User Monitoring (RUM) data.

<div class="alert alert-info"> In order to create a metric, you must have Datadog’s client SDK installed in your application and be actively capturing data.
<div class="alert alert-info"> To create a metric, you must have Datadog’s client SDK installed in your application and be actively capturing data.
</div>

## Create your first metric
@@ -35,7 +37,6 @@ After you’ve selected your event of interest, you can specify an aggregation m

{{< img src="/product_analytics/experiment/exp_default_metric_agg.png" alt="Dropdown menu to select the method of aggregation for metrics." style="width:90%;" >}}


### Default metric normalization

All metrics are normalized by the number of enrolled subjects. For example, a **count of unique users** metric is computed as:
@@ -68,17 +69,13 @@ For example, an e-commerce company that wants to measure the _Average Order Valu

Datadog’s statistical engine accounts for correlations between the numerator and denominator using the [delta method][2].
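For readers who want the mechanics, the first-order delta-method variance for a ratio of means can be sketched as follows. This is a textbook approximation for illustration, not Datadog's statistical engine; the `ratio_metric_variance` helper is an assumption.

```python
import numpy as np

def ratio_metric_variance(x, y):
    """Delta-method variance of the ratio of means mean(x) / mean(y).

    First-order Taylor expansion of X-bar / Y-bar around (mu_x, mu_y):
    Var ~= (1/n) * [var_x/mu_y^2 - 2*mu_x*cov_xy/mu_y^3 + mu_x^2*var_y/mu_y^4]
    The covariance term is exactly what a naive independent-samples
    calculation would miss when numerator and denominator are correlated.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(ddof=1), y.var(ddof=1)
    cov_xy = np.cov(x, y, ddof=1)[0, 1]
    return (var_x / mu_y**2
            - 2 * mu_x * cov_xy / mu_y**3
            + mu_x**2 * var_y / mu_y**4) / n
```

For an Average Order Value metric, `x` would be each subject's total revenue and `y` their order count; revenue and orders are strongly correlated, so the covariance term materially tightens the variance estimate.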


## Add filters
You can also add filters to your metrics, similar to other [Product Analytics dashboards][3]. For instance, you might want to filter page views based on referring URL or UTM parameters. Similarly, you might want to filter actions to a specific page or value of a custom attribute. As you add filters, you can check metric values in real time using the chart on the right.


{{< img src="/product_analytics/experiment/exp_filter_by.png" alt="Filter flow to scope your metric by specific properties." style="width:90%;" >}}



## Advanced options
Datadog supports several advanced options specific to experimentation:
Datadog Experiments supports several advanced options:

`Timeframe filters`
: - By default, Datadog includes all events between a user's first exposure and the end of the experiment. If you want to measure a time-boxed value such as “sessions within 7 days”, you can add a timeframe filter.
@@ -92,10 +89,6 @@ Datadog supports several advanced options specific to experimentation:
: - Real-world data often includes extreme outliers that can impact experiment results.
- Use this setting to set a threshold at which data is truncated. For instance, set a 99% upper bound to truncate all results at the metric’s 99th percentile.
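Percentile-based truncation can be sketched in a few lines. This is illustrative only; the `truncate_upper` helper is an assumption, not how Datadog implements the setting.

```python
import numpy as np

def truncate_upper(values, upper_percentile: float = 99.0):
    """Cap extreme values at the given upper percentile (illustrative sketch).

    A handful of outliers (for example, one $50,000 order) can otherwise
    dominate the metric's variance and mask a real treatment effect.
    """
    values = np.asarray(values, dtype=float)
    cap = np.percentile(values, upper_percentile)
    return np.minimum(values, cap)

order_values = [18, 20, 22, 25, 30, 5000]  # one extreme outlier
capped = truncate_upper(order_values, 80)   # all values above p80 become p80
```

Truncation trades a small amount of bias for a large reduction in variance, which usually makes experiments more sensitive, not less.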

## Further reading
{{< partial name="whats-next/whats-next.html" >}}

@@ -1,6 +1,8 @@
---
title: Minimum Detectable Effects
description: Determine the smallest detectable difference that may result in a statistically significant experiment result.
aliases:
- /product_analytics/experimentation/minimum_detectable_effect/
further_reading:
- link: "https://www.datadoghq.com/blog/datadog-product-analytics/"
tag: "Blog"
@@ -40,10 +42,5 @@ If the MDE is too small, the experiment may require excessive traffic or run tim

A common way to choose an MDE is to examine results from past experiments. For example, if historical experiments in a particular domain typically yield effects of 5–10%, selecting an MDE near the lower end of that range (such as 5%) can be a reasonable starting point.

## Further reading
{{< partial name="whats-next/whats-next.html" >}}
@@ -1,6 +1,8 @@
---
title: Reading Experiment Results
description: Read and understand the results of your Experimentation.
description: Read and understand the results of your experiments.
aliases:
- /product_analytics/experimentation/reading_results/
further_reading:
- link: "https://www.datadoghq.com/blog/datadog-product-analytics/"
tag: "Blog"
@@ -12,9 +14,9 @@ further_reading:

## Overview

After [launching your experiment][1], Datadog immediately begins calculating results for your selected metrics. You can add additional metrics at any time, organize metrics into groups, and explore related user sessions to understand the impact of each variant.
After [launching your experiment][1], Datadog begins calculating results for your selected metrics. You can add additional metrics, organize metrics into groups, and explore related user sessions to understand the impact of each variant.

{{< img src="/product_analytics/experiment/exp_reading_exps_overview.png" alt="A view of the metrics and their variations in the control and experiment groups ." style="width:90%;" >}}
{{< img src="/product_analytics/experiment/exp_reading_exps_overview.png" alt="A view of the metrics and their variations in the control and experiment groups." style="width:90%;" >}}

## Confidence intervals
For each metric, Datadog shows the average per-subject value (typically per user) for both the control and treatment variants. It also reports the relative lift and the associated confidence interval.
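As a rough illustration of how a relative lift and its confidence interval can be derived from two samples, here is a normal-approximation sketch. It is not Datadog's statistical engine (which also accounts for covariance via the delta method, as described in Defining Metrics); the `relative_lift_ci` helper is an assumption.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def relative_lift_ci(control, treatment, confidence=0.95):
    """Relative lift of treatment over control with a normal-approximation CI.

    Lift = (mean_t - mean_c) / mean_c. The standard error of the lift is
    approximated by propagating each variant's standard error through the
    ratio (first-order expansion). Illustrative sketch only.
    """
    mu_c, mu_t = mean(control), mean(treatment)
    se_c = stdev(control) / sqrt(len(control))
    se_t = stdev(treatment) / sqrt(len(treatment))
    lift = (mu_t - mu_c) / mu_c
    # Approximate SE of the ratio mu_t/mu_c (shifting by -1 leaves SE unchanged):
    se_lift = sqrt(se_t**2 / mu_c**2 + (mu_t**2 * se_c**2) / mu_c**4)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return lift, (lift - z * se_lift, lift + z * se_lift)
```

If the resulting interval lies entirely above zero, the observed lift is statistically significant at the chosen confidence level.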
@@ -34,18 +36,13 @@ If the entire confidence interval is above zero, then the result is statisticall
## Exploring results
To dive deeper into experiment results, hover over a metric and click **Chart**. This gives you the option to compare the experiment’s impact across different user segments.


### Segment-level results
Subject level properties are based on attributes at the initial time of exposure (for example, region, new vistor vs repeat visitor etc.). This is useful for understanding when certain cohorts of users reacted differently to the new experience.

Subject-level properties are based on attributes at the time of initial exposure (for example, region, or new versus repeat visitor). This is useful for understanding whether certain cohorts of users reacted differently to the new experience.

{{< img src="/product_analytics/experiment/exp_segment_view.png" alt="Segment-level view of the metric 'click on ADD TO CART' split by four country ISO codes." style="width:90%;" >}}

## Further reading
{{< partial name="whats-next/whats-next.html" >}}

[1]: /product_analytics/experimentation/
[1]: /experiments/
@@ -1,6 +1,8 @@
---
title: Troubleshooting
description: Troubleshoot issues when running experiments.
aliases:
- /product_analytics/experimentation/troubleshooting/
further_reading:
- link: "https://www.datadoghq.com/blog/datadog-product-analytics"
tag: "Blog"