diff --git a/config/_default/menus/main.en.yaml b/config/_default/menus/main.en.yaml
index 21f512e55cf..9501d9d3b46 100644
--- a/config/_default/menus/main.en.yaml
+++ b/config/_default/menus/main.en.yaml
@@ -8789,7 +8789,7 @@ menu:
     parent: rum
     weight: 13
   - name: Product Analytics
-    url: product_analytics
+    url: product_analytics/
     pre: product-analytics
     identifier: product_analytics
     parent: digital_experience_heading
@@ -8840,30 +8840,31 @@ menu:
     identifier: pa_profiles
     weight: 4
   - name: Experiments
-    url: product_analytics/experimentation/
-    parent: product_analytics
+    url: experiments/
+    pre: experiment-wui
+    parent: digital_experience_heading
     identifier: pa_experiments
-    weight: 5
+    weight: 50000
   - name: Define Metrics
-    url: product_analytics/experimentation/defining_metrics
+    url: experiments/defining_metrics
     parent: pa_experiments
     identifier: pa_experiments_metrics
-    weight: 501
+    weight: 1
   - name: Reading Experiment Results
-    url: product_analytics/experimentation/reading_results
+    url: experiments/reading_results
     parent: pa_experiments
     identifier: pa_experiments_results
-    weight: 502
+    weight: 2
   - name: Minimum Detectable Effects
-    url: product_analytics/experimentation/minimum_detectable_effect
+    url: experiments/minimum_detectable_effect
     parent: pa_experiments
     identifier: pa_experiments_mde
-    weight: 503
+    weight: 3
   - name: Troubleshooting
-    url: product_analytics/experimentation/troubleshooting
+    url: experiments/troubleshooting
     parent: pa_experiments
     identifier: pa_experiments_troubleshooting
-    weight: 504
+    weight: 4
   - name: Guides
     url: product_analytics/guide/
     parent: product_analytics
diff --git a/content/en/product_analytics/experimentation/_index.md b/content/en/experiments/_index.md
similarity index 70%
rename from content/en/product_analytics/experimentation/_index.md
rename to content/en/experiments/_index.md
index 015e2fffb7e..67da29ba3fa 100644
--- a/content/en/product_analytics/experimentation/_index.md
+++ b/content/en/experiments/_index.md
@@ -1,11 +1,13 @@
 ---
 title: Planning and Launching Experiments
-description: Experimentation allows you to measure the causal relationship new experiences or features have on user outcomes.
+description: Use Datadog Experiments to measure the causal relationship that new experiences or features have on user outcomes.
+aliases:
+  - /product_analytics/experimentation/
 further_reading:
 - link: "https://www.datadoghq.com/blog/datadog-product-analytics"
   tag: "Blog"
   text: "Make data-driven design decisions with Product Analytics"
-- link: "/product_analytics/experimentation/defining_metrics"
+- link: "/experiments/defining_metrics"
   tag: "Documentation"
   text: "Defining Experiment Metrics"
 ---
@@ -15,11 +17,10 @@ Datadog Experiments is in Preview. Complete the form to request access.
 {{< /callout >}}
 
 ## Overview
-Datadog Experimentation allows you to measure the causal relationship that new experiences and features have on user outcomes. To do this, Experimentation randomly allocates traffic between two or more variations, using one of the variations as a control group.
+Datadog Experiments allows you to measure the causal relationship that new experiences and features have on user outcomes. Experiments uses [Feature Flags][4] to randomly allocate traffic between two or more variations, using one of the variations as a control group.
 
 This page walks you through planning and launching your experiments.
 
-
 ## Setup
 
 To create, configure, and launch your experiment, complete the following steps:
@@ -29,15 +30,13 @@ To create, configure, and launch your experiment, complete the following steps:
 2. Click **+ Create Experiment**.
 3. Enter your experiment name and hypothesis.
 
-{{< img src="/product_analytics/experiment/exp_create_experiment.png" alt="create an experiment and add a hypothesis for the experiment." style="width:80%;" >}}
-
+{{< img src="/product_analytics/experiment/exp_create_experiment.png" alt="The experiment creation form with fields for experiment name and hypothesis." style="width:80%;" >}}
 
 ### Step 2 - Add metrics
 
 After you’ve created an experiment, add your primary metric and optional guardrails. See [Defining Metrics][2] for details on how to create metrics.
 
-{{< img src="/product_analytics/experiment/exp_decision_metrics1.png" alt="create an experiment and add a hypothesis for the experiment." style="width:80%;" >}}
-
+{{< img src="/product_analytics/experiment/exp_decision_metrics1.png" alt="The metrics configuration panel with options for primary metric and guardrails." style="width:80%;" >}}
 
 #### Add a sample size calculation (optional)
@@ -48,7 +47,7 @@ After selecting your experiment’s metrics, use the optional sample size calcul
 
 1. Click **Run calculation** to see the [Minimum Detectable Effects][3] (MDE) your experiment has on your metrics. The MDE is the smallest difference that you are able to detect between your experiment’s variants.
 
-{{< img src="/product_analytics/experiment/exp_sample_size.png" alt="Sleect an entrypoint event to run a sample size calculation" style="width:90%;" >}}
+{{< img src="/product_analytics/experiment/exp_sample_size.png" alt="Select an entry point event to run a sample size calculation" style="width:90%;" >}}
 
 ### Step 3 - Launch your experiment
 
@@ -56,36 +55,21 @@ After specifying your metrics, you can launch your experiment.
 
 1. Select a Feature Flag that captures the variants you want to test. If you have not yet created a feature flag, see the [Getting Started with Feature Flags][4] page.
-1. Click **Set up experiment on feature flag** to specify how you want to roll out your experiment. You can either launch the experiment to all traffic, or schedule a gradual rollout.
-
+1. Click **Set Up Experiment on Feature Flag** to specify how you want to roll out your experiment. You can either launch the experiment to all traffic, or schedule a gradual rollout.
 {{< img src="/product_analytics/experiment/exp_feature_flag.png" alt="Set up an experiment on a Feature Flag." style="width:90%;" >}}
 
-
 ## Next steps
 
-1. **[Defining metrics][2]**: Define the metrics you want to measure during your experimentation.
-1. **[Reading Experiment Results][5]**: Review and explore your Experiment results.
-1. Learn more about **[Minimum Detectable Effects][3]**: Choose an appropriately sized MDE.
-
-
-
-
-
-
-
-
-
-
-
-
-
+1. **[Defining metrics][2]**: Define the metrics you want to measure during your experiments.
+1. **[Reading Experiment Results][5]**: Review and explore your experiment results.
+1. **[Minimum Detectable Effects][3]**: Choose an appropriately sized MDE.
 
 ## Further reading
 
 {{< partial name="whats-next/whats-next.html" >}}
 
 [1]: https://app.datadoghq.com/product-analytics/experiments
-[2]: /product_analytics/experimentation/defining_metrics
-[3]: /product_analytics/experimentation/minimum_detectable_effect
+[2]: /experiments/defining_metrics
+[3]: /experiments/minimum_detectable_effect
 [4]: /getting_started/feature_flags/
-[5]: /product_analytics/experimentation/reading_results
+[5]: /experiments/reading_results
diff --git a/content/en/product_analytics/experimentation/defining_metrics.md b/content/en/experiments/defining_metrics.md
similarity index 90%
rename from content/en/product_analytics/experimentation/defining_metrics.md
rename to content/en/experiments/defining_metrics.md
index 7181f86943f..f4f9cca72bb 100644
--- a/content/en/product_analytics/experimentation/defining_metrics.md
+++ b/content/en/experiments/defining_metrics.md
@@ -1,20 +1,22 @@
 ---
 title: Defining Metrics
-description: Define the metrics you want to measure during your experimentation.
+description: Define the metrics you want to measure during your experiments.
+aliases:
+  - /product_analytics/experimentation/defining_metrics/
 further_reading:
 - link: "https://www.datadoghq.com/blog/datadog-product-analytics/"
   tag: "Blog"
   text: "Make data-driven design decisions with Product Analytics"
-- link: "/product_analytics/experimentation/reading_results"
+- link: "/experiments/reading_results"
   tag: "Documentation"
   text: "Reading Experiment Results"
 ---
 
 ## Overview
 
-Define the metrics you want to measure during your experimentation. Metrics can be built using Product Analytics and Real User Monitoring (RUM) data.
+Define the metrics you want to measure during your experiments. Metrics can be built using Product Analytics and Real User Monitoring (RUM) data.
 
-