The Test Analyses tab in the Incrementality menu shows the results of your incrementality tests. It tracks the status of each test and provides access to detailed insights, visualizations, and calculated lift metrics once the test finishes running.
Use this view to understand how your marketing activity has impacted conversions, revenue, or efficiency.
After you create and preview a test, Funnel runs a statistical model to measure incrementality. The model compares actual outcomes in the test group with predicted outcomes based on the control group, estimating the incremental lift caused by the media exposure.
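To make this comparison concrete, here is a minimal sketch of a counterfactual model of the kind described above, assuming a simple linear regression (the model type shown in the dashboard). All region data and values are hypothetical, and scikit-learn stands in for whatever Funnel actually runs internally.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical daily KPI values (e.g., conversions) from the pre-test period.
control_pre = np.array([[100], [110], [105], [98], [112], [107]])  # control-group regions
test_pre = np.array([104, 113, 108, 101, 115, 110])                # test-group regions

# Fit a simple model: predict the test group's KPI from the control group's KPI.
model = LinearRegression().fit(control_pre, test_pre)

# During the test, the control group is unexposed, so the model's prediction
# approximates what the test group would have done without the media exposure.
control_during_test = np.array([[103], [99], [111], [106]])
actual = np.array([120, 115, 131, 125])
predicted = model.predict(control_during_test)

# The estimated incremental lift is the gap between actual and predicted outcomes.
daily_effect = actual - predicted
print(daily_effect)
```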
In the Test Analyses screen, you will see:
A list of all tests that were submitted for analysis
The status of each test
Key outcome metrics, such as CPA, ROAS, or conversions
Once the analysis is complete, you can click any row to explore the full results.
Guidelines and requirements for analyzing an incrementality test
Ensure that you have created a test before starting an analysis.
Compare tests with similar durations and metric definitions.
Results become available only after the model finishes running, which may take several minutes.
You cannot edit a test analysis. If you want to change something, run a new test.
Analyze a test
Complete the following steps to analyze a test.
In Funnel Triangulation, go to Incrementality > Test Configurations.
To start the analysis of a test, click Analyze Test in the Actions menu.
Go to Incrementality > Test Analyses to view your test analysis.
The Test Analyses screen displays a table of all the incrementality tests you have submitted for analysis, with the following details:
Model (typically Linear Regression)
Test name
Created at date
Status (Started or Finished)
Source
KPI name and KPI value
Analysis Details
Click any row in the Test Analyses screen to open the full results. The Analysis Details screen appears, with the following sections to help you analyze the test results:
Analysis Dashboard
Results
Charts
Daily Effect
Understanding the Analysis Dashboard
The Analysis Dashboard provides a structured overview of your incrementality test's setup and outcome. It includes configuration context, key performance results, visualizations, and daily-level breakdowns. Each section plays a role in helping you assess the strength and reliability of your test.
The Test Configuration panel displays the inputs used when setting up the test. It includes:
Test name and type
Metric name (such as conversions or revenue)
Source
Planned spend
Start date and end date of the test
Test group and control group region lists
Description, if provided
These fields serve as the baseline for interpreting the results. They help validate that the model is working against the correct setup and provide transparency when comparing multiple tests.
The Analysis Details panel displays the model used to analyze the test.
Results summary
The Results section aggregates the main outcomes of the test into key metrics:
Total effect: The sum of all incremental conversions, or other KPIs, caused by the media intervention during the test period.
Average effect: The mean daily impact, useful for comparing across tests of different lengths.
Relative effect: Expresses the incremental lift as a percentage over the expected baseline.
R² score: Indicates how well the model fits the data. Values closer to 1.0 imply high reliability.
KPI name and KPI value: Show the selected key metric and its modeled result, such as a modeled CPA or ROAS.
These results offer a high-level snapshot of the test’s performance. A positive total effect and a high R² score typically indicate a successful and trustworthy test.
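As a rough illustration of how these summary metrics relate to the daily actual and predicted series, consider the sketch below. The numbers are hypothetical, and the exact formulas Funnel applies may differ, particularly in how the baseline for the relative effect is defined and which period the R² score is evaluated on.

```python
import numpy as np

# Hypothetical daily values from a finished test.
actual = np.array([120.0, 115.0, 131.0, 125.0])
predicted = np.array([108.0, 104.0, 117.0, 111.0])  # modeled counterfactual baseline

daily_effect = actual - predicted
total_effect = daily_effect.sum()                       # all incremental conversions
average_effect = daily_effect.mean()                    # mean daily impact
relative_effect = total_effect / predicted.sum() * 100  # lift as % over baseline

# R^2 is typically evaluated on the pre-test period, where the model's fitted
# values should track the actuals closely. Illustrative values only:
pre_actual = np.array([104.0, 113.0, 108.0, 101.0, 115.0, 110.0])
pre_fitted = np.array([105.2, 112.1, 107.6, 100.4, 114.8, 110.9])
ss_res = ((pre_actual - pre_fitted) ** 2).sum()
ss_tot = ((pre_actual - pre_actual.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot

print(total_effect, average_effect, relative_effect, r2)
```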
Charts
The dashboard contains three interactive chart views, each supporting different layers of insight:
Effect: A line chart showing daily incremental outcomes over time. It highlights how consistent or volatile the media impact was during the test.
Cumulative effect: A growing line representing the cumulative incremental value across the test period. A steady upward slope indicates consistent incremental performance, while flat or erratic patterns suggest weaker impact.
Impact analysis: Compares the predicted values (what would have happened without the campaign) to actual performance. A strong gap between actual and predicted lines indicates clear media-driven lift.
These visualizations help you validate whether the campaign generated a consistent and explainable change in outcomes.
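If you export the daily data and want to reproduce these three views outside the dashboard, a minimal matplotlib sketch might look like the following. The data and figure layout are illustrative, not the dashboard's actual rendering.

```python
import numpy as np
import matplotlib.pyplot as plt

days = np.arange(1, 5)
actual = np.array([120.0, 115.0, 131.0, 125.0])
predicted = np.array([108.0, 104.0, 117.0, 111.0])
effect = actual - predicted

fig, axes = plt.subplots(1, 3, figsize=(12, 3))

axes[0].plot(days, effect)             # Effect: daily incremental outcomes
axes[0].set_title("Effect")

axes[1].plot(days, np.cumsum(effect))  # Cumulative effect: running total over the test
axes[1].set_title("Cumulative effect")

axes[2].plot(days, actual, label="Actual")        # Impact analysis: the gap between
axes[2].plot(days, predicted, label="Predicted")  # the lines is the media-driven lift
axes[2].set_title("Impact analysis")
axes[2].legend()

plt.tight_layout()
plt.show()
```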
Daily Effect table
This table provides a breakdown of modeled results at a day-by-day level:
Actual: The observed performance in the test group
Predicted: The estimated performance without the media intervention
Effect: The daily difference between actual and predicted
Cumulative effect: Running total of daily effects over time
Use this table to identify trends, verify stability, and assess day-level anomalies. If the daily effect is volatile, it may suggest campaign inconsistency or external influences during the test window. The sketch below shows how the table's columns relate to one another.
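Here is a hedged sketch of building the same breakdown from exported daily data. The column names, dates, and values are hypothetical, and the volatility check at the end is one simple heuristic, not a metric the dashboard reports.

```python
import pandas as pd

# Hypothetical daily values; in practice these would come from the analysis export.
df = pd.DataFrame({
    "date": pd.date_range("2024-05-01", periods=4),
    "actual": [120.0, 115.0, 131.0, 125.0],       # observed test-group performance
    "predicted": [108.0, 104.0, 117.0, 111.0],    # estimated performance without media
})
df["effect"] = df["actual"] - df["predicted"]     # daily difference
df["cumulative_effect"] = df["effect"].cumsum()   # running total of daily effects

# A simple volatility check: a high coefficient of variation in the daily effect
# may point to campaign inconsistency or external influences.
cv = df["effect"].std() / df["effect"].mean()
print(df)
print(f"Effect CV: {cv:.2f}")
```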