Growth A/B tests

Fundraising, without the guesswork

At Fundraise Up, every improvement we ship is backed by continuous A/B testing across 3,000+ nonprofits and millions of transactions. Explore our experiment vault to see what we tested, why it mattered, and what we discovered.

+33% conversion increase for recurring donations

Reordering Checkout steps

Moving the payment step to the end led to a +33% lift in recurring donation conversion.

Tags: Checkout, Campaign Pages, Conversion optimization

Minimal adoption

DAF payment option in Checkout

Adding DAF to Checkout led to minimal engagement — donors preferred faster payment methods.

Tags: Payment methods, Donation revenue

+4.2% increase in average donation amount with LTV

Smart suggested amounts in Campaign Pages

An updated AI model for suggested amounts increased average gift size by 4.2%.

Tags: AI, Checkout, Donation revenue

Decrease in conversions

“Stepless” mobile Checkout

Showing all fields on one screen in mobile Checkout reduced donation conversion.

Tags: Checkout, Mobile, Conversion optimization

+27% conversion increase for recurring donations

Smart donation frequency defaults

AI-optimized donation frequency drove a +27% lift in recurring gifts and higher revenue per donor.

Tags: AI, Recurring giving, Conversion optimization

No significant increase in performance

Smart suggested amounts in the Donation Form Element

An updated version of our AI model grew recurring gift size but lowered conversion.

Tags: AI, Checkout, Donation revenue

+0.43% increase in ARPU

Smart fee coverage

Updated AI model for Adaptive Cost Coverage increased ARPU by 0.43% without hurting conversion.

Tags: AI, Payment methods, Checkout, Donation revenue

No drop in conversion with a simplified layout

Payment options in Campaign Pages

A simplified payment step with fewer options maintained strong conversion performance.

Tags: Checkout, Campaign Pages, Payment methods, Design improvements

+1.3% conversion increase on mobile

Floating labels in Checkout

Floating labels in Checkout increased mobile conversion by 1.3% while improving accessibility.

Tags: Checkout, Conversion optimization, Donation revenue

Frequently Asked Questions

How often do we run A/B tests?

Continuously. We run over 50 experiments each year, with 2-10 tests live at any given time across different parts of the donation experience, depending on their scale.

How do we analyze test results?

We measure more than 20 key metrics, such as Average Revenue Per User (ARPU), conversion rate, average donation amount, and many others. Each of these key metrics is further broken down by factors like operating system, country, and browser, providing a deeper understanding of donor behavior. Additionally, we measure test-specific metrics that are unique to each experiment. In total, every decision we make is based on 100+ indicators, ensuring a comprehensive and data-driven approach to optimizing the donation experience.
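
To illustrate the idea (with made-up numbers and a hypothetical schema, not our production pipeline), here is a minimal sketch of how the same key metrics can be broken down per variant and per segment:

```python
import pandas as pd

# Hypothetical donation-level data; column names are illustrative only.
donations = pd.DataFrame({
    "variant":   ["A", "A", "A", "B", "B", "B"],
    "os":        ["iOS", "Android", "iOS", "iOS", "Android", "iOS"],
    "converted": [1, 0, 1, 1, 0, 1],
    "amount":    [25.0, 0.0, 50.0, 40.0, 0.0, 10.0],
})

# Gift size is only defined for visitors who actually donated.
donations["gift"] = donations["amount"].where(donations["converted"] == 1)

# The same key metrics, broken down by variant and segment (here: OS).
summary = donations.groupby(["variant", "os"]).agg(
    conversion_rate=("converted", "mean"),
    arpu=("amount", "mean"),    # revenue per visitor, converted or not
    avg_gift=("gift", "mean"),  # average donation among donors only
)
print(summary)
```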

How do we implement changes based on test results?

Only statistically significant findings drive action, meaning every improvement is backed by real data. If a test delivers better results, we roll out the change across the platform, ensuring all nonprofits using Fundraise Up benefit from the latest optimizations.

How do we ensure test result reliability?

For results to be trustworthy, the test setup must be rock solid. We ensure fair traffic distribution, typically split 50/50 but sometimes adjusted based on specific needs. Test duration is carefully calculated to reach statistical significance, ensuring decisions are driven by data, not guesswork.
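
For illustration only (the baseline rate, lift, and helper below are hypothetical, not our production tooling), a standard two-proportion power calculation shows how test duration follows from traffic and the smallest effect worth detecting:

```python
from math import ceil, sqrt
from statistics import NormalDist

def required_sample_size(p_base, mde, alpha=0.05, power=0.8):
    """Visitors needed per arm to detect an absolute lift `mde`
    over a baseline conversion rate `p_base` (two-sided test)."""
    p_test = p_base + mde
    p_bar = (p_base + p_test) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p_base * (1 - p_base) + p_test * (1 - p_test))) ** 2
         / mde ** 2)
    return ceil(n)

# E.g. detecting a 1-point lift over a 10% baseline:
n = required_sample_size(p_base=0.10, mde=0.01)
print(n, "visitors per arm")  # duration = n / daily traffic per arm
```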

Moreover, to eliminate bias, we use a sophisticated distribution algorithm that assigns users to test groups at random. This keeps the groups statistically comparable and minimizes the risk of skewed results.
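
One common way to implement this kind of unbiased, sticky assignment is deterministic hashing. The sketch below shows the general technique, not our exact algorithm:

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str, split: float = 0.5) -> str:
    """Deterministically bucket a user: the same user and experiment
    always map to the same group, and buckets are uniform across users."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "A" if bucket < split else "B"

print(assign_variant("donor-123", "checkout-reorder"))  # stable across calls
```

Because the hash is keyed by experiment, a donor sees a consistent variant within each test, and one experiment's split does not correlate with another's.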

What is statistical significance?

Statistical significance in an A/B test means that the difference in results between version A and version B is unlikely to have happened just by chance. It helps to determine whether one version is truly better than the other based on the data.

A key indicator of significance is the p-value. A low p-value (typically under 0.05) means the observed difference is unlikely to be due to chance alone, giving us the statistical evidence to act.
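
As a concrete example (with invented numbers), the p-value for a difference in conversion rates can be computed with a textbook two-proportion z-test:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical experiment: 10,000 visitors per arm.
p = two_proportion_p_value(conv_a=1000, n_a=10_000, conv_b=1100, n_b=10_000)
print(f"p = {p:.4f}")  # under 0.05 -> treat the lift as significant
```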

We include p-values in experiment summaries to help you understand just how meaningful the results are.

What if a test performs poorly?

We learn from every outcome. When a test shows negative results, we quickly end it and use those insights to guide future development.

How do we ensure test security?

Every test we run is designed with security and compliance in mind. We follow strict best practices to protect donor data and ensure a risk-free testing environment.

  • Risk-mitigation protocols: We handle sensitive tests with extra safeguards to balance innovation with security.
  • Data protection: Donor privacy and security are never compromised, no matter how advanced the testing methodology.

How do ongoing nonprofit campaigns continue during tests?

When conducting tests, we always consider the current settings of each nonprofit organization. This means that elements like minimum donation amounts or required address fields remain unchanged to ensure the test does not interfere with an organization’s existing fundraising strategy. Our tests are designed to enhance, not disrupt, the donor journey.

Ready to elevate your fundraising efforts?

Connect with our sales team to explore how our innovative tools can transform your online giving experience.
Request a demo