Donor experience hub

Fundraising, without the guesswork

At Fundraise Up, every improvement we ship is backed by continuous testing across millions of transactions. Explore the evidence-led optimizations that have helped thousands of nonprofits grow digital revenue.

No significant increase in performance

Reminder Element button design

A more prominent donate button on the Reminder Element did not improve conversion.

Elements
Conversion optimization
ARPU
+7% increase in average recurring donation amount

Upsell 4‑weekly instead of monthly

A four-weekly recurring plan upsell increased the average recurring donation amount by 7%.

Checkout
Recurring giving
Conversion optimization
Donation revenue
Mixed results

Upsell screen with overlay effect in Checkout

A new upsell screen increased recurring giving, but came with a trade-off in one-time giving.

Checkout
Recurring giving
Donation revenue
Conversion optimization
ARPU
-3% decrease in donation conversion

Asking donors for a mailing address in Checkout

Making the mailing address field mandatory in Checkout reduced donation conversion by more than 3%.

Checkout
Conversion optimization
ARPU
No significant increase in performance

Reordering donation frequency buttons

Reversing the order of donation frequency options in Checkout showed no meaningful impact.

Checkout
Recurring giving
Conversion optimization
Donation revenue
ARPU
No significant increase in performance

AI-personalized exit screen in Checkout

Testing a new AI-generated exit confirmation screen showed no improvement.

Checkout
AI
ARPU
Conversion optimization
No significant increase in performance

Video on Campaign Pages

Adding YouTube videos to Campaign Pages did not increase donor conversion or revenue.

Campaign Pages
Conversion optimization
Minimal adoption

Stripe Link in Checkout

Adding Stripe Link to Checkout led to minimal engagement.

Checkout
Campaign Pages
Payment methods
Conversion optimization
Donation revenue
+5.7% increase to donation conversion

New timing for abandoned donation emails

Sending an email reminder 1 hour after checkout abandonment led to a 5.7% increase in conversion.

Checkout
Conversion optimization
Donation revenue

FAQ

How often do we run A/B tests?

Continuously. We run over 50 experiments each year, with 2-10 tests active at any given time across different parts of the donation experience, depending on their scale.

How do we analyze test results?

We measure more than 20 key metrics, such as Average Revenue Per User (ARPU), conversion rate, average donation amount, and many others. Each of these key metrics is further broken down by factors like operating system, country, and browser, providing a deeper understanding of donor behavior. Additionally, we measure test-specific metrics that are unique to each experiment. In total, every decision we make is based on 100+ indicators, ensuring a comprehensive and data-driven approach to optimizing the donation experience.

How do we implement changes based on test results?

Only statistically significant findings drive action, meaning every improvement is backed by real data. If a test delivers better results, we roll out the change across the platform, ensuring all nonprofits using Fundraise Up benefit from the latest optimizations.

How do we ensure test result reliability?

For results to be trustworthy, the test setup must be rock solid. We ensure fair traffic distribution, typically split 50/50 but sometimes adjusted based on specific needs. Test duration is carefully calculated to reach statistical significance, ensuring decisions are driven by data, not guesswork.
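To illustrate how a test's required duration can be derived, a standard two-proportion power analysis estimates the per-group sample size needed to detect a given lift at a given significance level and power. This is a generic sketch using Python's standard library, not Fundraise Up's actual methodology, which is not published.

```python
import math
from statistics import NormalDist

def required_sample_size(p_base: float, rel_lift: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-group sample size for a two-proportion z-test.

    p_base:   baseline conversion rate (e.g. 0.05 for 5%)
    rel_lift: minimum relative lift to detect (e.g. 0.10 for +10%)
    alpha:    two-sided significance level; power: desired power.
    Illustrative only -- the source does not publish its formula.
    """
    p_var = p_base * (1 + rel_lift)                 # variant rate if lift holds
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)            # critical value for power
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_var) ** 2
    return math.ceil(n)
```

For a 5% baseline conversion rate and a +10% relative lift, this yields roughly 31,000 donors per group, which is why test duration depends on how much traffic an experiment receives.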

Moreover, to eliminate bias, we use a sophisticated distribution algorithm that assigns users to test groups randomly. Random assignment makes systematically skewed groups highly unlikely.
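Random assignment of this kind is often implemented with a deterministic hash, so each user always lands in the same group without any stored state. The sketch below is a common hash-based approach, assumed for illustration; the source does not describe the actual algorithm.

```python
import hashlib

def assign_group(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a user to an A/B test group.

    Hashing the user ID together with the experiment name gives each
    user a stable, effectively random position in [0, 1); users whose
    bucket falls below the split threshold see the variant ("B").
    Illustrative only -- not Fundraise Up's actual algorithm.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map first 4 bytes to [0, 1]
    return "B" if bucket < split else "A"
```

Because the hash is keyed on the experiment name, the same user can fall into different groups in different experiments while staying consistent within each one.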

What is statistical significance?

Statistical significance in an A/B test means that the difference in results between version A and version B is unlikely to have happened just by chance. It helps to determine whether one version is truly better than the other based on the data.

A key indicator of significance is the p-value. A low p-value (typically under 0.05) means the observed difference is unlikely to be due to chance alone, giving us the statistical evidence to act.

We include p-values in experiment summaries to help you understand just how meaningful the results are.

What if a test performs poorly?

We learn from every outcome. When a test shows negative results, we quickly end it and use those insights to guide future development.

How do we ensure test security?

Every test we run is designed with security and compliance in mind. We follow strict best practices to protect donor data and ensure a risk-free testing environment.

  • Risk-mitigation protocols: We handle sensitive tests with extra safeguards to balance innovation with security.
  • Data protection: Donor privacy and security are never compromised, no matter how advanced the testing methodology.

How do ongoing nonprofit campaigns continue during tests?

When conducting tests, we always consider the current settings of each nonprofit organization. This means that elements like minimum donation amounts or required address fields remain unchanged to ensure the test does not interfere with an organization’s existing fundraising strategy. Our tests are designed to enhance, not disrupt, the donor journey.

Ready to elevate your fundraising efforts?

Connect with our sales team to explore how our innovative tools can transform your online giving experience.

Request a demo