Highlights from Our 2024 Charity Evaluations AMA

On November 19, 2024, five members of our Programs team hopped on the FAST Forum to answer questions about our 2024 charity recommendations and the charity evaluation process. Below, we’ve rounded up some highlights. We hope these questions and answers provide deeper insight into our decision-making and inspire you to learn more about our Recommended Charities.

You can view the full AMA thread on the FAST Forum.

Note: Questions and answers have been edited for length and/or clarity. Links to the original sources are provided alongside each response.

How many counterfactual donations did Recommended Charities receive last year?

This year, we conducted an influenced-giving analysis to assess ACE’s counterfactual impact on funding through our Charity Evaluations and Movement Grants programs. During our last fiscal year (April 2023–March 2024), the reported ACE-influenced donations to Recommended Charities totaled $8.5 million. We estimate that $3.7 million would not have been donated to those charities if not for ACE’s influence. Our Charity Evaluations Influenced Giving Report thoroughly explains how we calculated this figure. —Elisabeth

(Source)

How does gaining or losing a recommendation status affect a charity’s budget?

Our charity recommendations last for two years. We don’t guarantee that any charity will be re-evaluated or re-recommended, so charities know to prepare for that when their two-year recommendation status ends. For some charities, being recommended by ACE might be their first introduction to certain donors. We’ve also found that some donors continue donating to formerly recommended charities.

We expect that being recommended for the first time leads to a greater funding increase than being re-recommended. The same is likely true for charities focused on newer interventions or animal groups, and for younger charities compared to well-known ones. According to a recent survey, ACE’s annual influence per charity has ranged from about $150,000 to more than $1,000,000, though some of those gifts might not be fully counterfactual. How gaining or losing recommendation status affects a charity’s budget is something we need to examine further, so we’ll be expanding our impact assessment work beyond our quantitative counterfactual impact on funding. —Elisabeth

(Source)

How are allocations from ACE’s Recommended Charity Fund determined?

We have a Recommended Charity Fund disbursement model where we consider each recommended charity’s funding and what ACE’s marginal funding would be used for. Then, we have an internal discussion about where we should prioritize funding based on funding capacity, quantitative factors (marginal cost-effectiveness), and qualitative factors (theory of change). We plan to refine our process for allocating funds and will announce any changes once they are made. —Elisabeth

(Source)

What changes can we expect in the charity evaluation process for last year’s Recommended Charities?

The exact details of our 2025 evaluation process and methods are still to be determined. Barring any major strategic shifts in our Charity Evaluation program, we expect to keep our methods largely the same as in 2024, with refinements based on what we’ve learned. We’ll still ask charities for information that will allow us to conduct the theory of change analysis, create cost-effectiveness estimates, assess funding capacity, and examine organizational health. The process will begin with applications to be evaluated, as it did in 2024. —Elisabeth

(Source)

What information will charities be asked to provide for the new cost-effectiveness calculation? Will achievements still have a role?

You can refer to the cost-effectiveness analysis spreadsheets for this year’s evaluated charities to get a sense of the information we needed to make the calculations. We’ll likely still be asking for charities’ past achievements. If we stick with this year’s approach (which we think is likely at this point), we will aim to determine the suffering-adjusted days (SADs) averted by those achievements per dollar spent, which requires knowing the benefits of charities’ programs as well as the expenses incurred to achieve those benefits. —Elisabeth

(Source)
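
For readers unfamiliar with the unit, here is a minimal, purely hypothetical sketch of the SADs-per-dollar arithmetic described above; all figures are invented for illustration and do not come from any evaluated charity:

```python
# Hypothetical illustration of a SADs-averted-per-dollar estimate.
# None of these numbers come from ACE's evaluations; they only show the arithmetic.

animals_affected = 500_000        # animals covered by an achievement (e.g., a welfare commitment)
sads_averted_per_animal = 2.5     # suffering-adjusted days averted per affected animal
program_expenses_usd = 250_000    # spending attributed to that achievement

total_sads_averted = animals_affected * sads_averted_per_animal
sads_per_dollar = total_sads_averted / program_expenses_usd

print(f"SADs averted per dollar: {sads_per_dollar:.1f}")  # 5.0 with these made-up inputs
```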

Do you ever wish there were a benchmark charity with a near-infinite funding gap (e.g., GiveDirectly in the global health sector) to compare against? Is there anything like GiveDirectly in the animal welfare space?

Great question! In short, yes, we do, and no, there isn’t. We think GiveWell’s approach of using GiveDirectly as a benchmark makes sense for GiveWell, and we’ve had several team discussions about whether we could take a similar approach. One option is to use a standardized measurement of the number of animals helped or the degree of suffering averted, which would allow us to compare charities more easily.

This year, we used Ambitious Impact’s Suffering-Adjusted Days (SADs) model. While we found this helpful for this year’s evaluations, it’s not always possible to reach a meaningful SADs estimate given limitations such as the long-term or speculative nature of some charities’ programs, the lack of reliable data on charities’ achievements, the lack of evidence on the relative cost-effectiveness of different animal advocacy interventions, and the diverse range of programs conducted by the charities we evaluate. We’re also not aware of any charities in the animal advocacy space that share GiveDirectly’s room for additional funding and potential for scalability.

Instead, we currently base our recommendation decisions on a set of decision guidelines that align with our evaluation criteria and use those to score charities against one another. It’s possible that in the future, a sufficiently scalable charity will emerge, and the animal advocacy movement will have sufficient evidence and data for us to produce reliable cost-effectiveness assessments for all the charities we evaluate, but at the moment, this doesn’t seem realistic. —Max

(Source)

What does the new decision-making process look like in terms of better accounting for the marginal cost-effectiveness of funding?

By using theory of change analysis more formally, we understand a charity’s work and its assumptions, limitations, and risks. This reduces our uncertainty about the scope of a charity’s work and its overall likelihood of achieving its desired impact. By doing a cost-effectiveness analysis that divides the benefits to animals of a charity’s work by the cost of doing that work, we assess the current cost-effectiveness of that work (usually for select programs). Combined with our room for more funding assessment (which asks charities about their future plans), this lets us gauge how likely those plans are to be as cost-effective as the charity’s current work. Taken together, the three criteria give us a good sense of marginal cost-effectiveness (i.e., where the next additional dollar would be best spent). —Elisabeth

Does your evaluation process shift at all each year in regard to prioritized regions or interventions?

We refine the methods of our evaluation process every year based on internal and external feedback in order to improve on the previous year and be more accurate in our assessments. We also update our position on the likely effectiveness of interventions based on new research and consider the particular situation of each country in our assessments. However, this year, we didn’t explicitly score or prioritize certain interventions and countries. Instead, we analyzed the impact of the specific work of each charity using our new evaluation criteria (see below). In general (with some exceptions), we continue to prioritize work on farmed animals and wild animals, interventions that are more institutional in scope, and countries that are more neglected or have higher levels of animal suffering. —Maria

(Source)

Could you give us a brief overview of how ACE’s evaluation process has evolved over time? What are some major differences between the evaluation process in your founding year versus 2024?

ACE’s methods for evaluating charities have changed a lot over the years. We used to evaluate charities against more criteria, and we have reduced that number over time to focus on the factors most important for making recommendation decisions. The biggest changes we made this year were introducing a process that allows interested charities to apply for evaluation (rather than ACE inviting charities to be evaluated) and updating our evaluation criteria. Specifically, we:

updated our cost-effectiveness methods (conducting more direct cost-effectiveness analyses, compared to last year’s scoring system that was based on less direct proxies for cost-effectiveness);
introduced a qualitative theory of change analysis that explores the evidence, reasoning, and limitations around charities’ programs in more detail; and
updated our room for more funding criterion to place more focus on the likely impact of charities’ future funding plans.

You can read more about our latest charity evaluation process here. —Maria

(Source)

Do you use any generative AI currently? Do you imagine any potential for it to assist your work?

Hi, great (and topical) question! Yes, some ACE staff use generative AI models such as ChatGPT and Claude to help generate ideas or to help draft lower-priority internal documents. However, we don’t use such models for external or high-priority documents given the various limitations of AI models (such as the risk of factual errors, biases, and plagiarism), and we also don’t input potentially sensitive information.

We apply a similar principle to image generation models. Given the risk of AI-generated images being seen as misleading in certain contexts—potentially casting doubt on real-life images, such as photographic evidence of farm investigations—we instead use images from public-domain sources. We prioritize ethically aligned sources, such as We Animals Media.

Personally, the most useful AI tool in my day-to-day work is Perplexity, which cites sources in its responses and can be really helpful for locating research papers. I also find ChatGPT and Claude helpful for summarizing research, cleaning up documents, and advising on spreadsheet formulas. A newer tool is Google’s NotebookLM, which seems very useful for distilling information from a wide range of sources.

For more information, you can check out ACE’s Responsible AI Usage policy. We also have an internal document where staff share AI use cases with one another, so you could consider introducing something similar at your own organization if that sounds helpful! —Max

(Source)

When will questions and layout for applications be made available for 2025? How much time will charities have to provide information once these are made available?

We expect that evaluation applications will open in March and stay open for a month. Once a charity’s application is successful, it moves on to stage two, where we ask more detailed questions. We typically give charities around three weeks to gather the information needed to answer those questions. —Elisabeth

(Source)

How do you calculate cost-effectiveness for organizations that indirectly impact animal suffering?

That’s a great question and one that we spent a lot of time considering in this year’s round of evaluations. We aimed to use SADs in all cost-effectiveness analyses and attempted to find a way to quantify each charity’s impact using the SADs unit. We found that for more indirect work, such as GFF’s programs, quantifying the number of animals affected is largely speculative and requires a number of assumptions. In these cases, we decided not to make the assumptions needed to estimate the SADs averted but to stop at an intermediate unit in the analysis. For GFF, this was the number of people reached through their programs per dollar.

Our reasoning for avoiding highly speculative assumptions is based on one of our guiding principles: to follow a rigorous process and use logical reasoning and evidence to make decisions. For cases like GFF, we relied more on the theory of change analysis to guide our decision-making. We are excited about their work because China farms around 50% of the world’s farmed animals, and GFF has made inroads in getting animal welfare onto the government’s agenda, which could have significant expected value in the long term (although we didn’t model this explicitly).

Overall, we believe that interventions with a long theory of change (such as some policy interventions) and meta-interventions are often too speculative to estimate the number of animals affected, and therefore the SADs averted. This appears to be consistent with existing research in the animal advocacy movement, where cost-effectiveness estimates focus on direct interventions (corporate campaigns, institutional outreach) and avoid quantifying indirect interventions (research, movement building). We will review our methods in the coming months and reconsider how we compare charities that do more indirect work. —Zuzana

(Source)
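
As a rough, hypothetical sketch of what stopping at an intermediate unit means in practice (the numbers below are invented and are not GFF’s actual figures):

```python
# Hypothetical illustration of stopping at an intermediate unit for indirect work.
# Instead of assuming how outreach translates into animals helped (and thus SADs),
# the analysis reports a less speculative intermediate outcome per dollar.

people_reached = 120_000       # e.g., people reached by a program's outreach
program_expenses_usd = 60_000  # spending attributed to that program

people_reached_per_dollar = people_reached / program_expenses_usd
print(f"People reached per dollar: {people_reached_per_dollar:.1f}")  # 2.0 with these inputs

# Converting this into SADs averted would require further assumptions (e.g., how many of
# those people change behavior or policy, and how many animals each change affects),
# which is the speculative step the analysis deliberately avoids.
```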

Your recommended charities include those evaluated with your new methodology and some re-recommended from last year, which were evaluated with less rigorous methodology. Are you concerned that these differences might result in funding less effective organizations instead of those that genuinely benefit animals?

Thank you for your question. We refine our methods each year, and we don’t think that recent changes mean that we can no longer rely on the decisions we made in 2023.

Specifically, regarding cost-effectiveness: in the past, ACE identified limitations of direct cost-effectiveness analyses and found it less helpful to directly estimate the number of animals helped per dollar. Instead, we began exploring other ways to model cost-effectiveness, such as achievement scores and the Impact Potential criterion. Since then, the animal advocacy movement (namely Welfare Footprint Project, Ambitious Impact, and Rethink Priorities) has invested in research that makes it possible to quantify animal suffering averted per dollar, and in turn we’ve evolved our methods. However, we think it is still remarkably challenging to do these calculations and draw conclusions from them, and that using proxies is still a reasonable approach.

Additionally, while we’ve introduced a theory of change criterion to formalize our assessment of charities’ assumptions, limitations, and risks, we were already taking these factors into account in our decision-making. Our other two criteria, room for more funding and organizational health, were included in our methods in both years.

In summary, while we see recent improvements as a step forward, we wouldn’t claim that 2023 charities were evaluated with a less rigorous methodology. —Zuzana

(Source)

Is there an active effort to promote lab-grown protein sources?

None of our current Recommended Charities work on cultivated protein sources, though we have previously recommended charities working on this (such as Good Food Institute and New Harvest) and awarded Movement Grants to projects in this area (such as Cellular Agriculture Australia). We’d certainly be open to considering charities and Movement Grant applicants working on this in the future. —Max

(Source)
