LAFF Society

CLIPPINGS

Is Charity Navigator About To Veer Off Course?

 

From The Hauser Center

Submitted by HHC Admin on March 8, 2010 – 5:11 pm
By Steven Lawry
In response to this blog's invitation for a variety of views on Charity Navigator's decision to change its rating system (to reflect accountability and outcomes measures in addition to financial metrics like overhead), Steven Lawry provides his perspective on where this move might lead.
Charity Navigator is one of the best-known and most widely used charity rating services in the US today. Its current rating system is based on the simple notion that charities committing a greater portion of their budgets to program activities, as opposed to fund-raising and administrative costs, are likely to have greater impact than those that don't. As such, these charities are judged more deserving of donor support and rated accordingly.

On December 1, 2009, Charity Navigator's CEO Ken Berger announced plans to expand Charity Navigator's rating system to include two additional measures of organizational performance: accountability and outcomes. Is this cause for celebration? Creation of a fair and meaningful rating system for accountability strikes me as achievable and desirable. But a simple system for rating outcomes is not, in my view, achievable. I worry that Charity Navigator is about to embark on an endeavor that has the potential to over-simplify complex questions of outcome assessment.

Questions of financial management and accountability, though they present their own measurement and assessment difficulties, speak more directly to what can realistically be known about the prospects for a charity to be effective and successful. Put simply, strong organizations do good work. To insist that, as a condition of funding, a well-led and well-managed charity also show particular kinds of outcomes, often of inherently doubtful methodological foundation, suggests a lack of understanding of the complexity of the environments in which charities work and of the very nature and purposes of charitable endeavor itself.

Many good charities strive mightily to measure outcomes for their own management purposes. But even charities engaged in what appear to be simple, direct and easily measured activities find it difficult to accurately assess impact. And rarely do they claim exclusive credit for good outcomes.

Let's take the example of a charity working in a low-income country to reduce child malnutrition by delivering food and nutrition training to mothers with children under five years of age (a particularly vulnerable age group). The charity may be well managed and can demonstrate that a high proportion of its budget goes directly to program activities. But improvements in child nutrition, to be fair, are the result of the work of many, including other charities, public agencies, and families and communities themselves. How do we isolate a single charity's contribution from those of others? Experienced managers don't try, because they know it's not possible and not a very good use of their time.

In many settings, the food distributed by the charity might not be paid for by private donations at all but by USAID or the UN World Food Program. Here private donations augment a large core budget provided by government donors. The charity would be required to work to USAID's or WFP's fairly rigorous and sometimes constraining guidelines. The charity is principally a contractor with very few degrees of freedom to manage innovatively. Any claims about its impact may be as much a reflection of management controls imposed by government donors as of internally generated policies and practices.

Getting a fine or even rough measure of the distinctive contribution of one charity, isolated from the work of others, is difficult, to say the least. Many social scientists would argue that only randomized controlled trials could possibly isolate the effects of the charity's contribution from those of other influences. These are expensive to carry out, and they may not offer better insight into a charity's performance than its experienced manager can provide by monitoring changes in children's weight and collecting feedback from various partners. This is what well-run, accountable charities do routinely. But in doing so, charitable managers know that improvements or setbacks are due only partially to their efforts.

The problem of attributing impact becomes even more complicated as the number of variables that affect the problems a charity addresses, but that lie beyond its direct influence, increases. I've spent time recently with the leaders of a charity working with young people at risk (aged 13-23) in a poor neighborhood in Boston. The charity provides a multitude of services: education, counseling, sports facilities, drug rehabilitation, hot lunches, job placement, referral services, and temporary refuge from abusive homes. Individual kids drop in and out and back into the center's programs as their personal needs and circumstances change. The charity has worked in the area for 30 years. The city manager and the local police chief will tell you that the presence of the charity explains, in their minds at least, why their community has the lowest level of youth encounters with the police of any community with comparable levels of income in Massachusetts.

Yet this charity struggles mightily to generate the kind of quantitative indices demonstrating the impacts of its work, isolated somehow from the influences of schools, families, the job market and the police, that federal funding agencies increasingly insist upon. Not only is this unfair; it betrays an ignorance of the uncertain and hardly measurable nature of the many influences that shape and touch our lives, whatever our social or economic status. This is a morass that Charity Navigator should strive to avoid. Will it rely on the federal government's overly narrow assessment of our Boston-based charity and other charities like it, or will it also check in with local police chiefs and city managers?

My two examples above speak to the difficulties of assessing the impact of charities dedicated to the direct delivery of services. Many other nonprofit organizations don't provide client services at all, but advocate for social change (or the status quo), promote human rights and better governance, and call for government policies and funding that benefit the communities they care about. Donors give to these organizations because they share a commitment to their missions. But donors usually understand that positive outcomes are uncertain, the road is long, and that change, if it comes, will be the result of many influences, including sometimes unexpected shifts in the larger political and social environment. Here, donors want to be assured that the charities they support are working as effectively as they possibly can. But wise donors often rightly have modest expectations about near-term positive outcomes.

Outcomes assessment is a highly complicated, uncertain and increasingly contentious undertaking. Charities work on difficult, complex and sometimes intractable problems. Let's not reduce their appetite or ambition for working on the really hard problems in deference to easier problems that are more susceptible to quick impact and simple measurement. I would rather that Charity Navigator retain (and improve) its financial performance rating, add a measure for accountability, and drop any pretense that it can credibly score outcomes.
Well-run, transparent and accountable organizations are making the most of their talent and funding to bring about positive outcomes. Charity Navigator will be doing service enough by drawing our attention to organizations that are well-run and well-governed.

Steven Lawry, Senior Research Fellow at the Hauser Center for Nonprofit Organizations, is currently based in Juba, Southern Sudan, where he heads a USAID-funded project assisting the Government of Southern Sudan to develop a new land policy.

2 Comments »

  • This is a critical discussion. There is a great danger that we will simply trade one simplistic measure for another. The problem lies in reducing any complex activity down to a grade or to stars. No rating system can possibly capture the underlying complexity, and worse, a rating system enables the public's addiction to simplicity. We must stop catering to this. Donors have to take time to learn about the organizations they are going to invest in, and once they're satisfied, trust those organizations to do what they do best. We need a national assessment apparatus that can do four things: 1) provide rich narrative, video and survey information, 2) update it on an annual basis, 3) provide it for a meaningful fraction of the 1.1 million nonprofits out there and 4) do it inside a user interface that makes people want to spend time there. None of this will be cheap. We keep looking for cheap solutions. Instead, we should invest a meaningful fraction of the $300 billion given to charity each year in some analysis of what that money is doing, or trying to do. And as well intentioned as Charity Navigator is, its $1 million annual budget isn't close to what will be required.
  • Tom Kelly says:
    Lawry correctly notes the challenges of defining and measuring social impact across the sector, but donors and investors want more than simply knowing the efficiency of expenditures; they do want to invest in results and outcomes, and emphasizing the need for results and performance measurement is a good thing. I agree that nonprofits need to strengthen their own accountability mechanisms (including transparency of reporting to customers/clients/community and not just funders), but I don't think the Charity Navigator intent is to go “off track” toward establishing causal linkages (which is the work of research and evaluation on more focused questions of attribution and effect), but rather to promote answering the question of whether a difference was made, whether there was a positive contribution (not necessarily a certain or sole attribution) to impact or change. Too many nonprofits cannot answer the question clearly for board members, investors, or the public: “What positive difference did we make?”

 

DISCLAIMER: The views expressed in these pages are the views of the authors and do not necessarily reflect the views of the LAFF Society.


 
