Posts Tagged ‘rigor’

#FailEpic, continued

Friday, August 7th, 2015

I appreciate the lively response to my last post asking why it’s so difficult to talk about failure in philanthropy. Commenters brought up important points, including that it can be difficult to decide when failure has actually happened – when do you know to throw in the towel? – and that it’s not just admitting failure but learning from it that generates insight and improvement.

I would also note an incisive piece in Nonprofit Quarterly assessing the failure of the social impact bond designed to reduce juvenile recidivism on Rikers Island. Cohen and Zelnick rightly point out that what is being hailed as a partial success – that because the program did not hit its targets, taxpayers did not have to pay for it – masks a more complex reality. Recidivism was not reduced (no upside there), and public resources were still tapped in the form of in-kind time from city officials. This example reinforces one of the points made by a commenter on my original post: what counts as failure depends on who’s doing the telling, and when.

I see two strands of conversation worth pursuing, given the interest my original post has generated as part of an overall mini-trend toward more reckoning with failure in philanthropy.

One is to explore what it looks like to have candid conversations between funders and nonprofits about real issues of execution and responsibility (on all sides!) in a context beyond the one-on-one grant relationship. I come to this with an instinct that a more public version of such conversations would be salutary, but also with deep wariness: it has to be done in a way that’s constructive rather than harmful.

  • Are there stages by which such conversations evolve? Do you need to start with self-reflection, then within your own organization, then within a trusted network of peers, then more publicly? That’s an awful lot of steps.
  • Perhaps the best starting place is not talking about failure within a particular grant relationship, but in the context of a topic of shared interest in which the participants don’t have a direct stake. One can imagine a study group dedicated to reviewing examples of initiatives that have failed, and seeking to generate and apply insight from them – with an audience of funders and nonprofits who aren’t part of that field. Might that be a less threatening way to get started?
  • It might, because trying to have a conversation within a field about what worked and what didn’t is incredibly difficult. I think about the “four pillars” strategy in the immigration reform movement, which national funders and nonprofits developed together after the failed attempt to pass comprehensive immigration reform in 2006-07. They analyzed why they lost and which gaps had put them at a disadvantage, and then moved resources and effort toward filling those gaps. What makes cases like that possible? Where else does this happen?

The other strand of conversation worth pursuing is to ask what it looks like within an organization, and specifically a foundation, to be open to acknowledging, learning from, and acting on failure. What values and motivations need to be in place? Who are the change agents and culture bearers? How do incentives need to change? Are there particular structures or systems that make it easier to learn from and act on failure? What do a higher risk tolerance and a culture of inquiry look like in practice? I feel like we know a lot about this in the field, but the threads of conversation aren’t necessarily organized.

  • Part of the challenge is, who owns failure within the institution? In other words, who’s responsible for identifying it, naming it, lifting it up, creating a safe space in which to discuss it, making sure meaning is derived, and then following through on application of that insight? Those responsibilities fall across a number of functions – evaluation, HR, programs, senior leadership, board. Which role should serve as the steward or shepherd, ensuring that those functions are integrated in pursuit of mining improvement from failure, and what resources or tools does that person or team need?

Thanks again to all who have engaged on this topic, and to the organizations that have begun hosting conversations among funders about being more open about failure. Do the strands of conversation I suggest above seem relevant and worth pursuing? What kinds of spaces could we create for more authentic funder-nonprofit dialogue? And how can we get clearer about the organizational culture needed to support openness about failure?


What’s Strategy Got to Do With It? On the Social Sciences and Philanthropy

Thursday, August 29th, 2013

My first post on the Stanford Social Innovation Review opinion blog:

http://www.ssireview.org/blog/entry/whats_strategy_got_to_do_with_it

The Gambler

Thursday, June 7th, 2012

I’m wondering whether the key to “strategery” isn’t found in the wisdom of Kenny Rogers: “You’ve gotta know when to hold ’em, know when to fold ’em, know when to walk away, know when to run.”

The song is about a card player who’s observing not just a series of numbers but also a group of people. It’s often said that successful poker players read their opponents, not the cards.

This strikes me as a useful metaphor for strategy in philanthropy, particularly at a time when “metrics mania” has taken hold. To me, it becomes “mania” when metrics are driven by superstition: DATA take on a totemic power and aren’t understood either in themselves or in relation to their context.

It’s not enough to gather data; you have to know how to use them. Which means being clear about why you’re gathering them. Which means being clear about what you’re hoping to accomplish through the use of data.

Strategy in this respect is about the judgment of when to use different kinds of data, and how to balance them against each other. Context is everything. Decision-making is strategic when it’s data-driven, but even that phrase is a bit deceptive. It’s not the data doing the driving; they’re the fuel – you have to be the driver. But all too often we act as if we’re in one of those Google self-driving cars and try to have the data “speak for themselves.” Ain’t no such thing, my friends.

So think about Kenny Rogers the next time you’re wondering how to be more strategic in your giving. Read the numbers on the cards and do your calculations, but only as you read the players and the table.

The unbelievable truth?

Wednesday, December 29th, 2010

Provocative piece in a recent New Yorker (hat tip to Tactical Philanthropy) about an emerging doubt among scientists about the validity of many published results. The “decline effect” is the pattern in which results that initially appear robust and statistically valid (X drug helps lessen symptoms of Y disease in Z percent of patients) fade when researchers try to reproduce them over time: either the finding can’t be replicated at all, or the effect lessens (Z gets smaller or disappears).

The upshot?

The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.

Interesting, considering how much weight philanthropy is giving these days to randomized controlled trials and experimental design as the gold standard for evaluation, particularly in international development. Reminds us to be humble about our claims.
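For the statistically inclined, here’s a toy simulation, my own sketch in Python rather than anything from the article, of one mechanism that can produce a decline effect without fraud or sloppiness: if only results that clear a significance threshold get written up, the published effects overstate the true one, and faithful replications then appear to “decline” toward it. All the numbers are invented.

```python
# Toy simulation (my own illustration, not from the article): a modest true
# effect, many small studies, and only the "significant" positive results get
# written up. The published effects overstate the truth, so honest replications
# of the same design appear to "decline."
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2      # true difference between groups, in standard-deviation units
n_per_group = 30       # small samples, as in many early studies
n_studies = 2000

published, replications = [], []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t_stat, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05 and t_stat > 0:          # only this result gets "published"
        published.append(treated.mean() - control.mean())
        # a faithful, pre-registered replication of the same design
        rep_treated = rng.normal(true_effect, 1.0, n_per_group)
        rep_control = rng.normal(0.0, 1.0, n_per_group)
        replications.append(rep_treated.mean() - rep_control.mean())

print(f"true effect:            {true_effect:.2f}")
print(f"average published:      {np.mean(published):.2f}")    # inflated by selection
print(f"average replication:    {np.mean(replications):.2f}") # closer to the truth
```

The point isn’t that the original studies were dishonest; selecting on significance builds an optimistic bias into whatever gets reported, which is one more reason humility is warranted.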

There are two ways to practice that humility: one is to be very explicit about our assumptions, and to make them publicly available. This was what I was taught in grad school: describe how you conceptualize, operationalize, and measure your variables, and explain how you code them. And I studied one of the wanna-be sciences; I’m frankly shocked that such practices aren’t standard in medical research, if the article is to be believed.

The other way to be humble about our claims around evaluation is to triangulate: to put quantitative results in context. Another thing I learned in grad school was to specify mechanisms: in as much detail as you can, describe how you see the causal pathway working between the cause you posit and the effect you’re trying to explain. And harmonize the two: have quant and qual work with each other and reinforce each other.

As a new year approaches, always good to be reminded of the importance of humility. I’m often ambivalent about transparency, for a variety of complicated reasons. This kind of transparency, about methods and assumptions that back up claims of empirical “proof” – this I can get behind.

Here’s to a happy and healthy 2011 for one and all. I’ll resume my regular Tuesday-Wednesday-Thursday schedule next week.

The data-driven, multi-method, context-sensitive life

Tuesday, August 10th, 2010

In the wake of the Shirley Sherrod fiasco, this op-ed from Van Jones struck a chord: how easy it is today to tear someone down based on a single utterance, divorced from context. This piece from Marcia Stepanek about danah boyd’s (yup, that’s how she spells it) reflections on privacy drove the point home:

“The material that is being put up online is searchable by anyone, and it is being constantly accessed—out of context and without any level of nuance,” Boyd told attendees of last week’s Supernova Conference at The Wharton School in Philadelphia. “That kind of spotlight on people can be deeply devastating, and a type of exposure that may not be beneficial to society.” Put simply, Boyd said, “we can’t divorce information from interpretation … or we risk grave inaccuracy.”

Where are the search algorithms that take a result and put it in context? Is this the next frontier Google should be exploring (rather than “being evil”)? Or is that function one that used to be called journalism?

Evaluation methodology is like this; it needs to be put into context: what assumptions are being made, and what happens to the results if those assumptions are relaxed? The data-driven life is about more than just numbers; the data-driven, multi-method life has to be about context.
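To make that concrete, here’s a minimal sketch, with invented numbers, of what relaxing an assumption can look like in an evaluation: a program reports a success rate based only on participants who completed it, and the headline number depends on what you assume about the dropouts.

```python
# Minimal sketch with invented numbers: the same program data under three
# different assumptions about participants who dropped out before finishing.
enrolled = 120                 # everyone who started the program
completed, succeeded = 80, 60  # completers, and successes among them

# Assumption 1 (the report's implicit one): dropouts can simply be ignored.
print(f"completers only:       {succeeded / completed:.0%}")   # 75%

# Assumption 2: dropouts would have succeeded at the same rate as completers.
print(f"dropouts same rate:    {succeeded / completed:.0%}")   # still 75%

# Assumption 3 (pessimistic): none of the dropouts succeeded.
print(f"dropouts all failed:   {succeeded / enrolled:.0%}")    # 50%
```

Same data, three defensible-sounding assumptions, and a 25-point swing in the result. That’s the kind of context the numbers can’t supply on their own.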

The data-driven, multi-method life

Wednesday, August 4th, 2010

Multi-method research involves some mixture of qualitative, quantitative, and game-theoretical approaches. As I was coming up in grad school, this was increasingly becoming the norm in my department at UC Berkeley. In my own research, I combined archival research with some quantitative analysis – partly of data I had gathered through that archival research, and partly of a dataset I created based on existing qualitative work. The qualitative work set up the quantitative analysis: I developed concepts and a theoretical framework, and examined them in a case study involving multiple episodes over time in one country. Based on that examination, I identified ways to operationalize the concepts for a broader set of countries, gathered those data, and used them to test the theoretical framework across a set of Latin American countries. In the same chapter, I added three case vignettes, looking at how my theoretical framework did or did not apply in three other Latin American countries.

This is one reason I think the “data-driven life” is of necessity a multi-method one. Conceptualization and measurement are closely tied, and while measurement is viewed as quantitative, conceptualization is intensely qualitative. It’s important to understand and be clear about the conceptual frameworks underlying measurement when doing evaluation in the philanthropic and nonprofit sectors.
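To make that workflow concrete, here’s a stylized sketch in Python. The values are invented, not my dissertation data, but the sequence is the one described above: concepts developed qualitatively are coded into variables, and those variables can then be examined across a wider set of cases.

```python
# Stylized illustration with invented values (not my dissertation data):
# concepts developed in qualitative case work are coded into variables,
# which can then be examined quantitatively across a wider set of cases.
import pandas as pd

cases = pd.DataFrame(
    {
        "country": ["A", "B", "C", "D", "E", "F"],
        # coded from archival and secondary sources: did the country face a crisis?
        "crisis": [1, 1, 0, 0, 1, 0],
        # coded from the qualitative record: depth of subsequent reform, 0-3
        "reform_depth": [3, 2, 1, 0, 2, 1],
    }
)

# a simple first cut at the framework: do crisis cases show deeper reform?
print(cases.groupby("crisis")["reform_depth"].mean())
print("correlation:", round(cases["crisis"].corr(cases["reform_depth"]), 2))
```

The coding step is where the qualitative judgment lives; the comparison at the end is only as good as those judgments.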

The data-driven life

Tuesday, August 3rd, 2010

Came across an article by this title in the NYT from a few months back, about people who itemize their activities or ideas and turn them into searchable databases. Interesting, but some basic misapprehensions about the nature of data, I think. For example:

If you want to replace the vagaries of intuition with something more reliable, you first need to gather data. Once you know the facts, you can live by them.

And:

In other contexts, it is normal to seek data. A fetish for numbers is the defining trait of the modern manager. Corporate executives facing down hostile shareholders load their pockets full of numbers. So do politicians on the hustings, doctors counseling patients and fans abusing their local sports franchise on talk radio.

But data aren’t just numbers. And the opposite of numbers is not intuition.

A) Qualitative data can be systematized, coded, and made searchable.

B) Tools of quantitative data analysis are subject to the assumptions built into the equations, and those assumptions can be mighty hard to satisfy. And there’s an element of intuition and experimentation to the way those assumptions are made.

We need a more holistic view of what count as data. Yes, to the article’s point, more things than we think can be made into databases, but that only increases the need for interpretation. Data don’t speak for themselves….
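On point A, here’s a small sketch of what “systematized, coded, and made searchable” can look like for qualitative material. The notes and tags are invented; the point is that coding preserves the text rather than replacing it with a number.

```python
# Small sketch with invented notes: coding qualitative material with tags makes
# it searchable without reducing the text itself to a number.
notes = [
    {"id": 1, "text": "Staff described burnout after the grant cycle.",
     "tags": {"staffing", "funder-relations"}},
    {"id": 2, "text": "Participants valued the peer-mentoring sessions.",
     "tags": {"program-design"}},
    {"id": 3, "text": "Board members were unclear on what the metrics measured.",
     "tags": {"governance", "measurement"}},
]

def search(records, tag=None, keyword=None):
    """Return records that match a coded tag and/or a keyword in the text."""
    hits = records
    if tag is not None:
        hits = [r for r in hits if tag in r["tags"]]
    if keyword is not None:
        hits = [r for r in hits if keyword.lower() in r["text"].lower()]
    return hits

print(search(notes, tag="measurement"))   # everything coded to measurement
print(search(notes, keyword="grant"))     # free-text search still works too
```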

What does it really mean to be methodologically rigorous?

Thursday, May 6th, 2010

One of the reasons I chose to get my doctorate in political science at UC Berkeley is that the department is known for being “methodologically plural,” meaning that multiple methods are embraced and taught: statistical analysis, game theory, survey analysis, case studies, comparative-historical analysis, and others.

I came into the program somewhat skeptical about the idea of social “science” – I wanted to study comparative politics, and this seemed to be the place to do it. But I learned something simple and profound about the scientific ideal: it’s about logic, consistency, clarity, and transparency. The ideal is that you make your methods of data collection and analysis clear enough that someone else could use your data, re-run the analysis, and get the same results. In practice, this meant thinking a lot about case selection, about the potential sources of error, and about the tools of data analysis.
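In code terms, that ideal looks something like this minimal sketch (a made-up mini-dataset, not a real analysis): the inclusion rule and the metric are stated in the script itself, so anyone re-running it gets the same numbers and can see exactly which choices produced them.

```python
# Minimal sketch of the ideal, using a made-up mini-dataset: the inclusion rule
# and the metric are written into the analysis itself, so anyone re-running it
# gets the same numbers and can see which choices produced them.
import pandas as pd

# stand-in for a documented input file
df = pd.DataFrame(
    {
        "program_area": ["education", "education", "health", "health"],
        "years_reported": [2, 0, 1, 3],
        "outcome_change": [0.10, 0.50, 0.05, 0.15],
    }
)

# Inclusion rule, stated rather than buried: at least one year of reported data.
included = df[df["years_reported"] >= 1]

# Metric, stated: mean change in the outcome measure, by program area.
print(included.groupby("program_area")["outcome_change"].mean())
```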

What I took away was the idea that rigor is about making explicit what many take for granted: where did you get your information, how did you analyze it, how else could you have analyzed it, and how do your results follow from your analysis? With so much focus on data and metrics in the nonprofit sector and philanthropy, it’s important to remember that simple idea: rigor is not an elaborate technique or a fancy spreadsheet – it’s about honesty, with yourself and your audience, about the limitations, and the possibilities, of your work. If we can message that more effectively, it may be easier for some folks to get on the metrics bandwagon, and for the public at large to trust in the results of our work.