
Survey Error: Why Development Professionals Under-Use Their Best Evaluation Tools

By Craig Charney | Insights | Series II | No. 3 | February 2014

We got a surprise from our Development Evaluation Survey: beneficiary surveys remain the least-used evaluation method, despite all the emphasis on better evaluation, results-based management, and value for money from funders. When we did a follow-up survey to find out why, we got even more surprising findings — and clues to improve use of the most powerful evaluation methods.

We learned in the Development Evaluation Survey that the traditional “two by two by two” evaluation — two evaluators visiting two locations for two weeks — remains alive and well. Charney’s evaluation survey, conducted last year among 624 development professionals with the Society for International Development — Washington branch (SID-W), found that traditional, program-based evaluation methods remain in frequent use. (See Figure 1 below.)

Figure 1: Development Professionals — Use of Evaluation Methods


The most common approach is simply visiting the site of a program, used in 57% of projects or proposals that respondents had worked on. The other program-based method, interviewing program officials, was cited by 50%. In one evaluation out of six, these were the only evaluation techniques used.

The good news: five in six evaluations included beneficiary-based methods. However, the most commonly used approach was also the least reliable: in-depth interviews with a limited number of selected beneficiaries were chosen in more than half the cases (51%). Quantitative surveys, or polls, were used in 47%; focus groups, a qualitative survey technique, in 42%.

Both polls and focus groups offer important advantages as evaluation techniques. They tap larger, more diverse groups of program beneficiaries, selected through a structured and objective method, and thus provide greater confidence in their results. Of course, they cost more than a handful of in-depth interviews or a site visit — but they yield more too. (There are some instances, such as working with elites in non-permissive environments, where in-depth interviews may be unavoidable, but such cases are the exception.)
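To see why sample size matters for that confidence, here is a rough back-of-the-envelope sketch of the standard margin-of-error calculation for a simple random sample (our illustration, not a figure from the survey): precision improves with the square root of the number of respondents, so a 400-person poll narrows an estimate to within roughly five percentage points, while ten interviews leave a margin of roughly thirty points — and a purposively selected handful supports no comparable statement of precision at all.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Approximate 95% margin of error for a proportion estimated
        # from a simple random sample of n respondents.
        return z * math.sqrt(p * (1 - p) / n)

    for n in (10, 100, 400, 1000):
        print(f"n = {n:4d}: +/- {margin_of_error(n) * 100:.1f} percentage points")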

Surveys are under-appreciated

The Development Evaluation Survey gave us one hint as to why surveys are under-used: what matters is how much evaluation itself is valued. Surveys were not more common in projects with bigger budgets, as we might have expected. But they were more frequent in projects where the evaluation budget was a larger share of the total. In other words, the most effective methods are more likely to be used when people are more willing to commit resources to evaluation.

But this raises the question: why are development professionals less willing to commit resources for surveys? We began tackling this issue in our follow-up survey for SID-W last December. (See Figure 2 below.)

Figure 2: Development Professionals — Views of Evaluation Methods


One obvious factor: program-based methods are more familiar to development professionals. As the chart shows, the traditional evaluation approaches are the most familiar. Some 60% of the development professionals polled felt familiar with using site visits, and 52% said the same of key informant interviews.

In contrast, the majority don’t feel they know how to use surveys very well. Only 49% said they were very familiar with qualitative methods, and just 41% said that of polls. One clear finding: the least familiar methods, surveys, are also the least used — development professionals often stay in their comfort zones with the familiar program-based methods.

Another discovery: polls in particular are under-rated by development professionals. When we asked about the usefulness of each method, 67% thought site visits very useful, and nearly as many, 66%, said that of qualitative surveys. Some 60% said the same of key informant interviews — but fewer than half (48%) thought quantitative surveys were very useful.

Time to build survey knowledge

Think about the knowledge and perception gaps these findings highlight.

The technique that can provide the most precise indications of program need and effectiveness — quantitative survey research — is unfamiliar and under-appreciated by the majority of development professionals. Small wonder, then, that polls are also the most rarely used evaluation method.

In the case of qualitative surveys, development professionals are more aware of their potential value — but the majority don’t feel very confident about using them. This, too, is an evident reason why focus groups are used less often than program-based methods. (It also explains why qualitative research was the second most popular topic respondents wanted to learn more about, among the 15 we tested, with 55% of respondents very interested.)

Our take-away is that there’s a need to build awareness of what qualitative and quantitative surveys can do, as well as of how development professionals can use them. Charney Research will help. Starting in March, we will run six monthly webinars on evaluation methods and publish a series of white papers and articles on the same themes.

If evaluation is to become the rigorous and useful management tool it ought to be, it’s essential to build familiarity with and understanding of survey techniques among those who need to put them to work. Otherwise, development professionals who stick with the old ways will simply continue to talk to themselves!