
You Can’t Just Kick the Tires

The Kenyan Ranger school in Gilgil is a hot and busy place where Kenyan soldiers can be found on the firing range, running the obstacle course, and preparing for deployment. The school is filled with equipment, infrastructure, and trainers that the United States has provided to the Rangers as part of its effort to counter the Al Shabab jihadist group. Dexis assessment, monitoring, and evaluation (AM&E) experts engage with complex, remote, and important security cooperation programs like these to understand their impact and return on investment for the U.S. Government.

The traditional approach to program assessments has been to use teams of former military officers with a background in U.S. training doctrine who examine where the equipment is situated, how it is being used, and who is using it. Essentially, they go out and kick the tires. This methodology often lacks the rigor, flexibility, and sustained processes necessary to fully evaluate assistance programs in diverse cultural and geographic settings.

In contrast, the techniques Dexis used during its engagement in Kenya exemplify the kind of rigorous, comprehensive evaluation needed to satisfy recent requirements set by Congress and the Department of Defense for the assessment and monitoring of security cooperation programs. Dexis teams use a broad suite of proven data collection methods to meet the increasingly sophisticated requirements of decision-makers, which include understanding long-term positive strategic change and increased political will in partner nations, as well as the return on U.S. investment.

In Kenya, Dexis assessed the effectiveness of U.S. training, equipment, and advising by interviewing soldiers who have trained at the school in Gilgil as well as the leaders who deploy them. We tailored structured evaluation methods to the context to get the best results.

Much of the process is intuitive to a seasoned expert but is worth making clear: it comes down to getting the right people in the right environment through the right format.

Evaluators must work to earn the buy-in of those they interview in order to broach difficult topics and receive honest answers. All stakeholders must be convinced of the value of evaluations and that their input will improve their programs. In Gilgil, the Rangers, their leadership, and U.S. Embassy staff were all briefed on the importance of the evaluation. Like many stakeholders, participants were eager to showcase successes and discuss important changes once they understood its rationale.

Verifiable, structured methods of evaluation yield data that can be used to credibly assess the effectiveness of programs. Structured methods also allow others to examine how the data was obtained and to reuse the information for multiple purposes rather than a single report. On a recent assignment in Africa, Dexis teams entered qualitative data into a coded database so that relevant information could be quickly retrieved. For example, any future team examining the effectiveness of casualty evacuations in Cameroon could find what it was looking for with a few search terms. Such carefully coded data is particularly important in the military, where frequent personnel changes can lead to gaps in institutional knowledge.
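
To make the idea concrete, here is a minimal sketch of what such a coded database could look like, using Python's built-in sqlite3 module with SQLite's FTS5 full-text index (assuming FTS5 is compiled in, as it is in standard builds). The records, codes, and search terms are hypothetical illustrations, not actual field data.

```python
import sqlite3

# A coded qualitative database: each interview excerpt carries
# analyst-assigned codes alongside its country and text, and the
# FTS5 index makes every column searchable by term.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(country, codes, excerpt)")

# Hypothetical entries illustrating the coding scheme.
records = [
    ("Cameroon", "casualty-evacuation medical",
     "Unit reported faster casualty evacuation after the medic course."),
    ("Kenya", "marksmanship range",
     "Rangers cited improved qualification scores on the firing range."),
]
conn.executemany("INSERT INTO notes VALUES (?, ?, ?)", records)

# A future team looking into casualty evacuations in Cameroon can
# retrieve the relevant excerpts with plain search terms.
query = "SELECT country, excerpt FROM notes WHERE notes MATCH ?"
for country, excerpt in conn.execute(query, ("casualty evacuation Cameroon",)):
    print(country, "->", excerpt)
```

The design choice that matters here is the codes column: because the analyst's codes are indexed along with the raw text, a search succeeds whether the term appears in the excerpt itself or only in the coding, which is what lets the knowledge survive personnel turnover.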

Another important component of effective assessments is crafting the right questions for the personnel on the ground. Questions need to be tailored to the profile of the person being interviewed; soldiers, for example, may need to be asked different questions than their commanders. Ideally, the questions should be asked in the local language: in Kenya, Dexis teams questioned troops in Swahili as well as English. Interviews with military personnel are often most effective when questions are grounded in specific examples and measures of progress. For instance, asking about previous deployments and changes in duty station is non-confrontational and allows interviewers to assess career development, the application of specific skills, and the utility of training.
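
As one way to picture this tailoring, the sketch below models a hypothetical question bank keyed by interviewee profile and language; the profile names, question text, and fallback rule are illustrative assumptions, not Dexis's actual instrument.

```python
# A hypothetical question bank keyed by (profile, language).
# Profile names and question text are illustrative only.
QUESTION_BANK = {
    ("soldier", "en"): [
        "Walk me through your most recent deployment.",
        "Which skills from the course have you used in the field?",
    ],
    ("commander", "en"): [
        "How has unit readiness changed since the last training cycle?",
        "Where have course graduates been assigned, and in what roles?",
    ],
    # Translated sets, e.g. ("soldier", "sw") for Swahili, would sit
    # alongside the English ones once prepared by local linguists.
}

def questions_for(profile: str, language: str = "en") -> list[str]:
    """Return the question set for a profile, falling back to English
    when no translated set exists yet."""
    return QUESTION_BANK.get((profile, language), QUESTION_BANK[(profile, "en")])

print(questions_for("commander"))
```

Keeping profile and language together in the lookup key makes the evaluator's tailoring choices explicit in the data rather than buried in interviewer judgment, while the English fallback keeps the instrument usable before every translation is complete.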

The need to customize assessments may seem like an obvious point, but too often interviews are conducted with a “one size fits all” approach built on check-box surveys. While many evaluations use U.S. military guidelines as models, this approach can overlook the unique circumstances of partner nations and their military forces. To understand long-term impact, evaluators need the right balance: a process that is clear, simple, and robust, yet allows for tailoring to specific needs.

Evaluations must be robust and structured but, particularly in the case of security sector assistance, must also be flexible enough to engage with stakeholders in any environment. As in Gilgil, where Dexis teams spoke to troops in their operating environment, getting close to the situation is key. The team collected data across stakeholder groups in whatever setting it found them, from the air-conditioned office of the Commanding Officer to the shade of a tree near the obstacle course, walking alongside soldiers as they trained.

Effectively evaluating security sector assistance is challenging but far from impossible. No longer can we simply kick the tires and speak with high-level leadership. Through a flexible yet structured approach, evaluations can be conducted in a way that meets the evolving requirements of the U.S. government. Well-crafted data collection methods, evidence-based questions, an understanding of each program's context, buy-in across all stakeholders, and a sustained, consistent commitment to the work allow decision-makers to understand the impact of their programs.


Jessica Lee, Ph.D., is Deputy Director of the Monitoring, Evaluation and Learning Division at Dexis, where she manages Dexis' work in assessment, monitoring, and evaluation of security assistance programming.