Third-party monitoring is important in conflict environments, where it can be particularly difficult to know if programs are effective and resources are reaching their intended recipients. It is also especially challenging. Situations change rapidly, accurate information can be hard to come by, and access is often limited and dangerous. Third-party monitoring programs must be creative and adaptable in using available resources.
Through our recent work in Syria, we’ve found several ways to make third-party monitoring in conflict environments more effective. Using varied data collection methods, avoiding time lags between activity completion and monitoring, and using monitoring findings to augment program learning all increase the chances of successful monitoring and improved programming. We learned these lessons through the recently concluded Syria Information Collection, Analysis, and Monitoring (ICAM) program, which provided data collection, analysis, and third-party monitoring to a USAID/Office of Transition Initiatives (OTI) program in Syria.
Between June 2015 and August 2018, ICAM monitored 218 activities, constituting nearly $30 million in assistance, providing USAID/OTI with an additional layer of validation that assistance was allocated and used in accordance with its intended purpose. ICAM’s output verification monitoring spanned a broad range of activity types across the provinces where assistance was delivered.
Here’s what we learned about conducting third-party monitoring in conflict-affected countries like Syria:
Complex, high-risk environments require varied data collection methods. Third-party monitoring in high-risk environments contends with a broad range of challenges. Security issues—such as airstrikes, shelling, presence of armed groups, and checkpoints—make output verification especially difficult. In addition to limiting field access, insecurity causes other cascading effects that impact data collection. Outputs are often temporarily or permanently moved offsite because of deteriorating security conditions or shifting needs on the ground. Incomplete or unavailable documentation is also a consistent issue. Sometimes this is a symptom of the conflict; for example, a grantee may only be able to provide partial documentation because activity files were destroyed in an airstrike. In other instances, it’s the result of clerical issues or concerns about sharing potentially sensitive project information.
However, a multifaceted monitoring approach that triangulates information can help overcome these pervasive challenges. Indeed, an analysis of ICAM’s reports shows no correlation between data collection issues and successful verification of outputs. Site visits, in-person grantee and stakeholder interviews, social media reviews, resident snap surveys, and remote interviews (when appropriate) enable third-party monitoring platforms to build a composite picture of the situation. Rather than taking a narrow view and categorically reporting that an output was not verified because of a deficiency in one piece of data, this approach draws on multiple data streams, allowing monitoring programs to assess the totality of the evidence and develop a sufficient degree of confidence in their findings.
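To make the triangulation logic concrete, here is a minimal sketch of how a weighted, multi-source confidence assessment might work. The source names, weights, and threshold logic are all illustrative assumptions, not ICAM’s actual methodology; the point is simply that an output is judged on the balance of available evidence rather than failed outright because one source is missing.

```python
# Illustrative sketch only: triangulating multiple data streams into an
# overall verification confidence. Source names and weights are hypothetical.

# Each source yields a finding: True (corroborates the output),
# False (contradicts it), or None (unavailable -- e.g., documentation
# destroyed in an airstrike, or a site that cannot be accessed).
SOURCE_WEIGHTS = {
    "site_visit": 0.35,
    "grantee_interview": 0.20,
    "stakeholder_interview": 0.20,
    "social_media_review": 0.10,
    "resident_snap_survey": 0.15,
}

def verification_confidence(findings):
    """Weighted share of corroborating evidence among the sources that
    actually produced data. Returns None if no source was available."""
    available = {s: f for s, f in findings.items() if f is not None}
    if not available:
        return None  # no data at all: cannot assess
    total = sum(SOURCE_WEIGHTS[s] for s in available)
    corroborating = sum(SOURCE_WEIGHTS[s] for s, f in available.items() if f)
    return corroborating / total

# Example: one source is unavailable, but all remaining evidence agrees,
# so the output can still be verified with high confidence.
findings = {
    "site_visit": True,
    "grantee_interview": True,
    "stakeholder_interview": None,  # point of contact unreachable
    "social_media_review": True,
    "resident_snap_survey": True,
}
print(verification_confidence(findings))  # 1.0
```

The key design choice is that missing sources are excluded from the denominator rather than counted as failures, mirroring the article’s point that a gap in one data stream should not by itself sink a verification.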
Third-party monitors should avoid undue time lags before verifying outputs. In intense, kinetic environments like Syria, the likelihood of encountering data collection challenges grows as the time horizon expands. The greater the gap between activity completion and monitoring, the greater the chance that security conditions and access may change, grantee points of contact may become unreachable, or outputs may be moved, damaged, or destroyed. The tradeoff is that a quick turnaround increases the likelihood that data collection will be disrupted by last-minute developments, which can emerge without warning in fluid environments like Syria.
To mitigate this risk, third-party monitors should use available sources to identify delays, modifications, or other changes that would affect monitoring. An open line of communication between the third-party monitoring platform and the program being monitored is also invaluable.
Third-party monitoring platforms also need to be structured to be fast and flexible, adapting to changing conditions on the ground. When needed, ICAM was able to mobilize field monitors in 24 hours or less to rapidly assess the status of an activity and communicate findings back to the donor. Programs should be structured in a way that allows for this type of expedited fieldwork and reporting.
Third-party monitoring platforms should contribute to program learning. Independent third-party monitoring platforms must be firewalled from the programs that they monitor in order to remain credible and neutral. This does not mean, however, that third-party monitors cannot create formal and informal feedback loops that contribute to program learning. A foundational element of ICAM’s output verification process was the review and response phase. This formal feedback loop gave the program being monitored the opportunity to accept or contest ICAM’s findings and provide additional clarifications as needed. This in turn provided the donor with even more information upon which to make a final determination, increasing confidence in the findings.
Informal feedback loops functioning through the output verification reporting process can also benefit all stakeholders involved in the monitoring process. The third-party monitor can provide observations from field monitors, comments from grantees about the specifications of equipment, or anecdotal quotes from residents that can feed into the design of future activities. For its part, the program being monitored can provide recommendations to the third-party monitor about how to effectively communicate with grantees, or how to better monitor certain types of grants.
In addition, periodic analysis of these feedback loops may uncover trends that can help inform programming in real time. Stakeholder and resident feedback about the quality of services or goods can provide valuable information to program staff. Although such feedback is anecdotal, when analyzed collectively, trends may emerge that can support strategic and program-level analysis. Such feedback loops do not replace the field staff’s analytical capabilities, nor do they identify windows of opportunity as effectively as in-depth research or dedicated perception or thematic monitoring. Nonetheless, third-party monitoring mechanisms can and should be structured to include feedback loops that contribute to iterative action research models.
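The idea of analyzing anecdotal feedback collectively can be sketched simply: tag each piece of feedback with an activity type and a theme during report review, then count recurring pairs. The categories, records, and threshold below are invented for illustration; real tagging schemes would come from the program’s own reporting.

```python
# Illustrative sketch only: surfacing trends from anecdotal feedback
# collected during output verification. All records are hypothetical.
from collections import Counter

# Each record: (activity_type, feedback_theme) tagged during report review.
feedback = [
    ("equipment_grant", "quality_concern"),
    ("equipment_grant", "quality_concern"),
    ("equipment_grant", "positive_reception"),
    ("training", "positive_reception"),
    ("training", "access_barrier"),
    ("equipment_grant", "quality_concern"),
]

def recurring_themes(records, min_count=2):
    """Flag (activity_type, theme) pairs that recur often enough to
    warrant a closer look by program staff."""
    counts = Counter(records)
    return {pair: n for pair, n in counts.items() if n >= min_count}

print(recurring_themes(feedback))
# {('equipment_grant', 'quality_concern'): 3}
```

Even this trivial aggregation illustrates the article’s point: a single quality complaint is an anecdote, but three complaints about the same activity type are a trend worth feeding back into activity design.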
The views expressed in this publication do not necessarily reflect the views of the United States Agency for International Development or the United States Government.