
March 29, 2017

Employee Spotlight: Senior M&E Technical Advisor Dr. Jonathan Jones

Dr. Jonathan Jones, a Senior Monitoring and Evaluation Technical Advisor working for CAMRIS, gives us a peek into the role of M&E in successful program implementation. This interview was conducted in March 2017, shortly after Jonathan returned from working with the Department of State in Belize.  

Q: Prior to your role as Senior Monitoring and Evaluation Technical Advisor with CAMRIS, what was your background and previous work history?

A: I hold a PhD in Political Science from the University of Florida. My dissertation focused on social movements within a democratic system in West Bengal, India. I have fieldwork experience in nineteen developing countries around the world. Prior to working at CAMRIS, I was an Evaluation Officer for three years at the International Republican Institute, a non-partisan international democracy assistance organization, where I focused on bringing evidence and learning to program decisions. Before that, I worked at EnCompass LLC, where I led evaluations of international development programs in many different sectors around the world. At IRI and EnCompass, I led capacity building activities for civil society organizations, helping them incorporate evaluative thinking into their programs. I have also served as an Adjunct Professor at Georgetown University and George Washington University, where I taught graduate courses focused on M&E of foreign assistance, and as Chair of the American Evaluation Association International and Cross Cultural Thematic Interest Group.


Q: You were recently invited as an Expert Panelist at The Evaluators Institute to speak about “Project Management and Oversight for Evaluators.” What were some of the highlights from this panel that resonated with you?

A: We had very interesting discussions about the realities of managing complex evaluations and real-world strategies to ensure that evaluations are aligned with need and positioned to be useful. Although this might not sound overly difficult, it is in fact the essence of the evaluation process and requires deep engagement with the key audiences from start to finish. One issue that often comes up during an evaluation is shifting deadlines, with clients sometimes needing findings and recommendations earlier in the process than anticipated. We discussed how important it is for the evaluation team to continually adjust timelines to need, even if this means shifting those timelines to accommodate client decision-making processes. The worst outcome for an evaluator is producing a rigorous evaluation that is not used.


Q: You were recently appointed the Co-Chair of the American Evaluation Association Local Arrangements Working Group. Can you tell us more about your role and the role of the committee?

A: As Co-Chair of the LAWG, I am responsible for ensuring that the upcoming AEA conference in Washington, D.C. this November leverages local people and resources. I also assist with local knowledge and information and spread the word about the conference to local groups; basically, my job is to make sure D.C. is showcased at the conference. I am also the former Chair of the American Evaluation Association International and Cross Cultural Thematic Interest Group (ICCE TIG), where I managed a thematic group of approximately 800 international members of the American Evaluation Association. That role included managing the international travel award process (given to about six international evaluators a year to attend the AEA Conference) and managing the proposal review process for submissions to the AEA Conference. It was a three-year tenure.


Q: CAMRIS is currently working with USAID/Nepal on the Monitoring, Evaluation and Learning contract. What are some of the key components to this contract, and have there been any recent developments, achievements, or milestones?

A: Technically, this contract has three components: 1) support to USAID and implementing partners to ensure that they are effectively monitoring the success of programs, 2) evaluations and assessments to inform learning, and 3) strengthening knowledge management practices to ensure that evidence and learning inform decisions. In essence, we are tasked with ensuring that the Mission and partners have meaningful evidence on hand to inform strategy. We have recently completed several successful assessments and evaluations in different sectors in Nepal, and we are currently in the midst of three complex mixed-method data collection efforts for three different research projects. In the fall of 2016, CAMRIS facilitated a two-day learning event that focused on lessons learned around MEL. The central goal of the summit was to promote a culture of collaboration, learning and adaptation (CLA) around MEL among USAID staff and implementing partners (IPs). We have also held several successful capacity building events (approximately 40 participants) for the Mission and partners, focused on topics such as developing effective performance monitoring plans and evaluation planning and management.


Q: Drawing from your experience in Monitoring and Evaluation, what do you believe drives the success of M&E programs?

A: I believe the success of M&E programs is entirely driven by ensuring that evaluation teams deliver credible evaluation products that will be directly used to inform decision making. In many ways, I feel that the evaluation community has figured out how to do rigorous research within budget constraints. The critical next step for our field is to ensure that those research products are positioned for use. Getting to use requires difficult and often time-consuming processes that are essential and must not be short-changed. For example, deep audience engagement during design helps ensure that the evaluation scope, questions, and methods resonate. I often kick off an evaluation process by asking the funder and implementer to think about what a wildly successful evaluation process looks like to them, and to tell us what we, as evaluators, can do to get there.

Sometimes evaluators tend to develop esoteric questions that sound great but do not resonate with funders and implementers. Instead, I like to start a design process by asking the key audience of the evaluation to fill in the blank in the following statement: “I would really like to learn ________ about this program that will help me make good decisions about the program in the future.” I learned this in a training at The Evaluators Institute; they offer great training programs led by long-time evaluation experts, and I encourage burgeoning evaluators to participate.

Also, I like to think about the evaluation report as a platform for learning. Therefore, the final report should not be the final step. Ideally, evaluators will step in after the report and facilitate a learning workshop with the audience of the evaluation, with the objective of helping participants think through how the evidence in the evaluation can inform program strategy going forward. The evaluation community has been talking about evaluation use for decades; however, in my experience, few evaluators actually put in the work needed to get there. I think our field needs to focus much more on how to ensure our work is used. This is not easy, but it is very rewarding in the end.