Recent Changes

Tuesday, August 7

  1. user_del rcelaya rcelaya left opm
    8:51 am

Friday, May 18

  1. user_add CSmith7 CSmith7 joined opm
    6:46 am

Friday, April 25

  1. user_add jgoulden jgoulden joined opm
    1:20 pm
  2. user_add rcelaya rcelaya joined opm
    1:20 pm

Monday, March 3

  1. page LAC OP&L Case Study edited ... I (part 2) Information system on the intranet where data are inputted for each tool, facilita…
    ...
    I (part 2)
    Information system on the intranet where data are inputted for each tool, facilitating consolidated data reporting
    Central database and analysis reports
    It is very important to define the spaces in which information will be analyzed (expanded SMT, programmatic teams, CO SMT) (insert chart here)
    Communication
    How the OP&L is communicated and the way that results are analyzed and shared has been a key component of ensuring a successful implementation.
    (view changes)
    9:02 am
  2. page LAC OP&L Case Study edited Organizational Performance at CARE: A Case Study of the LAC OP&L I. Impetus for Change As…
    Organizational Performance at CARE:
    A Case Study of the LAC OP&L
    I. Impetus for Change
    As the pace and scale of global interconnectedness and change proceed ever more rapidly, and Western awareness of and concern for global inequity grows, so does the attention given to INGOs and their role in making demonstrable reductions in poverty. The changing international context demands that NGOs offer a raison d'être for their work, their methods, their results, and their very relevance. Scandals ranging from sexual affairs to illicit use of funds have tainted the moral fiber of NGOs and raised concerns about their value and effectiveness. Debates persist about the role INGOs should play as actors in the international sphere and their legitimacy in relation to governments, local organizations and civil society. Such concerns are significantly impacting the system of international development and aid. As a result, there have been repeated calls for increased transparency about how funds are used and increased accountability for how work is carried out in local communities. Such demands stem not only from external pressure; internally, too, organizations have become increasingly frustrated by the gap between their visions and the reality of their operations on the ground.
    Much of the international community took the opportunity to reflect on the changing international context at the beginning of the 21st century. The great aid operations of previous decades had seemingly failed to deliver, and INGOs were left asking why. Similar reflections were occurring across the CARE world about its relevance and role, resulting in the new CI Vision and Mission adopted in 1999 and the subsequent development of the CI Programming Principles to help operationalize the new vision. Articulating that CARE is a part of a worldwide movement and will contribute to overarching goals set by the international community, the new vision and mission broke from past tendencies to ‘go it alone.’ Shortly thereafter, CARE USA developed a new strategic plan that subscribed to the international community’s goal of reducing poverty by 50% by 2015, and then conducted the Aligning Work with Vision (AWV) process to put it into practice. The organizational climate was ripe for innovation and change.
    Along with their colleagues around the world, CARE staff in Latin America had been questioning their relevance. Despite five decades of work in the region, poverty in LAC had increased in the last decade. Staff wanted to determine how they could have a greater impact on poverty and contribute to the 2015 strategic target. How would they “act upon the CI Vision and Mission, taking into account the change processes, as we understand them today, taking place globally and regionally as well as the development issues and opportunities specific to the LAC Region?”[1] The result was the LAC Management Framework (LAC MF) adopted in 2002, which set forth a strategy to translate the global vision into practical and measurable actions at the regional level for the period 2000-2015. The framework identified five organizational capabilities[2] for the LAC region to develop or enhance over the first 3-5 years to better enable them to achieve the 15-year strategic target, and also identified seven accompanying conditions[3] necessary for success.
    II. Why measure?
    In 2003, the LAC Regional Management Unit and Country Directors recognized the need for a way to measure progress in implementing the LAC MF and to assess its effectiveness in achieving their long-term strategic goals and vision. They found, however, that CARE had inadequate tools and methods to do so. There was no existing method for determining what CARE’s contribution to the MDGs would be, for measuring progress toward the new strategy, or for holding the organization accountable.
    The RMU approached Peter Buijs, Country Director of CARE Ecuador, and asked if he would attempt to create something, and he agreed. In 2004, Sofia Sprechmann joined the RMU, based out of Ecuador, and the two began the process of developing a system that would bring rigor to the process of analysis and setting targets associated with LAC’s strategy and long term goals. They wanted to provide CARE staff in the region with a coherent and integrated framework for planning, measuring and assessing organizational performance in a systematic manner at the country office and regional level. Most importantly, they wanted to use the findings to promote organizational learning.
    Peter and Sofia began by asking, “Where do we want to go, and how are we going to get there?” They had the 5 capabilities set out by the LAC framework, signaling organizational effectiveness, and the accompanying 7 enabling conditions, which were the efficiency piece. But they had to consider to what end LAC wanted to be effective and efficient. The LAC MF had recognized the need to develop a better understanding of the underlying causes of poverty (UCPs) in the region and to align their program strategy with the new CI programming principles and the MDGs in order to make a significant impact on poverty and contribution to the 2015 goal. Sofia and Peter decided to undertake this process in Ecuador and use the results to help them determine what good performance would mean.
    They took the MDGs as a starting point, but quickly encountered resistance from CARE Ecuador staff, who viewed the MDGs as US-driven, bureaucratic UN targets limited to solely socio-economic rights. So they turned to the CI programming principles and engaged Ecuador staff in an interactive process of analyzing the underlying causes of poverty in the country. They then set the resulting 6 causes within the MDG framework, linking them with specific indicators but with a rights emphasis. As a result of the process, CARE Ecuador realigned its program priorities and developed a program strategy common to all sectors, creating programmatic cohesiveness with all interventions based on addressing key factors that impede progress toward overcoming poverty. Once they had restructured their work around the new strategy, Ecuador was clear about what they wanted to contribute to, and defined good performance as: “the impact of our work is in line with our vision and the long-term strategic goals of the international community, and how we operate is consistent with and best supports the values and principles of our vision.” Sofia and Peter were then able to turn to designing a measurement system.
    III. Design
    In February 2004, a group of 6-8 staff[4] representing Atlanta, CI and LAC perspectives shut themselves in a room for several days and emerged with a list of 38 indicators. Going into the process, the group knew that they would use the LAC MF as the starting point and that they would also draw heavily upon the recently completed AWV and the newly published CI Programming Principles. They also knew that they wanted indicators associated with CARE’s effectiveness in achieving program impact, strategy, and governance and HR as well as indicators of CARE’s efficiency in finance and program administration. The result was a group of Impact and Effect indicators, assessing the extent to which CARE’s programs are contributing to MDGs and other targets; a section on Organizational Strategy, assessing the degree to which programs are aligned with CARE’s hypotheses about how to best impact poverty; and an Organizational Support section, measuring the efficiency and effectiveness of financial and administrative management and governance and HR.
    Bolivia, Ecuador, El Salvador and Nicaragua met to finalize the indicators and discuss methodologies. The indicators themselves were well-received, but the COs posed several questions about methodology. (what ?s) It was at this point that the group decided to link the OP&L with the annual planning process in order to help standardize AOPs in the region, which until this point had little consistency from one year to the next or from one CO to the next. Thus the OP&L would help with performance planning as well as performance monitoring, ensuring the data would be actionable and inform their work on an ongoing basis.
    The RMU and Ecuador rolled out the OP&L at the April 2004 LAC Regional Conference. Peter and Sofia had prepared the following documents in order to set a common framework for the application of the OP&L: a) matrices detailing the importance of each indicator, its definition, the frequency of collection, the source of the data, the accompanying data collection tool, and questions to ask in analyzing the results; b) FAQs about the concept and the system; c) a revised AOP form using OP&L indicators as the starting point around which annual objectives would be set, with several indicators common to the region to be focused on by all COs and additional ones to be chosen by COs based on their particular context; and d) a programmatic realignment tool, which, based on Ecuador’s experience, provided step-by-step instructions for the process through which COs can (1) conduct an analysis of UCPs, and (2) use the analysis to realign their programmatic strategies to selected MDG targets and other goals related to human conditions, social positions and the enabling environment.
    Not surprisingly, some COs were supportive (?) and others were resistant (?). Regardless, the RMU had the group prioritize 14 of the indicators for the region to focus on in the coming year and set the Regional AOP accordingly, expecting each CO to do the same in forming their own AOPs. However, only Ecuador actually used the new AOP format, which had implications for the extent to which the OP&L was utilized at the CO level. At this point the RMU gave Sofia and Peter the go-ahead to roll out the OP&L in the region, beginning with gathering baseline data in each country.
    IV. Implementation Take One – Gathering the Baseline
    Each CO identified several staff to be on the baseline survey implementation team, responsible for reviewing all documentation, completing the survey tools and conducting interviews as required. The baseline implementers from South American COs attended a training in November 2004, which involved the participants in the process of developing data gathering tools. Most of the focus was placed on developing the Program Quality tools, which evaluate the design, implementation and evaluation of program initiatives. Baseline implementers from Central American COs attended a training in January 2005.
    Trained staff returned to their countries with a set of basic guidelines for the baseline collection, detailed matrices of the indicators, and the tools. The guidelines identified which indicators were required, which were ‘additionally recommended,’ and which were optional; set the timeframe for data reporting; and explained the process of preparation, implementation and follow-up to occur at the CO. All COs were required to collect data for the indicators prioritized in the regional AOP as well as a few additional indicators related to ongoing regional initiatives and financial viability. Most were program quality indicators, primarily focusing on the extent to which the CI Programming Principles were integrated into program initiatives, as well as four indicators around mobilizing resources, two on financial management and three on Governance and Human Resources. None of the impact indicators or indicators assessing external perspectives were required for the baseline.
    The baseline study was conducted between May and August 2005. Sofia and Peter decided not only to use the study to collect data to analyze the current state in the region but also to use the study itself to test the tools and processes and solicit feedback for fine-tuning. As there were no set guidelines, the tools were applied differently in each CO, with feedback encouraged on the clarity of the language used, the methodology, and so on. The RMU analyzed the various methods used in applying the tools to determine which would serve as the standard for ongoing collection, and they also incorporated many of the suggested revisions. One of the most important changes was to the scale used in the interview section of the tools. Initially interviewers noted “yes” or “no” for each question asked, but implementers suggested that a 1-5 scale would enhance the quality of the interviews.
    At the end of the baseline collection process, all information was emailed to Sofia and Peter, who compiled the data. However, they were only able to analyze and report on the data related to Program Quality, as no one applied the Financial Management, Resource Mobilization or Governance and HR tools correctly. Initially these tools were less developed than the programmatic ones, as they were formatted more as a list of what was needed rather than as tools with which to actually gather the data. Also, there was (and still continues to be) confusion around the Resource Mobilization tool, which tends to be automatically delegated to the Finance person without explanation, although it is really more of a program planning tool.
    Sofia aggregated the programmatic data and prepared a regional-level report. An analysis of baseline data (see presentation and report) was presented at the November 2005 LAC Leadership Team meeting and then again at the 2006 Regional Conference. The results enabled deeper discussions about UCPs and regional priorities, as participants were able to view concrete data assessing the gap between current programs and the desired implementation of the Programming Principles. They found the information to be quite useful in developing plans for the coming year. For example, they were surprised by the low number of projects promoting empowerment, so they set an improvement objective in the regional AOP for the coming year.
    Unlike in previous years, all COs used the new tool for the 2007 AOP process. The tool was modified based on feedback; rather than setting an objective per indicator, the new format entails grouping several indicators and setting one objective around them. This format allows for more objectives to be set but also requires more reporting against indicators. All AOPs (regional and country-level) as well as IOPs include the same OP&L indicators. LAC senior leaders – country directors and RMU staff members – are evaluated twice per year based on common OP&L indicators, which is helping promote accountability. Doing so also ensures that the OP&L will be an ongoing part of CO operations throughout the year, as it has been integrated into the yearly cycle with set meetings and responsibilities laid out in each quarter (see annual cycle).
    V. Lessons learned
    After the conclusion of the baseline, Peter and Sofia believed that they were handing the OP&L off to the CDs to continue in their COs, assuming that the documents showing what to collect and how would serve as a training and explanation for implementation. However, this wasn’t the case. The application of the OP&L only continued in Ecuador. Based on document study, conversations with RMU staff and interviews, I believe that the OP&L failed to ‘stick’ for the following reasons:
    A. The RMU was driving the implementation of the OP&L, but initially there wasn’t a strong regional “do it” or much accountability in place. The RMU had contracted development of the system with Ecuador, and assumed that because it was a regional mandate, COs would implement it. But there were many other issues demanding CO attention at the time, with Central American COs focusing on integration and Peru and Haiti in flux. Without CO ownership, other issues were prioritized.
    B. There wasn’t enough attention given to the operationalization piece of the system. A user guide or manual would have helped CDs take the OP&L back to their COs, but they only had the indicators and tools to rely on. Without any guidelines detailing process flows, roles and responsibilities or how to roll it out to staff, CDs would have had to devote significant time and attention to understanding, explaining and embedding the OP&L. Also, it would have been difficult for COs to continue measuring OP&L indicators after the baseline because at that point only Ecuador had realigned its programs[5]. Many of the tools and indicators assumed that the shift had already occurred.
    C. Discussion and explanation about the OP&L only occurred at the regional level and not at individual COs, so only CDs, some ACDs and the baseline implementers were even aware of the new system. Only the baseline implementers were trained on its use and involved in collecting the data, not the CO staff who would eventually be responsible for various aspects of it. Staff filled in the tools because their managers asked them to but had no real idea what they were filling them in for. They simply inputted the data requested and sent it off to the RMU for analysis, negating any reflection and rendering the exercise useless as a learning process. This points to the importance of training staff on the concept of the system, not just on how to use it. Also, although the analysis of baseline data was useful at the regional level, few CDs presented the results to their COs. In Peru, where results were shared with staff, there was backlash against the language used in the report to describe results (“poorly,” “worrisome,” etc.). Staff questioned the analysis process since they hadn’t been involved in collecting the initial data and were unable to see how it had been compiled and interpreted by the RMU.
    VI. Implementation Take Two: Peru
    At the time of this case study, only Ecuador and Peru have implemented the OP&L as a part of their ongoing operations. Under Peter and Sofia’s leadership, Ecuador continued to use the OP&L after the baseline collection and fully embedded it into the CO. However, the region needed to learn from a CO experience besides Ecuador, where the OP&L naturally continued to be implemented after the baseline due to strong leadership commitment. In 2007, Peru agreed to serve as a pilot for the region to determine how to roll out the OP&L to all staff and embed it in the ongoing operations of the CO. Peru’s experience will be used as a guide for how other COs in the region can fully implement the OP&L as well.
    A. Programmatic Realignment
    Peter: “Defining CO strategy around the UCPs is the key to making the OP&L work.” The UCP analysis, with its accompanying programmatic realignment, is the essential first step toward fully applying the OP&L. It is necessary in order to be able to aggregate project data to see the bigger picture, compare and share lessons across sectors, mobilize resources more strategically and link data to advocacy efforts. Peru conducted a UCP analysis in 2005 and in 2006 restructured their programmatic sectors based on the analysis.
    B. Embedding
    After agreeing to be the pilot, Peru formed an OP&L implementation team composed of the directors of each division (ACD Program, IT, HR, Finance), with Claudia (M&E coordinator for CARE Peru, with the OP&L as part of her JD) as the overall coordinator. The team first dedicated time to understanding the logic of the system and to refining the data gathering tools specific to their divisions. Moving forward, team members are responsible for communicating the concept and importance of the OP&L to their divisions.
    The most important and useful role the team played was identifying where the OP&L would be incorporated within the various areas of operations and mapping work and information flows. They then determined how to integrate the OP&L within already existing processes and set the roles and responsibilities for individuals and groups. Since the group focused on building it into the ongoing processes of their divisions, the OP&L data collection work falling to any one individual is not very burdensome.
    As the OP&L had initially called for, Peru integrated it into the annual planning process so that data could become actionable. However, Peru took this process one step further, taking into account not only the AOP for CARE Peru but also the field office plans, sector plans and regional plans. Doing so provides an opportunity for various levels of the CO to reflect on information specific to their group and use it to analyze performance and articulate new strategies for the coming year. It also ensures that plans are aligned at all levels. The team set a timeline for the various activities of the OP&L to be implemented each year, leading up to the AOP:
    Jan – March – All tools applied, information collected and inputted.
    April – Data compiled and analyzed, reports written and sent.
    May – Reflection processes: 1) the director of each region in Peru leads an all-staff discussion about the results; staff analyze the information and come up with an AOP for their region based on the data; 2) a staff brainstorming meeting looks at the information by sector, identifying strengths and weaknesses and discussing the how and why.
    June – A larger meeting discusses the AOP for the whole of CARE Peru, with the results from the May meetings informing the discussion.
    July – Apply.
    C. Applying the Tools
    The OP&L has 11 data gathering tools. The indicators are organized by tool; each tool corresponds to a result, with the indicators grouped under it as the means of measuring that result. For example, Tool A, Quality of Proposal (Design), has a list of relevant indicators, each to be individually measured and then looked at as a whole to determine the overall quality of the proposal. One indicator, around empowerment for example, can fall under several tools – to what extent does the proposal include empowerment (tool A), and to what extent did the project succeed in incorporating empowerment (tool C).
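    As a rough sketch of this structure, here is a minimal illustration in Python. The tool letters and descriptions come from the case study, but the indicator names are hypothetical placeholders, not the actual OP&L indicator list:
    ```python
    # Hypothetical sketch of the OP&L tool/indicator structure: each tool
    # corresponds to a result, and one indicator can appear under several tools.
    TOOLS = {
        "A": {  # Design
            "result": "Quality of proposal",
            "indicators": ["empowerment", "link_to_co_strategy", "emergency_preparedness"],
        },
        "C": {  # Evaluation
            "result": "Quality of evaluated project",
            "indicators": ["empowerment", "expected_impact_achieved"],
        },
    }

    def tools_measuring(indicator: str) -> list[str]:
        """Return every tool under which a given indicator is measured."""
        return [letter for letter, tool in TOOLS.items()
                if indicator in tool["indicators"]]

    # 'empowerment' is measured at design time (A) and again at evaluation (C).
    print(tools_measuring("empowerment"))  # -> ['A', 'C']
    ```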
    Organizing information this way can be quite confusing for staff, who don’t necessarily know what tools A or C or F refer to, especially since the tools aren’t in a logical order. In presenting and communicating information to staff, the IT coordinator has been working on organizing information by topic based on the balanced scorecard model.
    1. Programmatic Tools
    A – Design. A checklist used whenever a new project is designed, determining whether proposal ideas link with CO strategies and plans, whether they are consistent with the programming principles, etc. This is the tool that adds the most time and effort, because it has to be reviewed for each proposal. But as a CO, Peru wanted to focus on increasing the quality of its proposals, so even though it’s time-consuming they have found it worthwhile.
    B – Implementation. This is a survey that is applied once a year to all projects under implementation to determine the extent to which they are upholding programming principles. There is currently a question about the timeline for this tool – if all tools are to be implemented once a year, this wouldn’t apply to all projects. The team is now determining how long the project needs to have been running before this is applicable (they’re currently thinking 6 months).
    C – Evaluation. This tool is applied to all projects after their evaluations and assesses the extent to which the project upheld the programming principles and yielded the expected impact. The data is based on the independent consultant’s evaluation report as well as the interview with the project team. This tool is very closely linked to Tool A because the indicators of the project’s success have to be built into the logframe. It is a very important tool, but at this point it says nothing about the sustainability of the work over time, which Peru wants to eventually build in.
    H – Project Management. A checklist about whether or not projects are satisfactorily following project management standards, to be applied at the beginning of each new project or when the project gets a new manager.
    Peru has primarily focused on programmatic tools A, B and C. These are the tools they are already regularly implementing, with a new policy embedding the program tools into the DME flow of a project, and the ones they have spent the most time fine-tuning. Data is gathered for tools A, B and C through responses to a set of questions that accompany each indicator, to be answered by the project team. One of the major changes they made was to embed a reflection process into the interviews by adding the question “what can we do to improve?” so that staff are learning while applying the tools. They also revised several of the questions asked in the interviews, adapting them more to the UCPs and programmatic priorities that emerged from the programmatic realignment.
    The data for tools A, B and C is collected by a facilitator, who puts each question to a group of relevant people (e.g. the project manager and staff), has each person rate the extent to which they believe the project has, for example, a strategy to promote a more equitable distribution of power (0-5), asks why they gave their rating, and talks it through with the group, documenting the conversation. The information emerging from the interviews is by its nature somewhat subjective, but the idea is that a group conversation helps because different people will have different perspectives and will be more honest around others. The facilitator determines the final result based on both the responses and the conversation around them.
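    A minimal sketch of how one interview question might be recorded, with hypothetical field names; the case study does not specify how the facilitator derives the final result, so the median below is only an illustrative default that the facilitator overrides based on the discussion:
    ```python
    from statistics import median

    def facilitate_question(question: str, ratings: dict[str, int],
                            notes: str, override: int | None = None) -> dict:
        """Record one interview question: each participant's 0-5 rating,
        the documented conversation, and the facilitator's final result."""
        assert all(0 <= r <= 5 for r in ratings.values()), "ratings use a 0-5 scale"
        return {
            "question": question,
            "ratings": ratings,
            "notes": notes,
            # The facilitator weighs both the numbers and the conversation;
            # the median is just a suggested starting point.
            "final": override if override is not None else median(ratings.values()),
        }

    record = facilitate_question(
        "Does the project have a strategy to promote a more equitable "
        "distribution of power?",
        {"project_manager": 4, "field_staff_1": 2, "field_staff_2": 3},
        notes="Disagreement over whether the strategy is written or informal.",
        override=3,  # facilitator's judgment after the group discussion
    )
    ```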
    Applying the programmatic tools cannot solely be one person’s responsibility. Peru has trained a network of 25 facilitators to apply tools A, B and C in the sub-offices. The DME Coordinator, Claudia, who has responsibilities for the OP&L built into her JD, initially travelled to all of the field offices to explain the OP&L. She identified staff with good communication skills and an understanding of IT to be facilitators and, with the agreement of their managers (it was presented as a development opportunity), embedded the role in their job descriptions. She gathered them together for a workshop and trained them on the system and their role, which includes facilitating the interviews for the programmatic tools and inputting the data into the IT system. She particularly focused on teaching them to apply the tools in ways that promote learning and reflection within the project team. They have an annual schedule of what is required of them and when. It has been important to have OP&L owners in the field offices – people who are invested and involved and who handle troubleshooting and ongoing training. Moreover, they have been key to involving the field offices.
    One drawback noted about the tools is a lack of flexibility. For example, a sector head in Peru was writing a proposal for a project preventing women’s suicide and was struggling with tool A’s requirement for each proposal to account for an emergency. How does one include an emergency preparedness component in this type of project? It isn’t applicable, but there isn’t an “n/a” option, so the rating of the overall quality of her proposal would go down. As with any measurement system, it would be helpful to include room for explanation in such situations.
    2. Alliances and External Perspectives Tools
    D – External Alliances. This tool monitors the number and types of alliances and assesses CARE’s opinion of various partnerships and their strategic purpose, to be used by the CD once a year.
    K – External Perspectives. Monitors the way external actors view CARE Peru and its contribution to reducing poverty. A sample of different types of partners will be selected each year, including community members. The facilitators in each region will be responsible for conducting the interviews with the selected partners once a year.
    Peru hasn’t applied these tools yet, but believes that they will be very useful for being more strategic about partnerships and for gaining partners’ perspectives on CARE.
    3. Finance Tools
    E – Resource Mobilization. Looks at the current and projected diversity of resources, at resources per program priority, at UNR versus total budget, etc.
    G – Financial Management. Looks at the basics of financial management - SPC, whether or not projects are meeting budgets, amount CARE Peru has raised on its own, financial risk factors.
    Peru hasn’t found the finance tools to be very useful. The financial management data captured by the OP&L is not enough information for management to make decisions – it’s too basic, so the Finance Director still prepares much more detailed reports for senior management. As for the Resource Mobilization tool, Peru questions how to set resource targets for each sector (Do we set high goals to encourage us to get more resources, or do we set lower goals that are obtainable?). However, Ecuador has had great results from this tool, as it has helped them grow their budget and be more strategic about what money to pursue per sector.
    4. HR and Internal Governance Tools
    I – Governance. This tool hasn’t been finalized in Peru. It will eventually assess levels of participation, inclusion and diversity in CO governance and decision-making on an annual basis.
    J – Would gather individual employee opinions. It hasn’t been developed yet but would be implemented annually.
    Peru hasn’t gotten the HR tools right yet either. CARE Peru just got a new HR manager, and they’re working to figure out what to include in the tool. They do have a good HR system that already captures a lot of data, so they’re trying to figure out what to pull out. In Ecuador, however, these tools have already proved quite useful. The turnover and GED numbers revealed that they were losing women much more often in their programs, prompting senior management to set an aggressive target for increasing female representation in mid- and senior-level program management. Having the OP&L data front and center helped management make this decision and explain to staff the rationale behind their somewhat controversial actions. Also, the survey revealed that decision-making processes were rated very low on inclusivity, which has raised awareness about the power imbalance between women and men and between staff and senior management. Management acted on this data as well, conducting evaluations after every meeting to keep staff accountable for improving.
    There is an additional tool, F - Mobilization of Groups/Organizations, that assesses the number of groups or organizations mobilized to support initiatives to address the underlying causes of poverty. It is supposed to be applied annually by a team of CARE personnel. Peru has not yet used this tool.
    Peru recognized the importance of involving the head of IT from the beginning. She has played a key role in conversations about operationalizing the OP&L beyond the specific IT requirements because she was able to identify process and information flows and utilize existing systems and information whenever possible. Peru thinks creating a web-based system is necessary, but they are being cautious about implementing it, as there has been some staff resistance and capacity is low.
    For now, only tools A, B and C have an IT component. Peru’s IT department expanded SGP, Peru’s project management system, to incorporate the OP&L programmatic tools, facilitating data entry, storage, analysis and reporting of OP&L data. SGP now pulls together all information about a project – the financials, OP&L tools and data, etc. – and allows staff to attach documents and export updates and other information. It has also helped with tracking proposals – it can break them out by donor, region, sector, etc., and also shows a register of proposal ideas that don’t currently have a donor. Initially, responsibility for inputting the data was delegated to people familiar with web-based systems, such as assistants, who were paired with the staff responsible for collecting the data in order to train them. Recently, however, Peru rolled out a new policy requiring project staff to use the system. They are still in the process of emphasizing that this is mandatory, and have had about 90% success.
    Peru has created an IT plan that will eventually incorporate all of the tools. They had two options: develop a new system or insert the various elements of the OP&L into existing systems. They chose the second option, embedding the tools into their project system (SGP) and eventually doing the same with their HR (Adryan) and Finance (Scala) systems (see below). As for aggregating the information for the whole region, there is a proposal to create a central database that would import the data from each CO (who will also insert information into their existing systems) and share the information on SharePoint. The proposal is currently being analyzed for feasibility. The one problem is that not all COs have all of the systems – for example, Central America doesn’t have a project system.
    [Chart: mapping of the OP&L tools to CARE Peru’s IT systems – Portal Perú (indicadores DAO), SGP, DAO, Scala/SIC, Eval 360 and Adryan – grouped by phase: Design (A, E); Implementation (B, H, D, F, K); Evaluation (C); Governance (I part 1, J); G; I (part 2). Caption: Information system on the intranet where data are inputted for each tool, facilitating consolidated data reporting; central database and analysis reports.]
    It is very important to define the spaces in which information will be analyzed (expanded SMT, programmatic teams, CO SMT)
    Communication
    How the OP&L is communicated and the way that results are analyzed and shared has been a key component of ensuring a successful implementation.
    Peru has found that communicating the concept is the first and most important piece. Initially, COs were handed the OP&L and staff never knew why they were doing it. It is very important to start with the concept and get staff to understand the rationale before requiring them to implement. Claudia, the DME coordinator who is now also responsible for the OP&L in Peru, explains, “Measurement is a foreign concept, and it is very difficult to implement a system like the OP&L when staff don’t naturally speak its language. Also, nobody knew what organizational performance was, so it was important to set a common definition and begin to show people that their work contributes to a bigger picture, that performance is about something more than just ‘my job.’ ”
    Claudia found that it is helpful not to inundate staff with all of the information about the new system at one time. She held one long meeting solely focused on the concept of performance and impact, ensuring that the broad senior management team understood it well. She found that it was important to explain it in different ways – some people ‘got’ it through a written explanation, some through diagrams, and some by talking it through with her. Then she held a second session giving an overview of the OP&L, explaining the different dimensions and levels and how it all fit together. It has been difficult to emphasize to staff that the OP&L is a tool for evaluating and improving; its purpose is to evaluate, not to collect more data for reporting requirements. Claudia explains it as a signal, flagging what’s going on and requiring staff to pay attention to the signal and go behind it to find out why. This concept has proven confusing to staff. Finally, a third meeting explained the details of what data would be collected and how to operationalize the system.
    An overly complicated system requiring frequent data collection will immediately throw up barriers. The seemingly overwhelming list of indicators in the OP&L that has been shared with the broader CARE world has elicited such reactions. In reality, Peru believes that the OP&L is simpler than it appears because it gives people ample time to collect the information and is only done periodically, with certain indicators prioritized each year. Jay Goulden: “My first reaction to the OP&L, before I was in Peru, was that it was overly complicated and way too big. It’s important not to communicate something that way. When I got here and boiled it down to what we’re looking at per process per year, I realized it was much simpler.” Peru has ensured that the OP&L is implemented gradually and has communicated it as simply as possible to staff. They have also found it important to communicate that the OP&L is not the only measurement system, both for selling purposes (this isn’t the final word about your impact) and to eliminate confusion, so that staff don’t think it is replacing the DME system or any other required reporting.
    In regard to communicating the compiled and analyzed data, it has been most useful to explain the results in different ways at different levels. Claudia prepares reports on the OP&L information for different stakeholders – one high-level report with all of the aggregated data for CARE Peru, one report breaking out data by programmatic sector, and one report breaking out data for each region. Both the aggregated and the disaggregated information are important: the aggregated data gives the big picture of performance and flags issues, while the disaggregated data gives the details. For example, unless the results of advocacy projects are pulled out of the combined results, the data won’t be useful for learning any specifics about the performance of their advocacy work and how to improve it. Also, unlike the way the baseline information was presented, senior management presents the results of each step before showing the overall results so that staff can see where the final analysis came from and will trust it.
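    As an illustration of these report levels, here is a minimal sketch in Python with a hypothetical record layout and field names; the real OP&L data and reports are far richer:
    ```python
    from collections import defaultdict

    # Hypothetical OP&L records: one score per project per indicator.
    records = [
        {"sector": "advocacy", "region": "Lima", "indicator": "empowerment", "score": 2},
        {"sector": "advocacy", "region": "Cusco", "indicator": "empowerment", "score": 4},
        {"sector": "economic_dev", "region": "Cusco", "indicator": "empowerment", "score": 3},
    ]

    def average_scores(records: list[dict], key: str | None = None) -> dict:
        """Average scores CO-wide (key=None, the big picture) or broken
        out by a field such as 'sector' or 'region' (the details)."""
        groups = defaultdict(list)
        for rec in records:
            groups[rec[key] if key else "CARE Peru"].append(rec["score"])
        return {group: sum(s) / len(s) for group, s in groups.items()}

    print(average_scores(records))                # high-level aggregated report
    print(average_scores(records, key="sector"))  # per-sector report
    print(average_scores(records, key="region"))  # per-region report
    ```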
    Learning and Improvement
    Indicators have to be used as methods for learning, and results have to be viewed as much more than mere evaluation. The OP&L, like any such system, is naturally useful for management purposes, but to make it useful for other staff it has to be a learning tool. The space for reflection has to be built into the tools and the process.
    Peru found that where staff really connected with the OP&L was the learning and improvement component; staff didn’t care much about measurement and impact. But they got excited and motivated when thinking about how this could be used to learn about how to improve their work. Sector-level staff want to use information to look at how the projects under their umbrella are doing and to consider their overall sector strategy. For project staff, applying the tools themselves can be useful for reflection, but a lot of it depends on the sub-office. Some emphasize discussion and analysis, while others solely focus on implementation. The tools are useful for improving, particularly for long-term projects, but project staff aren’t accustomed to this. They’re used to solely focusing on the logframe. It is thus vital that management at all levels understand the OP&L and use it to promote a culture of critical thinking and reflection.
    Project manager and sub-regional director in Peru:
    “It would be a good idea to bring all staff in the field offices together to ask questions about the OP&L, otherwise it will remain project-focused for them and they won’t see the bigger picture of what the information is adding up to. Getting everyone together to discuss the results will naturally ensure their involvement and will get them invested. It is vital to have workshops in the first few years especially so that people will see the importance of the OP&L when they see and discuss the results. If people in the field offices don’t feel empowered, and aren’t able to identify that the information collected about the projects came from them, bottom-up, they will dismiss it or react defensively to it. They have to have ownership over the information, be involved in the discussion, and then they will own the improvement.”
    The people working in the field must understand the system or they won’t provide good information. In Peru, currently field staff are only using the tools because senior management requires it. They don’t know what they’re doing them for because they don’t yet know what the OP&L is.
    Impact Pilot
    The OP&L is a performance system, not an impact system. It evaluates what projects should do in order to have an impact, and to what extent projects succeeded in achieving the desired impact, but it does not report on actual impact numbers. However, Peru has recognized the need to measure both performance and impact in order to get the full picture. The ACD Program collected impact numbers around the MDGs last year for Peru’s annual report, and they were very useful, particularly as an evidence base for advocacy. Viewing the information from each sector enabled Peru to identify potential advocacy issues and scale them up. They have recently done so using a tool from ExpandNet that helps projects with advocacy potential scale up to the national level and connect with the government.
    Peru also piloted impact measurement with the Economic Development unit to test how they could measure the impact of the sectors and evaluate their implementation strategies. In 2008 they plan to replicate this work with other sectors. The pilot, which had a budget of $12,000 pulled from Peru’s unrestricted funds, took the following steps:
    - Developed a strategy for Economic Development. The idea is for all sectors to have programmatic strategies with annual objectives.
    - Set impact indicators around the strategy to measure progress.
    - Gathered a baseline to determine the current economic state in the regions where CARE works, drawing on both external data and an internal questionnaire (the same economic-status questionnaire used by the government of Peru) applied to a project sample and a control group. CARE Peru staff will go back in a few years to interview the same families and see if their economic development strategies are working, which are working best, what is working best in each region, etc.
    - Wrote a final report about the baseline results for each region and used this information to create an economic development model in Peru. Staff found this information invaluable and are excited to develop a strategy based on solid facts rather than just on their perception of what’s needed.
    - Put measurement software, standards and processes in place – built within SGP.
    - Use indicators to monitor impact in the meantime – total $ sales, access to markets, etc. They have developed a collection tool for each type of project (e.g. livestock, agriculture) to be used by project staff twice a year. Staff input the information into SGP, which both aggregates the numbers to give a picture of CARE Peru’s overall economic development and disaggregates them by project type, region, etc. This information is more quantifiable and less subjective than the way programmatic performance is measured in the OP&L and gives ongoing evidence of impact. It also gives programmatic results by sector, providing an important complement to the OP&L’s focus on individual programmatic initiatives.
    - Identify impact. The overall impact will be the number of poor people whose poverty has been lessened by CARE’s projects, found by comparing the baseline data to the results in 2 years. The indicators will monitor this on a regular basis, but the actual change made will be determined every 2 years (see the sketch after this list).
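    A minimal sketch of that baseline-versus-follow-up comparison, using invented data; the pilot’s actual questionnaire, poverty criterion and control-group analysis are not detailed in this case study:
    ```python
    # Hypothetical household records keyed by family ID. In practice 'poor'
    # would be derived from the government economic-status questionnaire,
    # and the project sample would be compared against the control group.
    baseline  = {"fam01": {"poor": True},  "fam02": {"poor": True},  "fam03": {"poor": False}}
    follow_up = {"fam01": {"poor": False}, "fam02": {"poor": True},  "fam03": {"poor": False}}

    def families_lifted(baseline: dict, follow_up: dict) -> int:
        """Count families poor at baseline but no longer poor at follow-up --
        the headline impact number determined every 2 years."""
        return sum(
            1 for fam, data in baseline.items()
            if data["poor"] and fam in follow_up and not follow_up[fam]["poor"]
        )

    print(families_lifted(baseline, follow_up))  # -> 1
    ```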
    VII. Usefulness
    In Ecuador, the SMT and extended leadership team all know the OP&L well by now and consistently use it to manage the CO, make decisions and drive improvement. The results are particularly exciting for them now, after several years, because they are able to view their progress over time, identify trends, etc., and can really get into discussions around performance and how to improve. In Peter Buijs’ opinion, perhaps the biggest gain is that the OP&L data has prompted more substantive discussions and enabled them to be more strategic about their CO operations. Beyond the management level, he believes that the OP&L’s usefulness lies in the learning, which occurs through the application of the tools. For example, the steps required by tools A, B and C make project staff more reflective about each step of their work.
    At this point, however, the OP&L hasn’t been very useful for managing CARE Peru, because they don’t have the results yet. The baseline information at the end of this year will enable them to start learning and analyzing, but it will be more useful once they can compare over time. In the meantime, however, tools A, B and C have been the most helpful, because they’re serving as quality assurance for projects. They believe that applying the tools is already moving the CO toward being more reflective, complex and thoughtful.
    Besides the obvious usefulness of the OP&L (decision-making, learning, accountability, improvement), Peru staff involved in implementing it think that it will be helpful over time in connecting project staff loyal to their individual project to the broader organization, placing them as part of a bigger whole – within CARE Peru, within LACRMU and then, eventually, within the global CARE. Having the same system in each CO is thus vital, showing that CARE Peru doesn’t exist in a vacuum. Standardization is also vital for comparison’s sake; we need to speak the same language if we are to compare, and also if we are to share knowledge. Staff involved in implementing the OP&L are perhaps most excited about the potential for knowledge sharing, believing that it could transform the way CARE operates. They realize, however, the need to strike a balance between those who want to compare in order to compete (which they don’t view as completely bad, because it helps motivate) and those who want to compare in order to learn from others and improve. Making CARE a learning organization will be a cultural shift.
    The concern mentioned by most Peru staff was the risk of subjective results. Peru has been exploring ways to mitigate subjectivity in the tools themselves. One idea concerns the data collection interviews for tools A, B and C: not only would the people involved in the project answer the questions about its success, but someone from a different project would also take part, and vice versa. Since there’s a natural competition between projects, they believe that this ‘exchange’ could help reduce the subjectivity, with the added benefits of promoting knowledge sharing across projects and involving field staff in work beyond their specific projects. Another issue is the lack of common definitions. Even within the same CO there is a real risk that different people will interpret terms differently, and even more so across COs. For example, Ecuador and Peru define empowerment in completely different ways, so they are going to answer questions differently.
    The risk of subjectivity only increases as the data is aggregated beyond each CO. There will be some difficulty in aggregating the information up to the regional level because of differences in implementation (the way Peru leadership encourages staff to look critically at projects will differ from the way another CO does it) and because of turnover (different people implement and emphasize it in different ways). It will be impossible to ever have totally objective comparisons.
    VIII. Conclusion
    The external impetus for measuring performance and impact shows no sign of abating, with increasing pressure from donors and the public and a growing number of peer organizations with systems under development. Moreover, the drive within CARE has never been stronger. Reflected in the new strategic plan, the current executive and Board leadership places a high priority on quantitative data showing our performance and impact. The organization would do well to learn from the experience of developing and implementing the OP&L, the most comprehensive performance measurement system existing at CARE.
    What began as a measurement tool for the LAC MF has evolved into an overarching measurement framework useful for assessing the performance of each CO and the overall region. The OP&L enables all COs in LACRMU to operate within a common framework, aligning plans and sharing a set of objectives and indicators around regional priorities. Using the data to analyze the performance and progress of the region has been key to developing a cohesive regional strategy and an important tool for flagging areas for improvement. As evidenced in Ecuador, and as anticipated by Peru, the data will also enable COs to be much more strategic about their operations and able to flag and act upon issues as they arise. The learning component is perhaps most exciting, as it encourages staff to reflect on the quality of their work and make changes to improve.
    While its initial rollout to the region failed to continue as a part of ongoing CO operations, Peru is successfully operationalizing the OP&L by involving staff from each unit, embedding it in the process flows of each department, setting policies around its use, clearly identifying roles and responsibilities throughout the CO, creating space for reflection and learning throughout the year and firmly linking it to the planning process. Most importantly, they are building a common understanding around the concept of and rationale for performance measurement and developing staff capacity to implement data gathering tools and reflect on and learn from the results.
    [1] LAC Management Framework.
    [2] 1) Developing and promoting learning processes; 2) influencing public policy and attitudes; 3) expanding and deepening inter-institutional relationships; 4) integrating within local society; and 5) mobilizing new and diverse resources.
    [3] The conditions that LAC identified for improvement in order to realize the organizational capabilities: re-conceptualizing the purpose of projects and programs; “mastering the basics;” reducing workload; improving communications and coordination; becoming increasingly more inter-dependent; sharing leadership; and re-defining the roles and functions of executive and senior managers.
    [4] Participants were Peter Buijs, Sofia Sprechmann, Colin Beckwith, ?
    [5] At the time of this case study, all COs in both Central and South America have completed the UCP analysis.

    (view changes)
    8:53 am
  3. page space.menu edited ... Ecuador Peru LAC OP&L Case Study Other resources
    ...
    Ecuador
    Peru
    LAC OP&L Case Study
    Other resources
    (view changes)
    8:52 am
  4. page Peru edited {OP&L Case Study.doc} First First phase of Programmatic alignment to MDGs, other impact in…
    {OP&L Case Study.doc} First phase of
    Programmatic alignment to MDGs, other impact indicators and underlying causes of poverty
    Development of impact measurement systems that measure impact at the project and national levels.
    (view changes)
    8:49 am
  5. page Peru edited First {OP&L Case Study.doc} First phase of Programmatic alignment to MDGs, other impact in…
    {OP&L Case Study.doc} First phase of
    Programmatic alignment to MDGs, other impact indicators and underlying causes of poverty
    Development of impact measurement systems that measure impact at the project and national levels.
    ...
    Identification of roles and functions for individuals and groups
    Utility of the system – for what has it been useful?
    To read the case study in its entirety: {OP&L Case Study.doc}
    (view changes)
    8:48 am
