IMDS supports RS in structuring the monitoring system of the Student Trajectory Protection Policy
The monitoring and evaluation of the Student Trajectory Protection Policy have guided the IMDS agenda in recent months. In October, the collaboration with teams from the Department of Education of Rio Grande do Sul resulted in the first joint workshop between the institutions. The meeting aimed to discuss ways to support the implementation of the policy, whose design was informed by the diagram of causes of school abandonment and dropout developed through this partnership.
Committed to supporting public policies capable of reducing barriers to social mobility, IMDS advanced to the next stage of work on the Theory of Change (ToC), an essential tool for guiding the development of the monitoring plan. In the last week of November, the institute’s team met again with the Department of Education to monitor the implementation of the policy and evaluate its first results.
The first day of the workshop began with the overall construction of the monitoring plan, from the definition of the indicators to how the data will be used by management. The work built on what had already been produced with the ToC, whose logical chain is organized into inputs, activities, outputs, outcomes, and final impact. A central point was to emphasize that monitoring exists to identify where possible failures lie, what type they are, and how to correct them quickly so that the policy does not lose or reduce its effectiveness.
With the concepts established, the workshop moved on to a discussion about what to monitor. At this stage, the participants were divided into three groups to review 13 previously prioritized activities. The institute’s team then introduced the concept of “evaluative questions”, which guide the entire logic of the system: first define the question, then the indicator that best represents it, then how to interpret it and, finally, what actions to take.
Together, the groups concluded that a good indicator must be specific, measurable, feasible, attributable to the policy, and have a defined periodicity. Very generic indicators, such as “prepared teachers” or “infrequent students”, therefore need to be detailed, with a numerator, a denominator, and clear criteria, so that monitoring can be set up as effectively as possible.
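As a minimal illustration, assuming hypothetical attendance records and an assumed cut-off of 75% attendance (neither of which comes from the policy itself), a generic label such as “infrequent students” could be turned into an indicator with an explicit numerator and denominator:

# Hypothetical data: share of classes each student attended in the month
attendance = {
    "student_01": 0.92,
    "student_02": 0.61,
    "student_03": 0.78,
    "student_04": 0.55,
}

CUTOFF = 0.75  # assumed criterion: below 75% attendance counts as "infrequent"

numerator = sum(1 for share in attendance.values() if share < CUTOFF)  # infrequent students
denominator = len(attendance)                                          # enrolled students
rate = 100 * numerator / denominator

print(f"Infrequent students: {numerator}/{denominator} = {rate:.1f}%")

The point of the sketch is not the calculation itself, but that the question, the cut-off criterion, and the reference population are all stated explicitly.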
The first day ended with the groups preparing their own evaluative questions related to activities and risks. Building on this first step, participants then had to cross-reference their questions with the existing indicators and decide whether to create new indicators or remove those that seemed disconnected from the questions.
The next day, the teams opened the workshop by revisiting the importance of the evaluative questions as guides for the entire process. The group discussed how many of the questions formulated the day before had been too broad and needed to be made more precise. One of the conclusions drawn from these questions was that engagement, adherence, and the quality of policy implementation are abstract notions that need to be translated into observable and measurable behaviors.
Another step that strengthens monitoring is to define the reference periods, units, frequency of collection, cut-off criteria, and the exact way of calculating each indicator, so that each one reaches its highest level of precision. Indicators that depend on information that does not exist, or that seem simple but have no operational definition, make it difficult to organize monitoring accurately.
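Continuing the hypothetical example above, an operational definition could record these elements in a single structure, so that the indicator is calculated the same way every time it is collected; the field names and values below are illustrative, not part of the policy:

from dataclasses import dataclass

@dataclass
class IndicatorDefinition:
    name: str         # short label used in reports
    question: str     # evaluative question the indicator answers
    numerator: str    # what is counted
    denominator: str  # reference population
    unit: str         # how the result is expressed
    frequency: str    # how often it is collected
    cutoff: str       # criterion that defines the numerator
    source: str       # record or system the data comes from

infrequency = IndicatorDefinition(
    name="Student infrequency rate",
    question="Are students at risk of abandonment being identified early?",
    numerator="Students with attendance below the cut-off in the month",
    denominator="Students enrolled in the month",
    unit="% of enrolled students",
    frequency="Monthly",
    cutoff="Attendance below 75% of classes held",
    source="School attendance records",
)
print(infrequency.name, "-", infrequency.frequency)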
In parallel, the challenge goes beyond measuring data: it also involves building an institutional routine that reviews and analyzes the results obtained, identifies possible bottlenecks, and proposes adjustments. This also implies ensuring that schools and regional bodies understand and use the information correctly. After that, the group worked directly on the material, reviewing the indicators listed for each activity, adjusting terminology, and discussing whether each one really answered the evaluative questions.
The final part of the second day was dedicated to organizing the next steps. With the shared understanding that many indicators would still need to be technically specified at a later stage, the IMDS team asked the groups to finalize their evaluative questions and the corresponding indicators. They also reinforced that the process does not end there: the monitoring system is continuous, undergoes adjustments, and evolves as implementation begins and new needs appear.
At the end of the collaborative workshop, the groups came away with a logical sequence of questions and indicators, a deeper understanding of the challenges, and a more tangible view of how the policy monitoring system works. Although it requires considerable time for collection, analysis, and action, this kind of monitoring is what differentiates successful policies from those that do not achieve the desired impact.