Monitoring and Evaluation

Cost effectiveness

Evaluations of programmes cannot always rely on a perfect design because of time and resource constraints. Most health care programmes have no baselines or control groups. Furthermore, the data are often of poor quality because they come from routine data collection, which is not a top priority among health care workers. Conducting evaluations of health care programmes is therefore largely a question of knowing how to use the imperfect data that are already available. Producing a perfect study design would be too costly, and for this reason it is usually confined to academic studies and trials.

Examples include the following:

(a) Evaluation of the use of smartphones to monitor tuberculosis (TB) treatment. I evaluated the pilot test managed by the University Research Corporation (URC) that used smartphones to collect data on new TB cases entering treatment and to monitor their retention.

(b) Improving the monitoring and evaluation strategy of the URC programme. URC manages a USAID-funded programme to strengthen the effectiveness of TB treatment in South Africa. I provided technical assistance on how to get the most out of the data that URC collects through its management information systems.

(c) Evaluation of the TB programme in Umzinyathi District. I managed the evaluation of the TB programme in this district of KwaZulu-Natal. The evaluation teams visited all the TB clinics to collect data on infrastructure, staffing, the methods used to trace defaulters, and other factors that were not available from the electronic TB register. This information was used to identify which factors were significantly associated with treatment outcomes. One operational outcome of the analysis was the identification of critical workload levels, in terms of patients per staff member, that, other factors being equal, were associated with the treatment success rate. This helped to identify management strategies to improve the effectiveness of the TB programme and to rank the clinic sites most at risk of poor performance due to excessive workload.
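The kind of workload–outcome association described above can be illustrated with a minimal sketch. The counts below are invented for illustration only, not data from the Umzinyathi study: an odds ratio with a 95% confidence interval compares treatment success in clinics above versus below a hypothetical workload threshold.

```python
import math

# Hypothetical 2x2 table (illustrative counts only, not study data):
# rows = clinic workload group, columns = treatment outcome.
high_success, high_failure = 120, 80   # clinics above the workload threshold
low_success, low_failure = 180, 60     # clinics at or below the threshold

# Odds ratio of treatment success: low-workload vs high-workload clinics
odds_ratio = (low_success * high_failure) / (low_failure * high_success)

# 95% confidence interval on the log-odds scale (Woolf method)
se_log_or = math.sqrt(1 / low_success + 1 / low_failure
                      + 1 / high_success + 1 / high_failure)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

If the confidence interval excludes 1, the workload threshold would be flagged as significantly associated with treatment success; the actual evaluation used multivariable methods, but the logic of the association test is the same.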

(d) Building theoretical models of the efficiency of health programmes. Many evidence-based interventions fail to produce results because they lack an underlying model of efficiency linking inputs to the probability of achieving the expected outcomes. I built a theoretical model of efficiency that was tested by evaluating the first two years of antiretroviral treatment in KwaZulu-Natal.

(e) Evaluation of ART in KwaZulu-Natal. The design of the evaluation was based on the hypothesis that the efficiency of the health care system in maintaining patients on ART depends on certain characteristics of the patients and of the clinic sites. I managed a retrospective study to identify the patient- and site-related factors that were significant predictors of ART effectiveness. The most important finding was the identification of the staff workload, in terms of annual new patients per staff member, and of the type of staff that, other factors being equal, were significantly associated with patients' retention. This made it possible to identify alternative management strategies associated with different probabilities of retaining patients. A Monte Carlo simulation was used to estimate how robust the predicted effects of such strategies on patients' retention would be and to estimate the corresponding incremental cost-effectiveness ratios.
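A Monte Carlo comparison of this kind can be sketched as follows. All probabilities, costs, and the staffing scenario are invented placeholders, not figures from the KwaZulu-Natal study: each strategy is simulated many times with parameters drawn from plausible ranges, and the incremental cost-effectiveness ratio (ICER) is computed from the resulting mean costs and retention rates.

```python
import random

random.seed(42)  # reproducible draws
N_RUNS = 10_000

def simulate(retention_mean, cost_mean):
    """Simulate one strategy: draw the retention probability and the
    cost per patient from simple uniform ranges around assumed means."""
    retained, total_cost = 0.0, 0.0
    for _ in range(N_RUNS):
        retained += random.uniform(retention_mean - 0.05, retention_mean + 0.05)
        total_cost += random.uniform(cost_mean * 0.9, cost_mean * 1.1)
    return retained / N_RUNS, total_cost / N_RUNS

# Hypothetical strategies: baseline staffing vs. an alternative staff mix
retention_base, cost_base = simulate(0.70, 400.0)
retention_alt, cost_alt = simulate(0.80, 450.0)

# ICER: extra cost per additional patient retained on ART
icer = (cost_alt - cost_base) / (retention_alt - retention_base)
print(f"ICER ~ {icer:.0f} per additional patient retained")
```

The spread of the simulated draws is what gives the robustness estimate: rerunning the simulation with different seeds shows how stable the ICER is under the assumed parameter uncertainty.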

(f) Economic evaluation of potential alternative revascularization interventions for angina in the United Kingdom. The model is based on secondary data on costs and effectiveness derived from the literature and on the theoretical management decision trees considered in the model.
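The decision-tree logic behind such an economic evaluation can be sketched as follows. The branch probabilities, costs, and QALYs below are illustrative placeholders, not the figures derived from the literature: each intervention is rolled up to its expected cost and expected effect over its outcome branches, and the two are compared through an incremental cost-effectiveness ratio.

```python
# Each intervention is a list of outcome branches: (probability, cost, QALYs).
# Probabilities within each tree sum to 1. All numbers are assumptions
# for illustration, not literature-derived estimates.
interventions = {
    "PCI": [
        (0.80, 6_000, 4.0),    # symptom relief, no further procedure
        (0.15, 14_000, 3.5),   # repeat revascularization needed
        (0.05, 20_000, 2.0),   # major adverse event
    ],
    "CABG": [
        (0.90, 12_000, 4.5),   # durable symptom relief
        (0.05, 22_000, 3.5),   # reintervention
        (0.05, 28_000, 2.0),   # major adverse event
    ],
}

def expected(branches):
    """Roll a decision subtree up to its expected cost and expected QALYs."""
    cost = sum(p * c for p, c, _ in branches)
    qalys = sum(p * q for p, _, q in branches)
    return cost, qalys

cost_pci, qalys_pci = expected(interventions["PCI"])
cost_cabg, qalys_cabg = expected(interventions["CABG"])

# Incremental cost per QALY gained by choosing CABG over PCI
icer = (cost_cabg - cost_pci) / (qalys_cabg - qalys_pci)
print(f"ICER = {icer:,.0f} per QALY gained")
```

In a full model the branches would themselves contain subtrees (e.g. long-term outcomes after reintervention), but the roll-up is the same expected-value calculation applied recursively.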