Date: Tuesday, June 3, 2025
This post was originally released on AEA365 in 2022 and was so popular the first time around that it is being reshared from the archives. Do you have a post you’d like to share with the AEA365 community? We’re currently accepting posts and would love to hear from you. Email your draft to Liz DiLuzio at AEA365@eval.org.
Greetings. We are Annette Ghee and Emily Carnahan, monitoring and evaluation (M&E) professionals with a focus on digital health – the use of technology to enhance health outcomes. Annette works with the Digital for Development and Innovation team at World Vision International, and Emily is on the Digital Square team at PATH. Although we focus on low- and middle-income country (LMIC) settings outside of the US, the learnings we describe apply to any low-resource setting and any programmatic sector.
As evaluators, we often consider digital tools to streamline data collection, boost data quality, and facilitate timely data use to enhance programs. But how often do we consider digital tools as program interventions in their own right?
Digital tools can serve a dual role to meet our program’s M&E needs while improving the same program’s quality and efficiency. Here are some examples:
Multipurpose digital tools are expanding globally and meet a broad range of needs. In LMICs, many governments are leading by developing digital health strategies that discourage one-off deployments and encourage the use of tools deemed “global goods” that integrate with existing systems.
As evaluators, we must avoid parallel data collection systems and instead deploy tools that can interoperate with existing data systems. This requires intentionality, familiarity with the data ecosystem, and an eye toward partnerships where strategic interests converge.
Evidence for the utility of digital tools is evolving but has been hampered by a fundamental point of confusion. Many M&E professionals don’t recognize that digital tools with M&E functionality must also be monitored and evaluated. This confusion is understandable – M&E of an M&E tool???
Resources to assess the performance of digital tools exist. The Digital Health Atlas and landscape analyses (e.g., of community health and COVID-19 tools) can provide information on existing tools. M&E of Digital Health guidance and maturity models can support standardized assessments of digital tools.
As evaluators using and evaluating digital tools, we have a responsibility to learn from others’ experience and familiarize ourselves with resources from the digital health community to maximize the impact of our M&E and our programs.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association and/or any/all contributors to this site.