I don’t think anyone can deny that the world of ICT4D and mobile for development is growing. Development players are starting to embrace mobile technology, and the number of M4D projects (as well as donor funding for M4D and ICT4D projects) is also growing. But with this welcome embrace of mobile products and services comes a healthy dose of skepticism – ‘does it work? What evidence is there that an mHealth service improves lives?’ These are valid questions – at the moment, the evidence base is small, and the role of monitoring and evaluation (M&E) in building a proof case that mobile services do change lives in the development context becomes crucial. I don’t mean using mobile platforms to conduct the M&E (as in data collection, for example, although that is potentially an important element) – I mean actually gathering evidence that a mobile service is doing what it set out to do.
However – there seems to be a fear of this type of M&E. I’ve spoken to many people recently who are building M&E plans into their mobile product, project or service, but it is making them nervous: they are often unsure what they need to measure, how to measure it, who they need to talk to, and how the backend and/or commercial data fits in. M&E is seen not only as a chore but also as something to be feared, because it’s seen as an unknown.
Here’s the secret: M&E of a mobile service or platform is like the M&E of any other development project. I think one of the reasons people get nervous is that they assume you need to use mobile in a clever way to gather the data. You don’t. You can, of course, if you want to, but you don’t have to. Gathering evidence that an mLearning service works, for example, is like gathering evidence for any education project: does the project have an impact on the lives of the people you’re trying to reach? Is there any evidence of improved learning outcomes? Longer term, is there any evidence of sustained change? Find the active users of the mobile service and measure any changes through them to build up your evidence base. This can be done through face-to-face interviews, surveys, or even over the phone, depending on the context. There doesn’t have to be any fancy technology involved, unless you want there to be.
Of course, there are many other things to consider that do make the M&E justifiably tricky – for example, the challenge of conducting a baseline before the service launches (and finding the actual respondents), or, if it’s a commercial service, combining the commercial KPIs and backend data with the social KPIs. Some of the struggles people in the ICT4D community have reported to me often hinge on the M&E partner involved: typically, this means either hiring an M&E partner with plenty of experience in the social sphere (who often veers very much toward the academic side of things), or hiring a market research agency that is excellent on the commercial side but not so strong on the development or social side. And so the M&E often ends up being not quite right – too academic, too commercial, or just not what the organization needs in order to demonstrate commercial or social impact.
I think this will, however, start to change. As ICT4D and M&E players grow more familiar with each other and with the interplay of the social and the commercial, and as the evidence base for mobile services continues to grow, so will the evidence base on how to conduct good M&E for mobile services. I also try to incorporate operational learnings into the M&E for the services I work on: as well as showing outcomes for the user and building the commercial business case, it helps to show what works and what doesn’t in actually implementing the service, to share with the ICT4D community so we can learn from each other. The same goes for M&E principles: as we get better at it, and we share our learnings, we’ll realize it wasn’t so scary after all.