Data has come into prominence in the impact space, especially since 2020, when the lockdowns imposed due to COVID-19 limited nonprofits’ on-ground engagement with communities. This shift ushered in the use of digital channels, which in turn resulted in the generation and use of more data.
Through our work at Project Tech4Dev, we noticed that during this time some nonprofits progressed significantly in the way they use data. For example, The Apprentice Project, an education nonprofit, switched from interacting with students in person to using an online chatbot-based programme. This engagement generated rich data on students’ interactions with the chatbot. The organisation began monitoring this engagement and testing the success of different chat interactions with students. Through this, they were able to iterate on and improve their programmes and outcomes, and communicate the results to different stakeholders. We saw that, in a similar way, other nonprofits also started understanding that ‘data is important’ (read: ‘Data dashboards are important’).
Nonprofits still use data in a limited way
This holds true across the different steps involved in data collection, storage, and analysis. Nonprofits will often collect data using multiple tools, such as Excel sheets, offline survey tools, chatbots, website forms, or mobile apps. These data sets generally remain siloed, stored in the location where they were collected, and each is representative of only a fraction of the nonprofit’s total impact.
It’s not just collection and storage that are fragmented; data is also analysed and used in a constrained manner. Every quarter, a nonprofit’s monitoring and evaluation (M&E) team or programme manager will spend a stressful week or two manually bringing together data from various locations and cleaning, consolidating, and analysing it. This data is then used to prepare a few graphs or a dashboard for funder reports. Funders and nonprofit leadership review these to check progress against organisational and grant aims. Sometimes the data will also inform a redirection in strategy.
Given this top-down approach to data, it is no surprise that organisations have traditionally become more data savvy only when their funders, or the demands of learning and experimentation grants, have pushed for more detailed analyses, such as cohort tracking, churn and retention analysis, A/B tests, and impact studies.
But this effort should be led by nonprofits, not funders.
A new way of looking at data
As a sector, we need to spend less time managing and reporting data, and more time learning from and iterating with it. We need to understand what data can do for us, what it can be used for, and who can benefit from it.
To elaborate, data management need not be a repetitive, time-consuming, people-intensive exercise; tech can automate most, if not all, of it. When these processes are automated, organisations and their teams can spend more time drawing inferences from data.
But a few shifts in perspective need to accompany, or even precede, automation. These include understanding that:
- Data is not just for quarterly evaluation. It is for daily, weekly, and monthly monitoring and learning.
- Data is not just for funders or external reporting; it can also be tremendously useful for internal learning.
- Data is not just for leadership. Democratising access to relevant data across all levels and functions of the organisation enables data-driven learning and decision-making at every stage.
What nonprofits do vs what they can do
Here’s what a nonprofit’s relationship with data might look like.
Take the example of a nonprofit that runs an after-school financial planning programme in schools across three districts. To do this, the nonprofit engages volunteers who run classes twice a week throughout the school year, and programme coordinators who manage the volunteers. The nonprofit has a two-person central M&E team that is responsible for tracking the progress of the programme delivery and impact outcomes.
Here are all the ways in which this nonprofit collects data.
- Students are required to complete a weekly quiz via Google Forms.
- Volunteers log curriculum completion and student attendance via a separate Google Form.
- Student enrolment data is stored on Excel spreadsheets.
- Annual baseline and endline surveys are conducted using yet another survey tool.
If this nonprofit were data-led, they would have an automated system bringing together data from the various sources—baseline and endline surveys, weekly quizzes, enrolment—and a method to present a holistic view of the nonprofit’s impact in one or more data dashboards.
This dashboard would be ‘live’, with data as current as the previous day’s. The concerned stakeholders would be able to see relevant data via these dashboards and use it to inform programme practice and strategic decision-making; a minimal sketch of such a consolidation pipeline follows the examples below.
For instance:
- A volunteer could log in and see data for the students and classes that they are teaching, and figure out which students and classes to focus on in the following week.
- The programme coordinator could see how all their volunteers are performing and take action in the schools or clusters where programme implementation needs support.
- Similarly, central M&E and leadership could analyse data across the three districts, solicit feedback from their team, and think of how to improve the programme.
- The team could choose to alter their intervention in one district and track whether it leads to improved outcomes before adopting the change across all districts.
- At the end of the quarter, the nonprofit wouldn’t need to scramble to put a report together, as charts and graphs would be readily available for analysis. Reports could be set up to be automatically emailed to funders, or funders could access a dashboard view at any time.
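To make this concrete, here is a minimal sketch, in Python, of the kind of consolidation job that could feed such a dashboard. It assumes the quiz and attendance forms are exported as CSV files and the enrolment data as an Excel sheet; all file and column names are hypothetical.

```python
# Minimal sketch: consolidate siloed sources into one table that a BI
# tool can query. File and column names are hypothetical placeholders.
import sqlite3

import pandas as pd

# Extract: read each siloed source.
enrolment = pd.read_excel("student_enrolment.xlsx")   # enrolment spreadsheet
quizzes = pd.read_csv("weekly_quiz_responses.csv")    # Google Forms export
attendance = pd.read_csv("attendance_log.csv")        # volunteers' form export

# Transform: join everything on a shared student ID.
combined = (
    enrolment.merge(quizzes, on="student_id", how="left")
             .merge(attendance, on="student_id", how="left")
)

# Load: write one consolidated table for the dashboard to read.
with sqlite3.connect("programme_data.db") as conn:
    combined.to_sql("student_progress", conn, if_exists="replace", index=False)
```

Run on a daily schedule, a job like this is what keeps the dashboard ‘live’ without anyone manually copying data between tools.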
Becoming data-led
A nonprofit’s data flow can be understood through some crucial components.
- Collection: The process of gathering and measuring information on key indicators for the intervention via tools for data collection.
- Storage: Where the data lives. It could be local to the data collection tool that generated it, or a separate central location where all data is kept.
- Extraction: The process of pulling data from one or more sources and sending it to a destination.
- Transformation: The process of cleaning and consolidating data, and computing derived values from it.
- Visualisation: The conversion of data into readable charts and graphs for decision-making, often on a data dashboard.
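Of these components, transformation is where most of the analytical logic sits. As a rough illustration, here is what a small transformation step might look like in Python, assuming a consolidated raw table with hypothetical file and column names:

```python
# Sketch of a transformation step: clean raw records, then compute
# per-student metrics. File and column names are hypothetical.
import pandas as pd

raw = pd.read_csv("student_progress_raw.csv")

# Cleaning: drop rows without a student ID and remove duplicate entries.
clean = raw.dropna(subset=["student_id"]).drop_duplicates()

# Computation: derive per-student attendance and quiz metrics.
summary = clean.groupby("student_id").agg(
    classes_attended=("attended", "sum"),
    avg_quiz_score=("quiz_score", "mean"),
)

summary.to_csv("student_summary.csv")
```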
In many cases, as in the previous example, the desired output is a live dashboard. Therefore, as a first step, most nonprofits adopt dashboarding tools such as Looker Studio, Power BI, Tableau, Metabase, or Superset. Also known as business intelligence (BI) tools, these can be used to connect various data sources (Google Forms, survey data, Excel data, and so on) and to consolidate and visualise the data. The BI tool is thus single-handedly responsible for the extraction, transformation, and visualisation of data.
Certain challenges arise when BI tools are used this way. The most common are associated with unclean data: key ID fields are incorrect, some fields are left blank, or some records are missing entirely. Time is then lost in manually checking and cleaning the source data.
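Much of this manual checking can itself be automated. A few scripted data-quality checks, sketched below with hypothetical file and column names, can flag dirty records before they ever reach a dashboard:

```python
# Sketch of automated data-quality checks that flag problems instead of
# relying on manual inspection. File and column names are hypothetical.
import pandas as pd

data = pd.read_csv("weekly_quiz_responses.csv")

issues = {
    "missing student IDs": int(data["student_id"].isna().sum()),
    "duplicate rows": int(data.duplicated().sum()),
    "blank quiz scores": int(data["quiz_score"].isna().sum()),
}

for check, count in issues.items():
    if count > 0:
        print(f"WARNING: {count} rows with {check}")
```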
In addition, visualisation tools impose limits, or higher payment tiers, on how much data they can bring together. For example, if an organisation starts collecting data from more sources, that is, more sheets and/or forms, it might need to upgrade to a higher payment tier.
Moreover, if the organisation scales its operations, say, from working with a few thousand students in one state to tens of thousands across multiple states, the dashboards become slow to load because transforming that much data takes time.
Data dashboards may also have trouble dealing with complex queries. To continue with the example of the education nonprofit: if the organisation wants to check for student churn and retention, the complexity of the data computation increases, which adds to the dashboard’s loading time.
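To see why such queries are heavy, consider a month-over-month retention computation, sketched below with hypothetical column names. A dashboard that recomputes this over raw records on every page load will inevitably slow down.

```python
# Sketch of a month-over-month retention computation, the kind of complex
# query that strains a dashboard when run over raw data. Column names are
# hypothetical; months are assumed to be consecutive in the data.
import pandas as pd

attendance = pd.read_csv("attendance_log.csv", parse_dates=["class_date"])
attendance["month"] = attendance["class_date"].dt.to_period("M")

# The set of students active in each month.
active = attendance.groupby("month")["student_id"].apply(set)

# Share of each month's students still active the following month.
months = list(active.index)
for current, following in zip(months[:-1], months[1:]):
    retained = len(active[current] & active[following]) / len(active[current])
    print(f"{current}: {retained:.0%} retained")
```

Precomputing metrics like this in a separate transformation step, rather than inside the dashboard, is what keeps load times manageable.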
While some of this can be solved by more efficient systems for data collection, it is at this point that nonprofits consider upgrading to a higher tier of their BI tool or switching to an end-to-end commercial solution, which is rarely affordable or relevant. Alternatively, a nonprofit may choose to develop a custom data pipeline, which is often expensive and inefficient.
Data management for nonprofits
Dalgo, developed by Project Tech4Dev, is an open-source platform that aims to reduce the time nonprofits spend on managing data. It enables them to set up a central warehouse for storing data. Organisations can automate data management by configuring a daily or weekly schedule to extract data from various sources into the warehouse and transform it into the desired final tables, which can then be used as required: for dashboards, for apps, or simply downloaded for further analysis.
As a result, most of the heavy lifting is done outside the visualisation tool, which is then left to do what it does best: visualise data. Dalgo handles all data extraction and transformation on an automated schedule, which leads to quick-to-load dashboards and dependable data pipelines.
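Dalgo itself is configured through its interface rather than through code, but the pattern it automates looks roughly like the sketch below, which uses the third-party schedule library and placeholder functions; none of this is Dalgo’s actual API.

```python
# Generic sketch of a scheduled extract-and-transform job, the pattern a
# platform like Dalgo automates. This is NOT Dalgo's API; it uses the
# third-party 'schedule' library and placeholder functions.
import time

import schedule

def extract_sources():
    # Placeholder: pull data from forms, sheets, and surveys into the warehouse.
    pass

def transform_tables():
    # Placeholder: clean and consolidate raw data into final warehouse tables.
    pass

def run_pipeline():
    extract_sources()
    transform_tables()
    print("Warehouse refreshed; dashboards now read precomputed tables.")

# Refresh the warehouse daily, before the workday begins.
schedule.every().day.at("06:00").do(run_pipeline)

while True:
    schedule.run_pending()
    time.sleep(60)
```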
As more and more nonprofit organisations aim to use data effectively for learning and experimentation, automating data management becomes vital. Nonprofits need to ease the data management load on their teams so that their time goes into drawing inferences from the data.
Things nonprofits should keep in mind on their data journey
If you’re a nonprofit at any stage in your data and tech journey, here are a few key considerations drawn from our work with partner nonprofits.
- Review the data you’re collecting: Identify what is absolutely necessary. Do not collect data that you will not end up using.
- Build data dashboards: This will inform your organisation on key metrics for operations and impact. Use this data to learn where/how you can do better.
- Run frequent iterations/experiments: Do this with your programme and use data to track its success against key metrics.
- Democratise access to data: Ensure that all stakeholders have access to relevant and timely data that helps inform their work.
- Focus on data privacy and security: Ensure that all sensitive beneficiary or nonprofit data is stored securely and that access to this data is controlled.
- Plan to build internal capacity: Building an M&E team is a great first step; you might eventually need to hire a data engineer.
- Automate, but avoid building custom data solutions: These become increasingly complicated over time and require an increasing amount of engineering effort to sustain. Explore and use what already exists.
—
Know more
- Read this article about how to democratise data in your organisation.
- Read this article to learn more about how to build an effective data dashboard.
- Read this article to learn more about the right and wrong types of data for your nonprofit.
Do more
- Visit data.org to take a data maturity assessment.
- Reach out to the author at support@dalgo.in for a quick chat about your data challenges and needs.