Evaluation of Immunization Data Quality: Findings from the DQA in 27 Countries

Immunization data isn’t just numbers; it’s the backbone of every vaccine delivery strategy, funding request, and public health response. When that data is incomplete, inconsistent, or inflated, it doesn’t just slow progress; it misleads decision-makers and puts lives at risk. That’s why the quality of immunization reporting systems has become just as important as the vaccines themselves.

In the early 2000s, growing concerns over the reliability of national immunization reports led to a bold, global effort: the Data Quality Audit (DQA). Spearheaded by the World Health Organization (WHO) and GAVI, the DQA aimed to uncover gaps in how countries recorded, reported, and used immunization data, starting with a focused evaluation across 27 countries in 2002 and 2003.

The results revealed just how far off reported numbers could be from reality, highlighting both the challenges and opportunities in strengthening the data foundations of vaccine programs worldwide.

The Origins of DQA: A Global Push for Accountability

By the early 2000s, global immunization partners were increasingly concerned about the reliability of administrative coverage data. With DTP-3 coverage widely used as a benchmark for national program performance, there was growing pressure to verify whether reported numbers accurately reflected vaccines delivered on the ground.

In response, WHO, in collaboration with GAVI and its partners, launched the DQA process as part of broader efforts to improve accountability and data transparency in the Expanded Programme on Immunization (EPI).

The DQA was designed as a standardized external audit tool to assess the quality of national immunization reporting systems. It focused on verifying the number of DTP-3 doses reported at the national level by tracing data backwards—through district records and all the way to source documentation in health facilities. Between 2002 and 2003, 27 countries underwent a DQA.

The goal wasn’t just to check the math but to identify structural weaknesses in data systems, highlight gaps in reporting practices, and help countries develop targeted improvement plans. The result was a much-needed global spotlight on the disconnect between reported performance and on-the-ground reality.

Methodology: How the Audits Were Conducted

The DQA followed a rigorous, multi-tiered process aimed at evaluating both the accuracy of reported data and the strength of the systems that produced it. The cornerstone of the audit was a metric called the Verification Factor (VF): a ratio comparing the number of DTP-3 doses verified through source documentation at health facilities to the number reported by the national immunization program. A VF between 85% and 115% was considered acceptable, while values outside this range flagged serious discrepancies.
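In concrete terms, the VF is the verified count divided by the reported count. The short Python sketch below illustrates the calculation and the 85–115% acceptance band described above; the function names and the example figures are illustrative, not part of the official audit worksheet.

```python
def verification_factor(verified_doses: int, reported_doses: int) -> float:
    """Ratio of facility-verified DTP-3 doses to nationally reported doses."""
    if reported_doses <= 0:
        raise ValueError("reported_doses must be positive")
    return verified_doses / reported_doses

def is_acceptable(vf: float, low: float = 0.85, high: float = 1.15) -> bool:
    """A VF between 85% and 115% was treated as acceptable in the 2002-2003 DQA."""
    return low <= vf <= high

# Illustrative figures: a country reports 1,000,000 DTP-3 doses,
# but auditors can only verify 780,000 in facility records.
vf = verification_factor(780_000, 1_000_000)
print(f"VF = {vf:.0%}, acceptable: {is_acceptable(vf)}")  # VF = 78%, acceptable: False
```

A VF well below 100% suggests over-reporting relative to source documents, which is exactly the pattern the 2002–2003 audits most often uncovered.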

Auditors selected a sample of health districts and traced immunization data through each level of the reporting chain, from facility-level tally sheets and registers to district summaries and national records. In addition to verifying counts, they assessed the availability and quality of documentation, the use of monitoring tools (like charts and feedback reports), and whether stock records and adverse event tracking were in place. This holistic approach allowed the DQA to identify not just reporting errors, but deeper systemic weaknesses—such as poor supervision, lack of standardization, or delays in data flow—that hindered accurate reporting and program performance.
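For illustration, the core of that verification step amounts to a recount-and-compare exercise at each level of the chain. The sketch below uses entirely hypothetical facility tallies and a hypothetical district summary to show the kind of discrepancy auditors were looking for:

```python
# Hypothetical facility tallies recounted from source documents,
# alongside the district summary that should aggregate them.
facility_tallies = {"Facility A": 412, "Facility B": 388, "Facility C": 275}
district_summary = 1_150  # DTP-3 doses the district reported upward

recounted = sum(facility_tallies.values())  # 1,075 doses found in source records
discrepancy = district_summary - recounted

if discrepancy != 0:
    pct = discrepancy / recounted
    print(f"District over-reports by {discrepancy} doses ({pct:.1%} above source records)")
```

Repeating this comparison at every tier, from tally sheet to national report, is what let auditors pinpoint where in the chain numbers drifted apart.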

Key Findings from the 27 Country Audits

The 2002–2003 DQA cycle uncovered critical weaknesses across multiple countries’ immunization reporting systems. Below are some of the most striking and recurrent issues identified:

  • 16 out of 27 countries had a Verification Factor (VF) below 85%, signaling major gaps between reported and documented DTP-3 doses.
  • Tally sheets were missing or incomplete in over half of the participating countries.
  • Vaccination registers were often outdated, inconsistently filled, or entirely absent.
  • Monthly summaries at district and national levels frequently conflicted with raw facility data.
  • Monitoring charts were rarely used or not visibly maintained at health posts.
  • Vaccine stock records lacked consistency, and in some cases, were not kept at all.
  • Adverse event tracking was minimal to nonexistent at the health facility level.

These findings went beyond minor reporting errors—they reflected deeper structural and operational issues across all tiers of data flow. Health workers lacked the tools and support needed to collect reliable data, while supervisory systems failed to catch or correct inaccuracies before they escalated to national reports.

Perhaps more concerning was the culture surrounding data use. The audits revealed that even when monitoring tools existed, they were often treated as bureaucratic checkboxes rather than decision-making aids. The absence of feedback mechanisms meant health workers rarely knew how their data was being used—or whether it made a difference at all. In that context, inaccuracies became normalized, and opportunities to strengthen program effectiveness were missed.

Modern Tools Inspired by DQA Learnings

The lessons of the DQA did not end with the audits; they inspired a new generation of data quality tools. These modern solutions go beyond paper-based verification, embracing digital systems, automation, and systems-level diagnostics to build smarter, faster, and more resilient immunization data environments.

From Verification to Systems Thinking: The Rise of IISA

The Data Quality Audit was a starting point—but as the need for more holistic assessments became clear, the Immunization Information System Assessment (IISA) was introduced. Developed by WHO and CDC, IISA moves beyond tally sheet checks to examine the entire architecture of a country’s immunization data system. It covers areas like governance, interoperability, human resource capacity, and digital infrastructure.

IISA provides countries with a comprehensive diagnostic tool that guides long-term planning and system-wide improvements. Unlike the DQA or the later Data Quality Self-Assessment (DQS, described below), which zeroed in on reported coverage figures, IISA evaluates how information flows across platforms and decision-making levels. It helps identify bottlenecks, integration gaps, and sustainability challenges—providing a strategic roadmap, not just a score.

Leveraging Digital Tools for Real-Time Visibility

Digital transformation has been one of the most significant evolutions since the DQA era. Platforms like DHIS2 (District Health Information Software 2) are now widely adopted across low- and middle-income countries to standardize, aggregate, and analyze health data. Many countries have customized DHIS2 to integrate immunization-specific modules, allowing real-time reporting, error detection, and automated feedback.
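As a rough sketch of what this looks like in practice, the snippet below pulls reported data values from a DHIS2 instance through its standard Web API data value sets endpoint. The base URL, credentials, and the dataset and organisation unit identifiers are placeholders to be replaced with real values from a given instance:

```python
import requests

# Placeholders: substitute a real DHIS2 instance, credentials, and UIDs.
BASE_URL = "https://dhis2.example.org"
AUTH = ("username", "password")

resp = requests.get(
    f"{BASE_URL}/api/dataValueSets",
    params={
        "dataSet": "IMMUNIZATION_DATASET_UID",  # hypothetical dataset UID
        "period": "202401",                     # monthly period: January 2024
        "orgUnit": "DISTRICT_ORGUNIT_UID",      # hypothetical district UID
        "children": "true",                     # include facilities under the district
    },
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

# Each data value carries the data element, org unit, period, and reported figure.
for dv in resp.json().get("dataValues", []):
    print(dv["orgUnit"], dv["dataElement"], dv["period"], dv["value"])
```

Because every facility's submission is queryable in one place, the cross-level recounts that once took an audit team weeks can now be scripted and repeated monthly.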

In addition, mobile apps, cloud-based dashboards, and geo-tagged reporting systems are empowering frontline health workers to submit data instantly and access performance trends at their fingertips. These tools directly address the delays, discrepancies, and documentation issues flagged in early audits—transforming immunization reporting into a faster, more transparent, and more responsive process.

Building Feedback and Supervision into the System

One of the key takeaways from the DQA findings was the lack of feedback loops and supervisory engagement. Modern tools are closing that gap by embedding routine validation, visualization, and decision-support features into reporting workflows. Dashboards now flag outliers, late submissions, and mismatches in real time—enabling district managers to act quickly and correct issues before they escalate.
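A minimal sketch of two such validation rules appears below. The 50% deviation tolerance and the example figures are arbitrary illustrative choices, not WHO-defined thresholds:

```python
from statistics import median
from datetime import date

def flag_outlier(history: list[int], current: int, tolerance: float = 0.5) -> bool:
    """Flag a monthly count deviating more than `tolerance` from the recent median.

    The 50% tolerance is an illustrative choice, not an official standard.
    """
    baseline = median(history)
    return baseline > 0 and abs(current - baseline) / baseline > tolerance

def flag_late(submitted: date, deadline: date) -> bool:
    """Flag reports submitted after the monthly reporting deadline."""
    return submitted > deadline

history = [210, 195, 220, 205, 215]                      # previous months' DTP-3 doses
print(flag_outlier(history, 430))                        # True: roughly double the usual volume
print(flag_late(date(2024, 2, 12), date(2024, 2, 10)))   # True: two days past the deadline
```

Rules like these are simple individually, but running them automatically on every submission gives district managers the early warning that paper systems never could.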

Digital supervision checklists, SMS-based alerts, and visual data summaries have also made it easier to monitor performance and conduct targeted follow-ups. By making data actionable—not just reportable—these systems are fostering a culture where immunization data is used continuously to guide day-to-day decisions, not just annual reports.

Country Response and System Improvements

While the DQAs revealed serious flaws in immunization data systems, they also served as powerful catalysts for reform. Most countries that participated didn’t view the audits as punitive; they used them as an opportunity to take action. Audit findings triggered a range of corrective steps, including national workshops, updated forms, and targeted training sessions for frontline health workers and district supervisors.

Some countries revised their data collection tools entirely, replacing outdated or overly complex tally sheets with simplified registers that emphasized completeness and accuracy. Others focused on strengthening feedback loops between national and local levels, introducing regular data review meetings or feedback reports that helped health staff understand performance trends. In several cases, the DQA prompted countries to assign data focal persons at the district level, responsible for ensuring consistency across reporting tools, summaries, and monthly submissions.

While these improvements varied in scale and speed, the audits clearly helped elevate the importance of data quality within national immunization programs. Countries began to recognize that good data wasn’t just a reporting requirement; it was a strategic asset. This shift paved the way for deeper structural reforms and laid the groundwork for the next evolution in quality monitoring: the Data Quality Self-Assessment (DQS).

From DQA to DQS: Evolving Toward Local Ownership

Following the early DQAs, it became clear that countries needed more than periodic external audits—they needed a system they could own, repeat, and learn from. That’s where the Data Quality Self-Assessment (DQS) came in. Developed by WHO and partners as a next-generation tool, DQS built upon the DQA model but shifted the focus inward, enabling countries to evaluate their own immunization data systems through participatory workshops, hands-on evaluations, and internal action planning.

Unlike the DQA, which relied on external teams and a fixed audit window, the DQS encouraged routine, program-led reviews that could be embedded into national health strategies. It wasn’t just about checking numbers—it was about empowering health workers, district managers, and national program leads to reflect on their own data practices and take ownership of the results. The process promoted peer learning, fostered accountability at every level, and reduced dependence on external validation.

More importantly, DQS helped embed the concept of data quality into the culture of immunization programs. As countries adopted this tool, many began linking data assessments with performance-based financing, supervisory checklists, and electronic reporting systems. Over time, what started as a technical tool became a powerful instrument for building sustainable, self-correcting data systems that could support both local planning and global reporting.

The Enduring Value of Getting the Numbers Right

The DQA process may have started as a one-time audit, but its legacy endures. By shining a spotlight on inconsistencies in immunization reporting, the audits helped shift global attention toward data quality as a foundation of effective health systems. Countries that participated weren’t just critiqued; they were given the insights and tools to strengthen their programs, from the health facility level all the way to national dashboards. In doing so, the DQA didn’t just improve data accuracy; it helped build momentum for a culture of accountability, feedback, and continuous improvement.

Today, the principles of the DQA live on in tools like DQS, IISA, and DHIS2, which continue to guide national immunization programs toward smarter and more sustainable practices. These innovations reflect a deeper understanding: that quality data isn’t optional; it’s essential. It empowers health workers, informs investments, and ultimately ensures that every vaccine reaches the children who need it most. Getting the numbers right isn’t just about better reports; it’s about better outcomes.