Data Invisible Groups and Data Minimization in the Deployment of AI Solutions
Client: UNESCO
Artificial intelligence (AI) has developed swiftly and transformed nearly every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision-making. While AI's deployment and uptake undoubtedly provide humanity with numerous opportunities to address global challenges, the data used to build AI systems can create risks that must be addressed to avoid undesirable outcomes: AI systems learn from data produced by humans, who are inherently flawed, and can reproduce those flaws at scale. UNESCO engaged Tambourine Innovation Ventures (TIV) to author a policy brief giving its member states a better understanding of the need for greater transparency in data usage for AI solutions.
The policy brief presented a case for data sharing and minimization, with sensitivity to those historically excluded, to ensure that governments fulfilled their commitments outlined in Our Common Future. The brief argued that, without inclusive data systems guided by data minimization and sharing principles, these individuals and communities would likely remain invisible. "Data invisibility" was treated as a corollary of the digital divide across many countries of the Global South, one likely to affect traditionally underserved and marginalized communities such as women and girls, indigenous peoples, religious and linguistic minorities, the elderly, refugees, and migrant workers. TIV's brief identified discrimination in three main areas: punishment and policing, essential services and support, and movement and border control.
The brief explored the critical challenges to Our Common Future posed by contemporary efforts to deploy AI for good, exposing many inequalities and exclusionary practices toward data invisible groups. It showed that regulators and policymakers were at a critical juncture in regulating AI: without proper regulation, AI might be used to harden lines of difference, because present-day data overlooks the impact of AI on marginalized groups. To curb the negative effects of inadequate data and AI, the brief advocated iterative, adaptable governance and regulatory frameworks for AI and big data that keep pace with AI development. Central to this was the data minimization principle, which requires organizations to ensure that the personal data they use is adequate, relevant, and limited to what is essential for the purposes for which it is processed.
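In practice, data minimization can be enforced in code by making each processing purpose declare up front which fields it needs and stripping records to that set before use. The sketch below illustrates the idea; the purpose names and field names are hypothetical, not drawn from the brief.

```python
# Illustrative data minimization: each declared purpose maps to the
# minimal set of fields it is allowed to process. Purposes and field
# names here are hypothetical examples.
ALLOWED_FIELDS = {
    "service_eligibility": {"age_band", "region"},
    "aggregate_reporting": {"region"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return only the fields permitted for the stated purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "A. Person", "age_band": "30-39",
          "region": "North", "religion": "unspecified"}

# Direct identifiers and sensitive fields never reach the consumer.
print(minimize(record, "service_eligibility"))
# {'age_band': '30-39', 'region': 'North'}
```

Because the allowlist is declared per purpose, adding a new use of the data forces an explicit decision about which fields it genuinely requires.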
TIV's experts supplemented this by highlighting privacy-preserving methods, such as differential privacy, federated learning, and anonymization, to help establish a minimum set of principles and standards for data governance. In their policy brief, TIV's experts advocated reshaping the data landscape to close the gap in the data discourse by promoting data collaboratives, data stewards, and regulatory data sandboxes. The brief aimed to improve decision-making and policymaking and to build capacity in countries with emerging AI and data capabilities.
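To make one of these methods concrete, the following is a minimal sketch of differential privacy via the Laplace mechanism: a count query is answered with noise calibrated to its sensitivity, so no single individual's presence can be confidently inferred from the result. The dataset and epsilon value are illustrative assumptions, not taken from the brief.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(values, predicate, epsilon: float) -> float:
    """Noisy count of matching records; a count has sensitivity 1,
    so Laplace noise with scale 1/epsilon gives epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative data: ages in a small survey sample.
ages = [23, 31, 45, 52, 37, 29, 61]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
print(round(noisy, 2))  # true count is 3, released with Laplace(2) noise
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy; production systems would use an audited library rather than hand-rolled sampling.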
