
At the CDAC Public Forum, held jointly with the fome Symposium, the session "AI in Crisis – What Next?" examined the social and ethical implications of artificial intelligence in humanitarian and crisis contexts. Within it, the breakout session "Inclusive AI: Languages, Literacy & Community Voice," facilitated by the Digital Empowerment Foundation (DEF), focused on a core concern: how AI systems intersect with linguistic diversity, social and digital divides, and community agency.
The discussion was anchored in a simple but critical question: How can AI support linguistic diversity and community participation rather than marginalising non-dominant voices, and what kind of AI literacy do frontline workers and affected communities actually need?
The conversation brought together civil society organisations, media practitioners, technologists, researchers, and humanitarian actors. What emerged was a grounded and, at times, uncomfortable reflection on power, ownership, and participation in the current AI ecosystem.

Dr. Arpita Kanjilal leading a session on Inclusive AI: Languages, Literacy & Community Voice
A Reality Beyond the AI Hype
India, like much of the world, is often portrayed through headlines celebrating rapid digital growth, innovation, and scale. Yet this narrative hides deeper inequalities: roughly 70 percent of India's population is rural, and across the Global South, access to digital technologies continues to be shaped by language, literacy, infrastructure, gender, and social power. These realities frame how communities experience AI, not as opportunity alone, but often as exclusion.
Linguistic Exclusion Is Structural
Globally, there are more than 7,000 languages. India alone is home to 700–800 languages, many spoken by Indigenous, tribal, and marginalised communities. Yet AI systems today are trained primarily on a handful of dominant, standardised languages.
Participants shared examples from multiple regions: endangered languages with only a few thousand speakers; languages spoken by millions, such as Gondi, that are still treated as "low-resource"; dialect mismatches, such as Dari speakers being served Farsi; and even European languages such as Finnish and Irish receiving limited AI support.
These gaps are not accidental or purely technical. They reflect long-standing social hierarchies that decide which languages are valued, digitised, and resourced. Even the distinction between “language” and “dialect” is power-laden, with real consequences for digital inclusion.
The AI Divide Builds on the Digital Divide
Before discussing algorithms or data, participants repeatedly returned to basic barriers: unstable or absent internet connectivity; low levels of digital, media, and AI literacy; gendered access to devices; and internet shutdowns during crises. Without addressing these foundational issues, inclusive AI risks remaining an elite conversation, disconnected from community realities.
Data, Power, and Extractive Models
A central concern in the session was data ownership. Civil society organisations and public-interest media often hold rich language archives, including audio recordings, oral histories, community journalism, and translations. These datasets could support AI tools for underrepresented languages. Yet communities face difficult questions. How do we assess the value of our data? What are fair legal and ethical terms for sharing it? Who benefits when data is made open, and who loses control?

Participants shared experiences of large funders and institutions seeking access to data without clear commitments to community benefit or long-term sustainability. This raised urgent questions about extractive AI models and the political economy of language data. Inclusive AI, the collective agreed, is not just a technical challenge. It is an economic, legal, and ethical one.
From Human in the Loop to Community in the Loop
AI discourse often emphasises the idea of "humans in the loop." Participants challenged this framing. Without social intelligence, cultural context, and trusted local intermediaries, AI systems fail to work meaningfully. Too often, technologies are designed elsewhere, deployed at scale, and expected to function in contexts they were never built for. Meaningful inclusion requires communities themselves to be in the loop, as co-designers, translators, validators, and decision-makers.
Inclusion Without Integrity Can Cause Harm
The collective also acknowledged a difficult reality: expanding AI into more languages can amplify harm if safeguards are missing. Participants discussed misinformation and deepfakes circulating in local languages, AI-generated content appearing more authentic, and the increased risk of polarisation and hate speech. Language inclusion without accountability can deepen information disorder.
DEF’s experience shows that community-based media and information literacy is essential. Training rural women as fact-checkers, using culturally familiar references, and grounding AI literacy in everyday life are practical ways to counter these risks.

Pathways Forward
Despite the challenges, participants identified clear pathways. Community-led AI use cases show promise, especially where local technologists build tools based on community needs rather than commercial scale. These efforts require sustained funding and recognition.
There was also strong support for coalition-based approaches that bring together community media, civil society organisations, technologists, legal and ethics experts, academia, and public institutions. Publicly funded computing infrastructure and universities can play a critical role in ensuring AI development serves the public good rather than commercial extraction.
Grounding the AI Future in Community Realities
Perhaps the most honest reflection from the session was a shared discomfort. Communities are repeatedly asked to adapt to AI systems they did not design, control, or consent to. As civil society, we must ask not only how to make AI more inclusive, but whether, where, and on whose terms AI should be used at all. Inclusion cannot be an afterthought.
At the Digital Empowerment Foundation, we sincerely believe AI must strengthen communities, not weaken them. This means prioritising access, agency, ownership, and voice, especially for those historically excluded from digital futures. Inclusive AI begins not with code, but with communities.
To learn more about DEF’s Just AI – Data & Algorithms for Communities Initiative, please visit: https://justai.defindia.org/