
May 2025

Cover Story
Digital Op-Ed
AI-detection tools should compel us to ask: exactly whose tongue does the AI speak? Since large language models mimic the human voices they were trained on, it is ironic when a person is accused of imitating the machine. The machine learned from us, so this tongue is undoubtedly ours. It prompts us to ponder whether a deeper bias in our digital tools leads us to suspect those who manage to sound articulate. More importantly, who does society think deserves to sound like a wordsmith?
Read more
Stories from the Ground
Access and Infrastructure
Education and Empowerment
Market & Social Enterprises
Governance and Citizen Services
Research and Advocacy
Regular Features
Expert Says

Addressing Bias and Discrimination: Building Responsible AI with Big Data
At the Digital Citizen Summit 2024, this thought-provoking lightning session, delivered by Rahul Paith, CEO of MATH, dives into the critical issue of bias and discrimination in AI systems and the role of big data in fostering responsible innovation. It highlights how biases can seep into AI models at multiple levels, from data collection and design through to societal usage, producing flawed outcomes that affect healthcare, recruitment, and financial systems.
Rahul underscores the social responsibility of addressing data bias, emphasising that building fair and accurate AI is not the job of AI engineers alone but demands a collective effort from society at large. Drawing on examples from healthcare diagnostics, hiring systems, and fintech, the session shows how biased data can perpetuate inequality, misdiagnosis, and flawed decision-making. It calls for a holistic approach that removes bias at every stage, from data generation to algorithm design, so that AI systems serve society responsibly and equitably. A must-watch for anyone committed to ethical AI development and to tackling discrimination in the digital age.
eChampion