Roundtables: Why It’s So Hard to Make Welfare AI Fair

MIT Technology Review - AI
Jul 30, 2025 14:53

Summary

Amsterdam’s attempt to use algorithms for fairer welfare assessments still resulted in bias, highlighting persistent challenges in eliminating discrimination from AI systems. Experts discuss why these efforts failed and question whether true fairness in algorithmic decision-making is achievable. The case underscores ongoing concerns about bias and accountability in AI applications for social services.

Amsterdam tried using algorithms to assess welfare applicants more fairly, but bias still crept in. Why did Amsterdam fail? And more importantly, can this ever be done right? Hear from MIT Technology Review editor Amanda Silverman, investigative reporter Eileen Guo, and Lighthouse Reports investigative reporter Gabriel Geiger as they explore whether algorithms can ever be fair. Speakers:…