US House AI Task Force is the latest authority to address algorithms and racism
On May 7, 2021, the US House of Representatives' Task Force on Artificial Intelligence (AI) held a hearing on "Equitable Algorithms: How Human-Centered AI can Address Systemic Racism and Racial Justice in Housing and Financial Services." 1 It was the latest among several federal, state and international governmental initiatives calling for fair, transparent and accountable AI in the financial and consumer sectors, and urging all AI actors to address inequitable outcomes. The hearing focused on ways that the public and private sectors can use AI to address systemic racism and optimize fairness. Among the views expressed:
- AI and machine learning (ML) models can improve efficiencies, help to tackle critical societal problems and reduce costs, but they can also introduce risks of amplified bias.
- AI and ML models can be particularly problematic due to their lack of explainability and transparency, especially when models are trained on biased data sets, engineers are not trained to recognize red flags, and regulators are not equipped to address those risks.
- US regulatory frameworks need to keep pace with AI/ML developments, including methodologies to test AI models for discrimination (one illustrative testing sketch appears after this list), while finding ways to use AI to improve outcomes through innovation. New federal legislation may be needed.
- AI actors should expect more US governmental initiatives, including enforcement actions, aimed at regulating inequitable outcomes in AI and ML models.
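The hearing did not prescribe a particular testing methodology. As one illustration of the kind of discrimination testing referenced above, the sketch below computes approval rates by demographic group and the resulting adverse impact ratio for a hypothetical set of model decisions. The column names, sample data and the 0.80 threshold (drawn from the common "four-fifths rule" heuristic) are assumptions for illustration only, not content from the hearing.

```python
# Minimal sketch: disparate impact (adverse impact ratio) check for a
# binary model decision such as loan approval. All names and the sample
# data are hypothetical; the 0.80 threshold follows the common
# "four-fifths rule" heuristic and is not mandated by the hearing.
import pandas as pd

# Hypothetical model outputs: one row per applicant.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Approval rate per demographic group.
rates = decisions.groupby("group")["approved"].mean()

# Adverse impact ratio: each group's approval rate relative to the
# most-favored group's rate. Ratios below ~0.80 are a common red flag
# warranting further review, not proof of unlawful discrimination.
air = rates / rates.max()

print(rates)
print(air)
print("Potential disparate impact:", (air < 0.80).any())
```

A check like this is only a screening step; a fuller fair lending review would also consider statistical significance, legitimate business justifications and less discriminatory alternatives.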