In a panel moderated by GeoTech Center Associate Director and Resident Fellow Trisha Ray, experts Evi Fuelle, Lilian Olivia Orero, and Venkatesh Krishnamoorthy discussed recent advancements in human-centered AI and strategies for its future integration across various fields. Evi Fuelle, global policy director at Credo AI, opened the discussion by highlighting the Biden administration’s executive orders on the safe, secure, and trustworthy development and use of artificial intelligence and on advancing racial equity and support for underserved communities through the federal government. Fuelle argued that these orders, together with the 2022 Blueprint for an AI Bill of Rights, signaled the US government’s commitment to prioritizing human-centered protections and principles in AI development. At the same time, she stressed the ongoing need for closer collaboration between industry and government on responsible AI and underscored the importance of robust system tests and audits to enable independent evaluation. Finally, Fuelle advocated for creating a registry or inventory of AI use cases to make the varied uses of AI visible to users and to facilitate impact assessment.
Moving beyond domestic frameworks, Lilian Olivia Orero, an advocate of the High Court of Kenya and founder of SafeOnline Women Kenya, examined the AI lifecycle from a non-Western perspective. Orero noted that AI algorithms developed in the Global North are often trained on data that does not reflect the African context, exacerbating the digital exclusion of marginalized communities. She stressed the need to integrate diversity into AI design during development and to account for infrastructural barriers and cultural practices during deployment. In this vein, she pointed to the importance of institutional and organizational investment in local human-centered AI research, such as work at Dedan Kimathi University of Technology in Kenya exploring how AI applications can advance gender equality in STEM fields. Similarly, Venkatesh Krishnamoorthy, country manager for India at BSA The Software Alliance, underscored the industry’s responsibility to balance innovation with risk management at every level of the AI value chain. He noted that neither developers nor policymakers can address high-risk applications alone, and instead advocated for collaboration between the private and public sectors to manage critical applications. Krishnamoorthy also highlighted his work on BSA’s policy solutions for building responsible AI, a comprehensive set of recommendations to address major AI issues and establish a risk management framework.
Concluding the discussion, all speakers agreed that human-centered AI requires industry, government, and civil society to work in lockstep. They also emphasized the need to ask foundational questions: What does good AI look like? Can strong cybersecurity and privacy laws help address harms from AI deployment? And what fundamental principles could guide auditing processes to ensure that AI systems align with values such as privacy, equity, and social justice?
Related resources
Learn more from the resources referenced in AI Connect II Webinar 3.
- Revising Privacy Impact Assessments to Account for Unique AI Risks
- Explaining the Classification of AI Use Cases as “High-Risk”
- Artificial Intelligence 4 Development (AI4D) Africa
- Smart Africa Alliance
- The African AI Regulatory Landscape
- Responsible AI in Africa – Challenges and Opportunities