The fourth AI Connect webinar, held on June 30th, 2022, featured three presentations from academic, government, and private sector representatives on applying democratic principles throughout the AI lifecycle.
Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown University’s Center for Security and Emerging Technology (CSET), opened with a presentation on three core tenets of AI safety: robustness, assurance, and specification. Respectively, these concepts ensure that AI systems operate safely and reliably across a range of real-world circumstances, are predictable and understandable to human operators, and behave in accordance with the operator’s intentions.
Next, two speakers from the National Institute of Standards and Technology (NIST) – Reva Schwartz, Research Scientist, and Mark Latonero, Senior Policy Advisor for AI and International Cooperation – presented on NIST’s AI Risk Management Framework, which is currently in development. The framework will serve as a benchmark for mapping, measuring, and managing the risks that AI technology poses to its many stakeholders. Its participatory design approach invites stakeholders and industry into the development process to ensure meaningful harm reduction and mitigation standards in AI systems.
Finally, Jen Gennai, Founder & Director of Responsible Innovation at Google, presented on the principles, processes, and procedures that ensure Google is a responsible steward of AI technology. According to her presentation, this approach includes the use of ethical consultants, regular spot checks against the company’s responsible AI principles, and ensuring the technical interpretability and external explainability of the technology to maintain constant accountability.