Mitigation of the risks posed by AI will require a substantial increase in the number of individuals with a deep understanding of the legal, policy, and technical frameworks related to AI governance. In other words, there’s a global shortage of experts who can help AI labs, civil society organizations, and legislative bodies identify and reduce near- and long-term AI risks. Reducing that shortage requires the development of educational opportunities for students around the world. That’s why I dedicated my summer research fellowship with LPP to creating a template syllabus for a course on legal and policy tools to reduce such risks.
The syllabus, “Regulating AI: Legal and Policy Perspectives,” is live on the H2O Open Casebook Platform. It offers academics several features: a guide for evaluating students, required and recommended readings, pre-recorded lectures by renowned scholars, and sample exercises to build and assess student understanding. Note that the syllabus deliberately includes more content than most professors can cover in a standard semester; this surplus lets professors select the material best aligned with the interests and capacity of their students.
Ongoing changes to our understanding of the risks posed by AI, along with the growing body of analysis of those risks, mean this syllabus will remain a perpetual work in progress; indeed, parts of it were already out of date upon publication. Keeping it current will require a community of scholars and experts. I am actively seeking individuals to join the following efforts:
- Record lectures on syllabus topics
  - Lectures can run 25 to 40 minutes
  - If you’d prefer to record an interview with me in lieu of a lecture, that’s an option as well
- Join a listserv of scholars in order to:
  - Update recommended readings
  - Distribute exercises
  - Share exemplary student projects and proposals
A number of dedicated AI scholars have already recorded or will soon record lectures for this effort. Lecturers currently include:
- Juliette Kayyem (Harvard)
- Rebecca Crootof (Richmond)
- Anat Lior (Drexel)
- Carlos Ignacio Gutierrez (Future of Life)
- Daniel Shin (William & Mary)
- Iria Giuffrida (William & Mary)
- Shin-Shin Hua (CSER)
My interest in building and maintaining this syllabus stems from a concern that AI governance may lack participation from representatives of the communities most likely to bear the brunt of AI risks, even though this regulatory task necessarily requires consideration of highly local factors. Though this first iteration of the syllabus admittedly caters to U.S. audiences, it marks a substantial improvement on other such syllabi for two reasons: first, it’s easily modifiable by scholars around the world thanks to the H2O platform; second and relatedly, it’s meant to create a community of scholars dedicated to democratizing AI expertise.
This work aligns with prior efforts by LPP team members. For example, Research Affiliate Cecil Abungu created and coordinates the ILINA Program, which is “dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing well-being today, preventing and mitigating global catastrophic risks and ensuring that the long-term future goes well.” Likewise, Director Christoph Winter helped launch the AI Futures Fellowship, which will bring together an array of international students and scholars to collaborate on critical analyses of AI risks.
AI governance is an all-hands-on-deck effort. I’m grateful for the support and encouragement I’ve received to help bring more students, scholars, and experts into this important challenge. Please reach out if you’d like to lend a hand! email@example.com