
About Us

The Legal Priorities Project is an independent, global research project founded by researchers at Harvard University. We currently focus on researching and advising on the legal and regulatory challenges posed by artificial intelligence. We believe that sound legal and regulatory analysis will promote security, welfare, and the rule of law.

News

Summer Research Fellowship in Law & AI 2024: Apply by February 16

We are excited to announce that applications for our Summer Research Fellowship in Law & AI 2024 are now open. For 8–12 weeks, participants will work with researchers at LPP to explore how the law can help ensure safe and beneficial outcomes from transformative artificial intelligence. Fellows will receive a stipend of $10,000.

You can find all details and apply here by February 16.

We look forward to receiving your application!

AI Insight Forum: Privacy and liability (Statement to the U.S. Senate)

On November 8, our Head of Strategy, Mackenzie Arnold, spoke before the U.S. Senate’s bipartisan AI Insight Forum on Privacy and Liability, convened by Senate Majority Leader Chuck Schumer. For our perspective on how Congress can meet the unique challenges that AI presents to liability law, read the written statement here.

In our statement, we note that:

  • Liability is a critical tool for addressing risks posed by AI systems today and in the future, compensating victims, correcting market inefficiencies, and driving safety innovation.
  • In some respects, existing law will function well by default.
  • In others, artificial intelligence will present unusual challenges to liability law that may lead to inconsistency and uncertainty, penalize the wrong actors, and leave victims uncompensated.
  • Courts, limited to the specific cases and facts at hand, may be slow to respond, creating a need for federal and state legislative action.

We then make several recommendations for how Congress could respond to these challenges:

  • First, to prevent and deter harm from malicious and criminal misuse of AI, we recommend that Congress (1) hold developers and some deployers strictly liable for attacks on critical infrastructure and harms that result from biological, chemical, radiological, or nuclear weapons, (2) create strong incentives to secure and protect model weights, and (3) create duties to test for capabilities that could be misused and implement safeguards that cannot be easily removed.
  • Second, to address unexpected capabilities and failure modes, we recommend that Congress (1) adjust obligations to account for whether a harm can be contained and remedied, (2) create a duty to test for emergent capabilities, including agentic behavior, (3) create duties to monitor, report, and respond to post-deployment harms, and (4) ensure that the law imputes liability to a responsible human or corporate actor where models act without human oversight.
  • Third, to ensure that the costs to society are borne by those responsible and most able to prevent harm, we recommend that Congress (1) establish joint and several liability for harms involving AI systems, (2) consider some limitations on the ability of powerful developers to avoid responsibility through indemnification, and (3) clarify that AI systems are products subject to products liability law.
  • Finally, to ensure that federal law does not obstruct the functioning of the liability system, we recommend that Congress (1) include a savings clause in any federal legislation to avoid preemption and (2) clarify that Section 230 does not apply to generative AI.

Annual Report 2022

We’re excited to share our Annual Report for 2022! The report outlines key successes, statistics, bottlenecks, and issues from our work in 2022. It also summarizes our priorities for 2023 and the methodology we use to update them.

Thank you for your continued support as we work toward our goals for 2023 and beyond!

New Legal Priorities Blog

We just launched the Legal Priorities Blog. It will feature shorter pieces by LPP team members and invited authors, as well as occasional organizational updates. Have a look at some of the pieces we have published so far.

Stay tuned for more updates!

New research

Have a look at some of our latest publications:

  • Ordinary meaning of existential risk investigates the ordinary meaning of legally relevant concepts in the existential risk literature, such as “existential risk,” “global catastrophic risk,” and “extreme risk.” It aims to provide crucial insights for those tasked with drafting and interpreting existential risk laws.
  • Value alignment for advanced artificial judicial intelligence (forthcoming in American Philosophical Quarterly) draws on AI safety literature to discuss how advanced artificial judicial intelligence (AAJI) can be aligned with judicial values using specification and assurance mechanisms. It outlines various research directions where scholars of law and philosophy might offer particular insight.
  • The intuitive appeal of legal protection for future generations (forthcoming in Essays on Longtermism) presents work from empirical studies indicating that the principles underlying the legal protection of future generations are widely endorsed by legal experts and laypeople, independent of demographic factors such as gender, culture, and politics.
  • Algorithmic black swans (forthcoming in Washington University Law Review) outlines five principles to guide the regulation of algorithmic black swans and to mitigate their harms.
  • Existential advocacy (forthcoming in Georgetown Journal of Legal Ethics) reports findings from a qualitative study of the legal advocates working to mitigate existential risk.

…and more!

Stay up to date

If you are interested in our work, subscribe to our newsletter! Roughly once a month, you will receive a short email with our latest publications, open positions, and upcoming events. You can read our past newsletters here.