The paper argues that policy makers should not use the term artificial intelligence (AI) to define the material scope of AI regulations. It develops this argument by proposing a number of requirements for legal definitions, surveying existing definitions of AI, and discussing the extent to which they meet the proposed requirements. It shows that existing definitions of AI fail to meet the most important of these requirements. The paper therefore suggests that policy makers take a risk-based approach instead: rather than using the term AI, they should focus on the specific risks they want to reduce. It shows that the requirements for legal definitions can be better met by targeting the main sources of the relevant risks: certain technical approaches (e.g. reinforcement learning), applications (e.g. facial recognition), and capabilities (e.g. the ability to physically interact with the environment). Finally, the paper discusses the extent to which this approach can also be applied to more advanced AI systems.