Report outlines security threats from malicious use of AI

The Universities of Cambridge and Oxford, the Future of Humanity Institute, OpenAI, the Electronic Frontier Foundation, and several other academic and civil society organisations released a report, ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’. The report outlines security threats that could arise from malicious use of artificial intelligence (AI) systems in three main areas: digital security (e.g. using AI to automate tasks involved in carrying out cyber-attacks), physical security (e.g. using AI to carry out attacks with drones or other physical systems), and political security (e.g. using AI for surveillance, persuasion, and deception). The report makes several high-level recommendations on how to better forecast, prevent, and mitigate such threats: strengthening collaboration between policymakers and researchers; encouraging AI researchers and engineers to take the dual-use nature of their work seriously and to let misuse-related considerations influence research priorities; identifying best practices in research areas with more mature methods for addressing dual-use concerns, such as computer security, and importing them into AI where applicable; and expanding the range of stakeholders and domain experts involved in discussions of these challenges.