The Open Access Publisher and Free Library

TERRORISM

TERRORISM-DOMESTIC-INTERNATIONAL-RADICALIZATION-WAR

Posts tagged AI
Strategic competition in the age of AI: Emerging risks and opportunities from military use of artificial intelligence

By James Black, Mattias Eken, Jacob Parakilas, Stuart Dee, Conlan Ellis, Kiran Suman-Chauhan, Ryan J. Bain, Harper Fine, Maria Chiara Aquilino, Melusine Lebret, et al.

Artificial intelligence (AI) holds the potential to usher in transformative changes across all aspects of society, economy and policy, including in the realm of defence and security. The United Kingdom (UK) aspires to be a leading player in the rollout of AI for civil and commercial applications, and in the responsible development of defence AI. This necessitates a clear and nuanced understanding of the emerging risks and opportunities associated with the military use of AI, as well as how the UK can best work with others to mitigate the risks and exploit the opportunities.

In March 2024, the Defence AI & Autonomy Unit (DAU) of the UK Ministry of Defence (MOD) and the Foreign, Commonwealth and Development Office (FCDO) jointly commissioned a short scoping study from RAND Europe. The goal was to provide an initial exploration of the ways in which military use of AI might generate risks and opportunities at the strategic level, conscious that much of the research to date has focused on the tactical level or on non-military topics (e.g. AI safety). Follow-on work will explore these issues in more detail to inform the UK's strategy for international engagement on them.

This technical report aims to establish a baseline understanding of the strategic risks and opportunities emerging from military use of AI. The accompanying summary report focuses on high-level findings for decision makers.

Key Findings

One of the most important findings of this study is the deep uncertainty surrounding AI's strategic impacts; an initial prioritisation of risks and opportunities is possible, but it should be iterated as the evidence base improves.

The RAND team identified priority issues demanding urgent action. Whether these manifest as risks or opportunities will depend on how quickly and effectively states adapt to intensifying competition over and through AI.

RAND. 2024.

Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security

By DREXEL, BILL; WITHERS, CALEB

From the document: "Since ChatGPT [Chat Generative Pre-Trained Transformer] was launched in November 2022, artificial intelligence (AI) systems have captured public imagination across the globe. ChatGPT's record-breaking speed of adoption--logging 100 million users in just two months--gave an unprecedented number of individuals direct, tangible experience with the capabilities of today's state-of-the-art AI systems. More than any other AI system to date, ChatGPT and subsequent competitor large language models (LLMs) have awakened societies to the promise of AI technologies to revolutionize industries, cultures, and political life. [...] This report aims to help policymakers understand catastrophic AI risks and their relevance to national security in three ways. First, it attempts to further clarify AI's catastrophic risks and distinguish them from other threats such as existential risks that have featured prominently in public discourse. Second, the report explains why catastrophic risks associated with AI development merit close attention from U.S. national security practitioners in the years ahead. Finally, it presents a framework of AI safety dimensions that contribute to catastrophic risks."

CENTER FOR A NEW AMERICAN SECURITY. 2024. 42p.

Terrorism, Extremism, Disinformation and Artificial Intelligence: A Primer for Policy Practitioners

By GANDHI, MILAN

From the document: "Focussing on current and emerging issues, this policy briefing paper ('Paper') surveys the ways in which technologies under the umbrella of artificial intelligence ('AI') may interact with democracy and, specifically, extremism, mis/disinformation, and illegal and 'legal but harmful' content online. The Paper considers examples of how AI technologies can be used to mislead and harm citizens and how AI technologies can be used to detect and counter the same or associated harms, exploring risks to democracy and human rights emerging across the spectrum. [...] Given the immense scope and potential impacts of AI on different facets of democracy and human rights, the Paper does not consider every relevant or potential AI use case, nor the long-term horizon. For example, AI-powered kinetic weapons and cyber-attacks are not discussed. Moreover, the Paper is limited in examining questions at the intersection of AI and economics and AI and geopolitics, though both intersections have important implications for democracy in the digital age. Finally, the Paper only briefly discusses how AI and outputs such as deepfakes may exacerbate broader societal concerns relating to political trust and polarisation. Although there is a likelihood that aspects of the Paper will be out-of-date the moment it is published given the speed at which new issues, rules and innovations are emerging, the Paper is intended to empower policymakers, especially those working on mis/disinformation, hate, extremism and terrorism specifically, as well as security, democracy and human rights more broadly. It provides explanations of core concerns related to AI and links them to practical examples and possible public policy solutions."

INSTITUTE FOR STRATEGIC DIALOGUE. 2024.

Using Artificial Intelligence and Machine Learning to Identify Terrorist Content Online

By MACDONALD, STUART KEITH; MATTHEIS, ASHLEY A.; WELLS, DAVID

From the document: "Online terrorist propaganda has been an important policy concern for at least the past decade. [...] [T]he EU Commission launched a call for proposals for projects aimed at supporting small companies in implementing the Regulation. Three projects were funded under this call. This report forms part of one of these projects, which is entitled Tech Against Terrorism Europe. It is important to note at the outset that the focus of this report is the use of AI and machine learning to identify terrorist content online using content-based approaches. Accordingly, the following are outside the scope of the report: [1] The moderation of so-called borderline content, i.e., content that does not violate a platform's Terms of Service but which is nevertheless regarded as potentially harmful. [2] The identification of individuals on a radicalisation trajectory, which is a different - and even more difficult - task; and, [3] The use of behaviour-based cues, such as abnormal posting volume and use of unrelated, trending hashtags, to identify accounts that are sharing terrorist content. This includes approaches based on recidivism. The report begins, in section 2, by explaining the terms AI, machine learning and terrorist content online. Readers that are already familiar with these concepts may wish to move straight to section 3, which discusses the two main content-based approaches to the automated identification of terrorist content online: matching-based approaches and classification-based ones. Having explained the limitations of each approach, section 4 details two ways in which it is necessary to supplement automated tools. Section 5 then addresses issues of resource, before the report concludes with three recommendations."

TECH AGAINST TERRORISM; TECH AGAINST TERRORISM EUROPE. 2023. 32p.
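
To make the report's distinction between matching-based and classification-based identification concrete, the following is a minimal Python sketch of the two content-based approaches it surveys. Everything in it is a hypothetical illustration: the hash set, keyword weights and decision threshold are invented stand-ins, not drawn from the report or from any deployed moderation system.

    import hashlib

    # Matching-based approach (sketch): known items are represented by
    # digests, as in shared hash databases. Exact cryptographic hashing
    # only catches byte-identical re-uploads; real systems typically use
    # perceptual hashes to tolerate small edits.
    KNOWN_HASHES = {
        # Hypothetical digest of a previously verified item.
        hashlib.sha256(b"previously verified item").hexdigest(),
    }

    def matches_known_content(data: bytes) -> bool:
        """Return True if the item's digest appears in the hash database."""
        return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

    # Classification-based approach (toy stand-in): a real system would
    # use a trained text classifier; this keyword score merely illustrates
    # thresholded scoring and why such classifiers yield false positives
    # and false negatives.
    FLAGGED_TERMS = {"attack": 0.4, "martyr": 0.3, "join us": 0.5}  # hypothetical weights
    THRESHOLD = 0.6  # hypothetical decision threshold

    def classify_text(text: str) -> tuple[float, bool]:
        """Return (score, flagged) for a piece of text."""
        lowered = text.lower()
        score = sum(w for term, w in FLAGGED_TERMS.items() if term in lowered)
        return score, score >= THRESHOLD

    if __name__ == "__main__":
        print(matches_known_content(b"previously verified item"))  # True: exact re-upload
        print(matches_known_content(b"slightly edited item"))      # False: hash no longer matches
        print(classify_text("join us in the attack"))              # (0.9, True)

The sketch also reflects the limitations the report discusses: the matching branch only catches exact or near-duplicate re-uploads of already-verified content, while the classifier branch trades recall against false positives through its threshold, which is why the report argues automated tools must be supplemented.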