The Open Access Publisher and Free Library

CRIME

CRIME-VIOLENT & NON-VIOLENT-FINANCIAL-CYBER

Posts tagged Artificial Intelligence
THE IMPLICATIONS OF ARTIFICIAL INTELLIGENCE IN CYBERSECURITY: SHIFTING THE OFFENSE-DEFENSE BALANCE

By Jennifer Tang, Tiffany Saade, and Steve Kelly

Cutting-edge advances in artificial intelligence (AI) are taking the world by storm, driven by a massive surge of investment, countless new start-ups, and regular technological breakthroughs. AI presents key opportunities within cybersecurity, but concerns remain regarding the ways malicious actors might also use the technology. In this study, the Institute for Security and Technology (IST) seeks to paint a comprehensive picture of the state of play—cutting through vagaries and product marketing hype, providing our outlook for the near future, and most importantly, suggesting ways in which the case for optimism can be realized.

The report concludes that in the near term, AI offers a significant advantage to cyber defenders, particularly those who can capitalize on their "home field" advantage and first-mover status. However, sophisticated threat actors are also leveraging AI to enhance their capabilities, making continued investment and innovation in AI-enabled cyber defense crucial. At the time of writing, AI is not yet unlocking novel capabilities or outcomes, but instead represents a significant leap in speed, scale, and completeness.

This work is the foundation of a broader IST project to better understand which areas of cybersecurity require the greatest collective focus and alignment—for example, greater opportunities for accelerating threat intelligence collection and response, democratized tools for automating defenses, and/or developing the means for scaling security across disparate platforms—and to design a set of actionable technical and policy recommendations in pursuit of a secure, sustainable digital ecosystem.

The Institute for Security and Technology, October 2024.

States of Surveillance: Ethnographies of New Technologies in Policing and Justice

Edited by Maya Avis, Daniel Marciniak, and Maria Sapignoli

Recent discussions on big data surveillance and artificial intelligence in governance have opened up an opportunity to think about the role of technology in the production of the knowledge states use to govern. The contributions in this volume examine the socio-technical assemblages that underpin the surveillance carried out by criminal justice institutions – particularly the digital tools that form the engine room of modern state bureaucracies. Drawing on ethnographic research in contexts from across the globe, the contributions to this volume engage with technology’s promises of transformation, scrutinise established ways of thinking that become embedded through technologies, critically consider the dynamics that shape the political economy driving the expansion of security technologies, and examine how those at the margins navigate experiences of surveillance. The book is intended for an interdisciplinary academic audience interested in ethnographic approaches to the study of surveillance technologies in policing and justice. Concrete case studies provide students, practitioners, and activists from a broad range of backgrounds with nuanced entry points to the debate.

London; New York: Routledge, 2025. 201p.

The Weaponisation of Deepfakes: Digital Deception by the Far-Right

By Ella Busch and Jacob Ware    

In an ever-evolving technological landscape, digital disinformation is on the rise, as are its political consequences. In this policy brief, we explore the creation and distribution of synthetic media by malign actors, specifically a form of artificial intelligence/machine learning (AI/ML) known as the deepfake. Individuals looking to incite political violence are increasingly turning to deepfakes—specifically deepfake video content—in order to create unrest, undermine trust in democratic institutions and authority figures, and elevate polarised political agendas. We present a new subset of individuals who may look to leverage deepfake technologies to pursue such goals: far-right extremist (FRE) groups. Despite their diverse ideologies and worldviews, we expect FREs to similarly leverage deepfake technologies to undermine trust in the American government, its leaders, and various ideological ‘out-groups.’ We also expect FREs to deploy deepfakes for the purpose of creating compelling radicalising content that serves to recruit new members to their causes. Political leaders should remain wary of the FRE deepfake threat and look to codify federal legislation banning and prosecuting the use of harmful synthetic media. On the local level, we encourage the implementation of “deepfake literacy” programs as part of a wider countering violent extremism (CVE) strategy geared towards at-risk communities. Finally, and more controversially, we explore the prospect of using deepfakes themselves in order to “call off the dogs” and undermine the conditions allowing extremist groups to thrive.

The Hague: International Centre for Counter-Terrorism (ICCT), 2023.

Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe

By Sarah R. Carter, Nicole E. Wheeler, Sabrina Chwalek, Christopher R. Isaac, and Jaime Yassif

From the document: "Rapid scientific and technological advances are fueling a 21st-century biotechnology revolution. Accelerating developments in the life sciences and in technologies such as artificial intelligence (AI), automation, and robotics are enhancing scientists' abilities to engineer living systems for a broad range of purposes. These groundbreaking advances are critical to building a more productive, sustainable, and healthy future for humans, animals, and the environment. Significant advances in AI in recent years offer tremendous benefits for modern bioscience and bioengineering by supporting the rapid development of vaccines and therapeutics, enabling the development of new materials, fostering economic development, and helping fight climate change. However, AI-bio capabilities—AI tools and technologies that enable the engineering of living systems—also could be accidentally or deliberately misused to cause significant harm, with the potential to cause a global biological catastrophe. [...] To address the pressing need to govern AI-bio capabilities, this report explores three key questions: [1] What are current and anticipated AI capabilities for engineering living systems? [2] What are the biosecurity implications of these developments? [3] What are the most promising options for governing this important technology that will effectively guard against misuse while enabling beneficial applications? To answer these questions, this report presents key findings informed by interviews with more than 30 individuals with expertise in AI, biosecurity, bioscience research, biotechnology, and governance of emerging technologies."

Nuclear Threat Initiative, 2023. 88p.

Principles for Reducing AI Cyber Risk in Critical Infrastructure: A Prioritization Approach

By Christopher L. Sledjeski

From the document: "Artificial Intelligence (AI) brings many benefits, but disruption of AI could, in the future, generate impacts on scales and in ways not previously imagined. These impacts, at a societal level and in the context of critical infrastructure, include disruptions to National Critical Functions. A prioritized risk-based approach is essential in any attempt to apply cybersecurity requirements to AI used in critical infrastructure functions. The topics of critical infrastructure and AI are simply too vast to meaningfully address otherwise. The National Institute of Standards and Technology (NIST) defines cyber secure AI systems as those that can 'maintain confidentiality, integrity and availability through protection mechanisms that prevent unauthorized access and use.' Cybersecurity incidents that impact AI in critical infrastructure could impact the availability, reliability, and safety of these vital services. [...] This paper was prompted by questions presented to MITRE about the extent to which the original NIST Cybersecurity Risk Framework, and the efforts that accompanied its release, enabled a regulatory approach that could serve as a model for AI regulation in critical infrastructure. The NIST Cybersecurity Risk Framework was created a decade ago as a requirement of Executive Order (EO) 13636. When this framework was paired with the list of cyber-dependent entities identified under the EO, it provided a voluntary approach for how Sector Risk Management Agencies (SRMAs) prioritize and enhance the cybersecurity of their respective sectors."

MITRE Corporation, 2023. 18p.