CRIME PREVENTION

Posts tagged AI
AI and Policing: The Benefits and Challenges of Artificial Intelligence for Law Enforcement

By Security Insight

Artificial Intelligence (AI) technology can completely transform policing: from advanced criminal analytics that reveal trends in vast amounts of data, to biometrics that allow the prompt and unique identification of criminals. With the AI and policing report, produced through the Observatory function of the Europol Innovation Lab, we aim to provide insight into the present and future capabilities that AI offers, charting a course toward a more efficient, responsive, and effective law enforcement model. This report offers an in-depth exploration of the applications and implications of AI in the field of law enforcement, underpinned by the European Union's regulatory framework. It also examines concerns about data bias, fairness, and potential threats to privacy, accountability, human rights protection, and discrimination, which are particularly relevant in the context of the EU's Artificial Intelligence Act.

The Hague: Europol, 2024. 61p.

AI and Policing: The Benefits and Challenges of Artificial Intelligence for Law Enforcement

By The Europol Innovation Lab (© European Union Agency for Law Enforcement Cooperation, 2024)

This report aims to provide the law enforcement community with a comprehensive understanding of the various applications and uses of artificial intelligence (AI) in their daily operations. It seeks to serve as a textbook for internal security practitioners, offering guidance on how to responsibly and compliantly implement AI technologies. In addition to showcasing the potential benefits and innovative applications of AI, such as AI-driven data analytics, the report also aims to raise awareness about the potential pitfalls and ethical considerations of AI use in law enforcement. By addressing these challenges, the report endeavours to equip law enforcement professionals with the knowledge necessary to navigate the complexities of AI, ensuring its effective and ethical deployment in their work. The report focuses on large and complex data sets, open-source intelligence (OSINT) and natural language processing (NLP). It also delves into the realm of digital forensics, computer vision and biometrics, and touches on the potential of generative AI. The use of AI by law enforcement is increasingly scrutinised due to its ethical and societal dimensions. The report addresses concerns about data bias, fairness, and potential encroachments on privacy, accountability, human rights protection and discrimination. These concerns become particularly relevant in the context of the EU’s Artificial Intelligence Act (EU AI Act), an overview of which is detailed in this report, as well as its broader context. The report emphasises the significance of the forthcoming regulation, detailing its objectives, scope, and principal provisions. The Act’s implications for law enforcement agencies are also discussed, emphasising the balance between fostering innovation and ensuring ethical use beyond compliance. Central to the report is the assessment of how law enforcement can maintain a delicate balance between leveraging AI’s benefits and addressing its inherent limitations. Strategies for addressing bias and privacy concerns are elaborated, along with the pivotal role of accountability frameworks. The report highlights the importance of innovative regulatory environments. The concluding section forecasts the trajectory of AI in law enforcement, underscoring the potential technological advancements on the horizon. It also emphasises the need for public trust and acceptance, and the importance of collaboration and knowledge sharing. This comprehensive document serves as both a guide and a reflective tool for stakeholders vested in the confluence of AI and law enforcement within the European landscape.

Luxembourg: Publications Office of the European Union, 2024. 61p.

Using Artificial Intelligence and Quantum Computing to Enhance U.S. Department of Homeland Security Mission Capabilities

By Robles, Nicolas M.; Alhajjar, Elie; Geneson, Jessie; Moon, Alvin; Adams, Christopher Scott; Leuschner, Kristin; Steier, Joshua

From the webpage description: "Building on research on quantum machine learning, researchers investigated the effect of quantum-enhanced artificial intelligence within the context of the six U.S. Department of Homeland Security (DHS) missions. For each mission, the authors illustrate how quantum boosts could help DHS perform its computational duties more efficiently. They also explain some situations in which quantum computing does not provide benefits over classical computing. Last, they provide recommendations to DHS on how to leverage quantum computing. This paper should be of interest to policymakers, researchers, and others working on quantum computing or artificial intelligence."

Santa Monica, CA: RAND Corporation, 27 August 2024.

Hacking Minds and Machines: Foreign Interference in the Digital Era

By Kovalcikova, Nad'a; Filipova, Rumena Valentinova; Hogeveen, Bart; Karásková, Ivana; Pawlak, Patryk; Salvi, Andrea

From the document: "This 'Chaillot Paper' delves into the phenomenon of foreign interference and the risk it poses to democratic societies. It explores the interplay between information manipulation and disruptive cyber operations, revealing their role as complementary components within a broader strategy. Dedicated chapters examine how interference manifests across various sectors, including social, political, economic, digital and security domains, describing existing tools and evolving policy responses. Each case study follows a clear structure, presenting an incident, its effects and the implemented responses. The volume concludes by identifying convergences and divergences across the cases studied, and highlights foreign interference as a critical and growing threat to global security. It offers targeted recommendations on how the EU can significantly bolster its defences and resilience against this threat."

Paris: European Union Institute for Security Studies, August 2024. 67p.

AI and the Evolution of Biological National Security Risks: Capabilities, Thresholds, and Interventions

By Drexel, Bill; Withers, Caleb

From the document: "In 2020, COVID-19 brought the world to its knees, with nearly 29 million estimated deaths, acute social and political disruptions, and vast economic fallout. However, the event's impact could have been far worse if the virus had been more lethal, more transmissible, or both. For decades, experts have warned that humanity is entering an era of potential catastrophic pandemics that would make COVID-19 appear mild in comparison. History is well acquainted with such instances, not least the 1918 Spanish Flu, the Black Death, and the Plague of Justinian--each of which would have dwarfed COVID-19's deaths if scaled to today's populations. Equally concerning, many experts have sounded alarms of possible deliberate bioattacks in the years ahead. [...] This report aims to clearly assess AI's impact on the risks of biocatastrophe. It first considers the history and existing risk landscape in American biosecurity independent of AI disruptions. Drawing on a sister report, 'Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security,' this study then considers how AI is impacting biorisks across four dimensions of AI safety: new capabilities, technical challenges, integration into complex systems, and conditions of AI development. Building on this analysis, the report identifies areas of future capability development that may substantially alter the risks of large-scale biological catastrophes worthy of monitoring as the technology continues to evolve. Finally, the report recommends actionable steps for policymakers to address current and near-term risks of biocatastrophes."

Washington, DC: Center for a New American Security, 2024.

Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security

By Drexel, Bill; Withers, Caleb

From the document: "Since ChatGPT [Chat Generative Pre-Trained Transformer] was launched in November 2022, artificial intelligence (AI) systems have captured public imagination across the globe. ChatGPT's record-breaking speed of adoption--logging 100 million users in just two months--gave an unprecedented number of individuals direct, tangible experience with the capabilities of today's state-of-the-art AI systems. More than any other AI system to date, ChatGPT and subsequent competitor large language models (LLMs) have awakened societies to the promise of AI technologies to revolutionize industries, cultures, and political life. [...] This report aims to help policymakers understand catastrophic AI risks and their relevance to national security in three ways. First, it attempts to further clarify AI's catastrophic risks and distinguish them from other threats such as existential risks that have featured prominently in public discourse. Second, the report explains why catastrophic risks associated with AI development merit close attention from U.S. national security practitioners in the years ahead. Finally, it presents a framework of AI safety dimensions that contribute to catastrophic risks."

Washington, DC: Center for a New American Security, June 2024.

Artificial Intelligence Index Report 2024

By Maslej, Nestor; Fattorini, Loredana; Perrault, Raymond; Parli, Vanessa; Reuel, Anka; Brynjolfsson, Erik

From the document: "Welcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI's influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI's impact on science and medicine. The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI." See pages 10 and 11 for a full list of contributors.

Stanford, CA: Stanford University Human-Centered Artificial Intelligence, 2024. 502p.

Detecting AI Fingerprints: A Guide to Watermarking and Beyond

By Srinivasan, Siddarth

From the document: "Over the last year, generative AI [artificial intelligence] tools have made the jump from research prototype to commercial product. Generative AI models like OpenAI's ChatGPT [Chat Generative Pre-trained Transformer] [hyperlink] and Google's Gemini [hyperlink] can now generate realistic text and images that are often indistinguishable from human-authored content, with generative AI for audio [hyperlink] and video [hyperlink] not far behind. Given these advances, it's no longer surprising to see AI-generated images of public figures go viral [hyperlink] or AI-generated reviews and comments on digital platforms. As such, generative AI models are raising concerns about the credibility of digital content and the ease of producing harmful content going forward. [...] There are several ideas for how to tell whether a given piece of content--be it text, image, audio, or video--originates from a machine or a human. This report explores what makes for a good AI detection tool, how the oft-touted approach of 'watermarking' fares on various technical and policy-relevant criteria, governance of watermarking protocols, what policy objectives need to be met to promote watermark-based AI detection, and how watermarking stacks up against other suggested approaches like content provenance."

Washington, DC: Brookings Institution, 2024.
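
For readers unfamiliar with how statistical text watermarking works, the sketch below illustrates the detection side of one published scheme (the "green list" approach of Kirchenbauer et al., 2023); it is an illustrative aside, not a method taken from the Brookings report. A secret key pseudorandomly marks a fraction of candidate tokens as "green" at each step, a cooperating generator favours green tokens, and a detector applies a one-proportion z-test to the observed green-token count. The key, parameters, and word-level tokenization used here are all simplifying assumptions.

    import hashlib
    import math

    GAMMA = 0.5          # fraction of the vocabulary marked "green" at each step
    SECRET_KEY = "demo"  # shared by generator and detector (illustrative assumption)

    def is_green(prev_token: str, token: str) -> bool:
        # Hash-based pseudorandom partition of the vocabulary, seeded by the
        # secret key and the previous token.
        digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
        return digest[0] < 256 * GAMMA

    def watermark_z_score(tokens: list[str]) -> float:
        # One-proportion z-test: how far the observed green count deviates from
        # the GAMMA fraction expected in unwatermarked (human-written) text.
        t = len(tokens) - 1
        if t <= 0:
            return 0.0
        greens = sum(is_green(tokens[i - 1], tokens[i]) for i in range(1, len(tokens)))
        return (greens - GAMMA * t) / math.sqrt(GAMMA * (1 - GAMMA) * t)

    # A detector flags text whose z-score clears a preset threshold (e.g. z > 4),
    # which bounds the false-positive rate on unwatermarked text.
    sample = "the quick brown fox jumps over the lazy dog".split()
    print(f"z = {watermark_z_score(sample):.2f}")

A real deployment would operate on tokenizer IDs rather than whitespace-split words, bias sampling toward green tokens at generation time, and calibrate the detection threshold against an acceptable false-positive rate; the governance questions the report raises, such as who holds the key and robustness to paraphrasing, sit on top of this statistical core.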

Skating to Where the Puck is Going: Anticipating and Managing Risks from Frontier AI Systems

By Toner, Helen; Ji, Jessica; Bansemer, John; Lim, Lucy; Painter, Chris; Corley, Courtney D.; Whittlestone, Jess; Botvinick, Matt; Rodriguez, Mikel; Shankar Siva Kumar, Ram

From the document: "AI is experiencing a moment of profound change, capturing unprecedented public attention and becoming increasingly sophisticated. As AI becomes more powerful, and in some cases more general in its capabilities, it may become capable of posing novel risks in domains such as bioweapons development, cybersecurity, and beyond. Two features of the current AI landscape are especially challenging from a policy perspective: the rapid pace at which research is advancing, and the recent development of more general-purpose AI systems, which--unlike most AI systems, which are narrowly focused on a single task--can be adapted to many different use cases. These two elements add new layers of difficulty to existing AI ethics and safety problems. In July 2023, Georgetown University's Center for Security and Emerging Technology (CSET) and Google DeepMind hosted a virtual roundtable to discuss the implications and governance of the advancing AI research frontier, particularly with regard to general-purpose AI models. The objective of the roundtable was to help bridge the gap between the state of the current conversation and the reality of AI technology at the research frontier, which has potentially widespread implications for both national security and society at large."

Washington, DC: Georgetown University, Walsh School of Foreign Service, Center for Security and Emerging Technology, 2023. 23p.

Surveillance for Sale: The Underregulated Relationship between U.S. Data Brokers and Domestic and Foreign Government Agencies

By Caitlin Chin

Ten years ago, when whistleblower Edward Snowden revealed that U.S. government agencies had intercepted bulk telephone and internet communications from numerous individuals around the world, President Barack Obama acknowledged a long-standing yet unsettled dilemma: “You can’t have 100 percent security and also then have 100 percent privacy and zero inconvenience. There are trade-offs involved.” Snowden’s disclosures reignited robust debates over the appropriate balance between an individual’s right to privacy and the state’s interest in protecting economic and national security—in particular, where to place limitations on the U.S. government’s ability to compel access to signals intelligence held by private companies.

These debates continue today, but the internet landscape—and subsequently, the relationship between the U.S. government and private sector—has evolved substantially since 2013. U.S. government agencies still routinely mandate that private companies like Verizon and Google hand over customers’ personal information, and issue non-disclosure orders to prevent these companies from informing individuals about such access. But the volume and technical complexity of the data ecosystem have exploded over the past decade, spurred by the rising ubiquity of algorithmic profiling in the U.S. private sector. As a result, U.S. government agencies have increasingly turned to “voluntary” mechanisms to access data from private companies, such as purchasing smartphone geolocation history from third-party data brokers and deriving insights from publicly available social media posts, without the formal use of a warrant, subpoena, or court order.

In June 2023, the Office of the Director of National Intelligence (ODNI) declassified a report from January 2022—one of the first public efforts to examine the “large amount” of commercially available information that federal national security agencies purchase. In this report, ODNI recognizes that sensitive personal information “clearly provides intelligence value” but also increases the risk of harmful outcomes like blackmail or harassment. Despite the potential for abuse, the declassified report reveals that some intelligence community elements have not established proper privacy and civil liberties guardrails for commercially acquired information and that even ODNI lacks awareness of the full scope of data brokerage contracts across its 18 units. Critically, the report recognizes that modern advancements in data collection have outpaced existing legal safeguards: “Today’s CAI [commercially available information] is more revealing, available on more people (in bulk), less possible to avoid, and less well understood than traditional PAI [publicly available information].”

The ODNI report demonstrates how the traditional view of the privacy-security trade-off is becoming increasingly nuanced, especially as gaps in outdated federal law around data collection and transfers expand the number of actors and risk vectors involved. National Security Adviser Jake Sullivan recently noted that there are also geopolitical implications to consider: “Our strategic competitors see big data as a strategic asset.” When Congress banned the popular mobile app TikTok on government devices in the 2023 National Defense Authorization Act (NDAA), it cited fears that the Chinese Communist Party (CCP) could use the video-hosting app to spy on Americans.
However, the NDAA did not address how numerous other smartphone apps, beyond TikTok, share personal information with data brokers—which, in turn, could transfer it to adversarial entities. In 2013, over 250,000 website privacy policies acknowledged sharing data with other companies; since then, this number inevitably has increased. In a digitized society, unchecked data collection has become a vulnerability for U.S. national security—not merely, as some once viewed, a strength. The reinvigorated focus on TikTok’s data collection practices creates a certain paradox. While politicians have expressed concerns about Chinese government surveillance through mobile apps, U.S. government agencies have purchased access to smartphone geolocation data and social media images related to millions of Americans from data brokers without a warrant. The U.S. government has simultaneously treated TikTok as a national security risk and a handy source of information, reportedly issuing the app over 1,500 legal requests for data in 2021 alone. It is also important to note that national security is not the only value that can come into tension with information privacy, as unfettered data collection carries broader implications for civil rights, algorithmic fairness, free expression, and international commerce, affecting individuals both within and outside the United States.

Washington, DC: The Center for Strategic and International Studies (CSIS) 2023. 60p.

De-Risking Authoritarian AI: A Balanced Approach to Protecting Our Digital Ecosystems

By Gilding, Simeon

From the document: "It seems like an age since we worried about China's dominion over the world's 5G [fifth generation] networks. These days, the digital authoritarian threat feels decidedly steampunk--Russian missiles powered by washing-machine chips and stately Chinese surveillance balloons. And, meanwhile, our short attention spans are centred (ironically) on TikTok--an algorithmically addictive short video app owned by Chinese technology company ByteDance. More broadly, there are widespread concerns that 'large language model' (LLM) generative AI such as ChatGPT [Chat Generative Pre-Trained Transformer] will despoil our student youth, replace our jobs and outrun the regulatory capacity of the democracies. [...] This report is broken down into six sections. The first section highlights our dependency on AI-enabled products and services. The second examines China's efforts to export AI-enabled products and services and promote its model of digitally enabled authoritarianism, in competition with the US and the norms and values of democracy. This section also surveys PRC [People's Republic of China] laws compelling tech-sector cooperation and explains the nature of the threat, giving three examples of Chinese AI-enabled products of potential concern. It also explains why India is particularly vulnerable to the threat. In the third section, the report looks at the two key democratic responses to the challenge of AI: on the one hand, US efforts to counter both China's development of advanced AI technologies and the threat from Chinese technology already present in the US digital ecosystem; on the other, a draft EU Regulation to protect the fundamental rights of EU citizens from the pernicious effects of AI. The fourth section of the report proposes a framework for triaging and managing the risk of China's authoritarian AI-enabled products and services embedded in democratic digital ecosystems. The final section acknowledges complementary efforts to mitigate the PRC threat to democracies' digital ecosystems."

Scaling Trust on the Web

By Sugarman, Eli; Daniel, Michael; François, Camille; Chowdhury, A. K. M. Azam; Chowdhury, Rumman; Willner, Dave; Roth, Yoel

From the document: "Digital technologies continue to evolve at breakneck speed, unleashing a dizzying array of society-wide impacts in their wake. In the last quarter of 2022 alone: Meta, Accenture, and Microsoft announced a massive partnership to establish immersive spaces for enterprise environments; Elon Musk took over Twitter; the third-largest cryptocurrency exchange in the world collapsed overnight; the European Union's landmark Digital Services Act came into force; and generative artificial intelligence ('GAI') tools were released to the public for the first time. Within a fifty-day span, the outline of a new internet age came into sharper focus. In December 2022, the Atlantic Council's Digital Forensic Research Lab began to assemble a diverse array of experts who could generate an action-oriented agenda for future online spaces that can better protect users' rights, support innovation, and incorporate trust and safety principles--and do so quickly. [...] The task force specifically considered the emerging field of 'trust and safety' (T&S) and how it can be leveraged moving forward. That field provides deep insights into the complex dynamics that have underpinned building, maintaining, and growing online spaces to date. Moreover, the work of T&S practitioners, in concert with civil society and other counterparts, now rests at the heart of transformative new regulatory models that will help define how technology is developed in the twenty-first century. 'This executive report captures the task force's key findings and provides a short overview of the truths, trends, risks, and opportunities that task force members believe will influence the building of online spaces in the immediate, near, and medium term. It also summarizes the task force's recommendations for specific, actionable interventions that could help to overcome systems gaps the task force identified.'"

Washington, DC: Atlantic Council, Digital Forensic Research Lab, 2023. 150p.

Handbook of Digital Face Manipulation and Detection: From DeepFakes to Morphing Attacks

Edited by Christian Rathgeb, Ruben Tolosana, Ruben Vera-Rodriguez, and Christoph Busch

This open access book provides the first comprehensive collection of studies dealing with the hot topic of digital face manipulation, such as DeepFakes, face morphing, or reenactment. It combines the research fields of biometrics and media forensics, including contributions from academia and industry. Appealing to a broad readership, the introductory chapters provide a comprehensive overview of the topic and address readers wishing to gain a brief overview of the state of the art. Subsequent chapters, which delve deeper into various research challenges, are oriented towards advanced readers. Moreover, the book provides a good starting point for young researchers as well as a reference guide pointing at further literature. Hence, the primary readership comprises academic institutions and industry currently involved in digital face manipulation and detection. The book could easily be used as a recommended text for courses in image processing, machine learning, media forensics, biometrics, and the general security area.

Cham: Springer Nature, 2022. 481p.