The Open Access Publisher and Free Library

CRIME

CRIME-VIOLENT & NON-VIOLENT-FINANCIAL-CYBER

Posts tagged Technology
States of Surveillance: Ethnographies of New Technologies in Policing and Justice

Edited by Maya Avis, Daniel Marciniak and Maria Sapignoli   

Recent discussions on big data surveillance and artificial intelligence in governance have opened up an opportunity to think about the role of technology in the production of the knowledge states use to govern. The contributions in this volume examine the socio-technical assemblages that underpin the surveillance carried out by criminal justice institutions – particularly the digital tools that form the engine room of modern state bureaucracies. Drawing on ethnographic research in contexts from across the globe, the contributions to this volume engage with technology’s promises of transformation, scrutinise established ways of thinking that become embedded through technologies, critically consider the dynamics that shape the political economy driving the expansion of security technologies, and examine how those at the margins navigate experiences of surveillance. The book is intended for an interdisciplinary academic audience interested in ethnographic approaches to the study of surveillance technologies in policing and justice. Concrete case studies provide students, practitioners, and activists from a broad range of backgrounds with nuanced entry points to the debate.

London; New York: Routledge, 2025. 201p.

It’s Everyone’s Problem: Mainstreaming Responses to Technology-Facilitated Gender Based Violence

By Nina Jankowicz, Isabella Gomez-O’Keefe, Lauren Hoffman and Andrea Vidal Becker

Technology-facilitated gender-based violence (TFGBV) is not an intractable problem. But it must no longer be the responsibility solely of women’s advocacy groups. Others – technology companies, governments, civil society organizations, law enforcement, businesses, schools – must step up and work in unison to combat TFGBV in order to reflect its mainstreamed effects on society.

This report, drawing on a case study of the online harassment of Australian eSafety Commissioner Julie Inman Grant, assesses the state of research on TFGBV as well as recent global policy progress made on this issue, and offers a number of practical solutions to make women and girls safer online.

The authors argue that TFGBV must be mainstreamed to be mitigated, centering women’s experiences in broader policy debates. Technology companies, governments, civic tech organizations, law enforcement, employers, schools, and others must embed work to combat TFGBV across their operations, reflecting its mainstreamed effects on society. To this end, the authors recommend a number of practical solutions to the specific and pressing issues that women and girls face online today. Addressing the urgent changes described here will not only make women and girls safer and ensure their voices are heard, but also improve safety and free expression for everyone who uses the internet, building more robust, representative democracies.

The recommendations are presented under the following themes:

  • Ensuring platform accountability and action

  • Urgently addressing deepfake image-based sexual abuse

  • Supporting victims and survivors of TFGBV

  • Deepening research and mainstreaming advocacy.

New York: Columbia University, Institute of Global Politics, 2024. 41p.

Cutting the Head off the Snake: Addressing the Role Technology Plays in the County Lines Model 

By Joe Caluori, Violette Gadenne, Elen Kirk, and Beth Mooney

The National Crime Agency published its first intelligence assessment of county lines in 2015. Ever since, there has been growing interest in county lines from the media, public policy, and the world of research. Crest Advisory has published research that has contributed to the body of evidence on, among other things, the socio-economic determinants of individual vulnerability to exploitation, shining light on ways to mitigate those risks. This project, however, takes a different approach, homing in on the specific role played by technology in county lines. In this report, ‘technology’ refers to electronic or digital devices or services, predominantly those used for personal communication. By including both devices and services in our definition, we incorporate physical hardware, such as mobile phones or smartwatches, and software, such as applications provided by social media platforms.

Technology plays an increasingly important role in our day-to-day lives. Data from the Office for National Statistics (ONS) show that the proportion of UK adults who use the internet daily increased from 35 percent in 2006 to 89 percent in 2020. The nature of this usage has also changed dramatically, with social media growing significantly in influence over the years. In 2021 TikTok, a social media platform designed for sharing short videos, overtook Google for the first time as the most popular site worldwide. Just as modern technologies are now an essential aspect of modern society, technology is intrinsic to the county lines model. The mobile phone, or the ‘line’ it facilitates, enables communications between those running the lines, those distributing the drugs, and those buying and using the drugs. Current approaches to disrupting county lines rely heavily on mobile communications technology (e.g. cell site analysis, or digital forensics gained from burner phones, personal smartphones, or other digital devices).
However, the role of technology as an enabler of child criminal exploitation (CCE) is both under-represented and poorly understood in published research and literature. The Government has announced an intention to “cut the head off the snake” of county lines. To understand what is required to do this, it is necessary to explore the dynamics of the county lines model, as well as examine its weaknesses. There is an acute need to better understand and monitor technological evolutions within county lines and analyze their implications for CCE. Only by understanding and responding to the role of technology can the Government and law enforcement leaders produce an effective national plan to ‘cut the head off the snake’ of county lines.

Recent public policy developments have put the role technology plays in enabling crime into sharp focus. Social media and online platforms have seen perhaps the most dramatic rise in interest. Even though the Online Safety Bill has been put on hold at the time of drafting this report, much ink has been spilled on its value, its potential impact on privacy, and what should be included in such legislation. High-profile cases, such as the events leading to the 6 January 2021 attack on the United States Capitol, have shown the potential harm that can be caused by online communication. More generally, as we become more and more dependent on technology for all aspects of our lives, it is crucial that law enforcement keeps pace with its development in relation to crime.

London: Crest Advisory, 2022. 37p.

Future Crimes: Inside The Digital Underground And The Battle For Our Connected World


Marc Goodman

In "Future Crimes: Inside The Digital Underground And The Battle For Our Connected World," author Marc Goodman delves into the dark and complex world of cybercrime. He explores the ways in which technology has transformed criminal activities, from hacking and identity theft to cyberterrorism and digital espionage. Goodman sheds light on the threats that the digital age poses to individuals, organizations, and governments, urging readers to become more vigilant and informed about cybersecurity. Through detailed research and gripping real-life stories, "Future Crimes" offers a compelling and sobering look at the vulnerabilities of our interconnected world.

New York: Anchor Books, a division of Penguin Random House, 2016. 601p.

Going Dark: The Inverse Relationship between Online and On-the-Ground Pre-offence Behaviours in Targeted Attackers

By Julia Kupper and Reid Meloy

This pilot study examines the correlation of online and on-the-ground behaviours of three lone-actor terrorists prior to their intended and planned attacks on soft targets in North America and Europe: the Pittsburgh synagogue shooter, the Buffalo supermarket shooter and the Bratislava bar shooter. The activities were examined with the definition of the proximal warning indicator energy burst from the Terrorist Radicalization Assessment Protocol (TRAP-18), originally defined as an acceleration in frequency or variety of preparatory behaviours related to the target. An extensive quantitative and qualitative assessment of primary and secondary sources was conducted, including raw data from different tech platforms (Gab, Discord and Twitter, now X) and open-source materials, such as criminal complaints, superseding indictments and court trial transcripts. Preliminary findings of this small sample suggest an inverse relationship between the online and offline behaviours across all three perpetrators. The average point of time between the decision to attack and the actual attack was five months, with an elevation of digital activities in the three months leading up to the incident, along with some indications of offline planning. In the week prior to the event, social media activity decreased, specifically on the day before the acts of violence, with two subjects going completely dark, while terrestrial preparations increased. On the actual day of the incident, all assailants accelerated their tactical on-the-ground actions and resurfaced in the online sphere to publish their final messages in the minutes or hours prior to the attack. It appears that the energy burst behaviours in the digital sphere and the offline actions can be measured in both frequency and variety. Operational implications of this negative correlation are suggested for intelligence analysts, counter-terrorism investigators and threat assessors.

London: The Global Network on Extremism and Technology (GNET), 2023. 36p.

Deepfakes on Trial: A Call To Expand the Trial Judge’s Gatekeeping Role To Protect Legal Proceedings from Technological Fakery

By Rebecca A. Delfino

Deepfakes—audiovisual recordings created using artificial intelligence (AI) technology to believably map one person’s movements and words onto another—are ubiquitous. They have permeated societal and civic spaces from entertainment, news, and social media to politics. And now deepfakes are invading the courts, threatening our justice system’s truth-seeking function.

Ways deepfakes could infect a court proceeding run the gamut and include parties fabricating evidence to win a civil action, government actors wrongfully securing criminal convictions, and lawyers purposely exploiting a lay jury’s suspicions about evidence. As deepfake technology improves and it becomes harder to tell what is real, juries may start questioning the authenticity of properly admitted evidence, which in turn may have a corrosive effect on the justice system. No evidentiary procedure explicitly governs the presentation of deepfake evidence in court. The existing legal standards governing the authentication of evidence are inadequate because they were developed before the advent of deepfake technology. As a result, they do not solve the urgent problem of how to determine when an audiovisual image is fake and when it is not.

Although legal scholarship and the popular media have addressed certain facets of deepfakes in the last several years, there has been no commentary on the procedural aspects of deepfake evidence in court. Absent from the discussion is who gets to decide whether a deepfake is authentic. This Article addresses the matters that prior academic scholarship on deepfakes obscures. It is the first to propose a new addition to the Federal Rules of Evidence reflecting a novel reallocation of fact-determining responsibilities from the jury to the judge, treating the question of deepfake authenticity as one for the court to decide as an expanded gatekeeping function under the Rules.
The challenges of deepfakes—problems of proof, the “deepfake defense,” and juror skepticism—can be best addressed by amending the Rules for authenticating digital audiovisual evidence, instructing the jury on its use of that evidence, and limiting counsel’s efforts to exploit the existence of deepfakes.

Hastings Law Journal, 2023. 57p.

Challenges Trial Judges Face When Authenticating Video Evidence in the Age of Deepfakes

By Taurus Myhand

The proliferation of deepfake videos has resulted in rapid improvements in the technology used to create them. Although the use of fake videos and images is not new, advances in artificial intelligence have made deepfakes easier to make and harder to detect. Basic human perception is no longer sufficient to detect deepfakes. Yet, under the current construction of the Federal Rules of Evidence, trial judges are expected to do just that. Trial judges face a daunting challenge when applying the current evidence authentication standards to video evidence in this new reality of widely available deepfake videos. This article examines the gatekeeping role trial judges must perform in light of the unique challenges posed by deepfake video evidence. This article further examines why the jury instruction approach and the rule change approaches proposed by other scholars are insufficient to combat the grave threat of false video evidence. This article concludes with a discussion of the affidavit of forensic analysis (AFA) approach, a robust response to the authentication challenges posed by deepfakes. The AFA approach preserves most of the current construction of the Federal Rules of Evidence while reviving the gatekeeping role of the trial judge in determining the admissibility of video evidence. The AFA will provide trial judges with the tools necessary to detect and exclude deepfake videos without leaving an everlasting taint on the juries that would have otherwise seen the falsified videos.

Widener Law Review, 2023. 19p.

Spaceless violence: Women’s experiences of technology-facilitated domestic violence in regional, rural and remote areas

By Bridget Harris & Delaine Woodlock

This project explored the impact of technology on victim-survivors of intimate partner violence in regional, rural or remote areas who are socially or geographically isolated. Specifically, it considered the ways that perpetrators use technology to abuse and stalk women, and how technology is used by victim-survivors to seek information, support and safety. Interviews and focus groups with 13 women were conducted in regional, rural and remote Victoria, New South Wales and Queensland. The findings showed that perpetrators used technology to control and intimidate women and their children. While this impacted women and children’s lives in significant ways, causing fear and isolation, the use of technology was often not viewed as a serious form of abuse by justice agents.

Canberra: Australian Institute of Criminology, 2022. 81p.