Criminal Violence - Property Crime: Fraud - Money Laundering - Theft and Robbery - Homicide - Rape - Extortion - Arson
Open Access Publisher and Free Library

CRIME

CRIME-VIOLENT & NON-VIOLENT-FINANCIAL-CYBER

Online Safety and the Regulation of Gaming Platforms and Services

By Ellen Jacobs, Ella Meyer, Helena Schwertheim, Melanie Döring and Terra Rolfe

The global gaming industry is now worth more than the film and music industries combined, with an estimated 3.2 billion gamers worldwide. As such, greater attention has been paid in recent years to the online safety risks associated with gaming. This includes both gaming-specific companies and the wider ecosystem of gaming-adjacent social media platforms, particularly in the context of online hate and misogyny, extremism and radicalisation, and child safety issues (such as grooming and cyberbullying). Significant progress has been made in understanding how online harms are perpetrated in online gaming spaces. Recognising these risks, policymakers have crafted new digital and online safety regulations, such as the EU’s Digital Services Act (DSA) and the UK’s Online Safety Act (UK OSA), which increasingly apply to gaming or gaming-adjacent companies. However, such regulations are still in the early stages of implementation and enforcement, and the extent to which gaming companies or services are within scope can be unclear. This policy brief summarises the current evidence on the nature and extent of these risks and highlights remaining gaps and challenges in building out this evidence base. It also provides an overview of existing government approaches to enhancing online safety in gaming, including both regulatory and non-regulatory efforts, as well as industry and civil society initiatives. Special attention is given to existing regulatory frameworks in the EU (DSA, Terrorist Content Online Regulation), the UK (UK OSA) and Australia (Online Safety Act), to understand how, and how far, they may provide higher standards of online safety to gamers. Finally, the brief explores both existing and proposed mitigation strategies to enhance online safety in gaming. Throughout, the brief provides recommendations for governments, regulators, researchers and industry. The Digital Policy Lab (DPL) supports collaboration through a multi-stakeholder approach to develop a better understanding of the risks posed in online gaming spaces and how best to mitigate them.

Amman, Berlin, London, Paris, Washington DC: Institute for Strategic Dialogue, 2024. 47p.

Social media: the good, the bad, and the ugly

By Joint Select Committee on Social Media and Australian Society

This report focuses on the impacts of social media on Australian society. It examines the influence of social media on users' health and wellbeing, particularly on vulnerable cohorts of people, but also how social media can provide users with positive connection, a sense of community, a place for expression and instant access to information and entertainment.

The Committee heard that balancing these conflicting realities is a wicked problem.

The report addresses both the need for immediate action, and the need for a sustained digital reform agenda. It supports protecting Australians through a statutory duty of care by digital platforms, education support and digital competency, greater protections of personal information, independent research, data gathering and reporting, and giving users greater control over what they see on social media.

This report puts Big Tech on notice—social media companies are not immune from the need to have a social licence to operate.

Recommendations for the Australian Government

  1. Consider options for greater enforceability of Australian laws for social media platforms.

  2. Introduce a single, overarching statutory duty of care on digital platforms for the wellbeing of their Australian users.

  3. Introduce legislative provisions to enable effective, mandatory data access for independent researchers and public interest organisations, and an auditing process by appropriate regulators.

  4. As part of its regulatory framework, ensure that social media platforms introduce measures that allow users greater control over the user-generated content and paid content they see, including the ability to alter, reset, or turn off their personal algorithms and recommender systems.

  5. Prioritise proposals from the Privacy Act review relating to greater protections for the personal information of Australians and children.

  6. Ensure that any features of the Australian Government's regulatory framework that will affect young people are codesigned with young people.

  7. Support research and data gathering regarding the impact of social media on health and wellbeing to build on the evidence base for policy development.

  8. One of the roles of the previously recommended Digital Affairs Ministry should be to develop, coordinate and manage funding allocated for education to enhance digital competency and online safety skills.

  9. Report to both Houses of Parliament the results of its age assurance trial.

  10. Require industry to incorporate safety-by-design principles in all current and future platform technology.

  11. Introduce legislative provisions requiring social media platforms to have a transparent complaints mechanism.

  12. Ensure adequate resourcing for the Office of the eSafety Commissioner to discharge its evolving functions.

Parliament of Australia, 18 November 2024

Crypto Tax Evasion

By Tom G. Meling, Magne Mogstad, and Arnstein Vestre

We quantify the extent of crypto tax noncompliance and evasion, and assess the efficacy of alternative tax enforcement interventions. The context of the study is Norway. This context allows us to address key measurement challenges by combining de-anonymized crypto trading data with individual tax returns, survey data, and information from tax enforcement interventions. We find that crypto tax noncompliance is pervasive, even among investors trading on exchanges that share identifiable trading data with tax authorities. However, since most crypto investors owe little in crypto-related taxes, enforcement strategies need to be well-targeted or cheap for benefits to outweigh costs.

Chicago: University of Chicago, The Becker Friedman Institute for Economics (BFI), 2024. 69p.

Crossing the Deepfake Rubicon: The Maturing Synthetic Media Threat Landscape

By Di Cooke, Abby Edwards, Alexis Day, Devi Nair, Sophia Barkoff, and Katie Kelly

THE ISSUE

  • In recent years, threat actors have increasingly used synthetic media—digital content produced or manipulated by artificial intelligence (AI)—to enhance their deceptive activities, harming individuals and organizations worldwide with growing frequency.

  • In addition, the weaponization of synthetic media has also begun to undermine people’s trust in information integrity more widely, posing concerning implications for the stability and resilience of the U.S.’s information environment.

  • At present, an individual’s ability to recognize AI-generated content remains the primary defense against people falling prey to deceptively presented synthetic media.

  • However, a recent experimental study by CSIS found that people are no longer able to reliably distinguish between authentic and AI-generated images, audio, and video sourced from publicly available tools.

  • That human detection has ceased to be a reliable method for identifying synthetic media only heightens the dangers posed by the technology’s misuse, underscoring the pressing need to implement alternative countermeasures to address this emerging threat (one such countermeasure is sketched below).
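One frequently discussed alternative countermeasure is cryptographic content provenance: hash and sign media at the point of capture, so that any later manipulation, AI-driven or otherwise, breaks verification. The following is a minimal sketch of that idea using Ed25519 signatures from the `cryptography` package; the `sign_media` and `is_authentic` helpers are hypothetical names, and the workflow is a simplification rather than a full provenance standard such as C2PA.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # would live in the capture device
verify_key = signing_key.public_key()        # distributed to anyone verifying

def sign_media(media_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of a media file at capture time."""
    return signing_key.sign(hashlib.sha256(media_bytes).digest())

def is_authentic(media_bytes: bytes, signature: bytes) -> bool:
    """Check that the media still matches what the trusted device signed."""
    try:
        verify_key.verify(signature, hashlib.sha256(media_bytes).digest())
        return True
    except InvalidSignature:
        return False

original = b"...raw image bytes..."
sig = sign_media(original)
print(is_authentic(original, sig))             # True: untouched since capture
print(is_authentic(original + b"edit", sig))   # False: provenance broken
```

Verification here establishes only that the bytes are unchanged since signing; it says nothing about whether the original capture was itself authentic, which is why provenance complements rather than replaces detection research.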

CSIS, 2024. 11p.

Cryptographic security: Critical to Europe's digital sovereignty

By Stefano De Luca with Tristan Marcelin; Graphics: Samy Chahr

By the 2030s, quantum computers might compromise traditional cryptography, putting digital infrastructure at high risk in the European Union (EU) and around the world. Specifically, it is expected that quantum computers' unique capabilities will allow them to solve complex mathematical problems, such as breaking the traditional cryptographic systems used universally. The confidentiality, integrity and authenticity of sensitive data – including health, financial, security and defence information – will be exposed to threats from any actor possessing a sufficiently powerful quantum computer. There is a pressing need for the EU to start preparing its digital assets to face this risk. Post-quantum cryptography (which uses classical computer properties) and quantum cryptography (which uses quantum mechanical properties) are the two types of critical technology able to protect digital infrastructure from quantum computer attacks. Robust post-quantum cryptography algorithms have been identified, but swift and efficient implementation is crucial before malicious actors exploit the power of quantum computers. Experts stress the need for quantum preparedness to be put in place now, with some of them even warning of a 'quantum cybersecurity Armageddon'. Several countries are adopting strategies to address post-quantum cryptography. The EU is working with Member States and the United States to speed up the transition to post-quantum cryptography, and is also exploring long-term quantum cryptography initiatives.
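To make the stakes concrete, recall that RSA, one of the traditional systems at risk, depends entirely on the hardness of factoring the public modulus; Shor's algorithm on a sufficiently powerful quantum computer would make that factoring tractable. The toy sketch below, in plain Python with deliberately tiny primes, shows that anyone who can factor the modulus recovers the private key outright; the brute-force `trial_factor` stands in for what a quantum attacker could do to real key sizes.

```python
from math import isqrt

def trial_factor(n: int) -> int:
    """Brute-force a prime factor of n. Classical cost explodes with key
    size; Shor's algorithm on a quantum computer would not face that wall."""
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return p
    raise ValueError("n is prime")

# Toy RSA key (real moduli are 2048+ bits, far beyond trial division)
p, q = 1009, 1013
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

msg = 4242
cipher = pow(msg, e, n)             # encrypt with the public key (n, e)

# An attacker who factors n rebuilds the private key and decrypts:
p_found = trial_factor(n)
q_found = n // p_found
d_found = pow(e, -1, (p_found - 1) * (q_found - 1))
assert pow(cipher, d_found, n) == msg   # plaintext recovered
```

Post-quantum schemes replace the factoring (or discrete logarithm) assumption with problems, such as those based on structured lattices, that are believed to remain hard even for quantum computers.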

Brussels: EPRS | European Parliamentary Research Service, 2024. 8p.

“I’ve seen horrible things”: Children’s experiences of the online world

By The Children's Commissioner for England

“I think the Government should do more about protecting children on the internet. Of course, it is very hard but just educating about the dangers of the internet is not enough” – Girl, 17. A year has passed since the Online Safety Act 2023 became law. This Act, a landmark piece of legislation, was welcomed by the Children’s Commissioner, following her extensive campaigning, as an important step towards a new era of the online world: one that presented an opportunity for children to learn, play and develop there safely. One year on, the legislation has yet to be implemented, and important decisions about what the regulations will look like remain unresolved. This report illustrates the extent to which children are still experiencing harm online. It sets out the Children’s Commissioner’s expectations for the future of online safety policymaking, and calls for bolder steps towards robustly protecting children online. The report draws on the responses of 253,000 children and adults to The Big Ambition: a large-scale consultation of children in England carried out between September 2023 and January 2024. The survey asked a broad set of questions about their lives, and in response children shared their views on what needs to change to make their lives better. One of the areas they wanted action on was online safety. Children told the Children’s Commissioner’s Office that some children are more vulnerable to online harms than others, and that a variety of content and non-content factors cause them harm online. They also shared their views on who should take responsibility for making the online world safer for them. This report sets out what they said.

London: The Children's Commissioner for England, 2024. 80p.

Through the Chat Window and Into the Real World: Preparing for AI Agents

By: Helen Toner, John Bansemer, Kyle Crichton, Matthew Burtell, Thomas Woodside, Anat Lior, Andrew Lohn, Ashwin Acharya, Beba Cibralic, Chris Painter, Cullen O’Keefe, Iason Gabriel, Kathleen Fisher, Ketan Ramakrishnan, Krystal Jackson, Noam Kolt, Rebecca Crootof, and Samrat Chatterjee

The concept of artificial intelligence systems that actively pursue goals—known as AI “agents”—is not new. But over the last year or two, progress in large language models (LLMs) has sparked a wave of excitement among AI developers about the possibility of creating sophisticated, general-purpose AI agents in the near future. Startups and major technology companies have announced their intent to build and sell AI agents that can act as personal assistants, virtual employees, software engineers, and more. While current systems remain somewhat rudimentary, they are improving quickly. Widespread deployment of highly capable AI agents could have transformative effects on society and the economy. This workshop report describes findings from a recent CSET-led workshop on the policy implications of increasingly “agentic” AI systems.

In the absence of a consensus definition of an “agent,” we describe four characteristics of increasingly agentic AI systems: they pursue more complex goals in more complex environments, exhibiting independent planning and adaptation to directly take actions in virtual or real-world environments. These characteristics help to establish how, for example, a cyber-offense agent that could autonomously carry out a cyber intrusion would be more agentic than a chatbot advising a human hacker. A “CEO-AI” that could run a company without human intervention would likewise be more agentic than an AI acting as a personal assistant.

At present, general-purpose LLM-based agents are the subject of significant interest among AI developers and investors. These agents consist of an advanced LLM (or multimodal model) that uses “scaffolding” software to interface with external environments and tools such as a browser or code interpreter. Proof-of-concept products that can, for example, write code, order food deliveries, and help manage customer relationships are already on the market, and many relevant players believe that the coming years will see rapid progress.
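As a rough illustration of that scaffolding pattern, the sketch below shows the core loop such software typically implements: call the model, parse its reply as either a tool request or a final answer, feed tool output back, and repeat. Everything here is a stub; `call_llm`, the tool names, and the message format are hypothetical stand-ins for a real model API and real integrations.

```python
import json
from typing import Callable

# Illustrative tool registry: each "tool" is an ordinary function that the
# scaffolding exposes to the model (browser, code interpreter, and so on).
TOOLS: dict[str, Callable[[str], str]] = {
    "search_web": lambda query: f"(stub) top results for {query!r}",
    "run_python": lambda source: f"(stub) executed: {source!r}",
}

def call_llm(messages: list[dict]) -> dict:
    """Stand-in for a real model API call. A deployed agent would send the
    message history to an LLM and parse either a tool call, e.g.
    {"type": "tool", "name": ..., "arguments": ...}, or a final answer."""
    return {"type": "final", "content": "(stub) model answer"}

def agent_loop(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply["type"] == "final":      # the model decided it is done
            return reply["content"]
        result = TOOLS[reply["name"]](reply["arguments"])   # run the tool
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "step budget exhausted"        # hard cap limits runaway loops

print(agent_loop("Summarize recent AI agent policy developments"))
```

The `max_steps` cap is one small example of the guardrail questions discussed below: how much autonomy the loop is granted is a design decision, not an inherent property of the model.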

In addition to the many potential benefits that AI agents will likely bring, they may also exacerbate a range of existing AI-related issues and even create new challenges. The ability of agents to pursue complex goals without human intervention could lead to more serious accidents; facilitate misuse by scammers, cybercriminals, and others; and create new challenges in allocating responsibility when harms materialize. Existing data governance and privacy issues may be heightened by developers’ interest in using data to create agents that can be tailored to a specific user or context. If highly capable agents reach widespread use, users may become vulnerable to skill fade and dependency, agents may collude with one another in undesirable ways, and significant labor impacts could materialize as an increasing range of currently human-performed tasks become automated.

To manage these challenges, our workshop participants discussed three categories of interventions:

  • Measurement and evaluation: At present, our ability to assess the capabilities and real-world impacts of AI agents is very limited. Developing better methodologies to track improvements in the capabilities of AI agents themselves, and to collect ecological data about their impacts on the world, would make it more feasible to anticipate and adapt to future progress.

  • Technical guardrails: Governance objectives such as visibility, control, trustworthiness, security, and privacy can be supported by the thoughtful design of AI agents and the technical ecosystems around them; a minimal sketch of one such guardrail follows this list. However, there may be trade-offs between different objectives. For example, many mechanisms that would promote visibility into and control over the operations of AI agents may be in tension with design choices that would prioritize privacy and security.

  • Legal guardrails: Many existing areas of law—including agency law, corporate law, contract law, criminal law, tort law, property law, and insurance law—will play a role in how the impacts of AI agents are managed. Areas where contention may arise when attempting to apply existing legal doctrines include questions about the “state of mind” of AI agents, the legal personhood of AI agents, how industry standards could be used to evaluate negligence, and how existing principal-agent frameworks should apply in situations involving AI agents.
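As a concrete reading of the technical-guardrails point, the sketch below wires two of the named objectives into a single dispatch wrapper: visibility, via an audit log of every requested action, and control, via human approval gating high-risk actions. The risk categories, tool names, and `approve` callback are hypothetical; a real agent framework would enforce this at its tool-execution layer.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.audit")

# Hypothetical policy: which actions an agent may take freely and which
# must be escalated to a human before they run.
LOW_RISK = {"read_file", "search_web"}
HIGH_RISK = {"send_email", "execute_payment", "delete_data"}

def guarded_execute(action: str, payload: str,
                    approve: Callable[[str, str], bool]) -> str:
    """Visibility: every requested action is logged before anything runs.
    Control: high-risk actions proceed only with explicit human approval."""
    log.info("agent requested %s(%r)", action, payload)
    if action in HIGH_RISK and not approve(action, payload):
        log.warning("blocked high-risk action: %s", action)
        return "action blocked by guardrail"
    # ... dispatch to the real tool implementation here ...
    return f"(stub) performed {action}"

# Usage: this demo auto-denies anything high-risk.
deny_all = lambda action, payload: False
print(guarded_execute("search_web", "quantum readiness", approve=deny_all))
print(guarded_execute("send_email", "to: all-staff", approve=deny_all))
```

Even this toy version surfaces the trade-off noted above: the audit log that provides visibility is itself a record of potentially sensitive user activity, in tension with privacy objectives.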

While it is far from clear how AI agents will develop, the level of interest and investment in this technology from AI developers means that policymakers should understand the potential implications and intervention points. For now, valuable steps could include improving measurement and evaluation of AI agents’ capabilities and impacts, deeper consideration of how technical guardrails can support multiple governance objectives, and analysis of how existing legal doctrines may need to be adjusted or updated to handle more sophisticated AI agents.

Center for Security and Emerging Technology, October 2024

Bytes and Battles: Inclusion of Data Governance in Responsible Military AI

By: Yasmin Afina and Sarah Grand-Clément

Data plays a critical role in the training, testing and use of artificial intelligence (AI), including in the military domain. Research and development for AI-enabled military solutions is proceeding at breakneck speed, and the important role data plays in shaping these technologies has implications and, at times, raises concerns. These issues are increasingly subject to scrutiny and range from difficulty in finding or creating training and testing data relevant to the military domain, to (harmful) biases in training data sets, to their susceptibility to cyberattacks and interference (for example, data poisoning). Yet pathways and governance solutions to address these issues remain scarce and very much underexplored.

This paper aims to fill this gap by first providing a comprehensive overview on data issues surrounding the development, deployment and use of AI. It then explores data governance practices from civilian applications to identify lessons for military applications, as well as highlight any limitations to such an approach. The paper concludes with an overview of possible policy and governance approaches to data practices surrounding military AI to foster the responsible development, testing, deployment and use of AI in the military domain.
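Of the data issues the paper surveys, data poisoning is the easiest to demonstrate at toy scale. The sketch below, using synthetic scikit-learn data with no connection to any military data set, performs a simple label-flipping attack: it corrupts the labels of a growing fraction of training rows and shows accuracy on clean held-out data degrading accordingly.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip the labels of a fraction of training rows (label-flipping
    poisoning), retrain, and score on clean held-out data."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)),
                     replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3):
    print(f"labels flipped: {frac:.0%}  ->  "
          f"clean test accuracy: {accuracy_after_poisoning(frac):.2f}")
```

Real poisoning attacks can be far subtler, targeting specific behaviours rather than overall accuracy, which is part of why the paper treats data governance as a first-class component of responsible military AI.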

CIGI Papers No. 308 — October 2024

Voting System Security Measures

By: US Election Assistance Commission

The security of voting systems is essential to a trustworthy election. Every state and local jurisdiction utilizes common-sense procedures and tools to safeguard the voting process. Common best practices include using locks, tamper-evident seals, security cameras, system testing before and after elections, audits, and physical and cybersecurity access controls. This guide outlines some of the many best practices local election officials follow to secure voting systems through an election cycle. It's important to note this is a broad list of common security measures and procedures to protect the integrity of an election. The types of security measures may vary based on the voting systems in use in state and local jurisdictions.
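Several of the practices listed, tamper-evident seals, audits, and chain-of-custody records, share one underlying idea: any later alteration should be detectable. The sketch below illustrates that idea in software with a hash-chained audit log, in which each entry's hash covers its predecessor, so editing any earlier record invalidates everything after it. It is a generic illustration of tamper evidence, not a description of any certified voting system's actual logging.

```python
import hashlib
import json
import time

def append_entry(chain: list[dict], event: str) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"time": time.time(), "event": event, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; an edited or reordered record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("time", "event", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log: list[dict] = []
append_entry(log, "machine 12 sealed, seal #A-4471")
append_entry(log, "machine 12 transported to county warehouse")
assert verify(log)
log[0]["event"] = "machine 12 left unsealed"   # tampering with history...
assert not verify(log)                          # ...is detected
```

This mirrors, at toy scale, why post-election audits work: verifying the record is cheap relative to the cost of forging a consistent history.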

United States. Election Assistance Commission, Oct 2024

THE IMPLICATIONS OF ARTIFICIAL INTELLIGENCE IN CYBERSECURITY: SHIFTING THE OFFENSE-DEFENSE BALANCE

By: Jennifer Tang, Tiffany Saade, and Steve Kelly

Cutting-edge advances in artificial intelligence (AI) are taking the world by storm, driven by a massive surge of investment, countless new start-ups, and regular technological breakthroughs. AI presents key opportunities within cybersecurity, but concerns remain regarding the ways malicious actors might also use the technology. In this study, the Institute for Security and Technology (IST) seeks to paint a comprehensive picture of the state of play—cutting through vagaries and product marketing hype, providing our outlook for the near future, and most importantly, suggesting ways in which the case for optimism can be realized.

The report concludes that in the near term, AI offers a significant advantage to cyber defenders, particularly those who can capitalize on their "home field" advantage and first-mover status. However, sophisticated threat actors are also leveraging AI to enhance their capabilities, making continued investment and innovation in AI-enabled cyber defense crucial. At the time of writing, AI is not yet unlocking novel capabilities or outcomes, but instead represents a significant leap in speed, scale, and completeness.

This work is the foundation of a broader IST project to better understand which areas of cybersecurity require the greatest collective focus and alignment—for example, greater opportunities for accelerating threat intelligence collection and response, democratized tools for automating defenses, and/or developing the means for scaling security across disparate platforms—and to design a set of actionable technical and policy recommendations in pursuit of a secure, sustainable digital ecosystem.

The Institute for Security and Technology, October 2024

Cyber Technology in Federal Crime

By: Carlton W. Reeves, Luis Felipe Restrepo, Laura E. Mate, Claire Murray, Claria Horn Boom, John Gleeson, Candice C. Wong, Patricia K. Cushwa, and Scott A.C. Meisler

The use of cyber technologies, such as cryptocurrency and the dark web, provides new and evolving means to commit crimes and avoid detection. These technologies are used to commit a variety of federal offenses. The dark web is sometimes used to create, hide, or access websites containing child pornography. Illegal drugs and firearms are sometimes sold through dark websites. Cryptocurrency is sometimes used to facilitate these crimes. [...] Regardless of the type of crime involved, the relative anonymity these technologies provide to their users creates challenges for the investigation and prosecution of the crimes committed with them. The use of cyber technology to commit crimes transcends national borders. As Interpol has found, this causes investigative and legal challenges that can be difficult to overcome. United States government agencies, such as the Federal Bureau of Investigation and the Financial Crimes Enforcement Network, have reported on the increasing threats from these technologies and estimated yearly losses in the billions from the crimes committed with them. There has been little analysis of the individuals sentenced for a federal offense who use these technologies for illegal purposes, the offenses they committed, or trends in these areas over time. In developing this report, the United States Sentencing Commission ('the Commission') collected information on individuals sentenced for offenses using cryptocurrency, the dark web, and hacking for fiscal years 2014 through 2021.

United States Sentencing Commission, September 2024

The Global Flow of Information: Legal, Social, and Cultural Perspectives

By Ramesh Subramanian, Eddan Katz

The Internet has been integral to the globalization of a range of goods and production, from intellectual property and scientific research to political discourse and cultural symbols. Yet the ease with which it allows information to flow at a global level presents enormous regulatory challenges. Understanding if, when, and how the law should regulate online, international flows of information requires a firm grasp of past, present, and future patterns of information flow, and their political, economic, social, and cultural consequences.

In The Global Flow of Information, specialists from law, economics, public policy, international studies, and other disciplines probe the issues that lie at the intersection of globalization, law, and technology, and pay particular attention to the wider contextual question of Internet regulation in a globalized world. While individual essays examine everything from the pharmaceutical industry to television to “information warfare” against suspected enemies of the state, all contributors address the fundamental question of whether or not the flow of information across national borders can be controlled, and what role the law should play in regulating global information flows.

New York: NYU Press, 2011.

Changing perceptions of biometric technologies

By Christie Franks and Russell G Smith

Identity crime and misuse cost the Australian economy an estimated $3.1b in 2018–19 (Smith & Franks 2020). Protecting individuals’ personal identification information and finding secure ways to verify identities have become increased priorities as the impact of identity crime continues to grow in Australia and worldwide. Biometric technologies for identity verification provide an enhanced security solution, although the implementation of biometric systems within Australian society has met with varying degrees of acceptance. Since 2013, the Australian Institute of Criminology (AIC) has conducted online surveys to gain a greater understanding of identity crime and misuse in Australia. These surveys have asked about respondents’ experience of identity crime as well as their previous use of, and future willingness to use, biometric technologies to safeguard their personal information. This report presents both qualitative and quantitative findings from a sample of respondents to the most recent surveys concerning their experiences of biometric technologies and their perceptions of the technologies’ role in identity security.

Research Report no. 20. Canberra: Australian Institute of Criminology. 2021. 76p.