Open Access Publisher and Free Library

CRIME

CRIME-VIOLENT & NON-VIOLENT-FINANCIAL-CYBER

Posts in Social Science
Cybersecuring the Pipeline

By Ido Kilovaty

The Colonial Pipeline ransomware attack, which shut down the gas supply to much of the East Coast in May 2021, has sparked debate over the regulation of pipeline cybersecurity. After ten years of inaction on the matter, the Transportation Security Administration (TSA) issued two mandatory directives on pipeline cybersecurity. This Article examines the propriety of the TSA as a pipeline security regulator, as well as the incomplete and ineffective approach laid out in the TSA’s pipeline cybersecurity directives. This Article argues that other agencies may be better suited to the task, such as the Federal Energy Regulatory Commission, acting under the auspices of the Department of Energy. It also offers specific recommendations on the substance of any prospective pipeline cybersecurity regulation, such as the adoption of open-ended, flexible cybersecurity objectives in place of the current approach of prescriptive standards.

Kilovaty, Ido, Cybersecuring the Pipeline (March 29, 2022). Houston Law Review, Vol. 60, 2023.

Hacking Generative AI

By Ido Kilovaty

Generative AI platforms, like ChatGPT, hold great promise for enhancing human creativity, productivity, and efficiency. However, generative AI platforms are prone to manipulation. Specifically, they are susceptible to a new type of attack called “prompt injection.” In prompt injection, attackers carefully craft their input prompt to manipulate the AI into generating harmful, dangerous, or illegal content as output. Examples of such outputs include instructions on how to build an improvised bomb, how to make meth, how to hotwire a car, and more. Researchers have also been able to make ChatGPT generate malicious code. This article asks a basic question: do prompt injection attacks violate computer crime law, namely the Computer Fraud and Abuse Act? This article argues that they do. Prompt injection attacks lead the AI to disregard its own hard-coded content generation restrictions, allowing the attacker to access portions of the AI beyond what the system’s developers authorized. This therefore constitutes the criminal offense of accessing a computer in excess of authorization. Although prompt injection attacks could run afoul of the Computer Fraud and Abuse Act, this article offers ways to distinguish serious acts of AI manipulation from less serious ones, so that prosecution would focus only on a limited set of harmful and dangerous prompt injections.

Kilovaty, Ido, Hacking Generative AI (March 1, 2024). Loyola of Los Angeles Law Review, Vol. 58, 2025.