Crimebots and Lawbots: Cyberwarfare Powered by Generative Artificial Intelligence
DOI: https://doi.org/10.14738/tecs.1302.18401

Keywords: Cybercrime Law, Lawyers Justice, Criminals Malpractice, Rules of Professional Conduct

Abstract
Crimebots are fueling the cybercrime pandemic by exploiting artificial intelligence (AI) to facilitate crimes such as fraud, misrepresentation, extortion, blackmail, identity theft, and security breaches. These AI-driven criminal activities pose a significant threat to individuals, businesses, online transactions, and even the integrity of the legal system. Crimebots enable unjust exonerations and wrongful convictions by fabricating evidence, creating deepfake alibis, and generating misleading crime reconstructions. In response, lawbots have emerged as a counterforce designed to uphold justice. Legal professionals use lawbots to collect and analyze evidence, streamline legal processes, and enhance the administration of justice. To mitigate the risks posed by both crimebots and lawbots, many jurisdictions have established ethical guidelines promoting the responsible use of AI by lawyers and clients. Approximately 1.34% of lawyers have been involved in AI-related legal disputes, often revolving around issues such as fees, conflicts of interest, negligence, ethical violations, evidence tampering, and discrimination. Additional concerns include fraud, confidentiality breaches, harassment, and the misuse of AI for criminal purposes. For lawbots to succeed in the ongoing battle against crimebots, strict adherence to complex AI regulations is essential. Compliance with these guidelines minimizes malpractice risks, prevents professional sanctions, preserves client trust, and upholds the legal profession's ethical standards of excellence.
License
Copyright (c) 2025 Peter E. Murray

This work is licensed under a Creative Commons Attribution 4.0 International License.