Emerging Threats in AI: A Detailed Review of Misuses and Risks Across Modern AI Technologies

Seghid, N., Iqbal, F., Al-Room, K. and MacDermott, Á. (ORCID: 0000-0001-8939-4664) Emerging Threats in AI: A Detailed Review of Misuses and Risks Across Modern AI Technologies. Frontiers in Communications and Networks. ISSN 2673-530X (Accepted)

Emerging Threats in AI-A Detailed Review of Misuse and Risks Across Modern AI Technologies AM.pdf - Accepted Version
Available under License Creative Commons Attribution.


Abstract

The swift evolution of artificial intelligence (AI) technologies has introduced unparalleled capabilities, alongside critical vulnerabilities that can be exploited maliciously or cause unintended harm. While numerous efforts have emerged to govern AI risks, there remains a lack of comprehensive analysis of how AI systems are actively being misused. This paper offers an in-depth review of AI misuse across modern technologies, analyzing attack mechanisms, documented incidents, and emerging threat vectors. We provide a brief review of AI risk repositories and existing taxonomic approaches to set the context, and then synthesize them into a comprehensive categorization of AI misuse across nine primary domains: (1) Adversarial Threats, (2) Privacy Violations, (3) Disinformation, Deception & Propaganda, (4) Bias & Discrimination, (5) System Safety & Reliability Failures, (6) Socioeconomic Exploitation & Inequality, (7) Environmental & Ecological Misuse, (8) Autonomy & Weaponization, and (9) Human Interaction & Psychological Harm. Across these domains, we identify and analyze distinct categories of AI misuses and risks, providing technical depth on exploitation mechanisms, documented cases with quantified impacts, and the latest developments, including large language model (LLM) vulnerabilities and multimodal attack vectors. We also assess the effectiveness of current mitigation strategies and countermeasures, evaluating technical security frameworks (e.g., MITRE ATLAS, OWASP Top 10 for LLMs, MAESTRO), regulatory approaches (e.g., EU AI Act, NIST AI RMF), and compliance standards. Our analysis reveals significant gaps between AI capabilities and the robustness of defensive measures, with adversaries holding persistent advantages across most attack categories.
This work contributes to the field by: (1) systematically consolidating fragmented AI risk and misuse taxonomies and repositories, (2) developing a unified taxonomy of AI misuse patterns grounded in both theoretical models and empirical incident data, (3) critically evaluating the effectiveness of existing mitigation strategies, and (4) identifying priority research gaps to foster the development of more robust, ethical, and secure AI systems.

Item Type: Article
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
T Technology > T Technology (General) > T58.5 Information Technology
Divisions: Computer Science and Mathematics
Publisher: Frontiers Media
Date of acceptance: 29 December 2025
Date of first compliant Open Access: 14 January 2026
Date Deposited: 14 Jan 2026 11:51
Last Modified: 14 Jan 2026 11:51
URI: https://researchonline.ljmu.ac.uk/id/eprint/27905