

When ChatGPT goes rogue: exploring the potential cybersecurity threats of AI-powered conversational chatbots

Iqbal, F, Samsom, F, Kamoun, F and MacDermott, Á (2023) When ChatGPT goes rogue: exploring the potential cybersecurity threats of AI-powered conversational chatbots. Frontiers in Communications and Networks, 4.

Text: When ChatGPT Goes Rogue Exploring the Potential Cybersecurity Threats of AI-powered Conversational Chatbots.pdf - Published Version
Available under License Creative Commons Attribution.
Download (5MB)

Abstract

ChatGPT has garnered significant interest since its release in November 2022, showcasing strong versatility across potential applications in various industries and domains. Defensive cybersecurity is one area where ChatGPT has demonstrated considerable potential, thanks to its ability to provide customized cybersecurity awareness training, assess security vulnerabilities, and offer concrete recommendations to remediate them. However, the offensive use of ChatGPT (and of AI-powered conversational agents in general) remains an underexplored research topic. This preliminary study aims to shed light on the potential weaponization of ChatGPT to facilitate and initiate cyberattacks. We briefly review the defensive use of ChatGPT in cybersecurity, then, through practical examples and use-case scenarios, illustrate its potential misuse to launch hacking and cybercrime activities. We discuss the practical implications of our study and provide recommendations for future research.

Item Type: Article
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Computer Science & Mathematics
Publisher: Frontiers Media
Date Deposited: 29 Jul 2024 15:36
Last Modified: 29 Jul 2024 15:45
DOI or ID number: 10.3389/frcmn.2023.1220243
URI: https://researchonline.ljmu.ac.uk/id/eprint/23817