ChatGPT popularity raises cybersecurity concerns

As ChatGPT becomes a household topic of discussion, it is also raising serious cybersecurity concerns, such as attackers using the chatbot to write phishing emails and malicious code.

Security experts have expressed unease and optimism in equal measure over the potential risks associated with ChatGPT. ChatGPT (Chat Generative Pre-trained Transformer) is an artificial intelligence (AI)-powered chatbot launched in November 2022 by OpenAI that can comprehend and generate natural, human-like language. The tool is trained on large amounts of text data and uses an architecture known as the Transformer to learn how to generate text that resembles human conversation.

Touted as the "smartest chatbot ever made", ChatGPT can generate human-like text responses to prompts. This makes it useful for a wide range of applications, such as creating chatbots for customer service, generating responses to questions in online forums, or even creating personalised content for social media posts.

With breakthrough advances in technology, the inevitable security concerns are never far behind. "While ChatGPT attempts to limit malicious input and output, the reality is that cyber criminals are already looking at unique ways to leverage the tool for nefarious purposes. It isn't hard to create hyper-realistic phishing emails or exploit code, for example, simply by changing the user input or slightly adapting the output generated," said Steve Povolny, principal engineer and director at cybersecurity firm Trellix.