
AI language tools like ChatGPT pose a new risk of cyberattacks

ChatGPT, the AI-powered natural language processing tool developed by OpenAI, has become increasingly popular for tasks ranging from composing emails to writing essays and code. However, while it has significant potential, it is not without its flaws, and its use for malicious purposes has already been discussed in underground forums. OpenAI’s terms of service prohibit the generation of malware, spam, and other software intended to cause harm, but concerns remain that criminals could use ChatGPT and other AI tools to conduct malicious campaigns more efficiently.

Phishing attacks are a common component of malicious hacking and fraud campaigns, with email being the key tool in the initial approach. However, the need for a steady stream of clear and usable content poses a challenge for criminals, particularly in more sophisticated spear-phishing campaigns that rely on victims believing they’re speaking to a trusted contact. An efficient automated copywriter could make those emails more compelling and help attackers bypass language barriers.

While ChatGPT requires users to register with an email address and phone number, it’s possible to ask it to generate email templates for messages such as claiming an annual bonus is on offer or that an important software update must be downloaded and installed. Cybercriminals could use ChatGPT to create a variety of phishing messages, potentially saving them the cost of hiring English-language graduates to write copy for phishing emails and call centers. However, there are protections in place to prevent abuse, and OpenAI’s terms of service prohibit using the tool for cybercrime.

AI tools like ChatGPT can be used to generate convincing content for any online text-based platform, including social media. Cybercriminals could use it to create fake but legitimate-looking profiles for phishing or cyber espionage campaigns; it could even write convincing thought leadership posts for such profiles, says cybersecurity expert Kelly Shortridge. Though there are protections in place to prevent abuse, cybercriminals may find ways to circumvent them. The risks associated with the technology need to be discussed openly to raise awareness, and companies like OpenAI need to invest more in reducing abuse.
