ChatGPT just created malware, and that’s seriously scary

A self-proclaimed novice has reportedly created powerful data-mining malware using only ChatGPT prompts, all within a matter of hours.

Aaron Mulgrew, a security researcher at Forcepoint, recently shared how he created zero-day malware exclusively through OpenAI's generative chatbot. While OpenAI has safeguards against anyone asking ChatGPT to write malicious code, Mulgrew found a loophole: he prompted the chatbot to generate the malicious code in separate pieces, feature by feature.

After compiling the individual functions, Mulgrew had a nearly undetectable data-stealing executable on his hands. And this wasn't run-of-the-mill malware, either: it was as sophisticated as any nation-state attack, able to evade every detection-based vendor.

Equally important is how Mulgrew's malware differs from the "usual" nation-state variety: it requires no team of hackers (and only a fraction of the time and resources) to build. Mulgrew, who didn't write any of the code himself, had the executable ready in just a few hours, as opposed to the weeks such a project usually takes.

The Mulgrew malware (it has a ring to it, doesn't it?) masquerades as a screensaver application (an SCR extension), which launches automatically on Windows. The software then combs through files (such as images, Word documents, and PDFs) looking for data to steal. Cleverly, the malware uses steganography to split the stolen data into smaller pieces and hide them inside images on the computer. Those images are then uploaded to a Google Drive folder, a process that evades detection.
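The article doesn't share Mulgrew's code, but the steganography it describes, hiding bytes inside an image's pixel data, is a standard technique. Below is a minimal, benign sketch of least-significant-bit (LSB) embedding over a raw byte buffer; the function names and the synthetic "pixel" buffer are illustrative assumptions, not taken from Mulgrew's tool.

```python
def embed(carrier: bytearray, payload: bytes) -> bytearray:
    """Hide the payload's bits in the least-significant bit of each carrier byte."""
    bits = []
    for byte in payload:
        bits.extend((byte >> i) & 1 for i in range(8))  # LSB-first per byte
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear low bit, set payload bit
    return out


def extract(carrier: bytes, length: int) -> bytes:
    """Recover `length` bytes previously hidden in the carrier's low bits."""
    payload = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (carrier[b * 8 + i] & 1) << i
        payload.append(byte)
    return bytes(payload)


# Demo with a synthetic "image" buffer of arbitrary pixel bytes.
pixels = bytearray(range(256)) * 4   # 1024 carrier bytes
secret = b"hello"
stego = embed(pixels, secret)
print(extract(stego, len(secret)))   # b'hello'
```

Because only the lowest bit of each byte changes, the carrier image looks visually identical, which is why detection tools that only scan file signatures miss this kind of smuggling.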

Equally impressive, Mulgrew was able to refine and harden his code against detection using simple ChatGPT prompts, which really raises the question of how safe ChatGPT is to use. Early VirusTotal tests showed the malware was flagged by only five of 69 detection products; a later version of the code was detected by none of them.

Note that the malware Mulgrew created was a test and is not publicly available. Even so, his research has shown how easily users with little or no advanced coding experience can bypass ChatGPT's weak protections and create dangerous malware without writing a single line of code themselves.

But here's the scary part: code like this usually takes a large team weeks to build. We wouldn't be surprised if nefarious hackers are already developing similar malware via ChatGPT as we speak.
