ChatGPT and its impact on cybersecurity

Aaron Mulgrew, Solutions Architect at Forcepoint, demonstrates the need to develop better safeguards and ethical guidelines for regulating AI language models.

As a researcher, I was intrigued by the potential of ChatGPT, the powerful artificial intelligence language model that generates human-like text in response to prompts, particularly in the realm of writing code. I tested its capabilities by creating advanced malware without writing any code myself, to see just how far the model could be pushed.

My goal was to create a proof of concept that would work end-to-end, that is, without the reader having to imagine how certain parts of the malware would “hang together”. I concluded that steganography was the best approach for exfiltration, and that the best way to source cover images was to “live off the land” by searching for large image files already present on the drive itself.

For the exfiltration, I prompted ChatGPT for code that iterates over the user’s Documents, Desktop, and AppData folders to find any PDF or DOCX documents to exfiltrate, capping file size at 1MB so that, in the first iteration of the code, an entire document could be embedded into a single image. I decided to exfiltrate via Google Drive, as the entire Google domain tends to be “allow-listed” in most corporate networks.
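
The discovery step is straightforward to picture. As a minimal sketch (not the code ChatGPT generated), a Go file walk of this kind might look as follows; the folder names and the 1MB cap come from the description above, while the function name and the printing of results are purely illustrative:

    package main

    import (
        "fmt"
        "io/fs"
        "os"
        "path/filepath"
        "strings"
    )

    // findDocuments walks the given root folders and collects the paths of
    // PDF and DOCX files no larger than maxSize bytes.
    func findDocuments(roots []string, maxSize int64) []string {
        var found []string
        for _, root := range roots {
            filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
                if err != nil || d.IsDir() {
                    return nil // skip unreadable entries and keep walking
                }
                ext := strings.ToLower(filepath.Ext(path))
                if ext != ".pdf" && ext != ".docx" {
                    return nil
                }
                if info, err := d.Info(); err == nil && info.Size() <= maxSize {
                    found = append(found, path)
                }
                return nil
            })
        }
        return found
    }

    func main() {
        home, _ := os.UserHomeDir()
        roots := []string{
            filepath.Join(home, "Documents"),
            filepath.Join(home, "Desktop"),
            filepath.Join(home, "AppData"),
        }
        for _, path := range findDocuments(roots, 1<<20) { // 1MB cap
            fmt.Println(path)
        }
    }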

Combining the snippets to create the minimum viable product (MVP) was surprisingly easy: I simply pasted the code snippets ChatGPT had generated and stitched them together. However, the MVP was of limited use, as any “crown jewel” document would likely be larger than 1MB and would therefore need to be broken into multiple “chunks” for silent exfiltration using steganography. After several prompts, I had code that would split a PDF into 100KB chunks and generate PNGs accordingly.
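
The chunking step is a standard pattern. A minimal Go sketch, assuming only the 100KB chunk size mentioned above (the file name and function name are illustrative), might be:

    package main

    import (
        "fmt"
        "os"
    )

    const chunkSize = 100 * 1024 // 100KB, as described above

    // splitFile reads a file and returns its contents as a slice of chunks
    // of at most chunkSize bytes, each of which could then be embedded into
    // its own PNG.
    func splitFile(path string) ([][]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, err
        }
        var chunks [][]byte
        for start := 0; start < len(data); start += chunkSize {
            end := start + chunkSize
            if end > len(data) {
                end = len(data)
            }
            chunks = append(chunks, data[start:end])
        }
        return chunks, nil
    }

    func main() {
        chunks, err := splitFile("document.pdf") // illustrative file name
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("split into %d chunks\n", len(chunks))
    }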

During the testing phase of my research, I checked how detectable the MVP was by uploading it to VirusTotal, wanting to compare it with recent attacks such as Emotet. Out of 69 vendors, five flagged the file as malicious. To improve the code and avoid detection, I asked ChatGPT to rewrite the part that used Auyer’s steganographic library, suspecting that a GUID or variable in the compiled EXE had tipped off those five vendors. ChatGPT created a new LSB steganography function directly within my local application, which reduced the number of detections to two.
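
LSB (least significant bit) steganography is the textbook technique here: the payload’s bits replace the lowest-order bit of pixel values, a change that is visually imperceptible. Purely as an illustration of the general idea, and not the code ChatGPT produced, a minimal Go sketch might look like this:

    package main

    import (
        "image"
        "image/color"
        "image/png"
        "os"
    )

    // embedLSB hides payload in the least significant bit of each pixel's
    // red channel, one bit per pixel. A real implementation would also
    // encode the payload length and use more channels; this shows the core idea.
    func embedLSB(src image.Image, payload []byte) *image.NRGBA {
        bounds := src.Bounds()
        out := image.NewNRGBA(bounds)
        bit := 0
        for y := bounds.Min.Y; y < bounds.Max.Y; y++ {
            for x := bounds.Min.X; x < bounds.Max.X; x++ {
                c := color.NRGBAModel.Convert(src.At(x, y)).(color.NRGBA)
                if bit < len(payload)*8 {
                    b := (payload[bit/8] >> uint(7-bit%8)) & 1
                    c.R = (c.R &^ 1) | b
                    bit++
                }
                out.SetNRGBA(x, y, c)
            }
        }
        return out
    }

    func main() {
        f, err := os.Open("cover.png") // illustrative cover image
        if err != nil {
            panic(err)
        }
        defer f.Close()
        img, err := png.Decode(f)
        if err != nil {
            panic(err)
        }
        stego := embedLSB(img, []byte("payload bytes"))
        out, err := os.Create("stego.png")
        if err != nil {
            panic(err)
        }
        defer out.Close()
        png.Encode(out, stego)
    }

Extraction is the mirror image: read the same low-order bits back out and reassemble the bytes.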

To evade the remaining two vendors, one a leading sandbox and the other a vendor performing static analysis on executables, I asked ChatGPT to introduce two new changes to the code: one to delay the effective start by two minutes, outlasting a sandbox’s typically short analysis window on the assumption that a real corporate user would not log off immediately; and the other to disguise the code itself so that static analysis would not flag it.

Once the code passed the VirusTotal test, the zero day was ready. Using only ChatGPT prompts, and without writing any code myself, I had produced a very advanced attack in just a few hours, which highlights beyond doubt the threat such technology carries. It is evident that ChatGPT possesses a remarkable ability to generate human-like text, including code, meaning it has the potential to revolutionise many fields and industries, from content creation and marketing to software development and, yes, cybersecurity.

Yet as demonstrated, ChatGPT’s capabilities can also be easily exploited for malicious purposes, raising concerns about the potential for bad actors to abuse the technology for nefarious ends. It is therefore imperative for researchers and developers to take proactive steps to ensure its abilities are not misused. This may include developing better safeguards and ethical guidelines, as well as monitoring and regulating the use of the technology.

At the same time, it is important to recognise the positive potential of ChatGPT and similar AI language models. These tools can be leveraged to advance fields such as healthcare, education, and environmental sustainability, so it is vital we work towards harnessing the full potential of this technology for good – while also taking steps to mitigate the risks associated with its misuse.
