Yes, Cybercriminals Can Use ChatGPT to Their Advantage, Too

In the age of robots and artificial intelligence comes another player in the AI market: ChatGPT (Generative Pre-trained Transformer). Since its release, cyber security professionals have unlocked various opportunities with its features. Namely, it can answer prompts, write code on demand, detect phishing emails, and even crack passwords.

In a nutshell, ChatGPT can be an invaluable tool for security leaders. But who’s to say it can’t be used by cyber criminals for the other side of the same coin?

What is ChatGPT, and Why is it Important?

Launched by OpenAI in late 2022, ChatGPT answers prompts by drawing on massive amounts of data pooled from the internet. Its main cyber security application is its ability to write code in different programming languages and to debug existing code.

ChatGPT has grown in popularity, having recently launched ChatGPT Plus at $20/month. This pilot subscription plan is only available to users in the United States at the time of writing. OpenAI initially launched ChatGPT in the U.S. as a research preview to gain insight into the tool’s strengths and limitations and improve it for wide-scale use.

While OpenAI says it has received feedback from millions of users and is making updates accordingly, cyber security experts can already see its implications for the industry, both good and bad.

With that, ChatGPT is setting the course to revolutionize how AI is utilized to further cyber security objectives and minimize threats. Although it still needs further research, it’s a promising tool for cyber security professionals.

The Pros and Cons of Using ChatGPT in Cyber Security

ChatGPT has a massive impact on the cyber security industry. This effect can be either good or bad, depending on how the technology is used and who uses it. While AI can be invaluable in detecting and stopping cyber attacks, there are also associated risks that cannot be ignored.

The Benefits of ChatGPT for Cyber Security Leaders

ChatGPT’s features are proving to be highly valuable for cyber security leaders, from improving their knowledge to helping them generate complicated code on demand.

Generating Code

ChatGPT makes it easier for cyber security professionals to generate code in any language, whether or not they have prior knowledge or experience. This makes it an innovative platform that can advance a person’s understanding of cyber security, allowing them to ask the AI follow-up questions or have complicated topics simplified.
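As a simple illustration, the sketch below asks the model to draft a small defensive script through the OpenAI API. This is a minimal sketch only: it assumes the OpenAI Python SDK (v0.x style), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name and prompt, and any code the model returns should be reviewed by a human before it is run.

```python
# Minimal sketch: asking ChatGPT to generate a small defensive script.
# Assumes the OpenAI Python SDK (v0.x) is installed and OPENAI_API_KEY is set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are a helpful security engineering assistant."},
        {"role": "user",
         "content": ("Write a short Python function that flags log lines "
                     "containing repeated failed SSH login attempts from "
                     "the same IP address.")},
    ],
)

# The generated code still needs human review before it is used anywhere.
print(response["choices"][0]["message"]["content"])
```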

Better Decision-Making

Automation can help security professionals process and analyze large amounts of data in real time. This improves decision-making and enables organizations to use their data more efficiently to make better-informed business decisions.
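For example, a minimal sketch of AI-assisted alert triage might look like the following. It makes the same assumptions as the sketch above (OpenAI Python SDK, API key in the environment), the alert text and severity labels are hypothetical, and the model’s output is advisory only; an analyst still makes the final decision.

```python
# Minimal sketch: using ChatGPT to triage security alerts by severity.
# Assumes the OpenAI Python SDK (v0.x) is installed and OPENAI_API_KEY is set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical alert text for illustration only.
alerts = [
    "Multiple failed logins for user 'admin' from 203.0.113.7 within 2 minutes",
    "Scheduled backup completed successfully on host db-01",
]

for alert in alerts:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system",
             "content": ("Classify the security alert as LOW, MEDIUM, or HIGH "
                         "severity and give a one-sentence reason.")},
            {"role": "user", "content": alert},
        ],
    )
    # Advisory output only; a human analyst makes the final call.
    print(alert, "->", response["choices"][0]["message"]["content"])
```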

The Cons of ChatGPT

…or the benefits of ChatGPT for cyber criminals.

Generating Malware Code

One of the biggest risks is ChatGPT being used to write malware code. While OpenAI has set up safeguards to prevent this, many developers have put those security measures to the test. ChatGPT can detect and reject direct requests to write malware code.

However, cyber criminals can easily get around these safeguards by describing, step by step, what the code should do instead of asking for malware directly. ChatGPT can fail to identify this as a request to write malware and will generate the code anyway.

Just by using different wordings and slight variations that do not explicitly mention malware, attackers can use ChatGPT to automate multiple scripts for complicated attack processes. In effect, ChatGPT makes it easier for inexperienced attackers to keep up with the sophistication of modern cyber security measures.

By using the AI and bypassing its safeguards in this way, attackers can generate malware code and launch cyber attacks.

Creating Phishing Emails

Cyber criminals can also leverage ChatGPT to create phishing emails. By phrasing their requests indirectly, they can bypass the security measures, generate convincing phishing messages, and even retrieve malicious code that downloads reverse shells. A reverse shell connects back to the attacker’s machine, allowing them to access the victim’s computer and its files remotely.

Password Cracking

ChatGPT’s capacity to generate likely password candidates quickly and accurately makes it easier for attackers to guess passwords. If users don’t take the necessary precautions to protect their accounts from unauthorized access, ChatGPT makes it far more likely that attackers will identify their passwords and retrieve important data.

Business Email Compromise (BEC)

With ChatGPT, BEC detection can become more complicated. ChatGPT can potentially be used to generate new, unique content for every BEC attack, effectively bypassing detection tools that look for repeated or known-bad content. In the same way that ChatGPT makes writing phishing emails easier and faster, attackers may use this technology to carry out their agendas.

Is ChatGPT After Cyber Security Jobs?

There’s no doubt that ChatGPT can cut a significant amount of workload for cyber security professionals. But for now, nothing compares to human expertise, which is still more accurate and reliable than this AI technology.

More development is needed to improve the value and effectiveness of ChatGPT and similar technologies, especially in promoting cyber security.

Overall, however, ChatGPT can definitely find its positive footing in the industry. But in the wrong hands, it can also be used to carry out successful cyber attacks. Since ChatGPT is still in its early research-preview phase, we should expect better security safeguards to prevent it from being used to undermine cyber security efforts.

There have been quite a few controversies and bans involving ChatGPT since its release, some of which point to its negative cyber security implications. But OpenAI is adamant that its goal is to refine and expand the current ChatGPT offering based on user feedback and needs.

Source: Terranova