Brands
Discover
Events
Newsletter
More

Follow Us

twitterfacebookinstagramyoutube
Youtstory

Brands

Resources

Stories

General

In-Depth

Announcement

Reports

News

Funding

Startup Sectors

Women in tech

Sportstech

Agritech

E-Commerce

Education

Lifestyle

Entertainment

Art & Culture

Travel & Leisure

Curtain Raiser

Wine and Food

YSTV

ADVERTISEMENT
Advertise with us

Meet WormGPT: The 'unethical ChatGPT' hackers are using for cybercrime

With the rise in AI applications, hackers are trying to build ChatGPT-like tools to deploy malware and target online users.

Meet WormGPT: The 'unethical ChatGPT' hackers are using for cybercrime

Wednesday July 19, 2023 , 2 min Read

We have seen so many applications of artificial intelligence (AI) in the past few months and it's all thanks to the launch of the viral chatbot ChatGPT.

But recently, a black hat hacker released a malicious version of OpenAI's ChatGPT called "WormGPT". This unethical language model has successfully operated email phishing cyberattacks on thousands of users.

WormGPT is made from the GPT J large language model which was developed by EleutherA in 2021. According to a report by cybersecurity firm SlashNext, this AI tool is specially designed for malicious activities.

Some of the features of this WormGPT include unlimited character support, code formatting capabilities, and chat memory retention. Plus, this AI model has been trained on various data sources that are mainly malware-related.

In short, this AI tool works similarly to ChatGPT without having ethical boundaries or limitations.

SlashNext revealed how effectively WormGPT creates Business Email Compromise (BEC) attacks. Moreover, it also highlighted how "jailbreaks" for ChatGPT's interface are becoming increasingly common.

Also Read
Key parliamentary panel discusses cybercrime trends

This helps cybercriminals to manipulate AI models such as ChatGPT to give outputs that reveal sensitive or personal information, execute harmful codes, or curate inappropriate content.

Why do hackers love AI?

Hacked

The impressive ability of AI to learn and build tools has helped various sectors including technology, healthcare, business and even gaming. However, unfortunately, they are leveraged by cybercriminals to use human-like intelligence for deploying malware or malicious codes.

Generally, hackers use some of the following key methods to get unauthorized entry into a company's network through AI:

  • Developing better malware
  • Human Impersonation on social media platforms
  • Stealth attacks
  • Producing deep fake data

How can AI be used in cybersecurity

Just as AI can be modified or trained to automate tasks, it can also be leveraged to protect business networks from any potential cyber-attacks. For example, companies can use AI and ML-powered systems such as Security Event Management (SEM) to detect any threats and block them.

In fact, this has already come into the picture. This year in March, Microsoft unveiled a security-focused generative artificial intelligence tool called Security Copilot. With the help of AI, Security Copilot improves cybersecurity defences and threat detection.


Edited by Affirunisa Kankudti