
Hackers master artificial intelligence: the new FraudGPT is perfect for writing phishing emails.



The emergence of generative AI models has dramatically changed the cyber threat landscape. A recent analysis of darknet forum activity by the Netenrich research team shows that FraudGPT, an AI-powered chatbot, is appearing and spreading among cybercriminals.

FraudGPT was created exclusively for malicious purposes. Its capabilities include writing phishing emails, hacking websites, stealing bank card data, and more. Access to the service is currently sold on various black markets, as well as through the author's Telegram channel.


Screenshots showing how FraudGPT works

As the promotional materials show, an attacker can compose an email that is very likely to make the recipient click on a malicious link. This is a critical step in launching phishing or BEC attacks.

A subscription to FraudGPT costs $200 per month or $1,700 per year, and the full list of the malicious chatbot's capabilities includes the following:

  • writing malicious code;

  • creating undetectable viruses;

  • finding vulnerabilities;

  • creating phishing pages;

  • writing fraudulent emails;

  • finding data breaches;

  • learning how to program and hack;

  • finding sites to steal card data.

Strange as it may seem, this kind of malicious chatbot is not radically new or unique. Just earlier this month, another AI chatbot called WormGPT began to be heavily advertised on dark web forums.

Although ChatGPT and other AI systems are usually built with ethical restrictions, it is possible to strip those restrictions away. Criminals take advantage of this and successfully capitalize on it by selling access to their creations to other criminals.


The emergence of FraudGPT and similar fraudulent tools is an alarming signal of how the power of artificial intelligence can be abused. So far, we are mainly talking only about phishing, i.e. the initial stage of an attack. The main hope is that, over time, such chatbots do not learn to conduct the entire attack cycle from start to finish in automatic mode, because in speed and methodicalness artificial intelligence would then have no equal.

** A BEC (Business Email Compromise) attack is a type of cyberattack in which an attacker gains access to a target organization's email account through phishing, social engineering, or purchasing credentials on the darknet. Then, posing as an employee, executive, or supplier of the company, the attacker tricks accountants into approving a fraudulent wire transfer request.


** Phishing is a fraud technique in which an attacker attempts to obtain personal information, such as passwords, bank card numbers, and other sensitive data, by spoofing emails, websites, apps, and other forms of internet communication.
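
For readers who want a concrete picture of what "spoofing" looks like in practice, below is a minimal, purely illustrative Python sketch of a lookalike-domain check that a mail filter could apply to incoming sender addresses. The trusted-domain list, distance threshold, and example domains are hypothetical assumptions for this sketch and are not drawn from the article or from any real product.

# Illustrative sketch only (not from the article, not any specific product):
# a minimal lookalike-domain check of the kind a mail filter might apply to
# spot spoofed sender addresses used in phishing and BEC emails.
# TRUSTED_DOMAINS and the distance threshold are hypothetical examples.

TRUSTED_DOMAINS = ["example-corp.com", "paypal.com", "microsoft.com"]


def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed with a rolling dynamic-programming row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (0 if chars match)
            ))
        prev = curr
    return prev[-1]


def looks_spoofed(sender_domain: str, max_distance: int = 2) -> bool:
    """Flag domains that are close to, but not identical to, a trusted domain."""
    domain = sender_domain.lower()
    for trusted in TRUSTED_DOMAINS:
        distance = edit_distance(domain, trusted)
        if 0 < distance <= max_distance:
            return True
    return False


if __name__ == "__main__":
    for domain in ["paypa1.com", "rnicrosoft.com", "paypal.com"]:
        verdict = "suspicious" if looks_spoofed(domain) else "ok"
        print(f"{domain}: {verdict}")

Real-world mail filters combine many more signals (sender authentication, sending history, link reputation), so a check like this is only one small piece of a defense against convincingly written phishing emails.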
