ChatGPT Account Information Available for Sale on the Dark Web
As the popularity of ChatGPT continues to rise, concerns about cybersecurity have surfaced. A recent report from a cybersecurity firm reveals that more than 100,000 login credentials for the generative AI platform are available for purchase on the dark web.
Businesses worldwide have embraced ChatGPT, utilizing its capabilities for a range of tasks, from drafting emails to writing code. However, the growing enthusiasm for this productivity-enhancing technology has resulted in significant cybersecurity issues that demand attention.
Today, cybersecurity firm Group-IB announced a significant discovery on dark web marketplaces: a massive collection of ChatGPT account credentials. The Singapore-based company identified 101,134 infected devices harboring saved login details for the generative AI platform.
Group-IB also traced the origin of these compromised devices and highlighted the most affected regions. According to its findings, the Asia-Pacific region was hit hardest with 40,999 compromised accounts, followed by the Middle East and Africa with 25,925 and Europe with 16,951.
Among individual countries, India had the highest number of compromised accounts at 12,632, followed by Pakistan (9,217), Brazil (6,531), Vietnam (4,771), and Egypt (4,588).
Assessing the Safety of ChatGPT
Given this revelation, concerns regarding the long-term safety of using ChatGPT are understandable. A professional at Group-IB shed light on the matter:
“Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials. At Group-IB, we continuously monitor underground communities to promptly identify such accounts.” – Dmitry Shestakov, Head of Threat Intelligence at Group-IB
The reality is that ChatGPT presents an enticing target for hackers, which makes the associated risks difficult to mitigate. The valuable information stored in chat histories will continue to attract new credential-theft techniques, and organizations will struggle to keep pace.
Protecting Yourself
While safeguarding your online information may seem daunting, there are measures you can take to ensure that utilizing ChatGPT doesn’t compromise your company’s sensitive data.
Firstly, refrain from sharing confidential or proprietary company data with ChatGPT. The platform retains chat history by default, so anything typed into it could be exposed if an account is compromised. In fact, some companies, such as Samsung, have gone as far as banning employee use of ChatGPT to prevent such incidents.
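Beyond policy, some teams screen prompts before they ever leave the corporate network. The snippet below is a minimal sketch of that idea in Python; the `redact` helper and the regular expressions are illustrative assumptions for this article, not part of ChatGPT or any particular data-loss-prevention product.

```python
import re

# Illustrative patterns only; a real deployment would rely on a dedicated
# data-loss-prevention (DLP) tool tuned to the organization's own data.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),        # card-like digit runs
    (re.compile(r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def redact(prompt: str) -> str:
    """Replace sensitive-looking substrings before a prompt leaves the company."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this thread: contact jane.doe@example.com, api_key=sk-12345abcdef"
    print(redact(raw))
    # Summarize this thread: contact [EMAIL], api_key=[REDACTED]
```

A production setup would pair this kind of filtering with a proper DLP gateway rather than hand-written patterns, but even a simple pre-submission check reduces what an attacker can harvest from a stolen account's chat history.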
For additional security, Group-IB recommends changing your password regularly and enabling two-factor authentication on your ChatGPT account. These precautions make stolen credentials far harder for attackers to abuse.
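Regular password changes help most when the new password is not already circulating in breach dumps. As an illustration, the sketch below queries the public Have I Been Pwned "Pwned Passwords" range endpoint, which uses k-anonymity so the full password hash never leaves your machine; the `password_exposed` function name is an assumption for this example and is not part of any ChatGPT or Group-IB tooling.

```python
import hashlib
import urllib.request

PWNED_RANGE_URL = "https://api.pwnedpasswords.com/range/"

def password_exposed(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus.

    Only the first five characters of the SHA-1 hash are sent to the service
    (k-anonymity), so the password itself never leaves this machine.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    request = urllib.request.Request(
        PWNED_RANGE_URL + prefix,
        headers={"User-Agent": "credential-hygiene-example"},
    )
    with urllib.request.urlopen(request) as response:
        body = response.read().decode("utf-8")
    # Each response line is "<HASH-SUFFIX>:<COUNT>".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = password_exposed("hunter2")  # deliberately weak example password
    print(f"Seen in {hits} breached data sets" if hits else "Not found in known breaches")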