- The rise of ChatGPT has attracted millions of users and businesses worldwide
- This growth has in turn raised cybersecurity questions about the safety of user data
- According to a Singaporean cybersecurity firm, more than 100,000 login credentials belonging to ChatGPT users have been leaked to the dark web
Over the past year, a significant cybersecurity breach has emerged, putting the data privacy of ChatGPT users at risk. According to a report by Singaporean cybersecurity firm Group-IB, more than 100,000 login credentials for the popular artificial intelligence chatbot were leaked and traded on dark web marketplaces between June 2022 and May 2023.
Group-IB’s threat intelligence head, Dmitry Shestakov, revealed that each compromised record contained a username-and-password combination for ChatGPT. The trend peaked in May 2023, when nearly 27,000 ChatGPT-related credentials were put up for sale on online black markets.
The Asia-Pacific region accounted for the highest share of compromised logins, approximately 40% of the total. Among individual countries, India led with over 12,500 leaked logins, while the United States ranked sixth with nearly 3,000. France placed seventh, the highest among European countries.
ChatGPT users can create accounts directly through OpenAI or authenticate with their Google, Microsoft, or Apple accounts. Group-IB’s research did not break down which sign-up methods cybercriminals targeted, but accounts using direct authentication were likely the predominant victims. It is important to note, however, that OpenAI itself is not responsible for the compromised logins.
Group-IB’s blog post highlighted an emerging trend of employees using ChatGPT for work-related purposes. Because user queries and chat history are stored by default, this raises concerns about the exposure of confidential company information: unauthorized users could exploit the stored data to launch attacks against companies or individual employees, posing a significant risk to data privacy.
The cybersecurity firm emphasized that cybercriminals infected “thousands of individual user devices worldwide” to steal login information. This alarming situation underscores the critical importance of regularly updating software and implementing two-factor authentication to enhance security measures.
It is worth mentioning that Group-IB wrote its press release with the assistance of ChatGPT, highlighting the chatbot’s capabilities in generating written content. While ChatGPT’s AI capabilities are impressive, it is crucial to address security vulnerabilities and protect user data from unauthorized access and misuse.
OpenAI and ChatGPT users should remain vigilant and take proactive measures to safeguard their login credentials. This includes regularly changing passwords, enabling two-factor authentication, and monitoring for any suspicious activity related to their accounts.
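One practical way for users to monitor for compromised credentials is the range-query (k-anonymity) scheme used by breach-checking services such as Have I Been Pwned's Pwned Passwords API. The idea is that only the first five characters of a password's SHA-1 hash are ever sent to the service; the service returns all matching hash suffixes, and the comparison happens locally, so the full password never leaves the device. A minimal sketch of the client-side hashing step (the API URL shown in the comment is the real Pwned Passwords endpoint, but this snippet deliberately stops short of making the network call):

```python
import hashlib

def hash_prefix_and_suffix(password: str) -> tuple[str, str]:
    """Split the SHA-1 hash of a password into the 5-character prefix
    that would be sent to the breach-checking service and the suffix
    that is compared locally against the returned candidate list."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hash_prefix_and_suffix("password123")
# Only `prefix` would be sent, e.g. to
# https://api.pwnedpasswords.com/range/<prefix>; the service responds with
# suffix:count lines, and the client checks whether `suffix` appears among
# them. The password itself (and its full hash) never leaves the device.
print(prefix, len(suffix))
```

This design lets users check whether a password has appeared in known breaches without disclosing the password to anyone, which is why it is a common building block in password managers' leak-monitoring features.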
In conclusion, the leakage and trading of over 100,000 ChatGPT login credentials on the dark web are cause for serious concern. Users must prioritize data privacy and adopt stringent security practices to mitigate the risks associated with such cybersecurity breaches. OpenAI and cybersecurity firms must collaborate to strengthen the platform’s security measures, ensuring a safer and more secure environment for all ChatGPT users.