It’s safe to say that no technology has taken the world by storm over the past six months quite like ChatGPT. Since its launch in November last year, OpenAI's chatbot has captivated users with its incredible ability to mimic human expertise on almost any topic.
As of March 2023, 8.2 per cent of the global workforce had experimented with ChatGPT for work tasks at least once. Of those who had used the chatbot, 3.1 per cent had entered confidential corporate data into the artificial intelligence (AI) tool.
As ChatGPT reaches new heights of popularity, experts and users alike have begun questioning the security and privacy implications of using the chatbot. Like any new technology, there are questions to be asked about the safety of using ChatGPT – especially in the workplace, where employees handle large amounts of sensitive information.
But is ChatGPT safe to use? In this article, we’ll help you understand how ChatGPT and other chatbots impact your security, exploring how the technology uses your data, the potential security risks, and the measures AI companies take to protect their users.
Is ChatGPT safe to use?
In short, yes. ChatGPT is safe to use in the majority of cases, such as for generating creative content, translating text, or simply asking questions.
AI chatbots like ChatGPT are built on large language models (LLMs), which are designed to mimic natural language and create content safely and effectively. These models use data taken from the web to generate content on almost any topic in natural human language.
OpenAI has also implemented a variety of security measures across its platform to protect users from potential risks that could arise from using the chatbot. These measures include:
- Encryption - Conversations with ChatGPT are encrypted in transit and at rest, protecting them from unauthorised access by outside parties (though, as discussed below, OpenAI itself can still access them).
- Moderation API - OpenAI offers a Moderation API that developers and platform owners can integrate into their applications. This API helps prevent content that violates usage policies from being shown to users (see the sketch after this list).
- User flagging and feedback - Users can provide feedback on problematic model outputs through the user interface. This feedback helps OpenAI identify areas for improvement and address potential issues.
- Incident response plans - OpenAI has incident response plans to manage and communicate security breaches effectively. These plans help to minimise the impact of any potential breach and keep affected users informed.
- Bug bounty program - As well as regular audits, OpenAI runs a Bug Bounty Program that encourages ethical hackers, security researchers, and tech enthusiasts to identify and report security vulnerabilities. This allows the company to patch flaws before they can be exploited and better protect its users.
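As an illustration of that moderation layer, here is a minimal sketch of how a developer might screen text against OpenAI's moderation endpoint before displaying it. The endpoint URL and response shape follow OpenAI's public API documentation, but the helper function and key handling are our own illustrative assumptions, not OpenAI code:

```python
import os
import requests

# Hypothetical helper (not OpenAI code): checks a piece of text against
# OpenAI's moderation endpoint before it is shown to users.
def is_flagged(text: str) -> bool:
    response = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=10,
    )
    response.raise_for_status()
    # The endpoint returns one result per input, each with a boolean
    # "flagged" field plus per-category scores.
    return response.json()["results"][0]["flagged"]

if is_flagged("Some user-submitted text"):
    print("Content violates usage policies; do not display it.")
```

In practice, a platform would typically run a check like this on both user prompts and model outputs, routing borderline cases to human review.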
While the specific technical details of OpenAI’s security measures are not publicly disclosed, these procedures are designed to protect users from the majority of external threats.
How ChatGPT handles your data
Like many other chatbots, ChatGPT uses data taken from conversations to train the large language model that powers its responses. This means that anything you enter into ChatGPT is collected and stored on OpenAI’s servers to improve the model's natural language capabilities.
Before April 2023, this data training was compulsory, meaning that data from all user conversations was stored by OpenAI and used to train its LLM.
However, OpenAI now gives users the ability to disable this data training by turning off chat history in ChatGPT, allowing them to decide whether their conversations will be used to train and improve its models.
Still, for users with chat history enabled, no conversation is confidential. OpenAI’s privacy policy states that the company collects any personal data you share in conversations, which is then used to train its AI model.
It also states that specific prompts cannot be deleted from your history, so any personal or sensitive information you share is there to stay and could be reviewed by human AI trainers.
This is what led the electronics giant Samsung to ban the use of ChatGPT in the workplace after employees inadvertently shared trade secrets with the chatbot.
Other large companies and financial institutions, including JPMorgan, Bank of America and Citigroup, have also banned or restricted the use of ChatGPT due to this risk, citing strict financial regulations around customer data.
It is important that employees do not share any confidential information with ChatGPT, as this data can be fed back into the model and resurface in future responses.
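One pragmatic safeguard for workplaces that still allow ChatGPT is to scrub obvious personal identifiers from prompts before they leave the organisation. The sketch below is purely illustrative and not an OpenAI feature: it masks email addresses and phone-number-like strings with simple regular expressions, and a real deployment would need far broader detection (names, account numbers, internal project codes and so on):

```python
import re

# Illustrative patterns only; real PII detection needs much more coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(prompt: str) -> str:
    """Mask email addresses and phone-like numbers before sending a prompt."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

print(redact("Contact jane.doe@acme.com or +44 20 7946 0958 about the Q3 figures."))
# -> "Contact [EMAIL] or [PHONE] about the Q3 figures."
```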
Other than conversational data, OpenAI also collects:
- Account information, including your name, email address and contact details.
- Log data, such as your IP address, browser type and settings, and how you interact with the ChatGPT website.
- Device information, including the device name, operating system and device identifiers.
- Analytics data, collected via cookies and used to help OpenAI “analyse how users use its service.”
All of this data may be shared with OpenAI’s affiliates, its third-party vendors and parties involved in any corporate transactions, though OpenAI says it does not sell data to third parties for marketing or advertising purposes.
Security risks of ChatGPT
Aside from the risk of exposing confidential information to ChatGPT, there are other significant risks users should be aware of.
Like any online platform, ChatGPT could fall victim to a data breach. This could see malicious actors gaining access to private conversation data, account information and other sensitive material in an attack.
OpenAI has robust measures in place to protect itself against the risk of cyber attacks, but no system is immune. The reality is that most security breaches are caused by human error rather than failed defences, so the risk is always there.
If a data breach did occur, hackers could steal the data OpenAI stores and use the information for nefarious purposes. Security experts have also warned that threat actors could transform ChatGPT into a weapon for cybercrime by using the chatbot to launch sophisticated phishing campaigns.
While OpenAI’s Moderation API is designed to prevent ChatGPT from being used for this purpose, researchers from the security firm Check Point found that hackers have already bypassed these restrictions by using the ChatGPT API in a malicious Telegram channel.
Other malicious GPT-powered tools, such as the recently discovered WormGPT, are lowering the barrier to cybercrime by allowing hackers with little technical knowledge to launch sophisticated cyber attacks almost instantly.
How to stay safe when using ChatGPT
While the risks of using ChatGPT are clear, there are a number of steps people can take to protect themselves and their company’s confidential information when using the chatbot.
Best practices to keep safe when using ChatGPT include:
- Avoiding sharing personal or sensitive details when talking to ChatGPT, especially when using it in the workplace.
- Carefully reading OpenAI’s privacy policy to understand how your chats are stored and used.
- Using pseudonymous or fake accounts for ChatGPT so that your real identity is not connected to your chats.
- Keeping up with any changes to OpenAI's security and privacy policies.
Overall, ChatGPT is a remarkably safe platform. OpenAI has implemented robust security measures, data handling practices, and privacy policies to keep its platform secure and safe for its users. Like any online service, however, users need to be aware of the risks and be careful when sharing any personal information with the chatbot.