It’s ChatGPT’s 1st Birthday!

By Robin Campbell-Burt, CEO at Code Red

A whole year has passed since OpenAI’s ChatGPT first became publicly available (we can’t believe it either!). It’s safe to say that generative AI has taken the world by storm: it is now routinely used to craft emails, assist with writing and provide access to vast amounts of information within seconds.

“This type of large language model (LLM) can largely mimic the neural networks of the human brain by way of Artificial Neural Networks (ANN),” points out Michael Adjei, Systems Engineer at Illumio. “All of a sudden, even the possibility of Artificial General Intelligence (AGI) and the theoretically far superior Artificial Super Intelligence (ASI) may not be an exaggeration of fiction titles anymore.”

Yes, the recent AI boom has already blown the minds of many and made advancements that we might once have thought impossible, possible. Ty Greenhalgh, Healthcare Industry Principal at Claroty, points out that “ChatGPT and generative AI are already being used by hospitals and clinics for medical transcription and patient communications.”

But with great success often comes great challenge. Although platforms such as ChatGPT offer real opportunities to improve productivity, there are global concerns about the privacy and security implications that come with them.

Ty Greenhalgh said: “With an increasingly connected landscape and larger attack surface, healthcare providers are extremely vulnerable to cyberattacks, especially if new technologies are implemented without robust security protocols in place during the adoption stage. If hackers gain access to a hospital’s building management system (BMS) or patient care systems through vulnerable AI apps meant to aid workflows, the consequences could be dire, impacting operations, patient care, or worse, potentially putting lives at risk.

“As tools like ChatGPT gain prominence as part of our daily lives, it’s important not to rush their adoption just because the benefits speak directly to your immediate needs and can be exciting to industries, like healthcare, that are in desperate need of labour support. Security teams must take the time to identify vulnerabilities, mitigate potential risk and build resiliency within their systems to prevent the unseen threats that many of these AI technologies come with.”

The explosion in AI use led to the UK hosting the first global AI Safety Summit at Bletchley Park in early November. This brought together leading AI nations, tech companies, researchers and civil society groups to pioneer action on the safe development of AI worldwide. But Michael Adjei, Systems Engineer at Illumio, isn’t so confident in its initial success: “An AI safety summit like the one in November 2023 should’ve not only addressed fears but also taken a step back to assess AI safety and security from a holistic view. Unfortunately, I’m not sure this was accomplished; at least not that time round.”

Whilst questions on the governance of AI remain, individual organisations will need to make decisions based on the risk factors for their business and customers.

John Pritchard, CPO at Radiant Logic, argues that organisations need to get on top of the quality and accuracy of the data they feed into tools like ChatGPT: “Before organisations invest time, finances and resources integrating GenAI into their decision-making processes, they need to first and foremost ensure their data is clean and of the best quality. GenAI’s effectiveness is directly dependent on the data it receives, and if businesses aren’t careful, they can exacerbate existing issues by making decisions based on inaccurate AI results. This means making sure your data set is accurate, up-to-date and does not have anomalies...”
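Pritchard’s point about accuracy, freshness and anomalies lends itself to a concrete illustration. Below is a minimal sketch of the kind of data-quality checks he describes, assuming a tabular dataset loaded with pandas; the file name and column names are hypothetical, not taken from Radiant Logic.

```python
# A minimal data-quality sketch, assuming a tabular dataset in pandas.
# The file name and columns ("customer_id", "revenue", "updated_at")
# are hypothetical placeholders for illustration only.
import pandas as pd

df = pd.read_csv("customer_data.csv", parse_dates=["updated_at"])

# Flag duplicate records that could skew anything built on this data.
duplicates = df[df.duplicated(subset=["customer_id"], keep=False)]

# Flag rows missing values in fields a downstream model depends on.
missing = df[df[["customer_id", "revenue"]].isna().any(axis=1)]

# Flag stale records: anything not updated in the past year.
stale = df[df["updated_at"] < pd.Timestamp.now() - pd.Timedelta(days=365)]

# Flag simple numeric anomalies: values > 3 standard deviations from the mean.
z_scores = (df["revenue"] - df["revenue"].mean()) / df["revenue"].std()
anomalies = df[z_scores.abs() > 3]

print(f"{len(duplicates)} duplicates, {len(missing)} incomplete rows, "
      f"{len(stale)} stale rows, {len(anomalies)} numeric anomalies")
```

Checks like these won’t guarantee clean data, but running them before feeding records into a GenAI pipeline is a cheap way to catch the “inaccurate AI results” problem Pritchard warns about.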

For most, the biggest concern around the development of generative AI is how to ensure that our data is kept safe and protected. As Fleming Shi, CTO at Barracuda, points out, generative AI is advancing far faster than the regulations intended to protect our data.

“The security risks of gen-AI are widely reported. For example, the LLMs (large language models) that underpin them are built from vast volumes of data and they can be distorted by the nature of that data. The sheer volume of information ingested carries privacy and data protection concerns. Regulatory controls and policy guardrails are trailing in the wake of gen-AI’s development and applications.

“Other risks include attacker abuse of GenAI capability. Generative AI allows attackers to strike faster, with better accuracy, minimising spelling errors and grammar issues that have acted as signals for phishing attacks. This makes attacks more evasive and convincing.”

Fleming continues: “As attackers become more efficient, it is even more critical that businesses use AI-based threat detection to outsmart targeted attacks.”

Andy Patel, Security Researcher at WithSecure, specialises in AI prompt engineering, artificial life and AI ethics. He argues that we have little transparency into, and control over, the mechanisms that serve us content, and that LLMs may be fuelling far more disinformation than we realise.

“While a vocal few continue to doom monger about existential threats from hypothetical far-future artificial superintelligences, the fact is we should be more concerned with how humans will abuse these tools in the short-term.”

Andy continues: “Examples of human misuse of language models, especially in the field of disinformation, are still mostly just academic. However, large language models may be contributing to disinformation far more than we are aware of, especially when considering short-form content that is commonly posted on social media sites.

“The potential for a flood of AI-driven disinformation is there, especially since social networks have all but killed off their content moderation efforts. We haven’t seen it yet, but we’re likely to see it soon. And I wouldn’t be surprised if adversaries are ramping up their capabilities ahead of 2024, a big election year.”

So, it seems that a fair amount of generative AI’s development is beyond our control. But what we can control is how we use AI in our own professional lives.

John Pritchard, CPO at Radiant Logic, believes that employee training on AI use is essential. “Businesses must also train their employees who will be overseeing the use of AI,” he said.

“While GenAI is an intelligent tool, it has not yet been perfected and can produce errors and wrong answers; human oversight remains critical to significantly reduce GenAI hallucinations and unwanted output. As GenAI is not advanced enough to fully function on its own, using it is more like collaborating with it. So, employees must also know how to frame instructions that an AI model can properly understand and interpret, a technique known as prompt engineering. With these steps, businesses can fully move forward with implementing GenAI and harness its full potential.”
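To make the prompt engineering point concrete, here is a minimal sketch contrasting a vague instruction with a well-framed one, using OpenAI’s Python client. The model name, prompts and report text are illustrative assumptions on our part, not an example from Radiant Logic.

```python
# A minimal prompt engineering sketch using OpenAI's Python client
# (pip install openai). Model name, prompts and report text are
# illustrative assumptions, not a recommended configuration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A vague instruction leaves the model to guess at scope, tone and format.
vague_prompt = "Summarise this incident report."

# A well-framed instruction constrains the output and tells the model
# what to do when information is missing, reducing hallucinated answers.
framed_prompt = (
    "You are assisting a security team. Summarise the incident report below "
    "in exactly three bullet points: what happened, which systems were "
    "affected, and the recommended next step. If any of these details are "
    "missing from the report, write 'not stated' rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": framed_prompt},
        {"role": "user", "content": "Report: Phishing email opened on HR laptop..."},
    ],
)
print(response.choices[0].message.content)
```

The difference between the two prompts captures Pritchard’s point: the framed version leaves far less room for the “errors and wrong answers” that unsupervised GenAI use can produce.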

Fleming Shi, CTO at Barracuda, adds: “[An] opportunity for generative AI is in enhancing cybersecurity training – basing it on actual attacks, and therefore more real, personal, timely and engaging than the mandatory, periodic, and simulated awareness training sessions.”

The 1st birthday of ChatGPT serves as a timely reminder of just how far we’ve come. The advancements in generative AI to date truly are amazing. But we need to remember that with amazing things often come amazing risks. Collective action on ensuring safe data use with generative AI, and training to spot the sophisticated AI-assisted attacks that lurk in the dark, should serve us well. In the words of Fleming Shi: “We can’t put the AI genie back in the bottle – but nor should we want to. What we need to do is harness its power for good.”
