How to Meet the AI Infrastructure Demands of the Future… Now

By Andre Reitenbach, CEO, Gcore

It's not unusual for the world to be on the cusp of technological change, but even by the standards of the last 100 years, artificial intelligence (AI) has the potential to be the most impactful technology of our lifetimes.  

The reason for this is a unique combination of factors. Amongst these are the low barrier to entry that allows developers to make use of the technology, and the fact that no single entity controls the AI ‘core platform’. This decentralisation has allowed innovation to flourish. Timing is also important. The deluge of data we are creating – reports suggest it could be as much as 1.34 trillion megabytes every day – feeds AI, enhancing how it learns, which in turn informs data analytics to improve outcomes. At last, organisations can realise their investment in Big Data, leading to an acceleration in the adoption of AI tools. The other crucial factor is speed. AI has been gradually changing our lives for the last decade as we have increasingly shopped online, ordered food deliveries through an app or streamed music and video. With the arrival of generative AI (genAI), however, that adoption has been turbo-charged, making this revolution faster than any other in living memory.

The rapid rise of AI

Ten years ago, we were grappling with how businesses could best benefit from cloud computing and entering an era of microservices and containers. Even two years ago, any notion of AI in the cloud was greeted with scepticism. But the rise of remote work, online communication and as-a-service models built on the public cloud has surpassed all expectations, and most companies now get their AI capabilities through cloud-based software.

Now, the debate about AI in the cloud is redundant. What organisations want to know is not whether they can deliver AI-enabled, cloud-based services to their customers, but how they can do so efficiently, securely, with high performance and, most importantly, economically. Enter Edge AI in the Cloud.

Living life on the edge

Fortunately for the enterprise market, the infrastructure required to deliver Edge AI in the Cloud has already been created by companies like Gcore to support another sector: online gaming. A single multiplayer role-playing game (World of Warcraft, for example) can attract more than a million simultaneous players, requiring huge volumes of data, extremely low latency to avoid visual degradation, and stringent cybersecurity protection. In addition, players can be located anywhere, which means the cloud infrastructure needs to span the entire globe.

Now, this infrastructure is being put to use by enterprises for their AI needs. To make it workable, cloud, network, security and AI must be connected in one platform. The cloud element is powered by dedicated cloud GPUs, which offer significant performance benefits over on-premises GPUs and enable organisations to train generative AI models, build proof-of-concept projects and launch AI solutions. The network element is designed to withstand demanding data loads with ultra-low latency, while cutting-edge cybersecurity tools prevent DDoS attacks on websites, applications and APIs.

Of course, once the AI models have been trained, enterprises need to roll them out and scale them to meet the needs of their customers, who are potentially located in multiple countries or across continents. This process is called inference, and it demands a huge amount of compute power, for which Edge AI has the perfect solution.

Because edge computing processes data close to the end user, latency is vastly reduced and services can be delivered quickly and securely. The Gcore network, for example, consists of over 150 points of presence around the world, hosted in reliable data centres; the company also offers genAI clusters, one of which is in Europe, powered by cloud GPUs. This means that enterprises based anywhere from North America to Australia can train their models efficiently in the cloud and serve them with low latency, in real time, removing pressure on bandwidth, accelerating data processing and keeping IT costs to a minimum.
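The routing idea behind this – send each request to the closest point of presence – can be sketched in a few lines of Python. This is a minimal illustration, not Gcore's implementation; the PoP names and latency figures are assumed for the example.

```python
# Minimal sketch of edge request routing for inference, assuming the client
# (or a DNS/anycast layer) has probed round-trip time to each point of
# presence. PoP names and latencies below are illustrative, not real data.

POPS_MS = {
    "frankfurt": 12.0,  # hypothetical round-trip times in milliseconds
    "singapore": 85.0,
    "ashburn": 95.0,
}

def nearest_pop(latencies_ms):
    """Return the point of presence with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Inference requests from this client would be routed to the winner.
print(nearest_pop(POPS_MS))
```

In practice this selection happens transparently in the network layer, but the principle is the same: the shorter the path between user and model, the lower the inference latency.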

Applying AI to business

To put the impact of AI, and particularly genAI, into perspective, McKinsey recently published the results of a global survey which found that, less than a year after many genAI tools (such as ChatGPT) were launched, one third of respondents said their organisations were using genAI regularly in at least one business function. These functions included sales and marketing, product development, and service operations such as customer care and back-office support.

Such is the explosive growth of AI that we can reasonably expect all aspects of our working and personal lives to be affected by it in some way in the coming year. This puts pressure on enterprises to ensure their infrastructure is fit for purpose. While planning for change, they should be reassured by the experiences of other industries, knowing they can meet the inevitable demands, regardless of location, and remain competitive in the AI world of the future.

