Editor’s Question: Is APAC’s complex cloud climate ripe for a storm?

Peter Lees, Head of Solution Architecture for Asia Pacific, SUSE, says there’s a fine line between controlling clouds and allowing them to cause a ‘hurricane of hurt’.

Australian companies are accustomed to quite a cloudy technology climate.

It’s birthed a cloud continuum comprising a broad spectrum of computing deployment models. Yesteryear’s on-premises infrastructures have made way for a blend of public and private clouds of many different flavours, headlined by AWS and Azure.

Research shows 93% of Australian enterprises use at least two cloud infrastructure providers; 30% of those lean on four or more. This is further supplemented with edge computing as technology increasingly supports the business need to decentralise.

That’s not a bad thing per se. Organisations have hedged their bets across various cloud modalities for enormous benefits. For most, it’s to improve the predictability of cloud costs and performance, maintain better control over what applications and data live where, provide high availability and redundancy (particularly to minimise the impact of an outage to one provider) and drive cost efficiency.

Outside the operational realm, it has also allowed them to invest in cloud providers that best serve their vast and specific business objectives across various departments.

However, there’s a fine line between controlling these clouds and allowing them to cause a hurricane of hurt. Preventing the latter from unfolding in a heterogeneous ecosystem demands consistency to reel in complexity, driving productivity among the developers responsible for maintaining those clouds and adopting a zero-trust mentality towards security.

Consistency is key

One size never fits all in the technology infrastructure game. However, the convenience of picking and choosing the cloud that fits best comes with the challenge of handling a highly distributed environment.

Heterogeneity is complex. How you manage, operate, automate and update operating systems, Kubernetes clusters and runtimes, management platforms, security and the software supply chain, to name a few, varies significantly across different clouds. With every provider wanting you to do it their way, consistency in methodology is key.


From the perspective of operating systems, Kubernetes distributions and cloud-native deployment, it’s a matter of finding consistent tools that can encompass multiple platforms in a scalable way – taking advantage of the native capabilities of those platforms while providing a consistent administrative interface and a single point to integrate ancillary components, such as security and identity management, usage monitoring, application catalogue and so on.

With the right set of ‘master’ tools, ICT teams can easily adapt to different underlying cloud platforms as and when the need arises, without an extensive review of management processes. This in turn provides a consistent experience for developers and other consumers of the infrastructure, which helps to streamline productivity.
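The idea of a single administrative interface over many platforms can be illustrated with a minimal sketch. The class names and methods below are hypothetical, not real cloud SDK calls: the point is that one operational procedure runs unchanged across providers, with provider-specific detail hidden behind a common interface.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: a consistent administrative interface that wraps
# provider-specific cluster APIs. AwsClusterManager / AzureClusterManager
# are illustrative stand-ins, not real SDK classes.
class ClusterManager(ABC):
    @abstractmethod
    def deploy(self, app: str) -> str: ...

    @abstractmethod
    def status(self, app: str) -> str: ...

class AwsClusterManager(ClusterManager):
    def deploy(self, app: str) -> str:
        # In practice this would call the provider's own APIs (e.g. EKS).
        return f"deployed {app} to AWS"

    def status(self, app: str) -> str:
        return "running"

class AzureClusterManager(ClusterManager):
    def deploy(self, app: str) -> str:
        # In practice this would call the provider's own APIs (e.g. AKS).
        return f"deployed {app} to Azure"

    def status(self, app: str) -> str:
        return "running"

def rollout(managers: list, app: str) -> list:
    """One rollout procedure, many underlying clouds."""
    return [m.deploy(app) for m in managers]
```

Because operators only ever touch `ClusterManager`, adding a new underlying platform means adding one adapter class, not relearning a management process.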

The added advantage of this approach is that it alleviates the pressure of needing to onboard an excessive number of platform engineers to manage very specific sets of technologies, applications, and clouds – no less during ongoing skills shortages. Instead, it allows companies to invest in high-level skills across their whole team.

Let developers develop

As with many jobs, developers are often pulled away from their forward-looking priorities to keep the lights on. Many of these tasks are cumbersome and border on menial – whether setting up CI/CD pipelines, integrating toolchains, or configuring coding environments in a fragmented cloud landscape.

CTOs and CIOs can free their developers from these operational tasks through platforms that address cross-cutting concerns while providing support for shared tools and services. This should be complemented by frictionless experiences that enable developers to focus on coding, both on local desktop machines and in remote environments on the cloud.
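One common shape for such a platform is a ‘golden path’: developers request an environment and the platform fills in the cross-cutting concerns for them. The sketch below is a hypothetical illustration of that pattern; the field names and defaults are invented, not a real platform API.

```python
from typing import Optional

# Hypothetical sketch of a platform 'golden path': sensible defaults for
# cross-cutting concerns, applied automatically so developers only state
# what is specific to their application. All values are illustrative.
STANDARD_DEFAULTS = {
    "ci_pipeline": "build-test-scan-deploy",
    "monitoring": "enabled",
    "registry": "registry.internal.example.com",
}

def provision_environment(app: str, overrides: Optional[dict] = None) -> dict:
    """Return an environment spec with platform defaults applied.

    Developers supply only the app name and any deliberate deviations;
    everything else is standardised by the platform team.
    """
    spec = {"app": app, **STANDARD_DEFAULTS}
    spec.update(overrides or {})
    return spec
```

The design choice here is that the default path requires no decisions at all, while deviations remain possible but explicit, which is what keeps the developer experience frictionless without blocking edge cases.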

The rise of AI-powered coding assistants – including GitHub’s Copilot and Amazon’s CodeWhisperer – is primed to tackle mundane coding tasks, leaving developers to focus on the innovative aspects of their jobs, such as creating intuitive features and user experiences. However, these tools can’t operate alone, and it’s crucial that their use never becomes the main driver of developer decision-making, but is instead limited to the use cases that generate the highest value.

Zero Trust across all clouds

About a decade ago, keeping technology safe was a fairly simple prospect: blending physical security, traditional networking and firewalls, and reasonable identity and access management tools was sufficient to keep monolithic applications and systems secure.

As distributed, cloud-native, microservices-based applications have increasingly become the industry standard for new development, the connected nature of the enterprise has vastly increased the potential attack surface of key systems. Application development is now frequently a matter of ‘gluing together’ different pieces of existing code, and these varied components often have complex dependencies of their own – the software supply chain – and so can be very difficult to assess holistically.

Between this increased use of code components, and an increase in the level of vulnerability research, it’s perhaps no surprise that we’ve seen a hike in software common vulnerabilities and exposures (CVEs), surging from 5,000 in 2013 to roughly 30,000 in 2023. Fortunately, known vulnerabilities can be protected against – but that still means the correct monitoring, controls, and secure software supply chain must be in place. 

The hike in hacks, exploits, and breaches has been met with a healthy rise in cyber security spending.

Market intelligence firm IDC noted that in 2024, with the frequency of cyberattacks increasing, Australia and India together contributed over 25% of security spending across Asia Pacific, excluding Japan.

Even where software code vulnerabilities aren’t the direct vector for attacks, the increasing complexity of distributed cloud-native applications and systems means configuration errors or omissions can introduce paths that can be exploited.

In recent memory we’ve seen some high-profile breaches of Australian brands, apparently as a result of this kind of issue – including Optus, Medibank and Ticketmaster – which clearly showed how impactful security incidents can be.

While continued security investment is essential, the vulnerability scanners that form part of that security spend only work against known threats. That exposes organisations to zero-day attacks – think Log4Shell – which target security vulnerabilities that have not previously been identified and for which no patch has been issued.

Another potential source of zero-day attacks is a bad actor within an organisation: proper implementation of zero trust means that even code from ‘the inside’ must be treated with caution.

Although the consensus is that no one is immune to vulnerabilities or breaches, an uptick in cloud-native security solutions that protect applications from zero-day attacks at runtime will help ensure threat actors are blocked even when malicious code is present, by detecting unauthorised access to sensitive resources, remote code execution and attempted data exfiltration.
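The essence of zero-trust runtime protection is a default-deny posture: rather than blocking only known-bad signatures, everything not explicitly allowed is refused, which is why it works against previously unseen attacks. A minimal sketch of that policy logic, with invented process names, paths and event fields purely for illustration:

```python
# Hypothetical sketch of zero-trust runtime enforcement. Instead of
# matching known-bad signatures, deny anything not on an explicit
# allowlist - so even never-before-seen (zero-day) behaviour is blocked.
ALLOWED_PROCESSES = {"nginx", "my-app"}      # illustrative names
ALLOWED_PATH_PREFIXES = {"/var/lib/my-app"}  # illustrative paths

def evaluate(event: dict) -> str:
    """Return 'allow' or 'deny' for a runtime event."""
    if event.get("type") == "exec":
        # Only explicitly allowlisted processes may run.
        return "allow" if event.get("process") in ALLOWED_PROCESSES else "deny"
    if event.get("type") == "file_access":
        # Only explicitly allowlisted locations may be touched.
        path = event.get("path", "")
        return "allow" if any(path.startswith(p) for p in ALLOWED_PATH_PREFIXES) else "deny"
    # Unknown event types are denied by default: that is the zero-trust posture.
    return "deny"
```

Note that the unexpected process in the example is blocked not because it matches any threat database, but simply because it was never granted trust in the first place.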

As digital landscapes expand, so will the reliance on multi-cloud environments, each with its own nuances as dictated by cloud providers. A combined focus on consistency, developer productivity and zero-trust security stands to strip away the complexity of managing heterogeneity, ultimately making the core objectives of technology investments achievable while mitigating the risk of a storm.

Intelligent CIO APAC