Vincent Lavergne, RVP of System Engineering at F5 Networks, suggests some New Year’s resolutions for IT leaders to ensure successful transformation in 2020.
Without wheeling out all the usual clichés, 2019 has been another whirlwind of disruptive innovation and opportunity – with plenty of challenges to tackle and circumvent along the way.
The threat landscape mutated with predictable unpredictability, multi-cloud app deployments became mainstream fixtures and DevOps methodologies started exerting a newfound influence on business plans.
The big question is: what happens next? What anticipated app-centric trends will change the game and tear up the rulebook (again)?
Digital Transformation takes hold
2020 will see more organisations shift away from aspirational sloganeering to substantively embrace what can and should be a seismic step-change.
Inevitably, business leaders will get more involved in application decisions designed to differentiate or provide unique customer experiences.
Expect a new generation of applications that support the scaling and expansion of businesses’ digital models to emerge. This will include taking advantage of cloud-native infrastructures and driving automation through software development.
Further down the line, Digital Transformation efforts will likely be AI-assisted, particularly as they leverage more advanced capabilities in application platforms, telemetry, data analytics and ML/AI technologies.
End-to-end instrumentation will enable application services to emit telemetry and act on insights produced through AI-driven analytics. We anticipate that these distributed application services will improve performance, security, operability and adaptability without significant development effort.
The era of Application Capital
Applications are now firmly established as the main conduit for companies to develop and deliver goods and services. They have become modern enterprises' most important assets.
Even so, most still only have an approximate sense of how many applications they have, where they’re running, or whether they’re under threat.
This will soon change.
To manage Application Capital effectively, it is essential to establish a company-wide strategy that sets policy and ensures compliance. This includes addressing how applications are built, acquired, deployed, managed, secured and retired. At a high level, there are six distinct and unavoidable steps that need to take place: build an inventory, assess the cyber-risks, define application categories, identify the application services needed for specific activities, define deployment parameters and clarify roles and responsibilities.
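The six steps above can be sketched as a minimal inventory record. This is an illustrative sketch only – the field names, risk levels and categories are hypothetical assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

# One inventory record per application, with a field for each of the
# six steps. All names and categories here are illustrative.
@dataclass
class AppRecord:
    name: str                                      # step 1: inventory entry
    cyber_risk: str                                # step 2: e.g. "high", "low"
    category: str                                  # step 3: e.g. "customer-facing"
    services: list = field(default_factory=list)   # step 4: required app services
    deployment: str = "on-prem"                    # step 5: deployment parameters
    owner: str = "unassigned"                      # step 6: responsible role

inventory = [
    AppRecord("billing-portal", "high", "customer-facing",
              ["WAF", "TLS termination"], "public-cloud", "platform-team"),
    AppRecord("legacy-checkout", "high", "customer-facing",
              ["TLS termination"], "on-prem", "unassigned"),
    AppRecord("hr-intranet", "low", "internal", ["SSO"], "on-prem", "it-ops"),
]

# A question a strategy can now answer: which high-risk, customer-facing
# apps lack a web application firewall?
exposed = [a.name for a in inventory
           if a.cyber_risk == "high"
           and a.category == "customer-facing"
           and "WAF" not in a.services]
print(exposed)  # ['legacy-checkout']
```

Even a toy structure like this makes the point: once the inventory exists, the remaining steps – risk assessment, categorisation and role assignment – become queries rather than guesswork.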
The primary aim of an application strategy should always be to enhance and secure all digital capabilities – even as their reach and influence shift and expand.
Collaboration is key
The technical minutiae of DevOps methodologies and their associated tools got a lot of publicity in 2019.
2020 will be all about getting the culture right, marrying theory with best practice and unlocking new levels of productivity without upsetting the operational apple cart.
Culture is not optional. Team structure alone dramatically changes pipeline automation, with traditional single-function teams often falling behind their contemporary, DevOps-driven counterparts.
Consequently, we will see more collaborative team structures and alignment on key metrics that give NetOps additional means to focus on what the business requires: faster and more frequent deployments.
DevOps has a 10-year head start on NetOps in navigating and overcoming obstacles around certain types of integration, tools and skillsets. Collaborative teams can explode the status quo by promoting standardisation on tools that span from delivery to deployment (like Jenkins and GitHub/GitLab).
DevOps should not – and cannot – end with delivery. That means deployment functions – along with a complex pipeline of devices and application services – must be automated. This won’t happen without effective cultural realignment.
The return of the data centre
Conflating the adoption of SaaS with that of IaaS led to speculation that the cloud was cannibalising IT. Pundits warned that data centres would disappear.
The rumours were exaggerated. Data centres are still being built, expanded and run around the globe. The cloud hasn’t managed to – and likely never will – kill the data centre.
Early in 2019, an IDC executive told channel partners at the IGEL Disrupt conference that over 80% of companies they surveyed anticipated repatriating public cloud workloads. Security, visibility and performance remain common concerns.
Repatriation-related opportunities include improving availability of multi-cloud operational tools and a push towards application architectures that rely on more portable technologies such as containers.
The data centre is not dead. It’s just evolving.
Tailormade security
According to F5 Labs, PHP – the server-side language that has powered at least 80% of websites since 2013 – will continue to supply rich, soft targets for hackers. Situational awareness is critical to mitigate both vulnerabilities and threats.
Businesses are also realising that applications encompass more than just the code that they execute. Attention needs to be paid to everything that makes them tick, including architecture, configurations, other connectable assets and users. The prevalence of access attacks such as phishing is an obvious case in point.
F5 Labs analysis of 2019 breach data confirms the need for risk-based security programmes instead of perfunctory best-practice posturing or checklists. Organisations need to tailor controls to reflect the threats they actually face. The first step in any risk assessment is a substantive (and ongoing) inventory process.
As ever, the industry will gradually incorporate emerging risks into business models. For example, cloud computing has gradually shifted from a bleeding-edge risk to a cornerstone of modern infrastructure. The risks associated with the cloud have either been mitigated or displaced to contractual risk in the form of service level agreements and audits.
Keeping a watchful eye on APIs
The word is out. Application programming interfaces (APIs) can transform business models and directly generate revenue. Cybercriminals know this.
More than ever, organisations need to focus on the API layer, particularly in terms of securing access to the business functions they represent.
One of the biggest issues is overly broad permissions, which means attacks through the API can give bad actors visibility into everything within the application infrastructure. API calls are also prone to the usual web request pitfalls such as injections, credential brute force, parameter tampering and session snooping.
Visibility is another major and pervasive problem. Organisations of every stripe – including IT vendors – have a notoriously poor track record of maintaining situational API awareness.
API security can be implemented directly in an application or, even better, in an API gateway. An API gateway can further protect APIs with capabilities like rate limiting (to prevent denial of service attacks) and authorisation. Authorisation narrows access to APIs by allowing access to specific API calls to only specified clients, usually identified by tokens or API keys. An API gateway can also limit the HTTP methods used and log attempts to abuse other methods so you’re aware of attempted attacks.
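The gateway-side checks described above – token-based authorisation, an HTTP method allowlist and per-client rate limiting – can be sketched in a few lines. The keys, routes and limits below are illustrative assumptions, not any particular gateway's API:

```python
import time
from collections import defaultdict

API_KEYS = {"key-abc": "client-1"}             # hypothetical issued keys
ALLOWED_METHODS = {"/orders": {"GET", "POST"}} # per-path method allowlist
RATE_LIMIT = 5                                 # requests per window
WINDOW = 60                                    # window length in seconds

_requests = defaultdict(list)                  # client -> request timestamps

def gateway_check(api_key, method, path, now=None):
    """Return (allowed, reason); denial reasons are what a gateway would log."""
    now = now if now is not None else time.time()
    client = API_KEYS.get(api_key)
    if client is None:
        return False, "unknown api key"        # authorisation failure
    if method not in ALLOWED_METHODS.get(path, set()):
        return False, "method not allowed"     # log attempted method abuse
    # Sliding-window rate limit: keep only timestamps inside the window.
    recent = [t for t in _requests[client] if now - t < WINDOW]
    if len(recent) >= RATE_LIMIT:
        return False, "rate limit exceeded"
    recent.append(now)
    _requests[client] = recent
    return True, "ok"

print(gateway_check("key-abc", "GET", "/orders"))     # (True, 'ok')
print(gateway_check("key-abc", "DELETE", "/orders"))  # (False, 'method not allowed')
```

Centralising these checks in one place, rather than re-implementing them inside each application, is precisely the argument for putting them in a gateway.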
All this is of course the tip of an increasingly interconnected iceberg. Any New Year's resolution worth its salt should include a commitment to comprehensively master the development, deployment, operation and governance of the application portfolio. The best way to do this – and to gain visibility into the code-to-customer pathways for all applications – is to leverage a consistent set of multi-cloud application services. Here's to a safe, innovative and transformational 2020.