Research — 3 Mar, 2022
Introduction
Many companies are now becoming digital service providers, raising their software IQ to successfully compete in the digital economy — a process accelerated by the global pandemic. Underpinning this transformation, cloud-native IT is a strategic priority for the majority of enterprises undertaking new development and application modernization. Re-platforming to cloud native is moving into the mainstream for all kinds of organizations — particularly for the telco market's transition to 5G. More generally, accelerating application modernization is seen as the way to secure the benefits of cloud computing, but it will take several years. As the majority of work still lies ahead, a decade of opportunity beckons for those that can help deliver cloud-native benefits as advertised: better, faster and cheaper.
We are entering an era in which cloud will no longer be seen as a separate IT category: quite simply, cloud is now IT. This is already the case for hosted environments and is now being confirmed by on-premises "flexible" infrastructure: it is the consumption-based, service-driven, retail-model discipline that delivers the cloud experience, not the execution venue per se. As for software applications, "cloud nativity" has become the default approach to deployment, and the industry is rotating to the as-a-service/subscription consumption model. In the future, all of this will likely be delivered together, with a single bill and a unified customer experience, regardless of the back-end architecture or the vendor fulfilling it.
Cambrian explosion
It has taken less than a decade to reach the point where cloud native is now the prevailing mindset and methodology for application and infrastructure architecture. Adoption by IT teams is strong, aided in no small part by the industry's ongoing enthusiasm and the continued leadership of the Cloud Native Computing Foundation, and in large part by the ingenuity of the open-source community, arguably the cradle of cloud nativity.
"Cloud native" is a set of overlapping technologies that do not necessarily need to run on public cloud but grew out of public cloud scalability and provisioning. Collectively, these technologies have moved the addressable layer of infrastructure from the server and the virtual machine up and into the application, which allows a lot of new and great things to happen in terms of resiliency, security, connectivity and more. Nearly all of the components are open-source software projects at their core, and now, with the maturity of open source, there is no shortage of commercial vendors ready to provide the type of support that organizations want, especially in Kubernetes, the popular open-source system for automating deployment, scaling and management of containerized applications.
The cloud-native market is undergoing a Cambrian explosion — the breadth of goods and services available in the market is astonishing, with more arriving at a furious pace. 451 Research's Cloud Price Index now tracks more than 3 million product SKUs that can be purchased from the major hyperscaler cloud providers alone. Vendors and service providers are throwing new value-added services at the wall to see what sticks. Enterprises are experimenting. Investors are making increasingly bigger bets. In short, the market is thrashing and crowded — and there's lots of confusion.
Cloud-native engineering
One of the principles of cloud-native engineering is being able to decompose applications developed in the pre-cloud era, which can be inflexible, hard to update and unwieldy to manage, almost to the point where people no longer know what is in them. The cloud-native approach lets organizations abstract functional components of applications into microservices that can be upgraded and maintained independently. That opens up a world of innovative possibilities, especially given the tools the cloud providers are supplying, such as applying machine learning and artificial intelligence to streaming data. Cloud native works because it is a culture of application programming interfaces, or APIs, which render infrastructure largely invisible to the developer and technology consumer.
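As a rough illustration of that decomposition, here is a minimal sketch of a single-purpose microservice, assuming the FastAPI framework; the service name, endpoint and data are hypothetical. The point is that one functional component sits behind an API and can be deployed, scaled and upgraded independently of the rest of the application.

```python
# Minimal sketch of a single-purpose microservice, assuming FastAPI.
# Callers interact only with the API; the infrastructure behind it is invisible.
from fastapi import FastAPI

app = FastAPI(title="pricing-service")

PRICES = {"basic": 10.0, "pro": 25.0}  # stand-in for the service's own data store


@app.get("/price/{plan}")
def get_price(plan: str) -> dict:
    """Return the price for a plan; this one function is the whole service."""
    return {"plan": plan, "price": PRICES.get(plan)}

# Run locally with, for example: uvicorn pricing_service:app --port 8080
```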
Some of this is just down to good timing. From a tool-chain point of view, the abstraction that containers provide arrived and evolved at a time when enterprises were trying to figure out how best to package and deploy applications in the cloud. Kubernetes has come of age as organizations grapple with managing hybrid and multicloud infrastructure. Serverless approaches provide infrastructure abstraction, while service meshes enable networking to be added independently, but there is still a high degree of complexity and a lot of runway for more abstraction. It is precisely the granularity of the components in cloud-native applications, and of the connectivity between them, that gives rise to this complexity. However, the ability to log and trace all of the activity that is happening, known collectively as "observability," is making it possible to automate many of the operational functions that previously had to be human-mediated.
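A minimal sketch of what such instrumentation can look like follows, assuming the OpenTelemetry Python SDK; the service and span names are illustrative, and a real deployment would export spans to a collector or observability back end rather than the console.

```python
# Minimal sketch: tracing a request with the OpenTelemetry Python SDK,
# exporting spans to the console. Span and attribute names are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("handle_order") as span:
    span.set_attribute("order.items", 3)
    with tracer.start_as_current_span("charge_payment"):
        pass  # a call to the payment microservice would go here
```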
Complexity
This is where the notion of "shift left" in cloud-native development comes from: code in composable applications can be instrumented with security early in the development process, and with connectivity via service mesh. As applications become more complex, this opens up the possibility of automation based on the telemetry data coming from the processing of those jobs. It is increased automation that will ultimately tame much of the complexity generated by greater quantities and wider dispersal of services.
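One hedged illustration of shifting left is running a static security scan as a gate in the build pipeline rather than after deployment. The sketch below assumes Bandit, an open-source Python security analyzer (the article names no specific tool), and a src directory to scan.

```python
# Minimal sketch of a "shift left" gate: scan the code for security issues
# during the build, before it ever reaches a production cluster.
# Assumes Bandit is installed and the code lives under src/.
import subprocess
import sys

result = subprocess.run(["bandit", "-r", "src"], capture_output=True, text=True)
print(result.stdout)

if result.returncode != 0:
    # Bandit returns a non-zero exit code when it reports findings,
    # so the pipeline fails early instead of shipping the issue.
    sys.exit("Security findings detected; blocking the build.")
```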
Applications, abstraction and automation characterize the move out of corporate data centers and into public clouds, taking advantage of the scalability and resiliency of those environments. However, this does not happen independently of the cultural and organizational change needed to deliver it. Shifts in the software development mindset, such as the move to agile development, are complex processes. For cloud native, it is not just about technology; similar organizational-process stumbling blocks stand in the way.
Providers are providing
A lot of IT departments, especially in large enterprises, have grown up around a non-cloud-native model of using, operating and provisioning IT. Part of the change requires helping organizations see the benefits of the end state and recognize that the work can be done incrementally and continuously. Carving out functional pieces as part of modernizing back-end applications can make those possibilities concrete.
There is also a wealth of training, webinars and certifications available. The cloud providers themselves have been packaging some of these technologies into turnkey platforms that take away much of the complexity and learning curve, helping lines of business get on board without, for example, having to deal with operating containers and Kubernetes. There is a sacrifice here, to be sure: organizations won't have access to all of the knobs and levers they might ultimately need. The land grab among service providers to get organizations onto their platforms means customers will need to deal with the intricacies of configuration a little later. The bad news is that organizations may not have quite so many levers or knobs to turn; the good news is that they won't have quite so many levers or knobs to deal with in the first place.
Providers are showing that they can package up reliability, security, and the optimization of cost and performance, and that, with this granular view of the infrastructure, they can respond to it in an automated way. They end up taking a lot of the work off IT operations teams by putting it into the infrastructure and the orchestration platform. This can help with staff shortages and skills gaps because these systems do not require the same volume of work. It frees developers, operations staff, combined DevOps teams and site reliability engineers from configuring environments and other manual tasks so they can focus instead on new features, new products and innovation. The result is continuous improvement driven by a feedback loop.
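A simplified sketch of such a feedback loop follows: read a telemetry signal, compare it with a target and adjust the workload accordingly. The metric source, threshold and deployment name are assumptions; in practice, managed platforms and autoscalers package this loop so operations teams do not have to hand-roll it.

```python
# Minimal sketch of a telemetry-driven feedback loop using the Kubernetes
# Python client. The metric source, threshold and deployment name are
# hypothetical; real platforms ship this logic as built-in autoscaling.
from kubernetes import client, config

def current_p95_latency_ms() -> float:
    """Stand-in for a query against a metrics/observability back end."""
    return 480.0

config.load_kube_config()
apps = client.AppsV1Api()

if current_p95_latency_ms() > 400.0:
    # Raise the declared replica count; Kubernetes reconciles the rest.
    apps.patch_namespaced_deployment_scale(
        name="hello-web",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )
```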
2022 and beyond
The next steps, and a main focus for the industry in 2022, will be to continue reducing the complexity of cloud native with additional abstractions and improved integration of its parts to drive increased developer productivity. We expect to see increased use of systems' own data and intelligence to automate tedious, fiddly manual processes, a task that outgrows human comprehension as environments become more diverse and complex and ever more connections need to be managed and secured.
With the increased use of artificial intelligence and machine learning (i.e., automation) techniques, we are not far away from code that can write, or rewrite, itself, and there are numerous early-stage vendor projects to deliver this capability. However, this also requires organizational change, and the people side of the equation looms large as cloud-native engineering and DevOps talent remains in short supply.
This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.