Few aspects of business confound and irritate senior executives more than IT’s addiction to trends, buzzwords and impenetrable acronyms; the seemingly endless complexity of the organization’s IT landscape; the resulting growth of technology budgets; and ultimately – despite all of that incremental spend – the declining productivity and efficiency of software development and delivery.
The latest trend dominating the software technology headlines is “serverless computing.” This new approach to developing web-delivered applications has two primary benefits: it enables clients, designers and software engineers to focus solely on needed functionality, features and business logic without getting mired in the traditional complexities of underlying system software decisions, and it removes the effort required to configure and manage the complex web of software typically needed to bring an application to life in the cloud.
Despite all the failed promises of the past, there is a chance this new approach to developing and delivering applications might be the start of a more productive and cost-effective future for IT. Serverless computing will likely have a profound influence on the speed with which both consumer-facing web applications and internal business applications are built and deployed, and on the cost of managing them once in production.
Serverless computing is a relatively new concept and not every organization needs – or wants – to be on the bleeding edge of technology, but the likely disruptive impact of this new approach means that executives should now be asking their technology organizations and advisors what strategies are being put in place to take advantage of this new way of developing and delivering software.
Organizations that adopt this new approach are likely to gain significant economic and competitive advantages over those that lag behind.
A history of complexity
The history of business IT is one of growing complexity: mainframe computing in the 1960s was followed by minicomputers in the seventies, the democratization of IT with the introduction of the PC in the eighties, the explosion of PC server-based computing in the nineties, global-scale computing with the internet, and now billions of mobile devices attached to hyper-scale cloud computing infrastructure.
Each of these eras witnessed several orders of magnitude growth in the number of computing devices to be managed and in the complexity of the software, organizations and skills required to manage them. The demands on IT organizations and their budgets grew as the complexity of the computing environment exploded.
With complexity comes cost
Over the decades, a growing proportion of most organizations’ budgets was devoted to managing complex technology infrastructures. IT organizations grew as more staff were required to manage all of the servers and the stacks of technology upon which the organization became critically dependent.
Outsourcing was seen as the magic bullet for the rising cost of IT management. But outsourcing was founded on a false premise: that aggregating the complex IT environments of many customers would lead to economies of scale. That never happened, and the economics of large outsourcing deals rarely became viable.
Many customers who took the outsourcing route discovered their IT cost structures did not decline as anticipated, nor did the efficiency of service delivery improve – indeed, in many cases, it got far worse.
Complexity as business strategy
As company revenue streams and missions became ever more dependent on the effective delivery of software and online services in the 2000s, placing the destiny of your technology strategy in the hands of a third-party outsourcing partner began to look increasingly risky. Fortunately, a solution was at hand.
The fast-growing software and web service companies, including Google, Amazon, Microsoft and others, had all been struggling with how to manage their own massively complex computing infrastructures. Out of their needs and with huge investment in R&D came a new generation of tools and techniques to tame and manage large complex IT environments.
These same companies realized that the management techniques and technologies they had developed for their own needs could also be sold as a service to help clients run their IT infrastructures. From that realization emerged cloud computing.
Cloudy, but a hint of sunshine
Cloud computing has had a profound impact on the delivery of IT over the last decade. Unlike outsourcing, cloud computing solves many of the complexity issues faced by IT organizations by standardizing the infrastructure and services required to deliver new software solutions.
In recent years there has been a massive shift away from organizations running their own physical computing infrastructures toward buying computing as a utility from the likes of Amazon Web Services, Microsoft Azure and Google Compute Engine. The efficiency of technology spend has increased by shifting from investment in fixed, privately owned infrastructure to elastic cloud services where an organization pays only for what it consumes.
Cloud computing is certainly delivering advantages to companies that no longer need to invest in and manage their own physical computing infrastructures. But senior executives are beginning to realize that moving to the cloud is not yet a panacea for all of IT’s ills. Despite improvements, the speed and efficiency with which new software and services are delivered still often lag far behind the needs of the business.
It’s (still) complicated
Companies and organizations that migrate their computing to the cloud benefit from getting out of the physical infrastructure business, but unfortunately most of the complexities of the software applications and services that have been moved to the cloud still need to be managed.
A modern cloud-based application is composed of a complex web of interacting services, and each of those services requires a complex stack of underlying system software to operate. Cloud computing may have replaced an organization’s physical servers with a set of utility services, but the complexity of applications and the level of staffing and investment required to build, deploy and operate them really has not changed very much.
A substantial percentage of most IT budgets is still spent on the staffing and skills required to manage the complex software environments demanded by the company’s application portfolio. This complexity acts as a brake on how quickly new features, services and capabilities can be delivered. The dream of most senior executives – that IT might eventually be able to deliver at the speed of the business with ever-increasing efficiency – will never be realized until the complexity beast has been tamed.
Taming complexity
“Serverless computing” is the latest buzz phrase to emerge from the cloud computing domain. There’s a chance, albeit with much still to prove, that this latest innovation may finally help technology organizations overcome the complexity that holds back the pursuit of high-velocity, cost-effective software delivery.
Each of the major cloud computing vendors has been executing a very similar strategy: invest in R&D at the foundations of the computing architecture to remove the need for developers and customers to spend time managing complex system software. This allows IT organizations to refocus their efforts on delivering true value in the shape of new products and services to the organization. On top of these new services, a growing community of open-source projects is creating new software development tools and frameworks to unlock the full potential of this new approach.
The structure of a typical web-delivered service or application can be thought of in three layers. At the top of this ‘stack’ is the user experience: the layout, design and presentation of data that the consumer interacts with. Below this is a layer of application software that embodies the logical flow of the designed functionality of the application. At the base of the ‘stack’ is a complex web of system software that historically has needed to be configured and managed to bring the application to life: databases, messaging systems, operating systems and so on. A very significant proportion of traditional application development and subsequent operational effort goes into managing these underlying layers of software. All of the effort expended on this ‘scaffolding’ detracts from delivering value to the consumer and ultimately to the business – all of that effort is overhead.
Serverless computing is the next generation of software development tools and cloud computing services that hide the underlying system complexity so that software designers, developers and their clients can focus solely on the user experience of an application and the business logic and flow of events that deliver value to the consumer or end user. No complex set of system software or infrastructure plumbing needs to be designed, deployed and maintained.
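To make that contrast with the traditional stack concrete, the sketch below shows what a single serverless function might look like on AWS Lambda with a Python runtime. The function name, request shape and “orders” table are illustrative assumptions rather than details drawn from any particular product; the point is that the code contains only business logic, while the cloud provider supplies the compute, operating system and managed database underneath.

```python
# Minimal sketch of a serverless function (AWS Lambda style, Python runtime).
# The function name, event shape and "orders" table are illustrative assumptions.
import json
import boto3

# The provider operates the database service; the developer only names it here.
orders_table = boto3.resource("dynamodb").Table("orders")  # hypothetical table

def create_order(event, context):
    """Business logic only: validate the request and persist an order.
    No servers, operating systems or middleware are configured here."""
    body = json.loads(event.get("body", "{}"))
    if "customer_id" not in body or "items" not in body:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid order"})}

    orders_table.put_item(Item={
        "order_id": body.get("order_id", context.aws_request_id),
        "customer_id": body["customer_id"],
        "items": body["items"],
    })
    return {"statusCode": 201, "body": json.dumps({"status": "created"})}
```

Everything beneath that handler – patching, scaling, failover, capacity planning – becomes the provider’s responsibility rather than the development team’s.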
The new serverless computing model really is a breakthrough that, for the first time, has a real chance of radically reducing the complexity of application development and operational management.
New cloud services such as AWS Lambda, DynamoDB, Azure Functions and Google Cloud Functions, and development tools such as the Serverless Framework, offer a radical alternative for how corporate applications and consumer web services are built, deployed and managed. Companies such as Coca-Cola and Nordstrom are already integrating the new approach into their development strategies.
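For a sense of how little “plumbing” remains visible to the development team, here is a hedged sketch of a Serverless Framework configuration that could expose the hypothetical function from the earlier sketch as an HTTP endpoint; the service name, path and runtime are assumptions for illustration only.

```yaml
# serverless.yml – illustrative sketch, not a production configuration.
service: orders-service            # hypothetical service name

provider:
  name: aws                        # target cloud provider
  runtime: python3.9               # runtime for the functions in this service

functions:
  createOrder:
    handler: handler.create_order  # module.function containing the business logic
    events:
      - httpApi:                   # expose the function as an HTTP POST endpoint
          path: /orders
          method: post
```

With a definition like this, a single deployment command packages the code and provisions the cloud resources it needs; there are no servers to size, patch or monitor.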
Enabling designers and software engineers to focus solely on the functionality and feature needs of end users, without getting bogged down in the complexities of underlying system software, has multiple benefits. Radically accelerating the velocity of software delivery and getting new capabilities into the hands of consumers faster will change the economics of software development – and likely of the businesses that embrace this new approach.
Finally, maybe?
The serverless computing approach is in its infancy and is likely to take two or three more years to fully mature, but the experience of early adopters indicates we may be witnessing a fundamental and generational shift in the efficiency and speed with which new software and services are built, deployed and managed.
The old adage to “follow the money” is apt in this case. A substantial fraction of the cloud R&D budgets of Amazon, Microsoft and Google is now focused on building out the ecosystem of services and tools required to bring serverless computing to life, and VC firms such as Greylock Partners are adjusting their technology investment theses based on a belief that serverless computing will not only disrupt software development but fundamentally change business economics.
Not all organizations need to be, or are comfortable being, on the bleeding edge of technology development. But serverless computing is likely to have a profound impact on the speed with which organizations create revenue and mission value from new software products and services. Every senior executive needs to be aware of this coming disruption, and corporate strategy needs to evolve to leverage the opportunity afforded by these new serverless capabilities.