A recent report published by Datadog, a monitoring and observability cloud service provider, found that serverless computing is more popular than ever. An analysis of serverless computing use across Datadog customers found that more than 70% of AWS customers, 60% of Google Cloud customers, and 49% of Microsoft Azure customers use one or more serverless solutions.
Nothing new here, really; serverless is old news and is baked into the cloud development cake when it comes to picking the best development platform for net-new and migrated cloud applications. It’s fast, requires almost no infrastructure planning, and the applications seem to perform well. No-brainer, right? Not so fast.
Serverless computing promises to reduce infrastructure management and enhance developer productivity. However, as with any technology, there are downsides to consider. Most of the people picking serverless may not see the whole picture. Perhaps that’s you.
One of the primary concerns with serverless computing is cold-start latency. Unlike traditional cloud computing models, where virtual machines or containers are preprovisioned, serverless functions must be instantiated on demand. This enables dynamic scaling, but it introduces a delay known as a cold start, which can noticeably degrade application response time.
Although providers have improved this issue, it can still be a concern for applications with strict real-time performance requirements. I’ve had a few people tell me they needed to swap out serverless because of this, which delays development as teams scramble to find another platform.
You may be thinking that this is only an issue with apps that require real-time performance. There are more of those applications than you think. Perhaps it is a requirement of the application you’re about to push to a serverless platform.
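One practical first step is simply measuring how often cold starts hit your workload. A minimal sketch, assuming a function runtime that reuses execution environments between invocations (the AWS Lambda handler signature is used here for illustration): a module-level flag is initialized once per environment, so it is true only on the first call in that environment.

```python
# Sketch: detecting cold starts from inside a function handler.
# A module-level flag is set once when the execution environment loads;
# only the first invocation in that environment sees it as True.
# The handler signature follows the AWS Lambda convention, but the same
# idea applies to any FaaS runtime that reuses environments.

import time

_COLD_START = True          # set when the module is first loaded
_INIT_TIME = time.time()    # rough timestamp of environment creation

def handler(event, context=None):
    global _COLD_START
    was_cold = _COLD_START
    _COLD_START = False     # every later call in this environment is warm
    return {
        "cold_start": was_cold,
        "env_age_seconds": round(time.time() - _INIT_TIME, 3),
    }
```

Logging the `cold_start` field on every invocation gives you real numbers on cold-start frequency before you decide whether the latency is acceptable.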
This should be well understood, but I still run into developers and architects who believe that serverless applications are easily portable between cloud brands. Nope, containers are portable; serverless is different. I’ve seen “avoids vendor lock-in” in more than a few serverless computing presentations, which is a bit jarring.
Each cloud provider has its own serverless implementation, making it challenging to switch providers without significant code and infrastructure modifications. This can limit an organization’s flexibility and hinder its ability to adapt to changing business needs or take advantage of competitive offerings. With the movement toward multicloud deployments, this is a real limitation that needs to be factored in.
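You can’t eliminate that lock-in, but you can contain it. A common mitigation, sketched below with illustrative names, is to keep business logic in a provider-neutral core and confine each cloud’s event format to a thin, disposable adapter:

```python
# Sketch: isolating business logic from provider-specific handler shims.
# The names and event fields here are illustrative; a real project would
# map each provider's actual event schema into the neutral Request type.

from dataclasses import dataclass

@dataclass
class Request:
    path: str
    body: str

def process(req: Request) -> dict:
    # Provider-agnostic core: the only code worth keeping portable.
    return {"status": 200, "path": req.path, "echo": req.body}

# Thin per-provider adapters -- a few lines each, rewritten on migration:

def aws_handler(event, context=None):
    # AWS Lambda passes a dict event (fields shown are illustrative).
    return process(Request(path=event.get("rawPath", "/"),
                           body=event.get("body", "")))

def gcp_handler(flask_request):
    # Google Cloud Functions (HTTP) passes a Flask request object.
    return process(Request(path=flask_request.path,
                           body=flask_request.get_data(as_text=True)))
```

Switching providers then means rewriting a shim, not the application, though the surrounding infrastructure (IAM, triggers, queues) still has to be rebuilt.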
Debugging and monitoring
Traditional debugging techniques, such as logging into a server and inspecting the code, may not be feasible in a serverless environment. Additionally, monitoring the performance and health of individual serverless functions can be complicated, especially when dealing with many serverless functions spread across different services.
Organizations must invest in specialized tools and techniques to debug and monitor serverless applications effectively. These needs usually become clear only after problems arise, and at that point they can cause delays and cost overruns.
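The baseline technique behind most of those tools is structured logging with a correlation ID, so the logs of many short-lived function invocations can be stitched back into one request trace in a log aggregator. A minimal sketch, with illustrative field names:

```python
# Sketch: structured JSON logging with a correlation ID for serverless
# functions. One ID is generated at the edge and passed through every
# downstream function, so an aggregator can reassemble the full request.
# Field names are illustrative, not a specific platform's schema.

import json
import time
import uuid

def log_event(correlation_id: str, fn_name: str, level: str, message: str) -> str:
    record = {
        "ts": time.time(),
        "correlation_id": correlation_id,
        "function": fn_name,
        "level": level,
        "message": message,
    }
    line = json.dumps(record)
    print(line)  # serverless platforms typically capture stdout as logs
    return line

# Usage: mint one ID per request, thread it through each function call.
cid = str(uuid.uuid4())
log_event(cid, "resize-image", "INFO", "started")
```

Filtering the aggregator on a single `correlation_id` then substitutes for the server you can no longer log into.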
The big problem is cost management of deployed serverless systems. Serverless computing can provide cost savings by eliminating the need to manage and provision infrastructure (which many developers and architects screw up by overprovisioning resources). However, it is essential to monitor and control costs effectively, and because serverless systems dynamically allocate resources behind the scenes, cloud resource costs are hard to manage directly. Furthermore, as applications grow more complex, the number of processes and associated resources may increase, leading to unexpected cost overruns.
Organizations should closely monitor resource utilization and implement cost management strategies to avoid surprises, but most don’t, making serverless less cost-effective. Many organizations can operate applications in more cost-optimized ways by taking a non-serverless path for some applications.
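The break-even point is easy to estimate on the back of an envelope. The sketch below compares per-invocation serverless pricing against a flat always-on VM; the rates are illustrative assumptions, not current list prices, so plug in your provider’s actual numbers:

```python
# Sketch: back-of-the-envelope serverless vs. always-on VM cost comparison.
# All prices below are illustrative assumptions -- substitute your
# provider's actual rates before drawing conclusions.

GB_SECOND_PRICE = 0.0000166667    # assumed $ per GB-second of compute
REQUEST_PRICE = 0.20 / 1_000_000  # assumed $ per request
VM_MONTHLY_PRICE = 35.00          # assumed small always-on VM, $ per month

def serverless_monthly_cost(requests_per_month: int,
                            avg_duration_s: float,
                            memory_gb: float) -> float:
    """Compute cost = (requests * duration * memory) at the GB-second
    rate, plus a flat per-request charge."""
    compute = requests_per_month * avg_duration_s * memory_gb * GB_SECOND_PRICE
    return compute + requests_per_month * REQUEST_PRICE

# Low, spiky traffic: serverless is far cheaper than a dedicated VM.
low = serverless_monthly_cost(100_000, avg_duration_s=0.2, memory_gb=0.5)

# Heavy, steady traffic: dynamic pricing can overtake the flat VM bill.
high = serverless_monthly_cost(50_000_000, avg_duration_s=0.2, memory_gb=0.5)
```

Running this kind of estimate per application, before committing, is exactly the planning step that turns “most don’t” into “we did.”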
Serverless computing does offer increased developer productivity and reduced infrastructure management overhead. It’s the “easy button” for deploying applications. However, it is crucial to consider the potential disadvantages and make informed decisions. Careful planning, proper architectural design, and effective monitoring can help organizations navigate these challenges and fully leverage the benefits of serverless computing—or decide that it’s not right for certain applications.