The term "Serverless" has become quite the buzzword over these past few years, some folks today even arguing whether or not the term still holds weight. You may or may not have heard about serverless technology up to this point in your journey through development. But what exactly is serverless? What does it mean to design applications in a serverless manner? What constitutes a serverless service or offering? I hope to answer all these questions and more over this series of blog posts.
As an AWS Hero and a Principal Software Engineer at a large Fortune 100 enterprise, I have focused solely on serverless technologies and enablement for the last three years. I have spoken quite extensively about our serverless journey, what serverless means in a large enterprise, and how to be successful in a corporate setting. Everything I have learned has been through on-the-job experience. The serverless definition I resonate with most is that serverless is event-driven, scales your resources up and down without you needing to manage them, and follows a "pay only for what you use" model. Let's break this down a bit further:
- Event-driven architecture is exactly what it sounds like. You build your applications following the flow of your events: when x happens, trigger y, which runs service z. We write our serverless applications with this flow in mind (see the sketch after this list).
- Automatically scaling up and down as your service demands is a key component of serverless. I fault the name "serverless" quite a bit here, because contrary to popular belief, serverless does in fact involve servers - you just don't manage them the way you would in your on-premises ecosystem or with other cloud resources. You still need to provision the resources you use, and some configuration is required, but gone are the days of estimating exactly how much storage and processing power you need - your cloud provider handles that for you. This frees the developer up to focus more on business logic and less on physical infrastructure.
- Automatic scaling also means you only pay for exactly what you are using. You no longer need to buy and maintain a physical server you may only run at half its capacity, save for the one time of year your traffic hits its peak. You don't need to pay for all the storage and processing power you keep around "just in case" - you pay for exactly what you use, exactly when you need it. No more, no less.
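To make the "when x happens, trigger y, run service z" flow concrete, here is a minimal sketch of an AWS Lambda handler written in Python and wired to an S3 "object created" notification. The event shape follows the standard S3 notification format; the handler name and the business logic it hands off to are placeholders, not anything from a real project.

```python
# Minimal sketch: an AWS Lambda handler invoked by an S3 "object created" event.
# x = a file lands in the bucket, y = this function is triggered, z = your business logic.
import json
import urllib.parse


def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 notifications, so decode before use.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New object uploaded: s3://{bucket}/{key}")
        # ...hand off to your business logic (service z) here...
    return {"statusCode": 200, "body": json.dumps("processed")}
```

Notice what is missing: there is no server process, listener loop, or capacity setting anywhere in the code. The cloud provider invokes the handler once per event, scales the number of concurrent invocations as events arrive, and bills only for the invocations that actually run - which is the second and third bullet above in practice.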
I am a strong proponent of serverless, and I believe these are huge benefits, but that does not mean it is for everyone or every architecture. I talk quite a bit about the concept of "serverless-first" design, meaning that you approach every architecture in a serverless manner first, and if that is not the optimal design, you move on to other solutions like containers, relational databases, reserved compute instances, and so on. Equally important, especially in a large enterprise, is to evaluate your time constraints and areas of expertise. Serverless is not going to be for everyone, and depending on your background, there can be a steep learning curve associated with adopting it. The trade-off is worth it, but if you do not have adequate time or drive to dedicate to this transformation, you will not be successful.
That being said, I hope to provide you with a strong starting point for the land of serverless. Over the next few days, we will be exploring serverless resources and services, from compute, to storage, to API design, and more. We will keep our discussions high-level, but I'll be sure to include relevant examples, resources, and further reading from other leading industry experts. No prerequisites are necessary; I just ask that you approach each and every article with an open mind and continue to ask questions and provide feedback. Let's dive in!*
*As a quick disclaimer - as I am an AWS Serverless Hero, most of the examples and explanations I give will reference the AWS ecosystem since that is where my expertise is. Many of the AWS services and tools we will discuss have equivalents across Azure, GCP, or other tooling. I will do my best to call these out going forward. This is part of a series that will be covered here, but I also encourage you to follow along on Medium or Dev.to for more.
See you on Day 1.