Yay! Serverless! But what advantages do we gain by going serverless? And what challenges do we face?
No billing when there is no execution. In my mind, this is a huge selling point. When no one is using your site or your API, you aren’t paying for it. No ongoing infrastructure costs. Pay only for what you need. In some ways, this is the fulfillment of the cloud computing promise: “pay only for what you use”.
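As a back-of-the-envelope illustration, here is what pay-per-use pricing looks like in arithmetic. The rates below are rough assumptions modeled on AWS Lambda’s published request and GB-second pricing (and they ignore free tiers), so treat this as a sketch, not a billing calculator:

```python
# Illustrative pay-per-use rates (assumptions; check your vendor's pricing).
PRICE_PER_MILLION_REQUESTS = 0.20   # USD
PRICE_PER_GB_SECOND = 0.00001667    # USD

def monthly_cost(requests, avg_duration_ms, memory_mb):
    """Estimate one function's monthly bill under pay-per-use pricing."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# No traffic this month? No bill:
print(monthly_cost(0, 200, 128))  # 0.0
```

The key property is right there in the formula: every term scales with `requests`, so zero usage really does mean zero cost — unlike an idle server you pay for by the hour.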
No servers to maintain or secure. Server maintenance and security are handled by your vendor (you could, of course, host serverless infrastructure yourself, but in some ways that seems like a step in the wrong direction). Since execution times are limited, patching security issues is simplified too: there is nothing long-lived to restart. This should all be handled seamlessly by your vendor.
Unlimited scalability. This is another big one. Let’s say you write the next Pokémon Go. Instead of your site being down every couple of days, serverless lets you just keep growing and growing. A double-edged sword, for sure (with great scalability comes great… bills), but if your service’s profitability depends on being up, then serverless can help enable that.
Forced microservices architecture. This one really goes both ways. Microservices seem to be a good way to build flexible, scalable, and fault-tolerant architectures. On the other hand, if your business’s services aren’t designed this way, you’re going to have difficulty adding serverless into your existing architecture.
But now you’re stuck on their platform
Limited range of environments. You get what the vendor gives. You want to go serverless in Rust? You’re probably out of luck.
Limited preinstalled packages. You get what the vendor pre-installs, though you may be able to bundle your own dependencies into your deployment package.
Limited execution time. Your function can only run for so long. If you have to process a 1 TB file, you will likely need to 1) use a workaround or 2) use something else.
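The usual workaround is to split a job that is too big for one invocation into chunks that each fit comfortably inside the time limit, then run one invocation per chunk. Here’s a minimal sketch of the chunking half of that idea; the 64 MB chunk size is an arbitrary assumption, and in practice each range would be handed to a separate invocation (say, one queue message per chunk):

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB per invocation (an assumption)

def byte_ranges(total_size, chunk_size=CHUNK_SIZE):
    """Yield inclusive (start, end) byte ranges covering total_size bytes."""
    for start in range(0, total_size, chunk_size):
        yield (start, min(start + chunk_size, total_size) - 1)

# A 1 TB file becomes thousands of small, independent pieces of work:
one_tb = 1024 ** 4
ranges = list(byte_ranges(one_tb))
print(len(ranges))  # 16384
```

Each chunk is now a short-lived task instead of one multi-hour run — which is exactly the shape of work serverless platforms are built for.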
Forced microservices architecture. See above.
Limited insight and ability to instrument. Just what is your code doing? With serverless, it is basically impossible to drop in a debugger and ride along. You still have the ability to log and emit metrics the usual way, but these generally can only take you so far. Some of the most difficult problems may be out of reach when they occur in a serverless environment.
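Since logs are often the only ride-along you get, it pays to make them structured from day one. A minimal sketch (the handler and event shape here are hypothetical; most vendors capture anything written to stdout):

```python
import json
import time

def log(event_name, **fields):
    """Emit one JSON log line to stdout for the platform to collect."""
    print(json.dumps({"event": event_name, "ts": time.time(), **fields}))

def handler(event, context=None):
    """Hypothetical function handler instrumented with structured logs."""
    log("invocation_start", request_id=event.get("id"))
    result = {"ok": True}
    log("invocation_end", request_id=event.get("id"))
    return result
```

One JSON object per line means your vendor’s log tooling can filter and aggregate by field later — a small substitute for the debugger you can’t attach.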
The playing field
Since the introduction of AWS Lambda in 2014, the number of offerings has expanded quite a bit. Here are a few popular ones:
- AWS Lambda – The Original
- OpenWhisk – Available on IBM’s Bluemix cloud
- Google Cloud Functions
- Azure Functions
While all of these have their relative strengths and weaknesses (for example, C# support on Azure, or tight integration with the rest of a given vendor’s platform), the biggest player here is AWS.