What is the maximum network throughput of your EC2 instance? The answer to this question is key when choosing an instance type or defining monitoring alerts on network throughput. Unfortunately, AWS's service description and documentation provide only vague information about the networking capabilities of EC2 instances. That is why I ran a network performance benchmark against almost all EC2 instance types over the last few days. The results are compiled into the following cheat sheet.
You may have heard of Amazon Aurora, a custom-built, MySQL- and PostgreSQL-compatible database born and built in the cloud. You may also have heard of serverless, which allows you to build and run applications and services without thinking about instances. These are two pieces of the growing AWS technology story that we're really excited to be working on. Last year at AWS re:Invent, we announced a preview of a new capability for Aurora called Aurora Serverless. Today, I'm pleased to announce that Aurora Serverless for Aurora MySQL is generally available. Aurora Serverless is on-demand, auto-scaling, serverless Aurora. You don't have to think about instances or scaling, and you pay only for what you use.
This paradigm is great for applications with unpredictable load or infrequent demand. In production, you can save on costs because capacity scales with actual load in extremely granular increments, matching your demand curve almost perfectly. In development, you can save on costs by automatically pausing the cluster (scale to zero!) when it's not in use. I'm excited to show you how this all works, so let's look at how to launch a Serverless Aurora cluster.
In this introductory tutorial, we’ll look at what decorators are and how to create and use them. Decorators provide a simple syntax for calling higher-order functions.
By definition, a decorator is a function that takes another function and extends the behavior of the latter function without explicitly modifying it.
It may sound confusing, but it's really not, especially after we go over a number of examples. You can find all the examples from this article here.
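To make the definition concrete before diving in, here is a minimal sketch of a decorator. The names `shout` and `greet` are invented for illustration; the pattern of wrapping one function in another is the point:

```python
import functools

def shout(func):
    """Decorator that upper-cases the wrapped function's return value."""
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"Hello, {name}"

print(greet("world"))  # HELLO, WORLD
```

Note that `greet` itself was never modified: the `@shout` line is just shorthand for `greet = shout(greet)`, which is exactly the "extends the behavior without explicitly modifying it" part of the definition.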
You’ve just finished building your first Python command-line app. Or maybe your second or third. You’ve been learning Python for a while, and now you’re ready to build something bigger and more complex that still runs from the command line. Or perhaps you’re used to building and testing web applications or desktop apps with a GUI, but are now starting to build CLI applications.
In all these situations and more, you will need to learn and get comfortable with the various methods for testing a Python CLI application.
While the tooling choices can be intimidating, the main thing to keep in mind is that you’re just comparing the outputs your code generates to the outputs you expect. Everything follows from that.
In this tutorial you’ll learn four hands-on techniques for testing Python command-line apps:
- “Lo-Fi” debugging with print()
- Using a visual Python debugger
- Unit testing with pytest and mocks
- Integration testing
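To preview the core idea, "comparing the outputs your code generates to the outputs you expect", here is a minimal sketch of the pytest technique. The `main` function and its test are hypothetical stand-ins for your own CLI entry point; `capsys` is pytest's built-in fixture for capturing stdout:

```python
import sys

def main(argv=None):
    """A tiny, hypothetical CLI entry point that prints a greeting."""
    argv = sys.argv[1:] if argv is None else argv
    name = argv[0] if argv else "world"
    print(f"Hello, {name}!")

def test_main(capsys):
    """Run the app in-process and compare captured stdout to what we expect."""
    main(["Ada"])
    captured = capsys.readouterr()
    assert captured.out == "Hello, Ada!\n"
```

Accepting an `argv` parameter (instead of reading `sys.argv` directly) is a small design choice that makes the entry point callable, and therefore testable, without spawning a subprocess.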
Amazon Web Services (AWS) recently announced that Simple Queue Service (SQS) is finally a supported event source for Lambda. This is extremely exciting news, as I have been waiting for this for two long years! It got me thinking about what other features I am desperately waiting to see from AWS Lambda. After some quick brainstorming, here is my wish list for Lambda for 2018. These items would address many recurring challenges Lambda users face in production, including:
- better monitoring at scale
- cold start performance
- scalability in spiky load scenarios
So, I hope someone from the Lambda team is reading this. Here we go!