The advantages of a serverless architecture are, at this point, not really a matter of debate. The question for every application or component becomes, “How can I avoid having to manage servers?” Sometimes you come across a roadblock: perhaps you need a GPU, perhaps it takes 60 seconds just to load a machine learning model, or perhaps your task outlasts the 300 seconds Amazon gives a Lambda invocation and you can’t figure out how to chop it up. The excuses never end.
Perhaps you want to push events into a browser or app through a WebSocket to create something similar to a chat or email application. You could use Nginx and Redis to create topics and have applications subscribe to them via a push stream; however, that means managing some long-running processes and servers. You can fake it by polling your backend once a second, but Amazon SQS and Cognito offer an easier way. Each user session can be paired with a Cognito identity and an SQS queue, meaning applications can use SQS long polling to receive events in near real time. At Reuters, we use this in production to support messaging in event-driven web applications and have open-sourced the underlying Serverless stack.
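The session-to-queue pairing can be sketched roughly as follows. This is an illustrative assumption, not Reuters' actual stack: the `events-<identity>` naming scheme and the helper functions are hypothetical, and `sqs` is assumed to be a boto3 SQS client.

```python
def queue_name_for(identity_id: str) -> str:
    """Derive a per-session queue name from a Cognito identity ID.

    SQS queue names may not contain ':', so it is replaced.
    (Hypothetical naming scheme, for illustration only.)
    """
    return "events-" + identity_id.replace(":", "_")


def long_poll_kwargs(queue_url: str) -> dict:
    """Arguments for SQS ReceiveMessage that block up to 20 seconds
    (long polling), so the client sees new events almost immediately
    instead of hammering the backend once a second."""
    return {
        "QueueUrl": queue_url,
        "WaitTimeSeconds": 20,     # 20 s is the SQS maximum long-poll wait
        "MaxNumberOfMessages": 10,
    }


def poll_forever(sqs, queue_url: str, handle):
    """Repeatedly long-poll the session queue and hand event bodies
    to `handle`. When idle, this costs only ~3 requests per minute."""
    while True:
        resp = sqs.receive_message(**long_poll_kwargs(queue_url))
        for msg in resp.get("Messages", []):
            handle(msg["Body"])
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
```

Publishing an event to a user then reduces to a single `SendMessage` call to that user's queue from whatever backend process produced the event.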
µWS is one of the most lightweight, efficient, and scalable WebSocket server implementations available. It features an easy-to-use, fully async object-oriented interface and scales to millions of connections using only a fraction of the memory of comparable servers. The license is zlib/libpng (very permissive and suitable for commercial applications).
- Linux, OS X & Windows support.
- Built-in load balancing and multi-core scalability.
- SSL/TLS support & integrates with foreign HTTPS servers.
- Permessage-deflate built-in.
- Node.js binding exposed as the well-known ws interface.
- Optional engine in projects like Socket.IO, Primus & SocketCluster.
tcpkali is a high performance TCP and WebSocket load generator and sink.
- Opens millions of connections from a single host by using available interface aliases.
- Efficient multi-core operation (--workers); utilizes all available cores by default.
- Allows opening a massive number of connections (--connections).
- Allows limiting the upstream and downstream throughput of a single connection (--channel-bandwidth-upstream, --channel-bandwidth-downstream).
- Allows specifying the first and subsequent messages (--first-message, --message).
- Measures response latency percentiles using HdrHistogram (--latency-marker).
- Sends stats to StatsD/DataDog (--statsd).
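Putting those flags together, a hypothetical tcpkali run against a WebSocket endpoint might look like the sketch below. The host, port, and message payloads are placeholders, not an endpoint from this article.

```shell
# Hypothetical load test: 1,000 WebSocket connections to a placeholder host.
# Requires tcpkali; the guard below makes the script a no-op where it is absent.
command -v tcpkali >/dev/null 2>&1 || { echo "tcpkali not installed"; exit 0; }

tcpkali --ws \
        --connections 1000 \
        --connect-rate 100 \
        --first-message '{"subscribe": "events"}' \
        --message '{"ping": 1}' \
        --latency-marker pong \
        --duration 30s \
        ws.example.com:8080
```

Here --ws makes tcpkali speak the WebSocket protocol rather than raw TCP, and --latency-marker reports percentile latency from send until "pong" appears in a response.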