How Twitch monitors its services with Amazon CloudWatch

Twitch, owned by Amazon, is the leading service and community for multiplayer entertainment. Twitch also provides social and micro-transaction features that drive content engagement for its audiences, and these services operate at a high transaction volume.

Twitch uses Amazon CloudWatch to monitor its business-critical services. It emits custom metrics, then visualizes them and alerts on predefined thresholds for these key metrics. The high volume of transactions the Twitch services handle makes it difficult to design a metric ingestion strategy that provides sufficient data throughput while balancing the cost of ingestion.

Amazon CloudWatch client-side aggregation is a new feature of the PutMetricData API that helps customers aggregate data on the client side, which increases throughput and efficiency. In this blog post we’ll show you how Twitch uses client-side data aggregation to build a more effective metric ingestion architecture while achieving substantial cost reductions.
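As a rough sketch of the idea: instead of one PutMetricData call per observation, batch raw samples locally and flush them as the `Values`/`Counts` aggregation fields of a single call. The aggregator class, metric names, and namespace below are made up for illustration; only the `Values`/`Counts` fields come from the PutMetricData API.

```python
from collections import Counter

class MetricAggregator:
    """Accumulate raw observations client-side, then flush them as one
    aggregated payload instead of one API call per sample."""

    def __init__(self):
        self.samples = Counter()

    def record(self, value):
        self.samples[value] += 1

    def flush(self):
        values = sorted(self.samples)
        counts = [float(self.samples[v]) for v in values]
        self.samples.clear()
        return {"Values": values, "Counts": counts}

agg = MetricAggregator()
for latency_ms in (12, 12, 12, 48):  # four samples, one eventual API call
    agg.record(latency_ms)
data = agg.flush()
print(data)

# Hypothetical flush to CloudWatch (assumes boto3 and AWS credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_data(
#     Namespace="MyApp",
#     MetricData=[{"MetricName": "Latency", "Unit": "Milliseconds", **data}],
# )
```

The key trade-off is batching window versus metric freshness: longer client-side windows mean fewer calls but slower alerting.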

https://aws.amazon.com/pt/blogs/mt/how-twitch-monitors-its-services-with-amazon-cloudwatch/


PySnooper – Never use print for debugging again

PySnooper is a poor man’s debugger.

You’re trying to figure out why your Python code isn’t doing what you think it should be doing. You’d love to use a full-fledged debugger with breakpoints and watches, but you can’t be bothered to set one up right now.

You want to know which lines are running and which aren’t, and what the values of the local variables are.

Most people would use print lines, in strategic locations, some of them showing the values of variables.

PySnooper lets you do the same, except instead of carefully crafting the right print lines, you just add one decorator line to the function you’re interested in. You’ll get a play-by-play log of your function, including which lines ran and when, and exactly when local variables were changed.

What makes PySnooper stand out from all other code intelligence tools? You can use it in your shitty, sprawling enterprise codebase without having to do any setup. Just slap the decorator on, as shown below, and redirect the output to a dedicated log file by specifying its path as the first argument.
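A minimal usage sketch, based on the README’s binary-conversion example (the `try`/`except` fallback to a no-op decorator is our addition so the snippet still runs where pysnooper isn’t installed):

```python
try:
    import pysnooper
except ImportError:
    # Fallback so the example runs without pysnooper: a no-op decorator.
    class pysnooper:
        @staticmethod
        def snoop(*args, **kwargs):
            return lambda f: f

@pysnooper.snoop()  # pass a path, e.g. snoop('/my/log/file.log'), to log to a file
def number_to_bits(number):
    if number:
        bits = []
        while number:
            number, remainder = divmod(number, 2)
            bits.insert(0, remainder)
        return bits
    return [0]

result = number_to_bits(6)
```

With pysnooper installed, the decorated call prints a line-by-line trace (which lines ran, and every change to `number`, `bits`, and `remainder`) to stderr or to the log file you named.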

https://github.com/cool-RR/pysnooper

Orchestrating backend services with AWS Step Functions

The problem

In many use cases, a process needs to execute multiple tasks. We build microservices or serverless functions such as AWS Lambda functions to carry out these tasks. Almost all of these services are stateless, so queues or databases are needed to maintain the state of individual tasks and of the process as a whole. Writing code that orchestrates these tasks is painful, and hard to debug and maintain. It’s not easy to maintain the state of a process in an ecosystem of microservices and serverless functions.
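Step Functions addresses this by having the service, not your code, carry the state between tasks, expressed in an Amazon States Language (ASL) definition. A minimal sketch of a two-task sequence (the state names and Lambda ARNs are placeholders):

```python
import json

# Hypothetical two-step order flow: Step Functions passes each task's
# output to the next task, so the Lambdas themselves stay stateless.
definition = {
    "Comment": "Orchestrate two stateless Lambda tasks",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "ChargeCard",
        },
        "ChargeCard": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge",
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))

# To deploy (assumes boto3, credentials, and an execution role ARN):
# boto3.client("stepfunctions").create_state_machine(
#     name="OrderFlow", definition=json.dumps(definition), roleArn=role_arn)
```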

https://medium.com/engineering-zemoso/orchestrating-backend-services-with-aws-step-functions-8648c90e5a57?hss_channel=tw-920289756919074817

Python for NLP: Introduction to the TextBlob Library

This is the seventh article in my series of articles on Python for NLP. In my previous article, I explained how to perform topic modeling using Latent Dirichlet Allocation and Non-Negative Matrix factorization. We used the Scikit-Learn library to perform topic modeling.

In this article, we will explore TextBlob, which is another extremely powerful NLP library for Python. TextBlob is built upon NLTK and provides an easy-to-use interface to the NLTK library. We will see how TextBlob can be used to perform a variety of NLP tasks, ranging from part-of-speech tagging and sentiment analysis to language translation and text classification.
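A quick taste of the interface, using TextBlob’s default sentiment analyzer (the broad `except` is our addition so the snippet degrades gracefully where TextBlob or its corpora aren’t installed; the sentence is made up):

```python
# Requires: pip install textblob
# (some features, e.g. blob.tags, also need: python -m textblob.download_corpora)
try:
    from textblob import TextBlob
    blob = TextBlob("TextBlob makes natural language processing in Python pleasant.")
    polarity = blob.sentiment.polarity  # -1.0 (negative) .. 1.0 (positive)
except Exception:
    polarity = 0.0  # neutral fallback when TextBlob isn't available
print(polarity)
```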

https://stackabuse.com/python-for-nlp-introduction-to-the-textblob-library/

The Illustrated Word2vec

I find the concept of embeddings to be one of the most fascinating ideas in machine learning. If you’ve ever used Siri, Google Assistant, Alexa, Google Translate, or even a smartphone keyboard with next-word prediction, then chances are you’ve benefitted from this idea that has become central to Natural Language Processing models. There has been quite a development over the last couple of decades in using embeddings for neural models (recent developments include contextualized word embeddings leading to cutting-edge models like BERT and GPT-2).

Word2vec is a method to efficiently create word embeddings and has been around since 2013. But in addition to its utility as a word-embedding method, some of its concepts have been shown to be effective in creating recommendation engines and making sense of sequential data even in commercial, non-language tasks. Companies like Airbnb, Alibaba, Spotify, and Anghami have all benefitted from carving out this brilliant piece of machinery from the world of NLP and using it in production to empower a new breed of recommendation engines.

In this post, we’ll go over the concept of embedding, and the mechanics of generating embeddings with word2vec. But let’s start with an example to get familiar with using vectors to represent things. Did you know that a list of five numbers (a vector) can represent so much about your personality?
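To make the teaser concrete: once things are vectors, similarity is just geometry. A tiny sketch with made-up five-number “personality” vectors (the numbers are illustrative, not from the post):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical personality vectors (e.g. five trait scores, centered on 0).
me       = [-0.4, 0.8, 0.5, -0.2, 0.3]
person_1 = [-0.3, 0.2, 0.3, -0.4, 0.9]
person_2 = [-0.5, 0.4, -0.2, 0.7, -0.1]

sim_1 = cosine_similarity(me, person_1)
sim_2 = cosine_similarity(me, person_2)
print(sim_1, sim_2)  # person_1's vector points closer to mine
```

Word2vec’s insight is to learn such vectors for words automatically, so that geometric closeness mirrors closeness in meaning.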

https://jalammar.github.io/illustrated-word2vec/

Hashing for large scale similarity

Similarity computation is a very common task in real-world machine learning and data mining problems such as recommender systems, spam detection, online advertising, etc. Consider a tweet recommendation problem where one has to find tweets similar to the tweet a user previously clicked. This problem becomes extremely challenging when there are billions of tweets created each day.

In this post, we will discuss the two most common similarity metrics, namely Jaccard similarity and cosine similarity, and Locality Sensitive Hashing based approximations of those metrics.
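A compact sketch of both exact metrics plus a MinHash approximation of Jaccard (MinHash is the standard LSH family for Jaccard similarity; the salting scheme and parameters below are our own illustrative choices):

```python
import random

def jaccard(a, b):
    """Exact Jaccard similarity of two sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def cosine(u, v):
    """Exact cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = lambda w: sum(x * x for x in w) ** 0.5
    return dot / (norm(u) * norm(v))

def minhash_signature(s, num_hashes=200, seed=7):
    """MinHash sketch: keep the minimum salted hash per hash function.
    Two signatures agree at a position with probability = Jaccard sim."""
    salts = random.Random(seed).sample(range(1 << 30), num_hashes)
    return [min(hash((salt, x)) for x in s) for salt in salts]

def minhash_estimate(sig_a, sig_b):
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a, b = set(range(1, 11)), set(range(5, 15))  # true Jaccard = 6/14 ≈ 0.43
est = minhash_estimate(minhash_signature(a), minhash_signature(b))
print(jaccard(a, b), est)
```

The point of the sketch: comparing two 200-number signatures costs the same no matter how large the underlying sets are, which is what makes billion-item comparison tractable.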

https://mesuvash.github.io/blog/2019/Hashing-for-similarity/

HFT-like Trading Algorithm in 300 Lines of Code You Can Run Now

Commission Free API Trading Can Open Up Many Possibilities

Alpaca provides a commission-free stock trading API for individual algo traders and developers, and now almost 1,000 people hang around in our community Slack talking about many different use cases. Among other things, like automated long-term value investing and Google Spreadsheet trading, high-frequency trading (“HFT”) often comes up as a discussion topic among our users.

Is High-Frequency Trading (“HFT”) That Special?

Maybe because I don’t come from a finance background, I’ve wondered what’s so special about the hedge funds and HFTs that those “Wall Street” guys talk about. Since I am a developer who always looks for ways to make things work, I decided to do some research and figure out for myself how I could build something similar to what HFTs do.

I am fortunate to work with colleagues who used to build strategies and trade at HFTs, so I learned some basic know-how from them and went ahead to code a working example that trades somewhat in an HFT style (please note that my example does not act like the ultra-high-speed professional trading algorithms that colocate with exchanges and fight for nanoseconds of latency). Also, because this working example uses real-time data streaming, it can act as a good starting point for users who want to understand how to use real-time data streaming.
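To give a flavor of the kind of quote-driven decision logic such a strategy runs on each tick, here is a toy sketch. It is entirely hypothetical (our own names and thresholds), not the Alpaca API and not the linked repository’s code:

```python
def decide_order(position, max_position, bid, ask, min_spread=0.05):
    """Toy quote-following logic, illustrative only: quote inside a wide
    spread; stand aside when the spread is too tight to be worth crossing.
    Returns (side, limit_price) or None."""
    spread = ask - bid
    if spread < min_spread:
        return None                           # spread too tight, do nothing
    if position < max_position:
        return ("buy", round(bid + 0.01, 2))  # join a penny above the bid
    return ("sell", round(ask - 0.01, 2))     # work out of inventory

order = decide_order(position=0, max_position=1, bid=100.00, ask=100.10)
print(order)
```

A real version would track fills, cancel stale limit orders, and react to the streaming quote feed; the sketch only shows the per-quote decision step.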

The code of this HFT-ish example algorithm is here, and you can immediately run it with your favorite stock symbol. Just clone the repository from GitHub, set the API key, and go!

https://medium.com/automation-generation/hft-like-trading-algorithm-in-300-lines-of-code-you-can-run-now-983bede4f13a