As many of us prepare to go to PyCon, we wanted to share a sampling of how Python is used at Netflix. We use Python through the full content lifecycle, from deciding which content to fund all the way to operating the CDN that serves the final video to 148 million members. We use and contribute to many open-source Python packages, some of which are mentioned below.
PySnooper is a poor man’s debugger.
You’re trying to figure out why your Python code isn’t doing what you think it should be doing. You’d love to use a full-fledged debugger with breakpoints and watches, but you can’t be bothered to set one up right now.
You want to know which lines are running and which aren’t, and what the values of the local variables are.
Most people would use print lines, in strategic locations, some of them showing the values of variables.

PySnooper lets you do the same, except instead of carefully crafting the right print lines, you add one decorator line to the function you're interested in.
What makes PySnooper stand out from all other code intelligence tools? You can use it in your shitty, sprawling enterprise codebase without having to do any setup. Just slap the decorator on, as shown below, and redirect the output to a dedicated log file by specifying its path as the first argument.
In many use cases, a process needs to execute multiple tasks. We build microservices or serverless functions, such as AWS Lambda functions, to carry out these tasks. Almost all of these services are stateless, so queues or databases are needed to maintain the state of individual tasks and of the process as a whole. Writing code that orchestrates these tasks is painful, and hard to debug and maintain: it is not easy to track the state of a process across an ecosystem of microservices and serverless functions.
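To make the bookkeeping concrete, here is a minimal sketch (all names hypothetical) of what hand-rolled orchestration of stateless tasks looks like: the coordinator has to checkpoint progress to an external store after every step so a crashed run can resume. This is exactly the state tracking that a service like Step Functions takes off your hands.

```python
# Hypothetical stateless "tasks", standing in for Lambda functions.
def validate(order):
    return {**order, "valid": True}

def charge(order):
    return {**order, "charged": True}

TASKS = [("validate", validate), ("charge", charge)]

def run_process(order, state_store):
    """Run each task in order, persisting progress so a crash can resume.

    state_store is a plain dict standing in for an external queue or
    database; a state machine service replaces all of this bookkeeping.
    """
    state = state_store.setdefault(order["id"], {"done": [], "data": order})
    for name, task in TASKS:
        if name in state["done"]:
            continue  # already completed in an earlier run; skip on resume
        state["data"] = task(state["data"])
        state["done"].append(name)
        state_store[order["id"]] = state  # checkpoint after every task
    return state["data"]

store = {}
result = run_process({"id": "42"}, store)
```

Re-running `run_process` with the same store is a no-op for completed tasks, which is the resumability property you otherwise have to design by hand.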
This is the seventh article in my series of articles on Python for NLP. In my previous article, I explained how to perform topic modeling using Latent Dirichlet Allocation and Non-Negative Matrix factorization. We used the Scikit-Learn library to perform topic modeling.
In this article, we will explore TextBlob, another extremely powerful NLP library for Python. TextBlob is built on top of NLTK and provides an easy-to-use interface to it. We will see how TextBlob can be used to perform a variety of NLP tasks, ranging from part-of-speech tagging and sentiment analysis to language translation and text classification.
Step Functions state machine generator for AWS Lambda Power Tuning.
The state machine is designed to be quick and language-agnostic. You can provide any Lambda function as input, and the state machine will estimate the best power configuration to minimize cost. Your Lambda function is executed in your AWS account (real HTTP calls, SDK calls, cold starts, etc.), and you can enable parallel execution to generate results in just a few seconds.
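For context, an execution of the state machine takes an input along these lines — the field names below follow the project's documented input format, but the exact schema can vary between versions, so treat this as an illustrative sketch:

```python
import json

# Example execution input for the power-tuning state machine.
# The ARN is a placeholder; substitute your own function's ARN.
tuning_input = {
    "lambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
    "powerValues": [128, 256, 512, 1024, 1536, 3008],  # memory sizes to test
    "num": 10,                  # invocations per power value
    "payload": {},              # event passed to each invocation
    "parallelInvocation": True  # run the invocations concurrently
}

# The execution would then be started with something like:
# boto3.client("stepfunctions").start_execution(
#     stateMachineArn=STATE_MACHINE_ARN, input=json.dumps(tuning_input))
execution_input = json.dumps(tuning_input)
```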
- Anthos (the new name for Cloud Services Platform) is now generally available on Google Kubernetes Engine (GKE) and GKE On-Prem, so you can deploy, run and manage your applications on-premises or in the cloud. Coming soon, we’ll extend that flexibility to third-party clouds like AWS and Azure. And Anthos is launching with the support of more than 30 hardware, software and system integration partners so you can get up and running fast.
- With Anthos Migrate, powered by Velostrata’s migration technology, you can auto-migrate VMs from on-premises or other clouds directly into containers in GKE with minimal effort.
- Anthos Config Management lets you create multi-cluster policies out of the box that set and enforce role-based access controls, resource quotas, and namespaces—all from a single source of truth.
- Cloud Run, our fully managed serverless execution environment, offers serverless agility for containerized apps.
- Cloud Run on GKE brings the serverless developer experience and workload portability to your GKE cluster.
- Knative, the open API and runtime environment, brings a serverless developer experience and workload portability to your existing Kubernetes cluster anywhere.
- We’re also making new investments in our Cloud Functions and App Engine platforms with new second-generation runtimes, a new open-source Functions Framework, and additional core capabilities, including connectivity to private GCP resources.