Why We Decided to Rewrite Uber’s Driver App

This article is the first in a series covering how Uber’s mobile engineering team developed the newest version of our driver app, codenamed Carbon, a core component of our ridesharing business. Among other new features, the app lets our population of over three million driver-partners find fares, get directions, and track their earnings. We began designing the new app in 2017, informed by feedback from our driver-partners, and began rolling it out to production in September 2018.

In early 2017, Uber made the decision to rewrite our driver app. This is the sort of decision that Joel Spolsky, the CEO of Stack Overflow, once called “the single worst strategic mistake that any software company can make.”

Rewrites are incredibly risky, resource-intensive, and take a long time to deliver a tangible benefit for users. For this particular rewrite, hundreds of engineers contributed in some capacity, not to mention designers, product managers, data scientists, operations, legal, and marketing. In practice, our rewrite took a year and a half to implement and roll out globally.

Our case is an extreme example of a question that engineers in all organizations face. If you are an engineer working for a start-up and are considering rewriting some code or a feature, you might ask, “How much of our runway are we burning?” If you are working on a small team in a large organization, you might ask, “Are these changes worth the features we are not building?” A good engineer and a good team will look at these broader questions before they take on the challenge of a rewrite.

So, while the rewrite process involved a number of important technical decisions (to be covered in future articles), the decision to rewrite rested on a combination of technical considerations and broader business concerns. These questions are hard to answer, but good answers to them will help you justify a rewrite to your organization or team.

Ultimately, these decisions do not get made in a vacuum. We did not make the decision to rewrite the app as a result of theoretical architectural thinking (“our code might be better, if only we…”), but rather as a result of an intensive, three-month research process that involved hundreds of pages of documentation and broad, cross-organizational buy-in. In the following sections, we discuss our decision to rewrite the Uber driver app and what we discovered as a result of this process.

https://eng.uber.com/rewrite-uber-carbon-app/

Making an Unlimited Number of Requests with Python aiohttp + pypeln

This post is a continuation of the work in Paweł Miech’s Making 1 million requests with python-aiohttp and Andy Balaam’s Making 100 million requests with Python aiohttp. I will be reproducing the setup from Andy’s blog with some minor modifications due to API changes in the aiohttp library. You should definitely read his post, but I’ll give a recap here.

UPDATE: Since Andy’s original post, aiohttp introduced another API change which limited the total number of simultaneous requests to 100 by default. I’ve updated the code shown here to remove this limit and increased the number of total requests to compensate. Apart from that, the analysis remains the same.
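
To make that change concrete, here is a minimal sketch in plain aiohttp of lifting the limit: passing limit=0 to TCPConnector disables the default cap on simultaneous connections. The URL and request count are placeholders, and a local test server (as in the earlier posts) is assumed.

import asyncio

import aiohttp

async def fetch(session, url):
    # Issue a GET and read only the status; the body is ignored.
    async with session.get(url) as response:
        return response.status

async def main(url, n_requests):
    # limit=0 disables aiohttp's default cap of 100 simultaneous connections.
    connector = aiohttp.TCPConnector(limit=0)
    async with aiohttp.ClientSession(connector=connector) as session:
        tasks = [fetch(session, url) for _ in range(n_requests)]
        return await asyncio.gather(*tasks)

if __name__ == "__main__":
    # Hypothetical local test server, as used in the earlier posts.
    asyncio.run(main("http://localhost:8080/", 10_000))

In practice you may still want some bound on concurrency (via pypeln or a semaphore) to avoid exhausting file descriptors on the client machine.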

https://medium.com/@cgarciae/making-an-infinite-number-of-requests-with-python-aiohttp-pypeln-3a552b97dc95

How Harry’s Shaved Off Their Operational Overhead by Moving to AWS Fargate

Broadcast Date: July 25, 2018
Level 200 | Customer Showcase
Harry’s migrated their messaging workload to AWS Fargate to eliminate the burden of maintaining their compute infrastructure. In this tech talk, learn how Harry’s used the seamless integration of Fargate with other AWS services, such as CloudFormation, Auto Scaling, and ALB, to manage their traffic spikes and reduce their message processing time by more than 75%.
Learning Objectives:
• Learn how Harry’s uses AWS to simplify infrastructure operations
• Learn how AWS Fargate makes it simple to run containers
• Learn how to use AWS services like CloudFormation, Auto Scaling, and Load Balancing with AWS Fargate
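
As a rough illustration of that integration, the boto3 sketch below creates an ECS service on Fargate behind an existing Application Load Balancer target group; all names, ARNs, subnets, and security groups are hypothetical placeholders, not Harry’s actual setup.

import boto3

ecs = boto3.client("ecs")

# Create a Fargate-launched service wired to an existing ALB target group.
# Every identifier below is a placeholder.
ecs.create_service(
    cluster="messaging-cluster",
    serviceName="message-processor",
    taskDefinition="message-processor:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
            "securityGroups": ["sg-cccc3333"],
            "assignPublicIp": "ENABLED",
        }
    },
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/message-processor/abc123",
        "containerName": "message-processor",
        "containerPort": 8080,
    }],
)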

Run your Kubernetes Workloads on Amazon EC2 Spot Instances with Amazon EKS

Many organizations today are using containers to package source code and dependencies into lightweight, immutable artifacts that can be deployed reliably to any environment.

Kubernetes (K8s) is an open-source framework for automated scheduling and management of containerized workloads. In addition to master nodes, a K8s cluster is made up of worker nodes where containers are scheduled and run.

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that removes the need to manage the installation, scaling, or administration of master nodes and the etcd distributed key-value store. It provides a highly available and secure K8s control plane.

This post demonstrates how to use Spot Instances as K8s worker nodes, covering provisioning, automatic scaling, and interruption (termination) handling for K8s worker nodes across your cluster.

What this post does not cover

This post focuses primarily on EC2 instance scaling. It also assumes a default interruption behavior of terminate for EC2 instances, though there are two other interruption behaviors, stop and hibernate. For stateless K8s sessions, I recommend choosing the interruption behavior of terminate.
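
To make the interruption-handling idea concrete, here is a minimal, hypothetical watcher that polls the EC2 instance metadata service for the two-minute spot interruption notice and then drains the node so Kubernetes reschedules its pods. The node name is a placeholder, and kubectl is assumed to be installed and configured on the instance.

import subprocess
import time

import requests

# The instance metadata service returns 404 at this path until a spot
# interruption notice (the two-minute warning) has been issued.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"
NODE_NAME = "ip-10-0-1-23.ec2.internal"  # hypothetical K8s node name

def watch():
    while True:
        resp = requests.get(METADATA_URL, timeout=2)
        if resp.status_code == 200:
            # Notice received: cordon and drain so pods are rescheduled
            # onto the remaining workers before this instance terminates.
            subprocess.run(
                ["kubectl", "drain", NODE_NAME,
                 "--ignore-daemonsets", "--delete-local-data"],
                check=True,
            )
            break
        time.sleep(5)

if __name__ == "__main__":
    watch()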

https://aws.amazon.com/blogs/compute/run-your-kubernetes-workloads-on-amazon-ec2-spot-instances-with-amazon-eks/

Building a Serverless Subscription Service using Lambda@Edge

Personalizing content helps to drive subscriptions, improve revenue, and increase retention rates by providing a more engaging and responsive customer experience. In this blog post, we’ll show you how to build a serverless subscription service for your website that personalizes and monetizes content by using Amazon CloudFront and AWS Lambda@Edge.

Customers have typically used content delivery networks (CDNs) to reduce latency for global applications by serving content closer to their users. Since we announced Lambda@Edge in December 2016, customers have also started using Lambda functions to shift compute-heavy processing to the edge. By using Lambda@Edge, developers can build and continuously deliver features in edge locations, closer to their users and web consumers. Using CloudFront and Lambda@Edge together helps you build highly performant online experiences, and using serverless applications at the edge helps you avoid managing an extra tier of infrastructure at the origin.

If you’re just learning about Lambda@Edge, we recommend checking out the Get Started section in the documentation first, before you read this article, to get a general understanding about how Lambda@Edge works.

In our example application for personalizing content, users must register first, so that we can show them the content that is most relevant to them. We use Lambda@Edge to validate registered users by authenticating them. For simplicity, we haven’t included a customer registration page, but it’s straightforward to include one in your web flow. If someone is visiting your site for the first time, you can redirect them to a registration page, and then attach an entitlement to their profile to permit them to perform actions based on the level of their subscription.
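
To sketch what that validation might look like, the Python handler below (Lambda@Edge functions can also be written in Node.js) runs on the viewer-request event, passes registered viewers through to the origin, and redirects everyone else to a registration page. The cookie name and redirect path are hypothetical, and a real implementation would verify a signed token rather than merely check that the cookie exists.

def handler(event, context):
    # CloudFront delivers the viewer request in event["Records"][0]["cf"].
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    # Merge all Cookie headers into a single string.
    cookies = "; ".join(h["value"] for h in headers.get("cookie", []))

    if "subscriber-token=" in cookies:
        # Registered viewer: let the request continue to the origin,
        # where content can be personalized by subscription level.
        return request

    # First-time visitor: short-circuit with a redirect to registration.
    return {
        "status": "302",
        "statusDescription": "Found",
        "headers": {
            "location": [{"key": "Location", "value": "/register"}],
        },
    }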

There are a number of reasons to use Lambda@Edge when you build a subscription service. For example, you and your customers can gain the following benefits:

  • Lambda@Edge is a serverless computing platform, which has several advantages. There’s no infrastructure to manage when you use it. It’s an event-driven system, so you only pay for the service when an event is triggered. It scales automatically based on the demand. And, finally, it’s highly available.
  • A Lambda@Edge function runs closer to the viewer, so users have a better experience with faster response times.
  • The load on your origin is reduced because you can offload some CPU-intensive applications and processes from your web and app servers. Caching at the edge further reduces the load on your origin.
  • You can control the user journey in a more fine-grained manner, so you can, for example, implement micropayments, micro-subscriptions, bot management, and content metering. These features help your website interact in innovative ways with customers and frequent viewers.
  • The AWS ecosystem includes more than 100 managed services that you can integrate with. For example, you can build analytics based on the logs generated on Lambda@Edge, CloudFront, and CloudWatch.
  • You can promote advertisements on your articles that align with your brand and opinions by using Lambda@Edge to provide relevant tags to advertising platforms at the edge, allowing you to further drive revenue based on the viewer’s subscription level.

https://aws.amazon.com/blogs/networking-and-content-delivery/building-a-serverless-subscription-service-using-lambdaedge/

Top 10 Must-Watch PyCon Talks

Serverlessconf San Francisco 2018

For the first time ever, Serverlessconf was held in San Francisco! Serverlessconf is a community-led conference focused on sharing experiences building applications using serverless architectures. Serverless architectures enable developers to express their creativity and focus on user needs instead of spending time managing infrastructure and servers. Watch the first release of talks from the main stage at Serverlessconf San Francisco 2018! The first 24 videos are now live, with more to come!

https://acloud.guru/series/serverlessconf-sf-2018