AWS Lambda Power Tuning

Step Functions state machine generator for AWS Lambda Power Tuning.

The state machine is designed to be quick and language-agnostic. You can provide any Lambda function as input, and the state machine will estimate the power configuration that minimizes cost. Your Lambda function will be executed in your AWS account (i.e. real HTTP calls, SDK calls, cold starts, etc.), and you can enable parallel execution to generate results in just a few seconds.
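As a sketch, an execution input for the state machine might look like the following (field names are based on the project's documented input format; check the version you deploy, as the exact schema may differ):

```json
{
  "lambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
  "powerValues": [128, 256, 512, 1024],
  "num": 50,
  "payload": {},
  "parallelInvocation": true
}
```

Here `powerValues` lists the memory configurations to test, `num` is the number of invocations per configuration, and `payload` is passed to each invocation of your function.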

I Joined Airbnb at 52, and Here’s What I Learned About Age, Wisdom, and the Tech Industry

A growing number of people feel like an old carton of milk, with an expiration date stamped on their wrinkled foreheads. One paradox of our time is that Baby Boomers enjoy better health than ever, remain young and stay in the workplace longer, but feel less and less relevant. They worry, justifiably, that bosses or potential employers may see their age more as liability than asset. Especially in the tech industry.

And yet we workers “of a certain age” are less like a carton of milk and more like a bottle of fine wine — especially now, in the digital era. The tech sector, which has become as famous for toxic company cultures as for innovation, and as well-known for human resource headaches as for hoodie-wearing CEOs, could use a little of the mellowness and wisdom that comes with age.

I started a boutique hotel company when I was 26 and, after 24 years as CEO, sold it at the bottom of the Great Recession, not knowing what was next. That’s when Airbnb came calling. In early 2013 cofounder and CEO Brian Chesky approached me after reading my book Peak: How Great Companies Get Their Mojo from Maslow. He and his two Millennial cofounders wanted me to help turn their growing tech startup into an international giant, as their Head of Global Hospitality and Strategy. Sounded good. But I was an “old-school” hotel guy and had never used Airbnb. I didn’t even have the Uber app on my phone. I was 52 years old, I’d never worked in a tech company, I didn’t code, I was twice the age of the average Airbnb employee, and, after running my own company for well over two decades, I’d be reporting to a smart guy 21 years my junior. I was a little intimidated. But I took the job.

On my first day I heard an existential tech question in a meeting and didn’t know how to answer it: “If you shipped a feature and no one used it, did it really ship?” Bewildered, I realized I was in deep “ship,” as I didn’t even know what it meant to ship product. Brian had asked me to be his mentor, but I also felt like an intern.

I realized I’d have to figure out a way to be both.

AWS X-Ray Update – General Availability, Including Lambda Integration

I first told you about AWS X-Ray at AWS re:Invent in my post, AWS X-Ray – See Inside Your Distributed Application. X-Ray allows you to trace requests made to your application as execution traverses Amazon EC2 instances, Amazon ECS containers, microservices, AWS database services, and AWS messaging services. It is designed for development and production use, and can handle simple three-tier applications as well as applications composed of thousands of microservices. As I showed you last year, X-Ray helps you to perform end-to-end tracing of requests, record a representative sample of the traces, see a map of the services and the trace data, and to analyze performance issues and errors. This helps you understand how your application and its underlying services are performing so you can identify and address the root cause of issues.

You can take a look at the full X-Ray walk-through in my earlier post to learn more.

Amazon DynamoDB Accelerator (DAX) – In-Memory Caching for Read-Intensive Workloads

I’m fairly sure that you already know about Amazon DynamoDB. As you probably know, it is a managed NoSQL database that scales to accommodate as much table space, read capacity, and write capacity as you need. With response times measured in single-digit milliseconds, our customers are using DynamoDB for many types of applications including adtech, IoT, gaming, media, online learning, travel, e-commerce, and finance. Some of these customers store more than 100 terabytes in a single DynamoDB table and make millions of read or write requests per second. The Amazon retail site relies on DynamoDB and uses it to withstand the traffic surges associated with brief, high-intensity events such as Black Friday, Cyber Monday, and Prime Day.

While DynamoDB’s ability to deliver fast, consistent performance benefits just about any application and workload, there’s always room to do even better. The business value of some workloads (gaming and adtech come to mind, but there are many others) is driven by low-latency, high-performance database reads. The ability to pull data from DynamoDB as quickly as possible leads to faster and more responsive games, or ads that drive the highest click-through rates.
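The pattern DAX applies to DynamoDB reads is essentially a read-through cache: check an in-memory store first, fall back to the backing table on a miss, and populate the cache with a TTL. A minimal, self-contained sketch of that pattern (illustrative only; DAX itself is a managed service that is API-compatible with DynamoDB, so real code keeps using the DynamoDB API unchanged):

```python
import time

class ReadThroughCache:
    """A toy read-through cache with per-entry TTL, illustrating the
    caching pattern DAX applies in front of DynamoDB reads."""

    def __init__(self, fetch, ttl_seconds=300):
        self.fetch = fetch      # function that reads from the backing store
        self.ttl = ttl_seconds
        self.store = {}         # key -> (value, expires_at)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                       # cache hit: served from memory
        value = self.fetch(key)                   # cache miss: read the table
        self.store[key] = (value, time.time() + self.ttl)
        return value

# Usage with a pretend "table" standing in for a DynamoDB lookup:
table = {"user#1": {"name": "Ada"}}
cache = ReadThroughCache(lambda k: table[k])
cache.get("user#1")   # first read goes to the backing table
cache.get("user#1")   # repeat read is served from memory
```

The point of the pattern is that repeat reads of hot items never touch the table at all, which is where the microsecond-level read latency comes from.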

Litho: A declarative UI framework for Android

Litho is a declarative framework for building efficient user interfaces (UI) on Android. It allows you to write highly optimized Android views through a simple functional API based on Java annotations. It was primarily built to implement complex scrollable UIs based on RecyclerView.

With Litho, you build your UI in terms of components instead of interacting directly with traditional Android views. A component is essentially a function that takes immutable inputs, called props, and returns a component hierarchy describing your user interface.

@LayoutSpec
class HelloComponentSpec {

  @OnCreateLayout
  static ComponentLayout onCreateLayout(
      ComponentContext c,
      @Prop String name) {

    return Text.create(c)
        .text("Hello, " + name)
        .paddingDip(ALL, 10)
        .build();
  }
}

You simply declare what you want to display and Litho takes care of rendering it in the most efficient way by computing layout in a background thread, automatically flattening your view hierarchy, and incrementally rendering complex components.

Have a look at our Tutorial for a step-by-step guide on using Litho in your app. You can also read the quick start guide on how to write and use your own Litho components.

Cache eviction: when are randomized algorithms better than LRU?

Once upon a time, my computer architecture professor mentioned that using a random eviction policy for caches really isn’t so bad. That claim can be surprising – if your cache fills up and you have to get rid of something, evicting the least recently used (LRU) entry is an obvious choice, since you’re more likely to need something you’ve used recently. If you have a tight loop, LRU is going to be perfect as long as the loop fits in cache, but it’s going to cause a miss every time if the loop doesn’t fit. A random eviction policy degrades gracefully as the loop gets too big.

In practice, on real workloads, random tends to do worse than other algorithms. But what if we take two random choices and just use LRU between those two choices?
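To make the loop behavior concrete, here is a small simulation (my own sketch, not from the article) that compares miss rates for LRU, pure random eviction, and the "2-random" policy – pick two entries at random and evict the least recently used of the two – on a cyclic access pattern slightly larger than the cache, which is the worst case for LRU described above:

```python
import random
from collections import OrderedDict

def miss_rate(policy, accesses, cache_size):
    """Simulate a cache of cache_size entries over an access trace
    and return the fraction of accesses that miss."""
    cache = OrderedDict()        # key -> timestamp of last use
    misses = 0
    for t, key in enumerate(accesses):
        if key in cache:
            cache[key] = t       # hit: refresh the last-used time
            continue
        misses += 1
        if len(cache) >= cache_size:
            if policy == "lru":
                victim = min(cache, key=cache.get)
            elif policy == "random":
                victim = random.choice(list(cache))
            else:                # "2-random": LRU between two random picks
                a, b = random.sample(list(cache), 2)
                victim = a if cache[a] < cache[b] else b
            del cache[victim]
        cache[key] = t
    return misses / len(accesses)

random.seed(0)
trace = list(range(110)) * 50    # loop over 110 keys with a 100-entry cache
for policy in ("lru", "random", "2-random"):
    print(policy, round(miss_rate(policy, trace, 100), 3))
```

On this trace LRU misses on every single access, while random eviction keeps some of the loop resident and misses far less often – exactly the graceful degradation the professor was describing.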