This post is part of the Before you go Go series (https://medium.com/gett-engineering/before-you-go-go-bf4f861cdec7), where we explore the world of Go and share tips and insights you should know when writing Go, so you don’t have to learn them the hard way.
As the new year begins, we encourage you to make a resolution to follow Amazon DynamoDB best practices. Following these best practices can help you maximize performance and minimize throughput costs when working with DynamoDB; each is covered in detail in the DynamoDB documentation.
Message queues like Apache Kafka are a common component of distributed systems. This blog post will look at several different strategies for improving performance when working with message queues.
Kafka consists of topics, each of which has one or more partitions.
Each partition is an ordered, immutable sequence of records that is continually appended to—a structured commit log. The records in the partitions are each assigned a sequential id number called the offset that uniquely identifies each record within the partition.
With this structured commit log, each consumer follows the same basic steps:
- A consumer is assigned a particular topic-partition (either manually or automatically via a consumer group)
- The last committed offset is read so that the consumer begins where it left off
- Messages are consumed from Kafka
- Messages are processed in some way
- The processed message offset is committed back to Kafka
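The steps above can be sketched in Go. This is a minimal illustration only: an in-memory slice stands in for a topic-partition and a map stands in for Kafka's committed-offset store, where a real consumer would use a Kafka client library.

```go
package main

import "fmt"

// partition is an in-memory stand-in for a Kafka topic-partition:
// an ordered, append-only sequence of records.
var partition = []string{"order-1", "order-2", "order-3"}

// offsets stands in for Kafka's committed-offset store,
// keyed by consumer group.
var offsets = map[string]int{}

// consume reads records for the given group starting at its last
// committed offset, processes each one, and commits the new offset
// only after processing succeeds.
func consume(group string, process func(string) error) error {
	for off := offsets[group]; off < len(partition); off++ {
		if err := process(partition[off]); err != nil {
			return err // offset not committed; the record will be re-read
		}
		offsets[group] = off + 1 // commit only after successful processing
	}
	return nil
}

func main() {
	_ = consume("billing", func(msg string) error {
		fmt.Println("processed", msg)
		return nil
	})
	fmt.Println("committed offset:", offsets["billing"])
}
```

Note the ordering: the offset is committed only after processing succeeds, so a crash mid-processing means the record is consumed again on restart rather than lost.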
Other types of message queues (like AMQP) have a similar flow – messages are consumed, processed and acknowledged. Generally we rely on idempotent message processing – that is, the ability to process the same message twice with no ill effect – and err on the side of only committing if we’re certain we’ve done what we need to. This gives us durability and guarantees that every message will be processed, even if our consumer process crashes.
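A common way to make processing idempotent is to track the IDs of messages already applied and skip redeliveries. Here is a minimal Go sketch; the seen-set is an in-memory map, a hypothetical stand-in for the durable store (e.g. a database keyed by message ID) a production system would use.

```go
package main

import "fmt"

// message carries a unique ID so redelivery can be detected.
type message struct {
	ID   string
	Body string
}

// processor applies each message at most once by remembering
// which IDs it has already handled.
type processor struct {
	seen    map[string]bool
	applied int
}

func (p *processor) handle(m message) {
	if p.seen[m.ID] {
		return // duplicate delivery: safe to ignore
	}
	p.seen[m.ID] = true
	p.applied++ // the real side effect would happen here
}

func main() {
	p := &processor{seen: map[string]bool{}}
	// The same message delivered twice, e.g. after a crash
	// before the offset was committed.
	p.handle(message{ID: "42", Body: "charge card"})
	p.handle(message{ID: "42", Body: "charge card"})
	fmt.Println("applied:", p.applied)
}
```

Because the side effect runs at most once per ID, re-reading uncommitted messages after a crash causes no harm.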
Go kit is a collection of Go (golang) packages (libraries) that help you build robust, reliable, maintainable microservices. It was originally conceived as a toolkit to help larger (so-called modern enterprise) organizations adopt Go as an implementation language. But it very quickly “grew downward”, and now serves smaller startups and organizations just as well. For more about the origins of Go kit, see Go kit: Go in the modern enterprise.
- Multi framework: Cortex supports TensorFlow, PyTorch, scikit-learn, XGBoost, and more.
- Autoscaling: Cortex automatically scales APIs to handle production workloads.
- CPU / GPU support: Cortex can run inference on CPU or GPU infrastructure.
- Spot instances: Cortex supports EC2 spot instances.
- Rolling updates: Cortex updates deployed APIs without any downtime.
- Log streaming: Cortex streams logs from deployed models to your CLI.
- Prediction monitoring: Cortex monitors network metrics and tracks predictions.
- Minimal configuration: Cortex deployments are defined in a single configuration file.
With tech offices around the world, Uber engineers are responsible for building new features and systems that improve rideshare, new mobility, food delivery, and other services enabled by our platform. Our Uber Engineering Blog highlights some of these efforts, giving technical explanations of our work that can serve as useful examples to the engineering community at large.
Throughout 2019, we published articles about front-end and back-end development, data science, applied machine learning, and cutting-edge research in artificial intelligence. Some of our most popular articles introduced new open source projects originally developed at Uber, such as Kraken, Base Web, Ludwig, and AresDB. Likewise, we shared articles from Uber AI covering research projects such as POET, EvoGrad, LCA, and Plato, and original research on our new research publications site.
Along with our technical articles, we offer a look at what it’s like to work at Uber through interviews with engineers and profiles of offices and community building programs…
This page is a collection of MIT courses and lectures on deep learning, deep reinforcement learning, autonomous vehicles, and artificial intelligence organized by Lex Fridman. Here are some steps to get started:
- Sign up to our mailing list for occasional updates.
- Connect on Twitter or LinkedIn for more frequent updates.
- Read the Deep Learning Basics blog post and check out the code tutorials on our GitHub.
- Watch the Deep Learning Basics and other lectures below.
- Attend the following lectures at MIT in January 2020. If you cannot attend, the lectures will also be posted on YouTube (with a delay of a few days).
This article is about how we developed a high-load WebSocket server in Go.
If you are familiar with WebSocket, but know little about Go, I hope you will still find this article interesting in terms of ideas and techniques for performance optimization.
Khan Academy is embarking on a huge effort to rebuild our server software on a more modern stack in Go.
At Khan Academy, we don’t shy away from a challenge. After all, we’re a non-profit with a mission to provide a “free world-class education to anyone, anywhere”. Challenges don’t get much bigger than that.
Our mission requires us to create and maintain software that provides tools for teachers and coaches who work with students, as well as a personalized learning experience both in and out of school. Millions of people rely on our servers each month for a wide variety of features we’ve built up over the past ten years.
Ten years is a long time in technology! We chose Python as our backend server language and it has been a productive choice for us. Of course, ten years ago we chose Python 2 because Python 3 was still very new and not well supported.