A new Go API for Protocol Buffers

Introduction

We are pleased to announce the release of a major revision of the Go API for protocol buffers, Google’s language-neutral data interchange format.

Motivations for a new API

The first protocol buffer bindings for Go were announced by Rob Pike in March of 2010. Go 1 would not be released for another two years.

In the decade since that first release, the package has grown and developed along with Go. Its users’ requirements have grown too.

Many people want to write programs that use reflection to examine protocol buffer messages. The reflect package provides a view of Go types and values, but omits information from the protocol buffer type system. For example, we might want to write a function that traverses a log entry and clears any field annotated as containing sensitive data. The annotations are not part of the Go type system.
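
The new API addresses this with protobuf reflection. As a minimal sketch of such a function (the isSensitive helper is hypothetical; a real one would read the annotation from the field's custom options):

    package redact

    import (
        "google.golang.org/protobuf/proto"
        "google.golang.org/protobuf/reflect/protoreflect"
    )

    // clearSensitive clears every populated field for which isSensitive
    // reports true, recursing into singular message fields. It works on
    // any message type, generated or dynamic, because it uses protobuf
    // reflection rather than Go reflection.
    func clearSensitive(m protoreflect.Message) {
        m.Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
            if isSensitive(fd) {
                m.Clear(fd)
                return true
            }
            // Recurse into singular message fields; lists and maps are
            // omitted to keep the sketch short.
            if fd.Kind() == protoreflect.MessageKind && !fd.IsList() && !fd.IsMap() {
                clearSensitive(v.Message())
            }
            return true
        })
    }

    // isSensitive is a hypothetical helper; a real implementation would
    // inspect fd.Options() for the sensitivity annotation.
    func isSensitive(fd protoreflect.FieldDescriptor) bool { return false }

    // Redact is the entry point: any proto.Message exposes its reflective
    // view via ProtoReflect.
    func Redact(msg proto.Message) { clearSensitive(msg.ProtoReflect()) }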

Another common desire is to use data structures other than the ones generated by the protocol buffer compiler, such as a dynamic message type capable of representing messages whose type is not known at compile time.
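
The new module's dynamicpb package serves exactly this need. A hedged sketch, assuming you already hold a MessageDescriptor obtained at runtime (from a registry or a parsed descriptor set, which is left abstract here):

    package dynamic

    import (
        "google.golang.org/protobuf/encoding/protojson"
        "google.golang.org/protobuf/reflect/protoreflect"
        "google.golang.org/protobuf/types/dynamicpb"
    )

    // newDynamic builds and populates a message whose type was not known
    // at compile time; no generated code is involved.
    func newDynamic(md protoreflect.MessageDescriptor, jsonData []byte) (*dynamicpb.Message, error) {
        msg := dynamicpb.NewMessage(md)
        if err := protojson.Unmarshal(jsonData, msg); err != nil {
            return nil, err
        }
        return msg, nil
    }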

We also observed that a frequent source of problems was that the proto.Message interface, which identifies values of generated message types, does very little to describe the behavior of those types. When users create types that implement that interface (often inadvertently by embedding a message in another struct) and pass values of those types to functions expecting a generated message value, programs crash or behave unpredictably.
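
To make the pitfall concrete, here is a self-contained illustration; the types are invented for the example, and Message mirrors the shape of the old interface:

    package main

    import "time"

    // Entry stands in for a generated message; generated types carry
    // these three methods, which is all the old interface required.
    type Entry struct{ Text string }

    func (*Entry) Reset()         {}
    func (*Entry) String() string { return "" }
    func (*Entry) ProtoMessage()  {}

    // Message mirrors the old proto.Message interface.
    type Message interface {
        Reset()
        String() string
        ProtoMessage()
    }

    // AnnotatedEntry embeds Entry, so the promoted methods make it
    // satisfy Message too, even though the protobuf runtime knows
    // nothing about the wrapper.
    type AnnotatedEntry struct {
        *Entry
        ReceivedAt time.Time
    }

    var _ Message = &AnnotatedEntry{} // compiles; misbehaves when marshaled

    func main() {}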

All three of these problems have a common cause, and a common solution: The Message interface should fully specify the behavior of a message, and functions operating on Message values should freely accept any type that correctly implements the interface.

Since it is not possible to change the existing definition of the Message type while keeping the package API compatible, we decided that it was time to begin work on a new, incompatible major version of the protobuf module.

Today, we’re pleased to release that new module. We hope you like it.

https://blog.golang.org/a-new-go-api-for-protocol-buffers

Advanced Go Concurrency

If you’ve used Go for a while, you’re probably aware of some of the basic Go concurrency primitives:

  • The go keyword for spawning goroutines
  • Channels, for communicating between goroutines
  • The context package for propagating cancellation
  • The sync and sync/atomic packages for lower-level primitives such as mutexes and atomic memory access

These language features and packages combine to provide a very rich set of tools for building concurrent applications. What you might not have discovered yet is a set of higher-level concurrency primitives in the “extended standard library” at golang.org/x/sync. We’ll be taking a look at these in this article.
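
For a taste, here is a hedged sketch using errgroup, one of the x/sync packages the article covers: it runs a group of goroutines, cancels the shared context on the first failure, and returns the first error from Wait. The URLs are placeholders.

    package main

    import (
        "context"
        "fmt"
        "net/http"

        "golang.org/x/sync/errgroup"
    )

    func main() {
        g, ctx := errgroup.WithContext(context.Background())

        urls := []string{"https://golang.org", "https://blog.golang.org"}
        for _, url := range urls {
            url := url // capture the loop variable (pre-Go 1.22 semantics)
            g.Go(func() error {
                req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
                if err != nil {
                    return err
                }
                resp, err := http.DefaultClient.Do(req)
                if err != nil {
                    return err
                }
                return resp.Body.Close()
            })
        }
        // Wait blocks until all goroutines finish, returning the first error.
        if err := g.Wait(); err != nil {
            fmt.Println("fetch failed:", err)
        }
    }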

https://encore.dev/blog/advanced-go-concurrency

Go: Discovery of the Trace Package

This article is based on Go 1.13.

Go provides us with a tool to enable tracing during the runtime and get a detailed view of the execution of our program. This tool can be enabled with the -trace flag when running tests, from pprof to get live tracing, or anywhere in our code thanks to the trace package. It can be made even more powerful by enhancing it with your own traces. Let’s review how it works.
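
As a quick orientation (the file name and task/region names are arbitrary), tracing a whole program and adding your own tasks and regions looks roughly like this; the output can then be inspected with go tool trace trace.out:

    package main

    import (
        "context"
        "os"
        "runtime/trace"
    )

    func main() {
        // Record a trace of the whole run to trace.out.
        f, err := os.Create("trace.out")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        if err := trace.Start(f); err != nil {
            panic(err)
        }
        defer trace.Stop()

        // Your own traces: a task groups related work, and a region
        // annotates a hot section; both show up in the trace UI.
        ctx, task := trace.NewTask(context.Background(), "processRequest")
        defer task.End()

        region := trace.StartRegion(ctx, "computation")
        // ... the work being measured ...
        region.End()
    }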

https://medium.com/a-journey-with-go/go-discovery-of-the-trace-package-e5a821743c3c

Go memory ballast: How I learnt to stop worrying and love the heap

I’m a big fan of small code changes that can have large impact. This may seem like an obvious thing to state, but let me explain:

  1. These types of changes often involve diving into and understanding things one is not familiar with.
  2. Even with the most well-factored code, there is a maintenance cost to each optimization you add, and it’s usually (although not always) pretty linear with the number of lines of code you end up adding or changing.

We recently rolled out a small change that reduced the CPU utilization of our API frontend servers at Twitch by ~30% and reduced overall 99th percentile API latency during peak load by ~45%.

This blog post is about the change, the process of finding it, and an explanation of how it works.
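
The change itself is famously tiny. A sketch of its shape (the 10 GiB figure follows the article; you would tune it to your own heap):

    package main

    import "runtime"

    func main() {
        // Allocate a large, never-written ballast. It counts toward the
        // live heap, so the GC (with the default GOGC=100) lets the heap
        // grow much further between cycles. Because the pages are never
        // touched, the OS never commits physical memory for them.
        ballast := make([]byte, 10<<30) // 10 GiB

        // ... run the application ...

        // Keep the ballast reachable for the lifetime of the process.
        runtime.KeepAlive(ballast)
    }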

https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i-learnt-to-stop-worrying-and-love-the-heap-26c2462549a2/

Plumbing At Scale – Event Sourcing and Stream Processing Pipelines at Grab

As custodians and builders of Grab’s streaming platform, which operates at massive scale (think terabytes of data ingress each hour), the Coban team’s mission is to provide a NoOps, managed platform for seamless, secure, real-time access to event streams for every team at Grab.

Coban Sewu Waterfall in Indonesia. (Streams, get it?)

Streaming systems are often at the heart of event-driven architectures, and what starts as a need for a simple message bus for asynchronous processing of events quickly evolves into one that requires more sophisticated stream processing paradigms. Earlier this year, we saw common patterns of event processing emerge across our Go backend ecosystem, including:

  • Filtering and mapping stream events of one type to another (a minimal sketch follows this list)
  • Aggregating events into time windows and materializing them back to the event log or to various types of transactional and analytics databases
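
As an illustration only (Event and the function signature are invented for this sketch, not Grab’s actual types), the filter-and-map pattern can be expressed over channel-based streams like so:

    package stream

    // Event is a stand-in for a stream event.
    type Event struct {
        Type    string
        Payload []byte
    }

    // FilterMap consumes events, drops those that keep rejects, and
    // emits the transformed remainder on a new stream. The output
    // channel is closed when the input is exhausted.
    func FilterMap(in <-chan Event, keep func(Event) bool, transform func(Event) Event) <-chan Event {
        out := make(chan Event)
        go func() {
            defer close(out)
            for e := range in {
                if keep(e) {
                    out <- transform(e)
                }
            }
        }()
        return out
    }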

Generally, a class of problems surfaced which could be elegantly solved through an event sourcing platform with a stream processing framework built over it, similar to the Keystone platform at Netflix.

This article details our journey building and deploying an event sourcing platform in Go, building a stream processing framework over it, and then scaling it (reliably and efficiently) to service over 300 billion events a week.

https://engineering.grab.com/plumbing-at-scale

URL shortening service written in Go and React

Short backend is built on top of Uncle Bob’s Clean Architecture, the central objective of which is separation of concerns.

Short adopts Microservices Architecture to organize dependent services around business capabilities and to enable independent deployment of each service.

Short leverages Kubernetes to automate deployment, scaling, and management of containerized microservices.

Short is maintained by a small team of talented software engineers working at Google, Uber, and VMware as a side project.

https://github.com/short-d/short

Make resilient Go net/http servers using timeouts, deadlines and context cancellation

When it comes to timeouts, there are two types of people: those who know how tricky they can be, and those who are yet to find out.

As tricky as they are, timeouts are a reality in the connected world we live in. As I am writing this, on the other side of the table, two people are typing on their smartphones, probably chatting with people very far from them. All of it made possible by networks.

Networks and all their intricacies are here to stay, and we, who write servers for the web, have to know how to use them efficiently and guard against their deficiencies.

Without further ado, let’s look at timeouts and how they affect our net/http servers.
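
To anchor the discussion, the server-side timeouts the article covers live on http.Server; a minimal sketch (the durations are illustrative, not recommendations):

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello"))
        })

        srv := &http.Server{
            Addr:              ":8080",
            Handler:           mux,
            ReadHeaderTimeout: 5 * time.Second,   // reading request headers
            ReadTimeout:       10 * time.Second,  // reading the full request
            WriteTimeout:      10 * time.Second,  // writing the response
            IdleTimeout:       120 * time.Second, // keep-alive connections
        }
        log.Fatal(srv.ListenAndServe())
    }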

https://ieftimov.com/post/make-resilient-golang-net-http-servers-using-timeouts-deadlines-context-cancellation/

Microservices in Golang

Getting started, gRPC: https://ewanvalentine.io/microservices-in-golang-part-1/
Docker and micro: https://ewanvalentine.io/microservices-in-golang-part-2/
Docker compose and datastores: https://ewanvalentine.io/microservices-in-golang-part-3/
Authentication and JWT: https://ewanvalentine.io/microservices-in-golang-part-4/
Event brokering: https://ewanvalentine.io/microservices-in-golang-part-5/
Web Clients: https://ewanvalentine.io/microservices-in-golang-part-6/
Terraform: https://ewanvalentine.io/microservices-in-golang-part-7/
Kubernetes: https://ewanvalentine.io/microservices-in-golang-part-8/
CircleCI: https://ewanvalentine.io/microservices-in-golang-part-9/
Summary: https://ewanvalentine.io/microservices-in-golang-part-10/