“This article was excerpted from the book Docker in Action.
It is easy to get started building images if you are already familiar with using containers. A union file system (UFS) mount provides a container’s file system, so any changes you make to the file system inside a container are written as new layers owned by the container that created them.
Before you work with real software, this article will detail the typical workflow using a Hello World example…”
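The same layering shows up when building images from a Dockerfile: each instruction produces a new layer on top of the previous one. The sketch below is an illustrative hello-world Dockerfile, not the book's example:

```dockerfile
# Base image: its existing layers are reused as-is.
FROM busybox:latest

# RUN executes in a temporary container; the resulting filesystem
# diff (here, the new /hello.txt) is committed as a new layer.
RUN echo "hello world" > /hello.txt

# CMD changes only image metadata; no filesystem layer is added.
CMD ["cat", "/hello.txt"]
```

Running `docker history` on the built image lists one entry per instruction, which makes the layer-per-change model easy to see.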
“In this book, you will find descriptions of programs that you can compose (write) in Erlang. The programs will usually be short, and each one has been designed to provide practice material for a particular Erlang programming concept. These programs have not been designed to be of considerable difficulty, though they may ask you to stretch a bit beyond the immediate material and examples that you find in the book Introducing Erlang…”
“Let’s Encrypt is a new certificate authority (CA) offering free and automated SSL/TLS certificates. Certificates issued by Let’s Encrypt are trusted by most browsers in production today, including Internet Explorer on Windows Vista. Simply download and run the Let’s Encrypt client to generate a certificate (there are a few more steps than that, of course, though not many).
Before issuing a certificate, Let’s Encrypt validates ownership of your domain. First, the Let’s Encrypt client running on your host creates a temporary file (a token) with the required information in it. The Let’s Encrypt validation server then makes an HTTP request to retrieve the file and validates the token, which verifies that the DNS record for your domain resolves to the server running the Let’s Encrypt client.
The Let’s Encrypt client does not yet officially support NGINX and NGINX Plus (support is in beta), but you can still get started right away using Let’s Encrypt with NGINX and NGINX Plus. (This blog applies to both NGINX and NGINX Plus, but for ease of reading we’ll refer only to NGINX Plus from now on.) All you need is the webroot plug-in from Let’s Encrypt, and a few small changes to your NGINX Plus configuration…”
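The webroot approach boils down to letting NGINX Plus serve the client's token files over plain HTTP. A minimal sketch of the relevant server block, assuming `/var/www/letsencrypt` as the webroot path and `example.com` as a placeholder domain:

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve the HTTP-01 challenge tokens that the Let's Encrypt
    # webroot plug-in writes under <webroot>/.well-known/acme-challenge/
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }
}
```

The validation server requests `http://example.com/.well-known/acme-challenge/<token>`; as long as this location resolves to the directory the client writes into, issuance succeeds without stopping NGINX Plus.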
SmartQ zWatch is a cheap-ish smartwatch that has been available online for the past few years. It came with its own Android-based OS and no sources. Updates from the manufacturer came quickly at first, but soon stopped. Luckily, an “unbrick tool” was also published that allows you to recover the device no matter its current state. The updates were published as unsigned zip files that the existing bootloader would apply on boot if found in the /media partition. Some people created “ROM”s for this watch, but all of them were just small modifications of the stock firmware. My goal was to produce a fully open-source version of Android for this device. Well, Android is already open source, but the device-specific parts for this device clearly are not.
First, I spent a lot of time inspecting the existing OS and libraries using a disassembler. Then I made a fake build from AOSP for the MIPS architecture, and started comparing what files were in one but not the other. There were a lot. One by one I categorized all of them into two piles: important and not. Most of the HALs were obviously important. Most other files were likely not as important. The next step was producing a build that used the existing binaries of the HALs as pre-built but otherwise worked. This actually took quite a lot of time and work, but eventually Android 4.4.4 ran. It did not run well. The screen flickered insanely, audio did not work, WiFi did not work. But it was an encouraging start – it booted. I estimate that to get to this step I used the “unbrick” tool about 200 times on this watch…”
“RPyC (pronounced are-pie-see), or Remote Python Call, is a transparent Python library for symmetrical remote procedure calls, clustering, and distributed computing. RPyC makes use of object proxying, a technique that exploits Python’s dynamic nature to overcome the physical boundaries between processes and computers, so that remote objects can be manipulated as if they were local…”
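The object-proxying technique rests on Python's dynamic attribute lookup. The toy class below is not RPyC code, just a local stdlib-only sketch of the idea: RPyC's proxies do the same forwarding, except the target lives in a remote process and every lookup travels over the connection.

```python
class Proxy:
    """Forward attribute lookups to a wrapped target object."""

    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # Invoked only for attributes not found on the proxy itself,
        # so every method of the target is reachable through the proxy.
        return getattr(self._target, name)


nums = Proxy([3, 1, 2])
nums.sort()  # forwarded to the underlying list, which is sorted in place
```

Because nothing special is required of the caller, code written against the real object works unchanged against the proxy, which is what makes RPyC "transparent".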
“This multi-part blog series aims to outline the path of a packet from the wire through the network driver and kernel until it reaches the receive queue for a socket. This information pertains to the Linux kernel, release 3.13.0. Links to source code on GitHub are provided throughout to help with context.
This document will describe code throughout the Linux networking stack as well as some code from the following Ethernet device drivers:
- e1000e: Intel PRO/1000 Linux driver
- igb: Intel Gigabit Linux driver
- ixgbe: Intel 10 Gigabit PCI Express Linux driver
- tg3: Broadcom Tigon3 ethernet driver
- be2net: HP Emulex 10 Gigabit PCI Express Linux driver
- bnx2: Broadcom NX2 network driver
Other kernels or drivers will likely be similar, but line numbers and detailed inner workings will likely be different…”
“Flask is a micro web framework powered by Python. Its API is fairly small, making it easy to learn and simple to use. But don’t let this fool you, as it’s powerful enough to support enterprise-level applications handling large amounts of traffic. You can start small with an app contained entirely in one file, then slowly scale up to multiple files and folders in a well-structured manner as your site becomes more and more complex…”
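The "app contained entirely in one file" starting point is about this small (route and greeting text are illustrative):

```python
from flask import Flask

# The entire application lives in one file, per the "start small" approach.
app = Flask(__name__)


@app.route("/")
def index():
    # One view function per route; Flask maps the URL to it.
    return "Hello, Flask!"
```

The development server is started with `flask run` (or `app.run()`); as the site grows, routes move into blueprints spread across multiple files and folders.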
“It turns out that the meaning of ‘load average’ on Unixes is rather more divergent than I thought it was. So here’s the story as I know it.
In the beginning, by which I mean 3 BSD, the load average counted how many processes were runnable or in short-term IO wait (in a decaying average). The BSD kernel computed this count periodically by walking over the process table; you can see this in, for example, 4.2BSD’s vmtotal() function. Unixes that were derived from 4 BSD carried this definition of load average forward, which primarily meant SunOS and Ultrix. Sysadmins using NFS back in those days got very familiar with the ‘short term IO wait’ part of load average, because if your NFS server stopped responding, all of your NFS clients would accumulate lots of processes in IO waits (which were no longer so short term) and their load averages would go skyrocketing to absurd levels…”
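The decaying-average arithmetic can be sketched in a few lines. The sampling interval and the way the NFS scenario is modeled below are illustrative, not 4.2BSD's exact code; only the exponential-decay formula is the standard one:

```python
import math

SAMPLE_INTERVAL = 5.0   # seconds between process-table scans (illustrative)
PERIOD = 60.0           # time constant for the 1-minute average
DECAY = math.exp(-SAMPLE_INTERVAL / PERIOD)


def update_load(avg, nrun):
    """Fold one sample (count of runnable + short-term-IO-wait
    processes) into the exponentially decaying average."""
    return avg * DECAY + nrun * (1.0 - DECAY)


# A stuck NFS server: 50 client processes pile up in IO wait.  After a
# minute of samples the 1-minute average has climbed to ~63% of 50.
load = 0.0
for _ in range(12):     # 12 samples * 5 seconds = 60 seconds
    load = update_load(load, 50)
```

The 5-minute and 15-minute averages use the same formula with larger PERIOD values, which is why they climb (and fall) more slowly.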
“There have been several good talks about using Haskell in industry lately, and several people asked me to write about my personal experiences. Although I can’t give specific details I will speak broadly about some things I’ve learned and experienced.
The myths are true. Haskell code tends to be much more reliable and performant, easier to refactor, and easier to incorporate with coworkers’ code without too much thinking. It’s also just enjoyable to write.
The myths are sometimes truisms. Haskell code tends to be of high quality by construction, but for several reasons that are merely correlated with, not causally linked to, the technical merits of Haskell. Just by virtue of the language being esoteric and having a relatively high barrier to entry, we end up working with developers who would write above-average code in any language. That said, the language actively encourages thoughtful consideration of abstractions and a “brutal” (as John Carmack noted) level of discipline that high-quality code in other languages would require, but that Haskell itself enforces.
Prefer to import libraries qualified. This is typically considered good practice for business-logic libraries, as it makes it easier to locate the source of symbol definitions. The only point of ambiguity I’ve seen is disagreement amongst developers over which core libraries are common enough to import unqualified and how to handle symbols. This ranges the full spectrum from fully qualifying everything (Control.Monad.>>=), to qualifying common things like (Data.Maybe.maybe), or just disambiguating names like…
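The points on that spectrum look roughly like this in an import list (the module choices here are just illustrative):

```haskell
-- Fully qualified: every symbol carries its module name at the use site.
import qualified Control.Monad
import qualified Data.Maybe

-- Qualified under a short alias: the common middle ground.
import qualified Data.Map as Map

-- Unqualified: reserved for symbols deemed common enough to be unambiguous.
import Data.Maybe (fromMaybe)

summarize :: Maybe Int -> Int
summarize m = fromMaybe 0 m + Map.findWithDefault 0 "hits" Map.empty
```

Whichever point a team picks, the payoff is the same: `Map.findWithDefault` tells the reader where the symbol lives without consulting an IDE or grep.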
Consider rolling an internal prelude. As we’ve all learned the hard way, the Prelude is not your friend. The consensus has historically favored the “Small Prelude Assumption,” which presupposes that tools get pushed out into third-party packages, even the core tools that are necessary to do anything (text, bytestring, vector, etc.). This makes life easier for library authors at the cost of some struggle for downstream users…”
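An internal prelude is just an ordinary module that re-exports the standard Prelude plus the scattered essentials, so that application code imports one thing. A minimal sketch, with module and type choices that are illustrative rather than prescriptive:

```haskell
-- MyPrelude.hs: the one module every other module imports.
module MyPrelude
  ( module Prelude
  , Text        -- from the text package
  , ByteString  -- from the bytestring package
  , Vector      -- from the vector package
  ) where

import Prelude
import Data.Text (Text)
import Data.ByteString (ByteString)
import Data.Vector (Vector)
```

Modules then begin with `import MyPrelude` (often alongside the `NoImplicitPrelude` extension), so the "core tools necessary to do anything" are in scope without a stanza of boilerplate imports in every file.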