A modernized and annotated code companion to Paul Graham’s “On Lisp”

“This repository contains a version of the code from On Lisp modified for use in modern Lisp environments. Paul Graham’s original code can be found here.

Among other necessary changes, this version:

  • Updates code that relied on pre-ANSI built-ins
  • Includes the bug fixes mentioned on pg’s errata page
  • Organizes everything into a modern system structure with ASDF and named-readtables
  • Adapts most of the example code into test suites
  • Makes it easy to load each version of the query system, Prolog system, and OOP system separately

It was written to follow along with the book page by page and catalogue dependencies between the chapters, which become quite complex toward the end.

The PDF file of the book available from Paul Graham’s site isn’t so great for reading on a screen. A version with smaller margins and the missing figures re-added can be found here. Figure 20.2 is missing a line between 2 and 3.

An online HTML version of the book, minus graphical figures, can be found here.

If you’re using CLISP, clone the ahead-of-Quicklisp version of lisp-unit2 to load the test code…”


A Quick Comparison of Nim vs. Rust

Rust and Nim are two new programming languages I have been following for a while. Shortly after my previous blog post about Rust, Nim 0.10.2 was released. This led me to take a closer look at Nim and, naturally, to compare it with Rust.

In this blog post, I’m going to share my two simple examples written in Nim and Rust, the rough execution time comparison, and my subjective impressions of coding in both…”


60sec Review: Rust Language

“Rust is different. Rust is a statically typed compiled language meant to target the same tasks that you might use C or C++ for today, but its whole purpose in life is to promote memory safety. By design, Rust code can’t have dangling pointers, buffer overflows, or a whole host of other memory errors. Any code which would cause this literally can’t be compiled. The language doesn’t allow it. I know it sounds crazy, but it really does work…”


Markov Chains – Explained

“A Markov chain is a probabilistic process that relies on the current state to predict the next state. For Markov chains to be effective, the current state has to depend on the previous state in some way. For instance, from experience we know that if it looks cloudy outside, the next state we expect is rain. We can also say that when the rain starts to subside into cloudiness, the next state will most likely be sunny. Not every process has the Markov property; the lottery, for example, does not: this week’s winning numbers have no dependence on the previous week’s winning numbers…”
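The weather example in the excerpt can be sketched as a small transition table and a sampler. This is a minimal illustration, not code from the linked article; the state names and probabilities are made up for the example.

```lisp
;; A toy weather chain: each state maps to a list of (next-state . probability)
;; pairs. Probabilities for each state sum to 1.0.
(defparameter *transitions*
  '((:cloudy . ((:rain . 0.6) (:cloudy . 0.3) (:sunny . 0.1)))
    (:rain   . ((:rain . 0.5) (:cloudy . 0.4) (:sunny . 0.1)))
    (:sunny  . ((:sunny . 0.7) (:cloudy . 0.3)))))

(defun next-state (state)
  "Sample the next state using only the current one -- the Markov property."
  (let ((r (random 1.0)))
    (loop for (next . p) in (cdr (assoc state *transitions*))
          do (decf r p)
          when (<= r 0) return next
          ;; Guard against floating-point round-off on the last pair.
          finally (return next))))

(defun walk (state n)
  "Collect a chain of N successive states starting from STATE."
  (loop repeat n
        collect state
        do (setf state (next-state state))))
```

Calling `(walk :cloudy 7)` yields a random sequence such as `(:CLOUDY :RAIN :RAIN :CLOUDY :SUNNY :SUNNY :SUNNY)`; note that each step consults only the current state, never the history.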


Can You Beat This Virtually Unbeatable Poker Algorithm?

“The challenge is on. Computer scientists say they’ve created an algorithm that has essentially solved a version of Texas hold ’em, and it’s guaranteed to beat every single puny human competitor in the long run. Don’t believe it? Why, you can play against the program yourself.

Cepheus, as this poker-playing program is called, plays a virtually perfect game of heads-up limit hold’em. The variant is like the popular Texas hold ’em, except there are only two players and a fixed number of bet sizes and raises. That still leaves 3.16 × 10¹⁷ states in the game…”


Learn Lisp The Hard Way

“It is with great pleasure that I announce that Learn Lisp The Hard Way is now an official, collaborative project of the Toronto Lisp User Group; and we are now accepting donations specifically for this project through PayPal. I will still be participating in the exact same capacity, but hopefully this change will help LLTHW become the definitive introduction to Common Lisp significantly faster. In particular I would like to welcome Leo “Inaimathi” Zovic as a co-author—the author of cl-notebook, cl-css, formlets, and many other excellent Lisp libraries—although other members of LispTO have expressed interest as well…”



Reader Macros in Common Lisp

“Reader macros are perhaps not as famous as ordinary macros. While macros are a great way to create your own DSL, reader macros provide even greater flexibility by allowing you to create entirely new syntax on top of Lisp.

Paul Graham explains them very well in On Lisp (Chapter 17, Read-Macros):

The three big moments in a Lisp expression’s life are read-time, compile-time, and runtime. Functions are in control at runtime. Macros give us a chance to perform transformations on programs at compile-time. …read-macros… do their work at read-time.

Macros and read-macros see your program at different stages. Macros get hold of the program when it has already been parsed into Lisp objects by the reader, and read-macros operate on a program while it is still text. However, by invoking read on this text, a read-macro can, if it chooses, get parsed Lisp objects as well. Thus read-macros are at least as powerful as ordinary macros.

(Note that read-macros and reader macros mean the same thing)…”
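The read-time transformation Graham describes can be made concrete with a small example. The following sketch (my own, not from the article) installs a dispatch macro character so that `#[...]` reads as a call to `list` — the reader sees raw text, invokes `read` on it, and hands the macro parsed Lisp objects, exactly as the excerpt explains.

```lisp
;; Make #[a b c] read as (list a b c). The reader function receives the
;; stream positioned just after "#[", plus the sub-character and an
;; optional numeric argument (both ignored here).
(set-dispatch-macro-character
 #\# #\[
 (lambda (stream subchar arg)
   (declare (ignore subchar arg))
   (cons 'list (read-delimited-list #\] stream t))))

;; The closing bracket must act like ")" -- a terminating macro character --
;; so READ-DELIMITED-LIST knows where the form ends.
(set-macro-character #\] (get-macro-character #\)))
```

After these two forms, typing `#[1 (+ 1 1) 3]` at the REPL reads as `(list 1 (+ 1 1) 3)` and evaluates to `(1 2 3)`: the syntax was rewritten before the compiler or evaluator ever saw it.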


Wikipedia on HHVM

“If you’ve been watching our GitHub wiki, following us on Twitter, or reading the wikitech-l mailing list, you’ve probably known for a while that Wikipedia has been transitioning to HHVM. This has been a long process involving lots of work from many different people, and as of a few weeks ago, all non-cached API and web traffic is being served by HHVM. This blog post from the Wikimedia Foundation contains some details about the switch, as does their page about HHVM.

I spent four weeks in July and August of 2014 working at the Wikimedia Foundation office in San Francisco to help them out with some final migration issues. While the primary goal was to assist in their switch to HHVM, it was also a great opportunity to experience HHVM as our open source contributors see it. I tried to do most of my work on WMF servers, using HHVM from GitHub rather than our internal repository. In addition to the work I did on HHVM itself, I also gave a talk about what the switch to HHVM means for Wikimedia developers…”