For our products, like the trivago hotel search, we use Redis a lot. The use cases vary: caching, temporary storage of data before moving it into another store, and serving as a typical database for hotel metadata, including persistence.
One of the key problems in onboarding developers to use modern Common Lisp is the vertical wall of difficulty. Things that are routinely problematic:
- Emacs use. Most people don’t use Emacs.
- Library creation. Putting together ASDF libraries and using them is a fairly horrid experience the first time.
- Selection of Lisp implementation to use, along with an up-to-date discussion of pros and cons.
- Putting together serious projects is not commonly discussed.
This site is dedicated to handling these problems. My goal is to put together an introduction/tutorial for practicing professionals and hobbyists from other languages. People who want to get started with Lisp beyond just typing into a REPL. Right now, it feels like this information is less disseminated and much less centralized than it otherwise might be. It’s not intended to be a HOWTO for Common Lisp. That’s been covered quite well. But it is intended to be a HOWTO on how to put together a Lisp environment.
Anyway, I’d like to collaborate with other people to make this a remarkably fine Lisp help site. Contributions are both accepted and welcome. It’s a wholly static site at this point in time – I don’t see a need for articulate-lisp.com to have a dynamic backend. Perhaps/probably one of the code examples will be a webapp.
P.S.: feel free to contact me for anything you like.
- Set up your implementation.
- Set up Quicklisp.
- Write some Lisp.
- Check out the new project tutorial.
- Look at Trotter, our web spider.
- Keep our Quick Links bookmarked.
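The steps above can be sketched as a short shell session. This is a minimal sketch, assuming SBCL on a Debian/Ubuntu-like system; the package name and package manager vary by platform, and the Quicklisp commands follow the standard quickstart:

```shell
# 1. Set up your implementation (here: SBCL; adjust for your platform).
sudo apt-get install sbcl

# 2. Set up Quicklisp, the de facto library manager.
curl -O https://beta.quicklisp.org/quicklisp.lisp
sbcl --load quicklisp.lisp \
     --eval '(quicklisp-quickstart:install)' \
     --eval '(ql:add-to-init-file)' \
     --quit

# 3. Write some Lisp: load a library with Quicklisp and use it.
sbcl --eval '(ql:quickload "alexandria")' \
     --eval '(print (alexandria:iota 5))' \
     --quit
```

After `ql:add-to-init-file`, Quicklisp loads automatically in every future SBCL session, so step 3 works from any REPL.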
Polymer vs. React—which should you use? It’s a question that inevitably crops up whenever anyone discusses the components-based future of the web. While both Polymer and React are libraries created to support a component-oriented approach to front-end web development, they do so in very different ways. In this article, we’ll try to illustrate the role each of these technologies plays in front-end web development so that you can decide which is best suited for your needs.
Around September 2016 I wrote two articles on using Python for accessing, visualizing, and evaluating trading strategies (see part 1 and part 2). They were my most popular posts until I published my article on learning programming languages (featuring my dad’s story as a programmer), and they have been translated into both Russian (the translation used to be on backtest.ru, at a link that no longer appears to work) and Chinese (here and here). R has excellent packages for analyzing stock data, so I feel there should be a “translation” of the post for using R for stock data analysis.
This post is the first in a two-part series on stock data analysis using R, based on a lecture I gave on the subject for MATH 3900 (Data Science) at the University of Utah. In these posts, I will discuss basics such as obtaining the data from Yahoo! Finance, visualizing stock data, moving averages, developing a moving-average crossover strategy, backtesting, and benchmarking. The final post will include practice problems. This first post discusses topics up to introducing moving averages.
NOTE: The information in this post is of a general nature containing information and opinions from the author’s perspective. None of the content of this post should be considered financial advice. Furthermore, any code written here is provided without any form of guarantee. Individuals who choose to use it do so at their own risk.
Lisp is a deep language with many unusual and powerful features. The goal of this tutorial is not to teach you many of those powerful features; rather, it’s to teach you just enough Lisp that you can get up and coding quickly if you have a background in a procedural language such as C or Java.
Notably this tutorial does not teach macros, CLOS, the condition system, much about packages and symbols, or very much I/O.
In response to my last post about dd, a friend of mine noticed that GNU cp always uses a 128 KB buffer size when copying a regular file; this is also the buffer size used by GNU cat. If you use strace to watch what happens when copying a file, you should see a lot of 128 KB read/write sequences:
$ strace -s 8 -xx cp /dev/urandom /dev/null
...
read(3, "\x61\xca\xf8\xff\x1a\xd6\x83\x8b"..., 131072) = 131072
write(4, "\x61\xca\xf8\xff\x1a\xd6\x83\x8b"..., 131072) = 131072
read(3, "\xd7\x47\x8f\x09\xb2\x3d\x47\x9f"..., 131072) = 131072
write(4, "\xd7\x47\x8f\x09\xb2\x3d\x47\x9f"..., 131072) = 131072
read(3, "\x12\x67\x90\x66\xb7\xed\x0a\xf5"..., 131072) = 131072
write(4, "\x12\x67\x90\x66\xb7\xed\x0a\xf5"..., 131072) = 131072
read(3, "\x9e\x35\x34\x4f\x9d\x71\x19\x6d"..., 131072) = 131072
write(4, "\x9e\x35\x34\x4f\x9d\x71\x19\x6d"..., 131072) = 131072
...
As you can see, each copy is operating on buffers 131072 bytes in size, which is 128 KB. GNU cp is part of the GNU coreutils project, and if you go diving into the coreutils source code you’ll find this buffer size is defined in the file src/ioblksize.h. The comments in this file are really fascinating. The author of the code in this file (Jim Meyering) did a benchmark using dd if=/dev/zero of=/dev/null with different values of the block size parameter, bs. On a wide variety of systems, including older Intel CPUs, modern high-end Intel CPUs, and even an IBM POWER7 CPU, a 128 KB buffer size is fastest. I used gnuplot to graph these results, shown below. Higher transfer rates are better, and the different symbols represent different system configurations.
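You can replicate a small version of that benchmark yourself with the same dd invocation. The sketch below copies a fixed amount of data at several block sizes and lets dd report the throughput for each; the sizes and total volume here are my own choices for a quick run, not the exact parameters from ioblksize.h:

```shell
# Copy 256 MiB from /dev/zero to /dev/null at several buffer sizes
# and print dd's throughput line for each. numfmt converts "128K"
# etc. into a byte count so we can scale "count" accordingly.
total=$((256 * 1024 * 1024))
for bs in 4K 32K 128K 512K 1M; do
    count=$(( total / $(numfmt --from=iec "$bs") ))
    printf '%-5s ' "$bs"
    dd if=/dev/zero of=/dev/null bs="$bs" count="$count" 2>&1 | tail -n 1
done
```

On most Linux machines the throughput climbs steeply up to somewhere in the tens-of-KB range and then flattens out, which is consistent with coreutils settling on 128 KB.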