Zepto is a minimalist JavaScript library for modern browsers

“Zepto is a minimalist JavaScript library for modern browsers with a largely jQuery-compatible API. If you use jQuery, you already know how to use Zepto. While 100% jQuery coverage is not a design goal, the APIs provided match their jQuery counterparts. The goal is to have a ~5-10k modular library that downloads and executes fast, with a familiar and versatile API, so you can concentrate on getting stuff done…”

http://zeptojs.com/

via: http://blog.caelum.com.br/nao-use-jquery-no-seu-site-mobile-conheca-o-zepto-js

Is GNOME “staring into the abyss”?

“Benjamin Otte, a leading GNOME developer, thinks GNOME, once a popular Linux/Unix desktop but now more often used as a foundation for other desktop interfaces, is “staring into the abyss.”

“I can’t argue with him. I think GNOME lost its way when it decided to move from its excellent 2.x release series to a barely usable GNOME 3.x line in 2009. Like many Linux users, I loved GNOME 2.x and hated GNOME 3.x. I’m far from the only one who disliked GNOME 3.x that strongly. Linus Torvalds, Linux’s father, would like to see GNOME forked and the current GNOME 3.x buried…”

http://www.zdnet.com/is-gnome-staring-into-the-abyss-7000001833/

Stella is a desktop-focused, GNOME 2-based CentOS 6 remix

“It is available as installable live media and contains standard CentOS software plus some multimedia and desktop additions. I made it a rule not to overwrite CentOS Base, so besides the changed artwork and naming, what you get under the hood is basically CentOS. Stella comes bundled with various 3rd-party repos:
– EPEL
– ELRepo
– Adobe
– nux-dextop (my own repo containing additional programs, mostly desktop oriented)
– nux-libreoffice (my own repo containing LibreOffice, backported from Fedora)

It was my intention to make it so that the bundled repos do NOT conflict. You do not need to mess around with yum configuration, priorities etc; it should all just work(tm)!…”

http://li.nux.ro/stella/

Binary Search Is a Pathological Case for Caches

“Programmers tend to like round numbers, i.e. powers of two. So do hardware designers. Sadly, this shared value doesn’t always work to our advantage. One common issue is that of cache line aliasing induced by alignment.

Binary search suffers from a related ailment when executed on medium or large vectors of almost power-of-two size (in bytes), but it can be cured. Once that is done, searching a sorted vector can be as fast as searches with a well-tuned hash table, for a few realistic access patterns.

The task is interesting to me because I regularly work with static, or almost static, sets: sets for which there’s a majority of lookups, while updates are either rare or batchable. For such sets, the improved performance of explicit balanced search trees on insertions is rarely worth the slowdown on lookups, nor the additional space usage. Replacing binary search with slightly off-center binary or quaternary (four-way) searches only adds a bit more code to provide even quicker, more consistent lookup times…”

http://www.pvk.ca/Blog/2012/07/30/binary-search-is-a-pathological-case-for-caches/

Context switches and serialization in Node

“Go scales quite well across multiple cores iff you decompose the problem in a way that’s amenable to Go’s strategy. Same with Erlang. No one is making “excuses”. It’s important to understand these problems. Not understanding concurrency, parallelism, their relationship, and Amdahl’s Law is what has Node.js in such trouble right now…”
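Since the quote leans on Amdahl's Law, a quick numeric illustration of what it says: the serial fraction of a program bounds the speedup no matter how many cores you add.

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's Law: overall speedup when a fraction p of the work
    parallelizes perfectly across n cores and (1 - p) stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)
```

Even with 95% of the work parallelizable, `amdahl_speedup(0.95, 8)` is only about 5.9x, and the limit as cores go to infinity is 1/0.05 = 20x; the serial 5% dominates.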

“Threads and processes both require a context switch, but on posix systems the thread switch is considerably less expensive. Why? Mainly because the process switch involves changing the VM address space, which means all that hard-earned cache has to be fetched from DRAM again. You also pay a higher cost in synchronization: every message shared between processes requires crossing the kernel boundary. So not only do you have a higher memory use for shared structures and higher CPU costs for serialization, but more cache churn and context switching…”
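The serialization tax described above is easy to measure in miniature. This sketch (my own illustration, not from the post) contrasts a thread-style handoff, which passes a reference, with a process-style handoff, which must serialize and deserialize the message; the kernel-boundary copy itself is not modeled, so the real gap is even wider.

```python
import pickle
import time

def message_cost(obj, rounds=100):
    """Compare the cost of handing a message to another thread
    (share a reference) versus another process (serialize, copy,
    deserialize). Returns (by_reference_secs, by_serialization_secs)."""
    # Thread-style: messages in a shared address space are just pointers.
    t0 = time.perf_counter()
    for _ in range(rounds):
        ref = obj  # no copy, no kernel crossing
    by_reference = time.perf_counter() - t0

    # Process-style: every message is encoded, copied, and decoded.
    t0 = time.perf_counter()
    for _ in range(rounds):
        decoded = pickle.loads(pickle.dumps(obj))
    by_serialization = time.perf_counter() - t0
    return by_reference, by_serialization
```

For a dict of a few thousand entries, the serialized path is typically several orders of magnitude slower per message, which is the "higher CPU costs for serialization" the quote refers to.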

http://aphyr.com/posts/244-context-switches-and-serialization-in-node

Xah Emacs Lisp Tutorial

“This is an Emacs Lisp tutorial. It focuses on practical needs, with examples, and is concise and concrete. It assumes you already know a scripting language, such as Perl, Python, JavaScript, or PHP.

This tutorial is designed so that each lesson is self-contained. However, it is recommended that you read all numbered items in the Elisp Basics section.

For new articles and updates, subscribe: Xah Emacs Blog…”

http://ergoemacs.org/emacs/elisp.html

Meet Hyperborean, the Poker-Playing AI

“This is the second in a short series of posts about the Annual Computer Poker Competition (ACPC) taking place at the AAAI conference in Toronto July 22-26, 2012. My name is Richard Gibson and I’m a member of the Computer Poker Research Group (CPRG) at the University of Alberta. In my previous post, I discussed the history and present state of the competition, as well as the six events currently being played. Here, I will talk about our programs from previous years and explain our new programs for this year’s competition.

The CPRG’s poker programs, named “Hyperborean” in the ACPC, are constructed quite differently compared to programs for “perfect information games” such as chess. For example, the chess program Deep Blue was based on a technique called alpha-beta search, which compares sequences of moves until one sequence is found to be superior. On Deep Blue’s turn to move, search was performed on-line from the current game state for up to several minutes before the best estimated move was taken. In contrast, during a match, our poker programs play instantaneously. All computation for decision-making is performed off-line for several days before the competition and a final strategy profile is written to disk. Then, during an actual match, actions are chosen simply through table lookups in the precomputed profile…”
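The "table lookup in a precomputed profile" idea can be sketched in a few lines. Everything below is a toy stand-in of my own: the information-set labels and probabilities are invented, and real ACPC profiles are vastly larger and read from disk, but the match-time mechanism is the same: no search, just a lookup and a weighted coin flip.

```python
import random

# Toy stand-in for a strategy profile computed off-line: each
# information set maps to a probability distribution over actions.
STRATEGY_PROFILE = {
    "preflop:AKs:first-to-act": {"fold": 0.0, "call": 0.1, "raise": 0.9},
    "preflop:72o:first-to-act": {"fold": 0.95, "call": 0.05, "raise": 0.0},
}

def choose_action(info_set, rng=random.random):
    """Match-time play: look up the precomputed distribution for the
    current information set and sample an action from it."""
    dist = STRATEGY_PROFILE[info_set]
    r = rng()
    cumulative = 0.0
    for action, prob in dist.items():
        cumulative += prob
        if r < cumulative:
            return action
    return action  # guard against floating-point round-off
```

Because all the expensive computation happened off-line, each in-game decision costs one dictionary lookup plus a sample, which is why the programs "play instantaneously."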

http://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/meet-hyperborean-the-poker-playing-ai

The CoDel queue management algorithm

“Bufferbloat” can be thought of as the buffering of too many packets in flight between two network end points, resulting in excessive delays and confusion of TCP’s flow control algorithms. It may seem like a simple problem, but the simple solution—make buffers smaller—turns out not to work. A true solution to bufferbloat requires a deeper understanding of what is going on, combined with improved software across the net. A new paper from Kathleen Nichols and Van Jacobson provides some of that understanding and an algorithm for making things better—an algorithm that has been implemented first in Linux…”

“…One of the key insights in the design of CoDel is that there is only one parameter that really matters: how long it takes a packet to make its way through the queue and be sent on toward its destination. And, in particular, CoDel is interested in the minimum delay time over a time interval of interest. If that minimum is too high, it indicates a standing backlog of packets in the queue that is never being cleared, and that, in turn, indicates that too much buffering is going on. So CoDel works by adding a timestamp to each packet as it is received and queued. When the packet reaches the head of the queue, the time spent in the queue is calculated; it is a simple calculation of a single value, with no locking required, so it will be fast…”

https://lwn.net/Articles/496509/