C++ is fun: tips and tricks

“C++ is not the language you learn in 12 lessons in one week. With the C++ standard spanning 1300 pages, you can still have things to learn after years of experience. I’d argue you could hardly count on your fingers the people who know everything the standard says.

In this article I will walk through several language features that are probably less known to many C++ developers. Some of them are more useful than others, some could only confuse fellow developers and should not be used in real code…”

http://www.codeproject.com/Articles/1035313/Cplusplus-is-fun-tips-and-tricks

Fast Memory Pool Allocators: Boost, Nginx & Tempesta FW

“Memory Pools and Region-based Memory Management

Memory pools and region-based memory management allow you to improve your program’s performance by avoiding unnecessary memory-freeing calls. Moreover, pure memory pools gain even more performance due to their simpler internal mechanisms. The techniques are widely used in Web servers, and using them you can do the following (pseudocode for some imaginary Web server):

http_req->pool = pool_create();
while (read_request && parse_request) {
    http_req->hdr[i] = pool_alloc(http_req->pool);
    // Do other stuff, don’t free allocated memory.
}
// Send the request.
// Now destroy all allocated items at once.
pool_destroy(http_req->pool);

This reduces the number of memory allocator calls, simplifies the allocator’s internal mechanisms (e.g. since you never free individual chunks, it doesn’t need to care about memory fragmentation and many other problems which common memory allocators must solve at relatively high cost) and makes the program run faster.

You’ve probably noticed that we call pool_alloc() in the example above without specifying an allocation size. And here we come to the difference between memory pools and region-based memory management: memory pools can only allocate chunks of one fixed size, while region-based memory allocators are able to allocate chunks of different sizes. Meanwhile, both of them allow you not to care about freeing memory and to just drop all the allocated memory chunks at once…”
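
To make the fixed-size case concrete, here is a minimal sketch of such a pool (my own illustration, not the Boost, Nginx or Tempesta FW code; the pool_create/pool_alloc/pool_destroy names are simply borrowed from the pseudocode above). Allocation is a pointer bump into one contiguous region, and everything is released in a single call:

#include <cstddef>
#include <new>

// Illustrative fixed-size pool: every chunk has the same size, pool_alloc()
// needs no size argument, and pool_destroy() drops all chunks at once.
struct pool {
    unsigned char* mem;     // single contiguous region backing the pool
    std::size_t chunk_size; // every allocation returns exactly this many bytes
    std::size_t capacity;   // total number of chunks in the region
    std::size_t used;       // chunks handed out so far
};

pool* pool_create(std::size_t chunk_size, std::size_t capacity) {
    pool* p = new pool{};
    p->mem = static_cast<unsigned char*>(::operator new(chunk_size * capacity));
    p->chunk_size = chunk_size;
    p->capacity = capacity;
    return p;
}

void* pool_alloc(pool* p) {
    if (p->used == p->capacity)
        return nullptr;     // exhausted; a real pool would grow or fail loudly
    return p->mem + p->used++ * p->chunk_size; // O(1) pointer bump, no free list
}

void pool_destroy(pool* p) {
    ::operator delete(p->mem);  // release every chunk in one shot
    delete p;
}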

http://natsys-lab.blogspot.com.br/2015/09/fast-memory-pool-allocators-boost-nginx.html

CppCon 2015: Herb Sutter “Writing Good C++14… By Default”

“Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/isocpp/CppCoreGuid…

Modern C++ is clean, safe, and fast. It continues to deliver better and simpler features than were previously available. How can we help most C++ programmers get the improved features by default, so that our code gets better as we upgrade to take full advantage of modern C++?

This talk continues from Bjarne Stroustrup’s Monday keynote to describe how the open C++ core guidelines project is the cornerstone of a broader effort to promote modern C++. Using the same cross-platform effort Stroustrup described, this talk shows how to enable programmers to write production-quality C++ code that is, among other benefits, type-safe and memory-safe by default – free of most classes of type errors, bounds errors, and leak/dangling errors – and still exemplary, efficient, and fully modern C++.

Background reading: Bjarne Stroustrup’s 2005 “SELL” paper, “A rationale for semantically enhanced library languages,” is important background for this talk…”
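
The flavor of “safe by default” code is easy to show with a small contrast (an illustrative example of my own, not material from the talk): ownership is expressed in the type system instead of through raw new/delete, so whole classes of leak and dangling errors disappear by construction.

#include <memory>
#include <vector>

struct Widget { int value = 0; };

// Error-prone style the guidelines steer away from: the caller must
// remember to delete, and nothing in the signature says so.
Widget* make_widget_old() {
    return new Widget;
}

// Guideline-style C++14: ownership is explicit in the return type and
// cleanup is automatic when the owner goes out of scope.
std::unique_ptr<Widget> make_widget() {
    return std::make_unique<Widget>();
}

int main() {
    auto w = make_widget();   // released automatically at scope exit
    std::vector<int> v{1, 2, 3};
    for (int x : v)           // range-for: no index, no bounds mistakes
        w->value += x;
    return w->value == 6 ? 0 : 1;
}
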

KVM creators open-source fast Cassandra drop-in replacement Scylla

“Two key figures behind popular open-source hypervisor KVM are today unveiling a new NoSQL database that they describe as a far faster drop-in replacement for Apache Cassandra.

The Scylla database, from KVM inventor Avi Kivity and the man who oversaw the hypervisor’s development, Dor Laor, offers what they say is 10 times better throughput and latency than wide column store Cassandra, while maintaining complete compatibility…”

“Scylla has been written in C++14 – together with the project’s Seastar programming model. The Seastar C++ application framework is designed for high-concurrency server applications and is described on GitHub as “an event-driven framework allowing you to write non-blocking, asynchronous code in a relatively straightforward manner”…
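
To give a feel for the Seastar style the article describes, here is a minimal sketch roughly following Seastar’s documented app_template pattern (header paths and the seastar:: namespace have changed across versions, so treat it as illustrative rather than exact):

#include <seastar/core/app-template.hh>
#include <seastar/core/sleep.hh>
#include <chrono>
#include <iostream>

int main(int argc, char** argv) {
    seastar::app_template app;
    // run() boots the reactor (one shard per core) and exits when the
    // future returned by the lambda resolves.
    return app.run(argc, argv, [] {
        std::cout << "request handling would start here\n";
        // sleep() returns a future; .then() attaches a continuation
        // instead of blocking the calling thread.
        return seastar::sleep(std::chrono::seconds(1)).then([] {
            std::cout << "done\n";
        });
    });
}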

http://www.zdnet.com/article/kvm-creators-open-source-fast-cassandra-drop-in-replacement-scylla/
https://github.com/scylladb/scylla
http://www.seastar-project.org/

Android Studio v1.3 Released To Stable Channel, Includes Support For C/C++, NDK, Data Binding, And More

“A preview of Android Studio v1.3 made its first appearance at the Google I/O 2015 session What’s New in Android Development Tools, which introduced a number of significant improvements and additions. The biggest announcement was the integration of JetBrains CLion, enabling Android Studio to be used for C/C++ development and, ultimately, to support app development with the Native Development Kit (NDK). After a few months in development and about 3 weeks in the Canary channel, version 1.3 has been promoted to a Stable release…”

http://bit.ly/1KON9HP

Cache optimizing a priority queue

“I must begin with saying that if you found this because you have a performance problem, you should almost certainly look elsewhere. It is highly unlikely that your performance problem is caused by your priority queue. If, however, you are curious, or you have done careful profiling and found out that the cache characteristics of your priority queue are causing your performance problem, and you cannot fix that by altering your design, by all means read on.

A priority queue is typically implemented as a binary heap. The std::priority_queue<> class template in the C++ standard library is such an example. There is a reasonably good explanation for how they work on Wikipedia, but I’ll go through some operations anyway since it leads naturally to the optimization process.

The heap is a partially sorted tree-like structure. Below is a heap with the letters ‘a’-‘i’. A parent node always has higher priority than its children. In this example, ‘a’ has the highest priority. There is no order between the children, though. Either can have higher priority…”
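
As a quick refresher on the array layout such heaps normally use (an illustrative sketch of my own, not code from the post): the element at index i keeps its children at indices 2i+1 and 2i+2, and push() restores the parent-over-children ordering by sifting the new element up.

#include <cstddef>
#include <utility>
#include <vector>

// Minimal max-heap over a contiguous array, the same layout
// std::priority_queue uses: the parent of index i sits at (i - 1) / 2,
// its children at 2i + 1 and 2i + 2.
class binary_heap {
    std::vector<int> data_;
public:
    void push(int value) {
        data_.push_back(value);
        std::size_t i = data_.size() - 1;
        // Sift up: swap with the parent until the heap property holds again.
        while (i > 0 && data_[(i - 1) / 2] < data_[i]) {
            std::swap(data_[(i - 1) / 2], data_[i]);
            i = (i - 1) / 2;
        }
    }
    int top() const { return data_.front(); } // highest-priority element
};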

http://playfulprogramming.blogspot.com.br/2015/08/cache-optimizing-priority-queue.html

Cache-friendly binary search

“High-speed memory caches present in modern computer architectures favor data structures with good locality of reference, i.e. the property by which elements accessed in sequence are located at memory addresses close to each other. This is the rationale behind classes such as Boost.Container flat associative containers, which emulate the functionality of standard C++ node-based associative containers while storing the elements contiguously (and in order). This is an example of how binary search works in a boost::container::flat_set with elements 0 through 30…”
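
A minimal usage sketch (mine, not from the post) shows the trade-off: boost::container::flat_set keeps its elements in one sorted contiguous buffer, so lookups are binary searches over cache-friendly memory, while insertions pay for keeping that buffer sorted.

#include <boost/container/flat_set.hpp>
#include <cassert>

int main() {
    boost::container::flat_set<int> s;
    for (int i = 0; i <= 30; ++i)
        s.insert(i);            // elements live sorted in one contiguous buffer

    // find() is a binary search over the contiguous storage,
    // touching far fewer cache lines than a node-based std::set.
    assert(s.find(17) != s.end());
    assert(s.find(42) == s.end());
    return 0;
}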

http://bannalia.blogspot.com/2015/06/cache-friendly-binary-search.html