“A collaborative list of Parse alternative backend service providers…”
“As you may have noticed, Parse will be fully retired after a year-long period ending on January 28, 2017. Most of us need to find an alternative backend service for our apps. Please help the community find a great BaaS. Contribution guidelines can be found here.
What features are we looking for?
Cloud Code integration
Multiple mobile platform SDKs…”
“You can now get free https certificates from the non-profit certificate authority Let’s Encrypt! This is a website that will take you through the manual steps to get your free https certificate so you can make your own website use https! This website is open source and NEVER asks for your private keys. Never trust a website that asks for your private keys!
NOTE: This website is for people who know how to generate certificate signing requests (CSRs)! If you’re not familiar with how to do this, please use the official Let’s Encrypt client that can automatically issue and install https certificates for you. This website is designed for people who know what they are doing and just want to get their free https certificate…”
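For readers who do want the manual route, a CSR is conventionally produced with the `openssl` command-line tool. A hedged sketch (the domain `example.com` and the file names are placeholders, not anything the site prescribes):

```shell
# Generate a 4096-bit RSA private key for the domain. Keep this file secret;
# it is never uploaded anywhere, and no website should ever ask for it.
openssl genrsa -out domain.key 4096

# Create a certificate signing request. The CSR contains only the public key
# and the requested subject, never the private key itself.
openssl req -new -sha256 -key domain.key -subj "/CN=example.com" -out domain.csr

# Inspect the CSR before submitting it to the certificate authority.
openssl req -in domain.csr -noout -text
```

The CSR is what you paste into the site; the private key stays on your machine.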
“This page has information on the “autotools”, a set of tools for software developers that include at least autoconf, automake, and libtool. The autotools make it easier to distribute source code that (1) portably and automatically builds, (2) follows common build conventions (such as DESTDIR), and (3) provides automated dependency generation if you’re using C or C++. They’re primarily intended for Unix-like systems, but they can be used to build programs for Microsoft Windows (this requires some additional facilities, e.g., MSYS, Cygwin, or cross-compilation libraries). The autotools are not the only way to create source code releases that are easily built and packaged. But they’re one of the most widely-used, especially for programs that use C or C++ (though they’re not limited to that)…”
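The developer-facing side of that workflow can be sketched with a minimal, hypothetical hello-world project; `configure.ac` and `Makefile.am` are the two files the developer writes, and the names below are illustrative only:

```
# configure.ac -- autoconf turns this into the portable ./configure script
AC_INIT([hello], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am -- automake generates Makefile.in from this; conventions such
# as DESTDIR and automatic C dependency tracking come for free
bin_PROGRAMS = hello
hello_SOURCES = hello.c
```

Running `autoreconf --install` generates `configure`; users then build with the common convention the excerpt mentions, e.g. `./configure && make && make DESTDIR=/tmp/stage install` for a staged install.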
“ata delivered over an unencrypted channel is insecure, untrustworthy, and trivially intercepted. We owe it to our users to protect the security, privacy, and integrity of their data — all data must be encrypted while in flight and at rest. Historically, concerns over performance have been the common excuse to avoid these obligations, but today that is a false dichotomy. Let’s dispel some myths…”
“As projects reach a moderate scale of complexity, they must represent objects in many forms: as a json record from an API endpoint, cache, or database; as a thrift object for RPC; or as an object in-memory.
At Uber, we’ve begun to use the schematics library for our in-memory representations. It provides a canonical form for an object to take, which can then be serialized into thrift, sql, or various json forms. The native serialization works great for the base case…”
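Schematics itself centers on declarative `Model` classes with typed fields. As a rough stdlib-only stand-in for the pattern (using `dataclasses` rather than schematics' own API; the `Trip` class and its fields are invented for illustration), a canonical in-memory form with one native serialization might look like:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical canonical form for a trip record. In schematics this would be
# a Model subclass whose fields can be serialized to several wire formats.
@dataclass
class Trip:
    trip_id: str
    rider: str
    fare_cents: int

    def to_json(self) -> str:
        # The "base case": one straightforward native serialization.
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "Trip":
        return cls(**json.loads(raw))

t = Trip(trip_id="t-1", rider="r-9", fare_cents=1250)
assert Trip.from_json(t.to_json()) == t
```

The value of the canonical form is that each extra target format (thrift, sql) only needs one mapping from this single representation.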
“Deep Learning allows computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection, and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large datasets by using the back-propagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about dramatic improvements in processing images, video, speech and audio, while recurrent nets have shone on sequential data such as text and speech. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep learning methods are representation learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. This tutorial will introduce the fundamentals of deep learning, discuss applications, and close with challenges ahead…”
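The core mechanism named above, back-propagating the loss gradient through a layer-wise chain rule so each parameter knows how to change, can be sketched with a toy two-weight network and checked against finite differences (all numeric values here are arbitrary illustrations, not from the tutorial):

```python
import math

def forward(x, w1, w2):
    h = math.tanh(w1 * x)   # layer 1: simple non-linear module
    y = math.tanh(w2 * h)   # layer 2: slightly more abstract representation
    return h, y

def loss(y, target):
    return 0.5 * (y - target) ** 2

def backprop(x, target, w1, w2):
    # Chain rule applied backward: how should each internal parameter change?
    h, y = forward(x, w1, w2)
    d_pre2 = (y - target) * (1 - y * y)  # dL/d(w2*h), using tanh' = 1 - tanh^2
    dw2 = d_pre2 * h
    d_pre1 = d_pre2 * w2 * (1 - h * h)   # dL/d(w1*x)
    dw1 = d_pre1 * x
    return dw1, dw2

# Sanity check the analytic gradient against a numerical (finite-difference) one.
x, target, w1, w2, eps = 0.7, 0.3, 0.5, -0.4, 1e-6
dw1, dw2 = backprop(x, target, w1, w2)
num_dw1 = (loss(forward(x, w1 + eps, w2)[1], target)
           - loss(forward(x, w1 - eps, w2)[1], target)) / (2 * eps)
assert abs(dw1 - num_dw1) < 1e-5
```

Real networks apply exactly this recipe, vectorized, across millions of parameters and many layers.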
“Data locality is very important. Keeping data close together in memory is a huge win on basically every modern computing device due to our systems’ inherently multi-level memory hierarchy. Contiguous data leads to speed, and even some perhaps counter-intuitive results (like insert-and-shift into the middle of a dynamic array actually being faster than the equivalent linked list insertion).
Despite these facts, interestingly enough, the C++11 (and later) standard library provides a hash table implementation, std::unordered_map, that is a separate-chaining hash table with linked-list buckets. This design implies a strong trade-off: while elements inserted into the map are guaranteed to be stable in memory (never needing to be moved or copied), we must now chase pointers when walking through the buckets of the hash table.
What if we allow keys to be moved or copied around as the hash table grows? If we relax the requirement that key/value pairs never move after insertion, we open the door to implementing a hash table using open addressing. Here, the table is stored as a single huge array in which each slot holds either a key/value pair or nothing. When an insertion collides, an empty slot is located by “probing” through the array, following some probing strategy. The best known is linear probing, which has developed a bit of a bad reputation. But is the disdain deserved?
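An insertion-only open-addressing table with linear probing fits in a few lines. A minimal sketch (an illustrative toy, not MeTA's implementation; the class name and the 0.75 load-factor cap are my own choices):

```python
class LinearProbingMap:
    """Insertion-only open-addressing hash table with linear probing."""

    def __init__(self, capacity=8):
        self._slots = [None] * capacity   # one contiguous array; None = empty
        self._size = 0

    def _probe(self, slots, key):
        # Linear probing: start at the hashed slot, step forward until we find
        # the key or an empty slot. Terminates because the table is never full.
        i = hash(key) % len(slots)
        while slots[i] is not None and slots[i][0] != key:
            i = (i + 1) % len(slots)
        return i

    def _grow(self):
        # Unlike chaining, growth moves entries: every pair is rehashed into a
        # table twice the size.
        old = self._slots
        self._slots = [None] * (2 * len(old))
        for entry in old:
            if entry is not None:
                self._slots[self._probe(self._slots, entry[0])] = entry

    def insert(self, key, value):
        if 4 * (self._size + 1) > 3 * len(self._slots):  # keep load factor < 0.75
            self._grow()
        i = self._probe(self._slots, key)
        if self._slots[i] is None:
            self._size += 1
        self._slots[i] = (key, value)

    def get(self, key):
        entry = self._slots[self._probe(self._slots, key)]
        if entry is None:
            raise KeyError(key)
        return entry[1]
```

Lookups here walk contiguous memory rather than chasing bucket pointers, which is exactly the locality argument from the opening paragraph.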
In this post, I’m going to walk you through some results that motivated the development of an open source (insertion-only) probing hash table framework that’s distributed as part of the MeTA toolkit, and attempt to prove to you that naive, linear probing (or, at least, blocked probing) strategies are not nearly so bad as you might initially think. In particular, we’ll be benchmarking hash tables with both integer and string keys, taking a look at how they perform in terms of building time, memory consumption, and query throughput…”