A Zero-Math Introduction to Markov Chain Monte Carlo Methods

For many of us, Bayesian statistics is voodoo magic at best, or completely subjective nonsense at worst. Among the trademarks of the Bayesian approach, Markov chain Monte Carlo methods are especially mysterious. They’re math-heavy and computationally expensive procedures for sure, but the basic reasoning behind them, like so much else in data science, can be made intuitive. That is my goal here.


So, what are Markov chain Monte Carlo (MCMC) methods? The short answer is:

MCMC methods are used to approximate the posterior distribution of a parameter of interest by random sampling in a probabilistic space.

In this article, I will explain that short answer, without any math.
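To make the random-sampling idea concrete, here is a minimal Metropolis sampler sketch in Python. This is my illustration, not code from the article; the standard-normal stand-in for the posterior, the proposal width, and the sample count are all assumptions made for the example.

```python
import math
import random

def target(theta):
    # Unnormalized posterior density. A standard normal stands in for a
    # real posterior here, purely for illustration.
    return math.exp(-theta ** 2 / 2)

def metropolis(n_samples=10_000, step=1.0):
    theta = 0.0  # arbitrary starting point
    samples = []
    for _ in range(n_samples):
        proposal = theta + random.gauss(0.0, step)  # propose a random jump
        # Accept with probability min(1, posterior ratio): jumps toward
        # higher-density regions are always taken, others only sometimes.
        if random.random() < target(proposal) / target(theta):
            theta = proposal
        samples.append(theta)
    return samples

draws = metropolis()
print(sum(draws) / len(draws))  # estimate of the posterior mean, near 0
```

The histogram of enough such draws approximates the posterior distribution, which is the whole point of the method.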

https://towardsdatascience.com/a-zero-math-introduction-to-markov-chain-monte-carlo-methods-dcba889e0c50

The Case for Learned Index Structures

Indexes are models: a B-Tree-Index can be seen as a model to map a key to the position of a record within a sorted array, a Hash-Index as a model to map a key to a position of a record within an unsorted array, and a BitMap-Index as a model to indicate if a data record exists or not. In this exploratory research paper, we start from this premise and posit that all existing index structures can be replaced with other types of models, including deep-learning models, which we term learned indexes. The key idea is that a model can learn the sort order or structure of lookup keys and use this signal to effectively predict the position or existence of records. We theoretically analyze under which conditions learned indexes outperform traditional index structures and describe the main challenges in designing learned index structures. Our initial results show that by using neural nets we are able to outperform cache-optimized B-Trees by up to 70% in speed while saving an order of magnitude in memory over several real-world data sets. More importantly though, we believe that the idea of replacing core components of a data management system through learned models has far-reaching implications for future systems designs and that this work just provides a glimpse of what might be possible.
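To make that premise concrete, here is a toy sketch of the key idea (mine, not the paper's method; the paper trains neural nets in a hierarchy of models, while here a least-squares line stands in as the simplest possible "model"):

```python
import bisect

class LearnedIndex:
    """Toy learned index: a linear model predicts a key's position in a
    sorted array, and the model's worst-case error bounds a final search."""

    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Least-squares fit of position ~ slope * key + intercept.
        mean_k = sum(self.keys) / n
        mean_p = (n - 1) / 2  # mean of positions 0 .. n-1
        var = sum((k - mean_k) ** 2 for k in self.keys)
        cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(self.keys))
        self.slope = cov / var if var else 0.0
        self.intercept = mean_p - self.slope * mean_k
        # Record the worst prediction error over the training keys.
        self.max_err = max(abs(self._predict(k) - i)
                           for i, k in enumerate(self.keys))

    def _predict(self, key):
        return int(self.slope * key + self.intercept)

    def lookup(self, key):
        guess = self._predict(key)
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        # Search only inside the window the error bound guarantees.
        j = bisect.bisect_left(self.keys, key, lo, hi)
        return j if j < len(self.keys) and self.keys[j] == key else None

idx = LearnedIndex(range(0, 1000, 7))  # keys with a near-linear distribution
print(idx.lookup(700))  # position 100
print(idx.lookup(701))  # None: key absent
```

The recorded worst-case error is what keeps the lookup exact: the model only narrows the search window, and a conventional binary search finishes the job.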

https://www.arxiv-vanity.com/papers/1712.01208v1/

Neural Networks in JavaScript with deeplearn.js

A couple of my recent articles gave an introduction to a subfield of artificial intelligence by implementing foundational machine learning algorithms in JavaScript (e.g. linear regression with gradient descent, linear regression with the normal equation, or logistic regression with gradient descent). These machine learning algorithms were implemented from scratch in JavaScript using the math.js node package for linear algebra (e.g. matrix operations) and calculus. You can find all of these machine learning algorithms grouped in a GitHub organization. If you find any flaws in them, please help me make the organization a great learning resource for others. I intend to grow the number of repositories showcasing different machine learning algorithms to give web developers a starting point when they enter the domain of machine learning.

Personally, I found that implementing those algorithms from scratch becomes quite complex and challenging at some point, especially when it comes to forward and back propagation for neural networks in JavaScript. Since I am learning about neural networks myself at the moment, I started to look for libraries that do the job for me. Hopefully I will be able to catch up with those foundational implementations and publish them in the GitHub organization in the future. For now, as I researched potential candidates for working with neural networks in JavaScript, I came across deeplearn.js, which was recently released by Google. So I gave it a shot. In this article / tutorial, I want to share my experience of implementing a neural network in JavaScript with deeplearn.js to solve a real-world problem for web accessibility.
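For context on what from-scratch forward and back propagation involve, here is a minimal sketch (in Python for brevity; not code from the article, and the XOR task, layer sizes, and learning rate are arbitrary choices for illustration):

```python
import numpy as np

# XOR with one hidden layer, to show the shape of forward and back propagation.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward propagation: activations flow input -> hidden -> output.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    # Back propagation: the error gradient flows output -> hidden,
    # applying the chain rule at each layer (sigmoid' = s * (1 - s)).
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    # Gradient-descent updates for all weights and biases.
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0, keepdims=True)

print(out.ravel())  # should approach [0, 1, 1, 0]
```

Libraries such as deeplearn.js exist precisely to take over this gradient bookkeeping.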

I highly recommend taking the Machine Learning course by Andrew Ng. This article will not explain the machine learning algorithms in detail, but only demonstrate their usage in JavaScript. The course, on the other hand, goes into detail and explains these algorithms remarkably well. At the time of writing, I am learning about the topic myself and trying to internalize my learnings by writing about them and applying them in JavaScript. If you find any parts that could be improved, please reach out in the comments or create an Issue/Pull Request on GitHub.

https://www.robinwieruch.de/neural-networks-deeplearnjs-javascript/

Introduction to TensorFlow Datasets and Estimators

Datasets and Estimators are two key TensorFlow features you should use:

  • Datasets: The best practice way of creating input pipelines (that is, reading data into your program).
  • Estimators: A high-level way to create TensorFlow models. Estimators include pre-made models for common machine learning tasks, but you can also use them to create your own custom models.
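As a rough sketch of how the two features fit together, here is a minimal TF 1.x-era example; the feature name and the toy in-memory data are made up for illustration:

```python
import tensorflow as tf  # TF 1.x-era API, matching the blog posts

# Dataset: the input pipeline, yielding (features, labels) batches.
def train_input_fn():
    features = {"x": [[1.0], [2.0], [3.0], [4.0]]}  # toy data
    labels = [[2.0], [4.0], [6.0], [8.0]]
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.shuffle(4).repeat().batch(2)

# Feature columns bridge the raw input features and the model.
feature_columns = [tf.feature_column.numeric_column("x")]

# Estimator: a pre-made linear regression model wired to the pipeline.
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)
estimator.train(input_fn=train_input_fn, steps=100)
```

The same input_fn pattern also feeds evaluation and prediction, which is what makes the Dataset/Estimator split convenient.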

https://developers.googleblog.com/2017/09/introducing-tensorflow-datasets.html

https://developers.googleblog.com/2017/11/introducing-tensorflow-feature-columns.html

How Cargo Cult Bayesians encourage Deep Learning Alchemy

There is a struggle today for the hearts and minds of Artificial Intelligence. It’s a complex “Game of Thrones” conflict that involves many houses (or tribes) (see: “The Many Tribes of AI”). The two warring factions I focus on today are the practice of Cargo Cult science in the form of Bayesian statistics and the practice of alchemy in the form of experimental Deep Learning.

For the uninitiated, let’s talk about what Cargo Cult science means. Cargo Cult science is a phrase coined by Richard Feynman to describe the practice of doing science without working from fundamentally sound first principles. Here is Richard Feynman’s original essay on “Cargo Cult Science”. If you’ve never read it before, it is a great and refreshing read. I read it in my youth while studying physics. I am unsure if it’s required reading for physicists, but a majority of physicists are well aware of the concept.

https://medium.com/intuitionmachine/cargo-cult-statistics-versus-deep-learning-alchemy-8d7700134c8e

Think Bayes

Think Bayes is an introduction to Bayesian statistics using computational methods.

The premise of this book, and the other books in the Think X series, is that if you know how to program, you can use that skill to learn other topics.

Most books on Bayesian statistics use mathematical notation and present ideas in terms of mathematical concepts like calculus. This book uses Python code instead of math, and discrete approximations instead of continuous mathematics. As a result, what would be an integral in a math book becomes a summation, and most operations on probability distributions are simple loops.
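As a taste of that style, here is a minimal sketch in the spirit of the book's "cookie problem" (my own code, not the book's): bowl 1 holds 75% vanilla cookies, bowl 2 holds 50%, and we update our belief about which bowl a vanilla cookie came from.

```python
# Discrete Bayesian update: hypotheses and probabilities in a plain dict.
prior = {"bowl 1": 0.5, "bowl 2": 0.5}
likelihood_vanilla = {"bowl 1": 0.75, "bowl 2": 0.5}  # P(vanilla | bowl)

# Bayes' rule pointwise: posterior is proportional to prior * likelihood.
posterior = {h: prior[h] * likelihood_vanilla[h] for h in prior}

# What would be an integral in a math book becomes a summation...
total = sum(posterior.values())

# ...and the normalization is a simple loop.
for h in posterior:
    posterior[h] /= total

print(posterior)  # {'bowl 1': 0.6, 'bowl 2': 0.4}
```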

I think this presentation is easier to understand, at least for people with programming skills. It is also more general, because when we make modeling decisions, we can choose the most appropriate model without worrying too much about whether the model lends itself to conventional analysis. Also, it provides a smooth development path from simple examples to real-world problems.

Think Bayes is a Free Book. It is available under the Creative Commons Attribution-NonCommercial 3.0 Unported License, which means that you are free to copy, distribute, and modify it, as long as you attribute the work and don’t use it for commercial purposes.

Other Free Books by Allen Downey are available from Green Tea Press.

http://greenteapress.com/wp/think-bayes/
https://github.com/rlabbe/ThinkBayes