Pyro – Deep Universal Probabilistic Programming

Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. Pyro enables flexible and expressive deep probabilistic modeling, unifying the best of modern deep learning and Bayesian modeling. It was designed with these key principles:

Universal: Pyro can represent any computable probability distribution.
Scalable: Pyro scales to large data sets with little overhead.
Minimal: Pyro is implemented with a small core of powerful, composable abstractions.
Flexible: Pyro aims for automation when you want it, control when you need it.

Check out the blog post for more background or dive into the tutorials.
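
To give a flavor of the API, here is a minimal sketch of a Pyro model built from pyro.sample statements; the model itself is illustrative and is not taken from the Pyro documentation.

import pyro
import pyro.distributions as dist

def weather():
    # latent variable: is it cloudy today?
    cloudy = pyro.sample('cloudy', dist.Bernoulli(0.3))
    mean_temp = 55.0 if cloudy.item() == 1.0 else 75.0
    # observed quantity, conditioned on the latent state
    temp = pyro.sample('temp', dist.Normal(mean_temp, 10.0))
    return temp

print(weather())  # draws a sample from the joint distribution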

http://pyro.ai/

Efficient Counter that uses a limited (bounded) amount of memory regardless of data size

Bounter is a Python library, written in C, for extremely fast probabilistic counting of item frequencies in massive datasets, using only a small fixed memory footprint.

Why Bounter?

Bounter lets you count how many times an item appears, similar to Python’s built-in dict or Counter:

from bounter import bounter

counts = bounter(size_mb=1024)  # use at most 1 GB of RAM
counts.update([u'a', 'few', u'words', u'a', u'few', u'times'])  # count item frequencies

print(counts[u'few'])  # query the counts
2

However, unlike dict or Counter, Bounter can process huge collections where the items would not even fit in RAM. This commonly happens in Machine Learning and NLP, with tasks like dictionary building or collocation detection that need to estimate counts of billions of items (token ngrams) for their statistical scoring and subsequent filtering.

Bounter implements approximate algorithms using optimized low-level C structures to avoid the overhead of Python objects. It lets you specify the maximum amount of RAM you want to use. In the Wikipedia example from the project README, Bounter uses 31x less memory compared to Counter.

Bounter is also marginally faster than the built-in dict and Counter, so wherever you can represent your items as strings (both byte-strings and unicode are fine, and Bounter works in both Python 2 and Python 3), there’s no reason not to use Bounter instead.

https://github.com/RaRe-Technologies/bounter

A Visual Introduction to Machine Learning

In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions. Using a data set about homes, we will create a machine learning model to distinguish homes in New York from homes in San Francisco.

First, some intuition

Let’s say you had to determine whether a home is in San Francisco or in New York. In machine learning terms, categorizing data points is a classification task. Since San Francisco is relatively hilly, the elevation of a home may be a good way to distinguish the two cities. Based on the home-elevation data to the right, you could argue that a home above 240 ft should be classified as one in San Francisco.

Adding nuance

Adding another dimension allows for more nuance. For example, New York apartments can be extremely expensive per square foot. So visualizing elevation and price per square foot in a scatterplot helps us distinguish lower-elevation homes. The data suggests that, among homes at or below 240 ft, those that cost more than $1776 per square foot are in New York City. Dimensions in a data set are called features, predictors, or variables.

Drawing boundaries

You can visualize your elevation (>242 ft) and price per square foot (>$1776) observations as the boundaries of regions in your scatterplot. Homes plotted in the green and blue regions would be in San Francisco and New York, respectively.
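
A toy version of these two hand-drawn boundaries, written as a plain Python function (the thresholds come from the article; the function itself is just an illustration):

def classify_home(elevation_ft, price_per_sqft):
    if elevation_ft > 242:
        return 'San Francisco'   # high elevation: almost certainly SF
    if price_per_sqft > 1776:
        return 'New York'        # low elevation but very expensive: NY
    return 'unclear'             # low elevation, lower price: need more information

print(classify_home(300, 900))   # San Francisco
print(classify_home(30, 2500))   # New York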

Identifying boundaries in data using math is the essence of statistical learning. Of course, you’ll need additional information to distinguish homes with lower elevations and lower per-square-foot prices. The dataset we are using to create the model has 7 different dimensions. Creating a model is also known as training a model. On the right, we are visualizing the variables in a scatterplot matrix to show the relationships between each pair of dimensions.

There are clearly patterns in the data, but the boundaries for delineating them are not obvious.

And now, machine learning

Finding patterns in data is where machine learning comes in. Machine learning methods use statistical learning to identify boundaries. One example of a machine learning method is a decision tree. Decision trees look at one variable at a time and are a reasonably accessible (though rudimentary) machine learning method.
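
The original article grows its decision tree interactively; as a rough sketch of the same idea, here is a two-level tree fit with scikit-learn (scikit-learn and the toy feature values are assumptions, not part of the article):

from sklearn.tree import DecisionTreeClassifier

# each row: [elevation in ft, price per square foot in $]; labels are the city
X = [[10, 2000], [15, 1800], [20, 1500], [250, 900], [300, 1100], [400, 800]]
y = ['NY', 'NY', 'NY', 'SF', 'SF', 'SF']

tree = DecisionTreeClassifier(max_depth=2)
tree.fit(X, y)

# like the article's tree, the classifier splits on one variable at a time
print(tree.predict([[260, 1000]]))  # expected: ['SF']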

What you will find in the full article:

  • Finding better boundaries
  • Your first fork
  • Tradeoffs
  • The best split
  • Recursion
  • Growing a tree
  • Making predictions
  • Reality check

To check out all this information, and play with a few cool interactive visualizations, follow the link below.

http://www.datasciencecentral.com/profiles/blogs/a-visual-introduction-to-machine-learning-1

How Bayesian inference works

Bayesian inference is a way to get sharper predictions from your data. It’s particularly useful when you don’t have as much data as you would like and want to juice every last bit of predictive strength from it.

Although it is sometimes described with reverence, Bayesian inference isn’t magic or mystical. And even though the math under the hood can get dense, the concepts behind it are completely accessible. In brief, Bayesian inference lets you draw stronger conclusions from your data by folding in what you already know about the answer.
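
As a concrete (and entirely made-up) numeric sketch of that idea, here is Bayes' rule applied to a diagnostic test, where the prior is the background rate of the condition:

prior = 0.01            # P(condition): 1% of the population is affected
sensitivity = 0.99      # P(positive test | condition)
false_positive = 0.05   # P(positive test | no condition)

# total probability of a positive test
evidence = sensitivity * prior + false_positive * (1 - prior)

# Bayes' rule: P(condition | positive test)
posterior = sensitivity * prior / evidence
print(round(posterior, 3))  # about 0.167: the prior tempers the test result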

Bayesian inference is based on the ideas of Thomas Bayes, a nonconformist Presbyterian minister in London about 300 years ago. He wrote two books, one on theology and one on probability. His work included his now-famous Bayes' Theorem in raw form, which has since been applied to the problem of inference, the technical term for educated guessing. The popularity of Bayes' ideas was aided immeasurably by another minister, Richard Price, who saw their significance, refined them, and published them. It would be more accurate and historically just to call Bayes' Theorem the Bayes-Price Rule.

https://brohrer.github.io/how_bayesian_inference_works.html

Patrick Winston Explains Deep Learning

Patrick Winston is one of the greatest teachers at M.I.T., and for 27 years was Director of the Artificial Intelligence Laboratory (which later became part of CSAIL).

Patrick teaches 6.034, the undergraduate introduction to AI at M.I.T., and a recent set of his lectures is available as videos.

I want to point people to lectures 12a and 12b (linked individually below). In these two lectures he goes from zero to a full explanation of deep learning: how it works, how nets are trained, what the interesting problems are, what the limitations are, and what key breakthrough ideas took the inventors of deep learning 25 years of hard thinking to discover.

The only prerequisite is understanding differential calculus. These lectures are fantastic. They really get at the key technical ideas in a very understandable way. The biggest network analyzed in lecture 12a only has two neurons, and the biggest one drawn only has four neurons. But don’t be disturbed. He is laying the groundwork for 12b, where he explains how deep learning works, shows simulations, and shows results.

This is teaching at its best. Listen to every sentence. They all build the understanding.

I just wish all the people not in AI who talk at length about AI and the future in the press had this level of technical understanding of what they are talking about. Spend two hours on these lectures and you will have that understanding.

On YouTube: 12a Neural Nets and 12b Deep Neural Nets.

http://rodneybrooks.com/patrick-winston-explains-deep-learning/

Five reasons blog posts are of higher scientific quality than journal articles

The Dutch toilet cleaner ‘WC-EEND’ (literally: ‘Toilet Duck’) aired a famous commercial in 1989 that had the slogan ‘We from WC-EEND advise… WC-EEND’. It is now a common saying in The Netherlands whenever someone gives an opinion that is clearly aligned with their self-interest. In this blog, I will examine the hypothesis that blogs are, on average, of higher quality than journal articles. Below, I present 5 arguments in favor of this hypothesis.  [EDIT: I’m an experimental psychologist. Mileage of what you’ll read below may vary in other disciplines].

http://daniellakens.blogspot.com.br/2017/04/five-reasons-blog-posts-are-of-higher.html

In-depth introduction to machine learning in 15 hours of expert videos

In January 2014, Stanford University professors Trevor Hastie and Rob Tibshirani (authors of the legendary Elements of Statistical Learning textbook) taught an online course based on their newest textbook, An Introduction to Statistical Learning with Applications in R (ISLR). I found it to be an excellent course in statistical learning (also known as “machine learning”), largely due to the high quality of both the textbook and the video lectures. And as an R user, it was extremely helpful that they included R code to demonstrate most of the techniques described in the book.

If you are new to machine learning (and even if you are not an R user), I highly recommend reading ISLR from cover-to-cover to gain both a theoretical and practical understanding of many important methods for regression and classification. It is available as a free PDF download from the authors’ website.

If you decide to attempt the exercises at the end of each chapter, there is a GitHub repository of solutions provided by students you can use to check your work.

As a supplement to the textbook, you may also want to watch the excellent course lecture videos (linked below), in which Dr. Hastie and Dr. Tibshirani discuss much of the material. In case you want to browse the lecture content, I’ve also linked to the PDF slides used in the videos.

https://www.r-bloggers.com/in-depth-introduction-to-machine-learning-in-15-hours-of-expert-videos

CS 20SI: Tensorflow for Deep Learning Research

TensorFlow is a powerful open-source software library for machine learning developed by researchers at Google Brain. It has many pre-built functions to ease the task of building different neural networks. TensorFlow allows distribution of computation across different computers, as well as multiple CPUs and GPUs within a single machine. TensorFlow provides a Python API, as well as a less documented C++ API. For this course, we will be using Python.

This course will cover the fundamentals and contemporary usage of the TensorFlow library for deep learning research. We aim to help students understand the graphical computational model of TensorFlow, explore the functions it has to offer, and learn how to build and structure models best suited for a deep learning project. Through the course, students will use TensorFlow to build models of different complexity, from simple linear/logistic regression to convolutional and recurrent neural networks with LSTMs, to solve tasks such as word embeddings, translation, and optical character recognition. Students will also learn best practices to structure a model and manage research experiments.
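
As a minimal sketch of that graph-based model (using the 1.x-era API the course was built around; the example itself is not from the course materials):

import tensorflow as tf

a = tf.constant(3.0)
b = tf.constant(4.0)
c = a * b  # adds a multiplication node to the graph; nothing runs yet

with tf.Session() as sess:
    print(sess.run(c))  # 12.0: the graph is executed only here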

http://web.stanford.edu/class/cs20si/index.html
http://web.stanford.edu/class/cs20si/syllabus.html

A Complete Tutorial to Learn Data Science with Python from Scratch

This article, a complete tutorial to learn Data Science with Python from scratch, was posted by Kunal Jain. Kunal is a postgraduate from IIT Bombay in Aerospace Engineering. He has spent more than 8 years in the field of Data Science. He learned the basics of Python within a week, and since then he has not only explored the language in depth but has also helped many others learn it.

Python was originally a general-purpose language. But over the years, with strong community support, it gained dedicated libraries for data analysis and predictive modeling.
Due to the lack of resources on Python for data science, he decided to create this tutorial to help many others learn Python faster. In this tutorial, you will take in bite-sized information about how to use Python for data analysis, chew on it until you are comfortable, and practice it on your own.

http://www.datasciencecentral.com/profiles/blogs/a-complete-tutorial-to-learn-data-science-with-python-from