Due to the recent achievements of artificial neural networks across many different tasks (such as face recognition, object detection and Go), deep learning has become extremely popular. This post aims to be a starting point for those interested in learning more about it.
If you already have a basic understanding of linear algebra, calculus, probability and programming: I recommend starting with Stanford’s CS231n. The course notes are comprehensive and well written. The slides for each lesson are also available, and even though the accompanying videos were removed from the official site, re-uploads are quite easy to find online.
If you don’t have the relevant math background: There is an incredible amount of free material online that can be used to learn the required math. Gilbert Strang’s course on linear algebra is a great introduction to the field. For the other subjects, edX has courses from MIT on both calculus and probability.
If you are interested in learning more about machine learning: Andrew Ng’s Coursera class is a popular choice as a first class in machine learning. There are other great options available, such as Yaser Abu-Mostafa’s machine learning course, which focuses much more on theory than the Coursera class but is still accessible to beginners. Knowledge of machine learning isn’t really a prerequisite to learning deep learning, but it does help. In addition, learning classical machine learning and not only deep learning is important, both because it provides a theoretical background and because deep learning isn’t always the correct solution…
Introducing Phippy, an intrepid little PHP app, and her journey to Kubernetes.
What is this? Well, I wrote a book that explains Kubernetes. We posted a video version to the Kubernetes community blog. If you find us at a conference, you stand a chance to pick up a physical copy. But for now, here’s a blog post version!
And after you’ve finished reading, tweet something at @opendeis for a chance to win a squishy little Phippy toy of your own. Not sure what to tweet? Why don’t you tell us about yourself and how you use Kubernetes!
A popular demonstration of the capability of deep learning techniques is object recognition in image data.
The “hello world” of object recognition for machine learning and deep learning is the MNIST dataset for handwritten digit recognition.
In this post you will discover how to develop a deep learning model that achieves near state-of-the-art performance on the MNIST handwritten digit recognition task in Python, using the Keras deep learning library.
After completing this tutorial, you will know:
- How to load the MNIST dataset in Keras.
- How to develop and evaluate a baseline neural network model for the MNIST problem.
- How to implement and evaluate a simple Convolutional Neural Network for MNIST.
- How to implement a close to state-of-the-art deep learning model for MNIST.
Let’s get started.
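Whatever loader you use to fetch MNIST, the standard preprocessing before training is the same: flatten each 28×28 image into a 784-element vector, scale the pixel intensities from 0–255 down to [0, 1], and one-hot encode the digit labels for a 10-way softmax output. A minimal pure-Python sketch of those steps (with a made-up image standing in for real MNIST data):

```python
def flatten_and_scale(image):
    """Flatten a 28x28 image into a 784-vector and scale pixels to [0, 1]."""
    return [pixel / 255.0 for row in image for pixel in row]

def one_hot(label, num_classes=10):
    """Encode a digit label as a one-hot vector for the softmax output layer."""
    return [1.0 if i == label else 0.0 for i in range(num_classes)]

# A tiny fake "image": all black except one fully bright pixel.
image = [[0] * 28 for _ in range(28)]
image[3][5] = 255

x = flatten_and_scale(image)  # 784 values, all in [0, 1]
y = one_hot(7)                # digit 7 -> [0,0,0,0,0,0,0,1,0,0]

print(len(x), max(x), y.index(1.0))  # 784 1.0 7
```

In Keras itself, `mnist.load_data()` returns the images as integer arrays, so this scaling and encoding (or an equivalent) is done before the arrays are fed to the model.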
A pure Lua client library for Apache Cassandra (2.0+), compatible with ngx_lua/OpenResty and plain Lua.
It is implemented following the example of the official Datastax drivers, and tries to offer the same behaviors, options and features.
There are four simple steps to the Feynman Technique, which I’ll explain below:
- Choose a Concept
- Teach it to a Toddler
- Identify Gaps and Go Back to The Source Material
- Review and Simplify
Deep learning is the new big trend in machine learning. It has seen many recent successes in computer vision, automatic speech recognition and natural language processing.
The goal of this blog post is to give you a hands-on introduction to deep learning. To do this, we will build a Cat/Dog image classifier using a deep learning algorithm called a convolutional neural network (CNN) and a Kaggle dataset.
This post is divided into 2 main parts. The first part covers some core concepts behind deep learning, while the second part is structured in a hands-on tutorial format.
In the first part of the hands-on tutorial (section 4), we will build a Cat/Dog image classifier using a convolutional neural network from scratch. In the second part of the tutorial (section 5), we will cover an advanced technique for training convolutional neural networks called transfer learning. We will use some Python code and a popular open source deep learning framework called Caffe to build the classifier. Our classifier will be able to achieve a classification accuracy of 97%.
By the end of this post, you will understand how convolutional neural networks work, and you will get familiar with the steps and the code for building these networks.
The source code for this tutorial can be found in this GitHub repository.
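The operation these networks are built from is the 2-D convolution: a small filter slides over the image and computes a dot product at each position, lighting up wherever the local pattern matches the filter. As a framework-independent illustration (not the Caffe code used in the tutorial), here is a minimal pure-Python sketch with stride 1 and no padding:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most deep
    learning frameworks): slide the kernel over the image, stride 1, no padding."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A vertical-edge filter applied to a tiny image with a sharp dark-to-bright edge.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(conv2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0]] — the edge lights up
```

A CNN learns the values of many such kernels from data; stacking convolution layers (with non-linearities and pooling in between) is what lets the network build up from edges to textures to whole objects.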
This tutorial shows how we can use MLDB’s TensorFlow integration to do image recognition. TensorFlow is Google’s open source deep learning library.
We will load the Inception-v3 model to generate descriptive labels for an image. The Inception model is a deep convolutional neural network and was trained on the ImageNet Large Scale Visual Recognition Challenge dataset, where the task was to classify images into 1000 classes.
To offer context and a basis for comparison, this notebook is inspired by TensorFlow’s Image Recognition tutorial.
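Whichever framework runs the model, Inception’s raw output is a vector of 1000 class scores, and the “descriptive labels” are simply the top entries after a softmax. A toy sketch of that last step, with made-up scores and label names standing in for the real 1000-way output:

```python
import math

def softmax(scores):
    """Turn raw class scores into probabilities that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(probs, labels, k=3):
    """Return the k most probable (label, probability) pairs."""
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Hypothetical scores and labels in place of Inception's 1000 classes.
labels = ["tabby cat", "tiger cat", "golden retriever", "space shuttle"]
scores = [4.0, 3.0, 1.0, 0.5]

for label, prob in top_k(softmax(scores), labels, k=2):
    print(f"{label}: {prob:.2f}")
```

The MLDB tutorial wraps the model invocation itself, but the ranking of softmax probabilities shown here is what produces the labels you see in its output.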