In this post, I will demonstrate how you can develop, test, deploy, and operate a production-ready serverless microservice using the AWS ecosystem. The combination of AWS Lambda and Amazon API Gateway allows us to operate a REST endpoint without the need for any virtual machines. We will use Amazon DynamoDB as our database, Amazon CloudWatch for metrics and logs, and AWS CodeCommit and AWS CodePipeline as our delivery pipeline. By the end, you will know how to wire together a handful of AWS services to run a system in production. This post is a summary of my talk, The Life of a Serverless Microservice on AWS, which I gave at DevOpsCon 2016 in Berlin.
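To make the Lambda + API Gateway + DynamoDB combination concrete, here is a minimal sketch of a handler for an API Gateway proxy integration. The `users` table name, the `get_user` function, and the route shape are illustrative assumptions, not taken from the talk; the DynamoDB table is injectable so the function can be exercised without AWS credentials.

```python
import json


def get_user(event, context, table=None):
    """Hypothetical GET /users/{id} handler for an API Gateway proxy integration.

    `table` is injectable for local testing; in Lambda it defaults to the
    (assumed) DynamoDB table named "users".
    """
    if table is None:
        import boto3  # only needed when running inside AWS
        table = boto3.resource("dynamodb").Table("users")

    user_id = event["pathParameters"]["id"]
    result = table.get_item(Key={"id": user_id})
    item = result.get("Item")

    # API Gateway proxy integrations expect statusCode and a string body.
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item)}
```

Because the table is passed in, a unit test can substitute a stub object with a `get_item` method, which is one reason the "no virtual machines" model still leaves room for a normal test suite in the delivery pipeline.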
Welcome to 5 Algorithms Every Web Developer Can Use and Understand!
We have compiled a short primer for web developers on how and when to harness the power of algorithms in your website or other web applications. Each chapter features one algorithm available with the Algorithmia API. Using this tool, developers with limited background in mathematics can access these amazing utilities. In fact, only basic, high-school level mathematics is used in this book. However, even if you are an advanced mathematician, the examples may still prove to be useful, because they clearly display how to use the API within your application.
The world of algorithms is as endless as it is fascinating. Working as a programmer encourages us to pick up new tools and bring them into our applications. However, if we never take the time to explore the areas we are not familiar with, then we could be missing opportunities to write better code. I believe that the five algorithms featured here are some of the most practical for web engineers to use in their applications. Harnessing the Algorithmia API takes these complex challenges – related to both computation and infrastructure – and turns them into low-hanging fruit.
“Learn the rules like a pro, so you can break them like an artist.” — Pablo Picasso
The human brain is a sophisticated learning machine, forming rules by memorizing everyday events (“sparrows can fly” and “pigeons can fly”) and generalizing those learnings to apply to things we haven’t seen before (“animals with wings can fly”). Perhaps more powerfully, memorization also allows us to further refine our generalized rules with exceptions (“penguins can’t fly”). As we were exploring how to advance machine intelligence, we asked ourselves the question—can we teach computers to learn like humans do, by combining the power of memorization and generalization?
It’s not an easy question to answer, but by jointly training a wide linear model (for memorization) alongside a deep neural network (for generalization), one can combine the strengths of both to bring us one step closer. At Google, we call it Wide & Deep Learning. It’s useful for generic large-scale regression and classification problems with sparse inputs (categorical features with a large number of possible feature values), such as recommender systems, search, and ranking problems.
Today we’re open-sourcing our implementation of Wide & Deep Learning as part of the TF.Learn API so that you can easily train a model yourself. Please check out the TensorFlow tutorials on Linear Models and Wide & Deep Learning, as well as our research paper to learn more.
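The memorization-plus-generalization idea can be sketched without the TF.Learn API at all: the wide part is a linear model over sparse cross-product features, the deep part is a small network over embeddings, and the two logits are summed before a sigmoid. The following NumPy forward pass is a toy illustration under assumed toy dimensions, not the open-sourced implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (all illustrative): 1000 cross-product features feed the wide
# part; two categorical features, embedded into 8 dims each, feed the deep part.
n_wide, vocab, emb_dim, hidden = 1000, 50, 8, 16

w_wide = rng.normal(0, 0.01, n_wide)           # wide: one weight per cross feature
emb = rng.normal(0, 0.01, (vocab, emb_dim))    # embedding table for the deep part
w1 = rng.normal(0, 0.1, (2 * emb_dim, hidden)) # hidden layer weights
w2 = rng.normal(0, 0.1, hidden)                # output weights for the deep logit


def predict(wide_idx, cat_a, cat_b):
    """Joint prediction: sigmoid(wide logit + deep logit)."""
    wide_logit = w_wide[wide_idx].sum()           # memorization path
    x = np.concatenate([emb[cat_a], emb[cat_b]])  # generalization path
    h = np.maximum(0.0, x @ w1)                   # ReLU hidden layer
    deep_logit = h @ w2
    return 1.0 / (1.0 + np.exp(-(wide_logit + deep_logit)))


p = predict(wide_idx=[3, 17, 512], cat_a=4, cat_b=9)
```

In the real system both paths are trained jointly, so the gradients flow into the wide weights and the embeddings at the same time; the sketch above shows only the shared-logit structure.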
This is a tutorial on how to implement a programming language. If you ever wrote an interpreter or a compiler, then there is probably nothing new for you here. But, if you’re using regexps to “parse” anything that looks like a programming language, then please read at least the section on parsing. Let’s write less buggy code!
The ToC on the right is in “simple-to-advanced” order. I’d recommend not skipping ahead unless you already know the subject well. You can always refer back if you don’t understand something. Also, questions and feedback are very much appreciated!
What are we going to learn
- What is a parser, and how to write one.
- How to write an interpreter.
- Continuations, and why they are important.
- Writing a compiler.
- How to transform code to continuation-passing style.
- A few basic optimization techniques.
In between, I’m going to argue why Lisp is a great programming language. However, the language we will work on is not a Lisp. It has a richer syntax (classical infix notation that everybody knows) and will be about as powerful as Scheme, except for macros. Sadly or not, macros are the ultimate bastion of Lisp, something that other languages just can’t conquer (unless they are called Lisp dialects). [Yes, I know about SweetJS… close but no cigar.]
But first, let’s dream up a programming language.
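As a small taste of the parsing section, here is a tokenizer and a recursive-descent parser for infix arithmetic, the "classical infix notation" the post promises. The grammar, token format, and function names are illustrative, not the tutorial's actual code; the point is that a real parser, not a regexp, turns text into a tree.

```python
import re

# Each match is either a run of digits or a single non-space character.
TOKEN = re.compile(r"\s*(?:(\d+)|(.))")


def tokenize(src):
    """Split source text into ("num", value) and ("op", char) tokens."""
    tokens = []
    for num, op in TOKEN.findall(src):
        tokens.append(("num", int(num)) if num else ("op", op))
    return tokens


def _peek(tokens, pos, ops):
    """True if the token at pos is one of the operator characters in ops."""
    return pos < len(tokens) and tokens[pos][0] == "op" and tokens[pos][1] in ops


def parse_expr(tokens, pos=0):
    """expr := term (('+'|'-') term)* — returns an AST of nested tuples."""
    node, pos = parse_term(tokens, pos)
    while _peek(tokens, pos, "+-"):
        op = tokens[pos][1]
        right, pos = parse_term(tokens, pos + 1)
        node = (op, node, right)  # left-associative
    return node, pos


def parse_term(tokens, pos):
    """term := atom (('*'|'/') atom)* — binds tighter than + and -."""
    node, pos = parse_atom(tokens, pos)
    while _peek(tokens, pos, "*/"):
        op = tokens[pos][1]
        right, pos = parse_atom(tokens, pos + 1)
        node = (op, node, right)
    return node, pos


def parse_atom(tokens, pos):
    """atom := number | '(' expr ')'"""
    kind, val = tokens[pos]
    if kind == "num":
        return val, pos + 1
    if val == "(":
        node, pos = parse_expr(tokens, pos + 1)
        return node, pos + 1  # consume the closing ')'
    raise SyntaxError(f"unexpected token {val!r}")


ast, _ = parse_expr(tokenize("1 + 2 * (3 - 4)"))
```

Note how precedence falls out of the grammar itself: `parse_term` is called from `parse_expr`, so `*` and `/` bind tighter than `+` and `-` without any special-case code.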
Due to the recent achievements of artificial neural networks across many different tasks (such as face recognition, object detection and Go), deep learning has become extremely popular. This post aims to be a starting point for those interested in learning more about it.
If you already have a basic understanding of linear algebra, calculus, probability and programming: I recommend starting with Stanford’s CS231n. The course notes are comprehensive and well written. The slides for each lesson are also available, and even though the accompanying videos were removed from the official site, re-uploads are quite easy to find online.
If you don’t have the relevant math background: There is an incredible amount of free material online that can be used to learn the required math. Gilbert Strang’s course on linear algebra is a great introduction to the field. For the other subjects, edX has courses from MIT on both calculus and probability.
If you are interested in learning more about machine learning: Andrew Ng’s Coursera class is a popular choice as a first class in machine learning. There are other great options, such as Yaser Abu-Mostafa’s machine learning course, which focuses much more on theory than the Coursera class but is still suitable for beginners. Knowledge of machine learning isn’t really a prerequisite for learning deep learning, but it does help. In addition, learning classical machine learning and not only deep learning is important: it provides a theoretical background, and deep learning isn’t always the correct solution.
Introducing Phippy, an intrepid little PHP app, and her journey to Kubernetes.
What is this? Well, I wrote a book that explains Kubernetes. We posted a video version to the Kubernetes community blog. If you find us at a conference, you stand a chance to pick up a physical copy. But for now, here’s a blog post version!
And after you’ve finished reading, tweet something at @opendeis for a chance to win a squishy little Phippy toy of your own. Not sure what to tweet? Why don’t you tell us about yourself and how you use Kubernetes!
A popular demonstration of the capability of deep learning techniques is object recognition in image data.
The “hello world” of object recognition for machine learning and deep learning is the MNIST dataset for handwritten digit recognition.
In this post you will discover how to develop a deep learning model that achieves near state-of-the-art performance on the MNIST handwritten digit recognition task in Python using the Keras deep learning library.
After completing this tutorial, you will know:
- How to load the MNIST dataset in Keras.
- How to develop and evaluate a baseline neural network model for the MNIST problem.
- How to implement and evaluate a simple Convolutional Neural Network for MNIST.
- How to implement a near state-of-the-art deep learning model for MNIST.
Let’s get started.
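Before the full tutorial, here is a minimal sketch of the baseline step: a one-hidden-layer network with a softmax output, which is where the post's model-building begins. This assumes the `tensorflow.keras` API (the original post may use standalone Keras imports), and it trains on random placeholder "digits" so the sketch runs without a download; the real tutorial loads the actual data via `tensorflow.keras.datasets.mnist.load_data()`.

```python
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.utils import to_categorical

# Placeholder stand-in for MNIST: 1000 flattened 28x28 "images" in [0, 1]
# and random labels 0-9. The real data comes from mnist.load_data().
x_train = np.random.rand(1000, 784).astype("float32")
y_train = to_categorical(np.random.randint(0, 10, 1000), num_classes=10)

# Baseline multilayer perceptron: one ReLU hidden layer, softmax over 10 digits.
model = Sequential([
    Dense(784, activation="relu", input_shape=(784,)),
    Dense(10, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=200, verbose=0)

probs = model.predict(x_train[:5], verbose=0)  # one probability row per image
```

On the real MNIST data this kind of baseline reaches roughly 98% test accuracy; the convolutional models later in the tutorial push that further.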
There are four simple steps to the Feynman Technique, which I’ll explain below:
- Choose a Concept
- Teach it to a Toddler
- Identify Gaps and Go Back to the Source Material
- Review and Simplify