An interactive deep learning book with code, math, and discussions, based on the NumPy interface.
This page is a collection of MIT courses and lectures on deep learning, deep reinforcement learning, autonomous vehicles, and artificial intelligence organized by Lex Fridman. Here are some steps to get started:
- Sign up to our mailing list for occasional updates.
- Connect on Twitter or LinkedIn for more frequent updates.
- Read the Deep Learning Basics blog post and check out the code tutorials on our GitHub.
- Watch the Deep Learning Basics and other lectures below.
- Attend the following lectures at MIT in January 2020. If you cannot attend, the lectures will also be posted on YouTube (with a delay of a few days).
In today’s blog post you are going to learn how to perform face recognition in both images and video streams using:
- OpenCV
- Python
- Deep learning
As we’ll see, the deep learning-based facial embeddings we’ll be using here are both (1) highly accurate and (2) fast enough to run in real time.
To learn more about face recognition with OpenCV, Python, and deep learning, just keep reading!
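The core idea behind embedding-based face recognition can be sketched with plain NumPy: a network maps each face to a fixed-length vector, and recognition reduces to checking whether two vectors lie close together. The vectors below are synthetic stand-ins for real CNN-produced embeddings; the 0.6 distance threshold is the one commonly used with dlib's 128-d face embedding model.

```python
import numpy as np

def is_same_person(emb_a, emb_b, threshold=0.6):
    """Compare two face embeddings by Euclidean distance.

    0.6 is the threshold commonly used with dlib's 128-d embeddings;
    smaller distances mean the faces are more likely the same person.
    """
    return np.linalg.norm(emb_a - emb_b) < threshold

# Synthetic embeddings for illustration (real ones come from a CNN).
rng = np.random.default_rng(0)
anchor = rng.normal(size=128)
anchor /= np.linalg.norm(anchor)
same = anchor + rng.normal(scale=0.02, size=128)  # small perturbation: "same" face
other = rng.normal(size=128)
other /= np.linalg.norm(other)                    # unrelated face

print(is_same_person(anchor, same))   # expect True  (distance well under 0.6)
print(is_same_person(anchor, other))  # expect False (random vectors are far apart)
```

In practice the embedding step dominates the cost; the comparison itself is a cheap nearest-neighbour check, which is what makes real-time recognition feasible.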
The method can also be used to edit images by removing content and filling in the resulting holes, a process called “image inpainting”. It could be implemented in photo editing software to remove unwanted content and replace it with a realistic computer-generated alternative.
“Our model can robustly handle holes of any shape, size, location, or distance from the image borders. Previous deep learning approaches have focused on rectangular regions located around the center of the image, and often rely on expensive post-processing,” the NVIDIA researchers stated in their research paper. “Further, our model gracefully handles holes of increasing size.”
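NVIDIA's method uses partial convolutions in a deep network; as a much simpler point of comparison, the inpainting problem itself can be sketched with a classical diffusion baseline: repeatedly replace each hole pixel with the average of its neighbours until the values settle. The image and hole below are synthetic, and this is not the paper's method, just a minimal NumPy illustration of what "filling a hole" means.

```python
import numpy as np

def diffuse_inpaint(img, mask, iters=200):
    """Naive diffusion inpainting: repeatedly replace each hole pixel
    with the mean of its 4 neighbours (a classical baseline, far simpler
    than the partial-convolution network discussed above).

    img: 2-D float array; mask: boolean array, True where the hole is.
    """
    out = img.copy()
    out[mask] = out[~mask].mean()        # initialise hole with the global mean
    for _ in range(iters):
        up    = np.roll(out,  1, axis=0)
        down  = np.roll(out, -1, axis=0)
        left  = np.roll(out,  1, axis=1)
        right = np.roll(out, -1, axis=1)
        avg = (up + down + left + right) / 4.0
        out[mask] = avg[mask]            # only update pixels inside the hole
    return out

# Smooth horizontal gradient with a square hole punched in the middle.
img = np.linspace(0.0, 1.0, 32).reshape(1, 32).repeat(32, axis=0)
mask = np.zeros_like(img, dtype=bool)
mask[12:20, 12:20] = True

restored = diffuse_inpaint(img, mask)
err = np.abs(restored[mask] - img[mask]).max()
print(f"max reconstruction error in the hole: {err:.4f}")
```

Diffusion recovers smooth regions well (a linear gradient is reconstructed almost exactly) but blurs across edges and cannot invent texture, which is precisely the gap the learned, partial-convolution approach is designed to close.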
Deep neural networks are composed of many individual neurons, which combine in complex and counterintuitive ways to solve a wide range of challenging tasks. This complexity grants neural networks their power but also earns them their reputation as confusing and opaque black boxes.
Understanding how deep neural networks function is critical for explaining their decisions and enabling us to build more powerful systems. For instance, imagine the difficulty of trying to build a clock without understanding how individual gears fit together. One approach to understanding neural networks, both in neuroscience and deep learning, is to investigate the role of individual neurons, especially those which are easily interpretable.
Our investigation into the importance of single directions for generalisation, soon to appear at the Sixth International Conference on Learning Representations (ICLR), uses an approach inspired by decades of experimental neuroscience — exploring the impact of damage — to determine: how important are small groups of neurons in deep neural networks? Are more easily interpretable neurons also more important to the network’s computation?