For the latest episode of Stack Stories, we did something a bit different. We decided to focus on the origin story of one of the most popular technologies in the software development world: React. We sat down with Pete Hunt, one of the original creators of React, now CEO at Smyte, to get the untold, in-depth story of why React was first created, how it gained adoption within Facebook after the Instagram acquisition, and its eventual release to the public.
Listen to the interview in full or check out the transcript below (edited for brevity).
Check out Stack Stories’ sponsor STRV at strv.com/stackshare.
All code examples below are labeled for reference. They are intended purely to illustrate concepts; most could be written in a much better way.
For today’s post, I’ve drawn material not just from one paper, but from five! The subject matter is ‘word2vec’ – the work of Mikolov et al. at Google on efficient vector representations of words (and what you can do with them). The papers are:
From the first of these papers (‘Efficient estimation…’) we get a description of the Continuous Bag-of-Words and Continuous Skip-gram models for learning word vectors (we’ll talk about what a word vector is in a moment…). From the second paper we get more illustrations of the power of word vectors, some additional information on optimisations for the skip-gram model (hierarchical softmax and negative sampling), and a discussion of applying word vectors to phrases. The third paper (‘Linguistic Regularities…’) describes vector-oriented reasoning based on word vectors and introduces the famous “King – Man + Woman = Queen” example. The last two papers give a more detailed explanation of some of the ideas expressed very concisely in the Mikolov papers.
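As a toy illustration of that vector-oriented reasoning: the analogy needs nothing more than vector arithmetic and cosine similarity. The 2-d vectors below are hand-crafted purely for the example (real word2vec embeddings are learned and typically have hundreds of dimensions), but the “subtract, add, find nearest neighbour” mechanics are the same:

```python
import math

# Hand-crafted toy vectors: dimension 0 roughly encodes "royalty",
# dimension 1 roughly encodes "gender". Real embeddings are learned.
vectors = {
    "king":  [1.0,  1.0],
    "queen": [1.0, -1.0],
    "man":   [0.0,  1.0],
    "woman": [0.0, -1.0],
    "apple": [-1.0, 0.2],  # distractor word
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c):
    """Find the word closest to vec(a) - vec(b) + vec(c),
    excluding the three query words themselves (standard practice)."""
    target = [x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("king", "man", "woman"))  # -> queen
```

With learned vectors the result is only approximately queen (the nearest neighbour of the target point), which is why the query words are excluded from the candidate set.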
Check out the word2vec implementation on Google Code.
The amazing power of word vectors
We are publishing pre-trained word vectors for 90 languages, trained on Wikipedia. These are vectors in dimension 300, trained with the default parameters of fastText.
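The published .vec files are plain text: a header line giving the vocabulary size and dimension, then one line per word containing the word and its components. Below is a minimal sketch of a loader for that format (the function name and the tiny in-memory sample are illustrative only; for real files, gensim’s `KeyedVectors.load_word2vec_format` reads the same format):

```python
import io

def load_vec(fileobj, limit=None):
    """Parse word vectors in the textual .vec format: the first line gives
    vocabulary size and dimension, each following line a word and its floats."""
    header = fileobj.readline().split()
    vocab_size, dim = int(header[0]), int(header[1])
    vectors = {}
    for i, line in enumerate(fileobj):
        if limit is not None and i >= limit:
            break  # loading a 300-d vocabulary can be slow; allow a cap
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = [float(x) for x in parts[1:]]
    return dim, vectors

# tiny in-memory example in the same format (made-up numbers)
sample = io.StringIO("2 3\nhello 0.1 0.2 0.3\nworld 0.4 0.5 0.6\n")
dim, vecs = load_vec(sample)
print(dim, vecs["hello"])  # -> 3 [0.1, 0.2, 0.3]
```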
Open source Torchnet helps researchers and developers build rapid and reusable prototypes of learning systems in Torch.
Building rapid, clean prototypes for deep learning can now take a big step forward with Torchnet, a new software toolkit that fosters rapid, collaborative development of deep learning experiments by the Torch community.
Introduced and open-sourced this week at the International Conference on Machine Learning (ICML) in New York, Torchnet provides a collection of boilerplate code, key abstractions, and reference implementations that can be snapped together or taken apart and then later reused, substantially speeding development. It encourages a modular programming approach, reducing the chance of bugs while making it easy to use asynchronous, parallel data loading and efficient multi-GPU computations.
The new toolkit builds on the success of Torch, a framework for building deep learning models that provides fast implementations of common algebraic operations on both CPU (via OpenMP/SSE) and GPU (via CUDA).
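To make the modular idea concrete, here is a sketch in Python of the kind of snap-together abstractions described above: a reusable engine that runs a generic training loop, a meter that accumulates statistics, and hooks for per-experiment behavior. Torchnet itself is written in Lua, and every name and interface below is hypothetical — this only mirrors the concept, not the actual Torchnet API:

```python
class AverageMeter:
    """Accumulates a running average of a scalar (e.g. training loss).
    Hypothetical stand-in for Torchnet-style meters."""
    def __init__(self):
        self.total, self.count = 0.0, 0

    def add(self, value):
        self.total += value
        self.count += 1

    def value(self):
        return self.total / self.count if self.count else 0.0


class Engine:
    """Runs a generic training loop; experiment-specific behavior is plugged
    in via a step function and optional hooks, so the loop itself is reused
    across experiments instead of rewritten each time."""
    def __init__(self, hooks=None):
        self.hooks = hooks or {}

    def train(self, dataset, step):
        meter = AverageMeter()
        for sample in dataset:
            loss = step(sample)          # user-supplied forward/backward step
            meter.add(loss)
            if "on_sample" in self.hooks:
                self.hooks["on_sample"](sample, loss)
        return meter.value()


# toy "experiment": the loss is just the squared sample value
engine = Engine()
avg = engine.train([1.0, 2.0, 3.0], step=lambda x: x * x)
print(avg)  # average of 1, 4, 9
```

Because the loop, metering, and hooks are separate pieces, swapping the dataset iterator for an asynchronous, parallel loader (as Torchnet does) would not touch the experiment code.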
Python aficionados are often surprised to learn that Python has long been the language most commonly used by production engineers at Facebook and is the third most popular language at Facebook, behind Hack (our in-house dialect of PHP) and C++. Our engineers build and maintain thousands of Python libraries and binaries deployed across our entire infrastructure.
Every day, dozens of Facebook engineers commit code to Python utilities and services with a wide variety of purposes including binary distribution, hardware imaging, operational automation, and infrastructure management.
One of the many great parts of React is how it makes you think about apps as you build them. In this post, I’ll walk you through the thought process of building a searchable product data table using React.