Drawing lessons from the “Bezos Way”

Amazon CEO Jeff Bezos’ annual letter to his shareholders is a must-read. Customer focus, decision-making, or the importance of writing down important things… Here are my takeaways from Jeff’s latest.

Whatever we think of its founder and CEO, Amazon remains a remarkable example of great management. Since its 1994 start, the company has enjoyed steady growth, relentlessly conquering new markets and sectors, coupled with exceptional resilience: it weathered two market crashes (2000 and 2008). In addition, Bezos has demonstrated a consistent ability to convince his board and shareholders to let expansion take precedence over profits and dividends. (No one can complain: a thousand dollars invested in Amazon’s 1997 IPO is now worth more than half a million, a 500x multiple.)

This didn’t happen without damage. By some measures, Amazon isn’t an enviable place to work, and the pressure it applies to its suppliers rivals the iron fist of Walmart’s purchasing department. All things considered, though, Amazon’s level of corporate toxicity remains reasonable compared to, say, Uber’s.

Jeff Bezos is also able to project an ultra-long-term vision with his space exploration project, in which he personally invests about a billion dollars per year.

Closer to our concerns, he has boosted a respected but doomed news institution — The Washington Post — thanks to a combined investment in journalistic excellence and in technology, two areas left fallow by most publishers.

That is why I thought Bezos’ written addresses to his shareholders (here) are worth some exegesis.

Let’s start with last week’s letter. (Emphasis mine, and while quotes are lifted from the original documents, some paragraphs have been rearranged for clarity and brevity.)

Bezos starts his 2016 missive with a question asked by staffers at all-hands meetings:

“Jeff, what does Day 2 look like? (…) [Bezos reply:] Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.”

Then he enumerates the three obsessions that make Amazon what it is today:…

https://mondaynote.com/drawing-lessons-from-the-bezos-way-dd0e950ade68

The real prerequisite for machine learning isn’t math, it’s data analysis

When beginners get started with machine learning, the inevitable question is “what are the prerequisites? What do I need to know to get started?”

And once they start researching, beginners frequently find well-intentioned but disheartening advice, like the following:

You need to master math. You need all of the following:
– Calculus
– Differential equations
– Mathematical statistics
– Optimization
– Algorithm analysis
– and
– and
– and ……..

A list like this is enough to intimidate anyone but a person with an advanced math degree.

It’s unfortunate, because I think a lot of beginners lose heart and are scared away by this advice.

If you’re intimidated by the math, I have some good news for you: in order to get started building machine learning models (as opposed to doing machine learning theory), you need less math background than you think (and almost certainly less math than you’ve been told that you need). If you’re interested in being a machine learning practitioner, you don’t need a lot of advanced mathematics to get started.

But you’re not entirely off the hook.

There are still prerequisites. In fact, even if you can get by without having a masterful understanding of calculus and linear algebra, there are other prerequisites that you absolutely need to know (thankfully, the real prerequisites are much easier to master).
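To make “data analysis” concrete: what the author means is the everyday workflow of loading, inspecting, summarizing, and comparing data. The post itself contains no code, but a minimal Python sketch of that workflow might look like the following (the dataset and column names here are invented purely for illustration):

```python
# A minimal sketch of the "real prerequisite": basic exploratory data analysis.
# The data and column names are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "ad_spend": rng.uniform(0, 100, n),
    "channel": rng.choice(["search", "social", "email"], n),
})
# Fabricate an outcome that depends on the inputs, plus noise.
df["revenue"] = 3.0 * df["ad_spend"] + rng.normal(0, 25, n)

# The day-to-day skills: inspect, summarize, group, and check relationships.
print(df.head())                                  # look at the raw rows
print(df.describe())                              # distributions of numeric columns
print(df.groupby("channel")["revenue"].mean())    # compare groups
print(df[["ad_spend", "revenue"]].corr())         # check a relationship
```

None of this requires calculus or mathematical statistics, yet it is exactly the kind of work that fills most of a practitioner’s time before any model gets fit.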

https://www.r-bloggers.com/the-real-prerequisite-for-machine-learning-isnt-math-its-data-analysis/

Patrick Winston Explains Deep Learning

Patrick Winston is one of the greatest teachers at M.I.T., and for 27 years was Director of the Artificial Intelligence Laboratory (which later became part of CSAIL).

Patrick teaches 6.034, the undergraduate introduction to AI at M.I.T., and a recent set of his lectures is available as videos.

I want to point people to lectures 12a and 12b (linked individually below). In these two lectures he goes from zero to a full explanation of deep learning, how it works, how nets are trained, what are the interesting problems, what are the limitations, and what were the key breakthrough ideas that took 25 years of hard thinking by the inventors of deep learning to discover.

The only prerequisite is understanding differential calculus. These lectures are fantastic. They really get at the key technical ideas in a very understandable way. The biggest network analyzed in lecture 12a only has two neurons, and the biggest one drawn only has four neurons. But don’t be disturbed. He is laying the groundwork for 12b, where he explains how deep learning works, shows simulations, and shows results.
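To give a flavor of how small these networks are, here is a sketch of a chain of two sigmoid neurons trained by gradient descent, using nothing beyond the chain rule from the differential calculus mentioned above. This is my own illustration of the scale of network the lecture works with, not code from the lectures:

```python
# A two-neuron chain (one hidden sigmoid feeding one output sigmoid), trained
# with plain gradient descent. Illustrative sketch only, not Winston's code.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy task: learn NOT (0 -> 1, 1 -> 0).
data = [(0.0, 1.0), (1.0, 0.0)]

w1, b1, w2, b2 = 0.5, 0.0, 0.5, 0.0   # small arbitrary starting weights
lr = 2.0                               # learning rate

for step in range(5000):
    for x, t in data:
        # Forward pass.
        h = sigmoid(w1 * x + b1)       # hidden neuron
        y = sigmoid(w2 * h + b2)       # output neuron
        # Backward pass: the chain rule applied by hand.
        dy = (y - t) * y * (1 - y)     # dLoss / d(pre-activation of output)
        dh = dy * w2 * h * (1 - h)     # dLoss / d(pre-activation of hidden)
        # Gradient descent updates.
        w2 -= lr * dy * h
        b2 -= lr * dy
        w1 -= lr * dh * x
        b1 -= lr * dh

for x, t in data:
    y = sigmoid(w2 * sigmoid(w1 * x + b1) + b2)
    print(f"input {x:.0f} -> output {y:.3f} (target {t:.0f})")
```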

This is teaching at its best. Listen to every sentence. They all build the understanding.

I just wish all the people not in AI who talk at length about AI and the future in the press had this level of technical understanding of what they are talking about. Spend two hours on these lectures and you will have that understanding.

At YouTube, 12a Neural Nets, and 12b Deep Neural Nets.

http://rodneybrooks.com/patrick-winston-explains-deep-learning/

Five reasons blog posts are of higher scientific quality than journal articles

The Dutch toilet cleaner ‘WC-EEND’ (literally: ‘Toilet Duck’) aired a famous commercial in 1989 that had the slogan ‘We from WC-EEND advise… WC-EEND’. It is now a common saying in The Netherlands whenever someone gives an opinion that is clearly aligned with their self-interest. In this blog, I will examine the hypothesis that blogs are, on average, of higher quality than journal articles. Below, I present 5 arguments in favor of this hypothesis.  [EDIT: I’m an experimental psychologist. Mileage of what you’ll read below may vary in other disciplines].

http://daniellakens.blogspot.com.br/2017/04/five-reasons-blog-posts-are-of-higher.html

DeepBreath: Preventing angry emails with machine learning

We all have bad days. Maybe deadlines are slipping, your cat destroyed your couch (again), or you just have a regular case of the Mondays. Whatever the source of your stress, you hit “Send” on a Gmail draft at work, and you immediately regret it. No matter what, you never want to send excessively emotional or angry emails to coworkers, clients or even friends.

Inspired by many other fun use cases of Google Cloud Natural Language API, we wrote a Chrome plugin called DeepBreath that automatically sends all your saved drafts to Cloud Natural Language API for sentiment analysis. The API automatically detects how positive or negative any given piece of text is with a simple API call, so a plugin to solve the angry email problem was very easy and quick to build for Gmail, and could also be easily repurposed for any other places you write text (forums, project management tools, etc). Please see “A Note On User Data Privacy” below before considering making these extensions.

If your email’s sentiment is sufficiently negative and strong, the plugin automatically displays a warning so you can consider a rewrite before you hit send, rather than after. The warning gives you a chance to take a literal deep breath and reconsider the contents of the email.

How does it work? Every time a draft is saved, the body of the draft is sent to the analyzeSentiment API endpoint. A score (the positive or negative sentiment) and a magnitude (how strong the feeling is) are returned. You can read more about score and magnitude in the docs. If the score is sufficiently negative and the magnitude sufficiently strong, a warning pops up. Only one warning pops up per draft.
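The plugin itself is a Chrome extension wired into Gmail, but the underlying call is the same from any language. Here is a rough Python sketch of the per-draft sentiment check against the analyzeSentiment REST endpoint; the thresholds are illustrative guesses, not DeepBreath’s actual values:

```python
# Rough sketch of the sentiment check the plugin performs on each saved draft.
# The thresholds and API key below are placeholders for illustration.
import requests

API_KEY = "YOUR_API_KEY"  # a Cloud Natural Language API key (placeholder)
ENDPOINT = "https://language.googleapis.com/v1/documents:analyzeSentiment"

def draft_needs_warning(body_text, score_threshold=-0.3, magnitude_threshold=1.0):
    """Return True if the draft looks negative enough to warrant a warning."""
    payload = {
        "document": {"type": "PLAIN_TEXT", "content": body_text},
        "encodingType": "UTF8",
    }
    resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=payload)
    resp.raise_for_status()
    sentiment = resp.json()["documentSentiment"]
    score, magnitude = sentiment["score"], sentiment["magnitude"]
    # Warn only if the sentiment is negative enough *and* strongly felt.
    return score <= score_threshold and magnitude >= magnitude_threshold

if __name__ == "__main__":
    draft = "This is completely unacceptable and I am furious about the delay."
    if draft_needs_warning(draft):
        print("Take a deep breath before sending this one.")
```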

https://cloud.google.com/blog/big-data/2017/04/deepbreath-preventing-angry-emails-with-machine-learning

How Elon Musk Learns Faster And Better Than Everyone Else

The implicit assumption is that if you study in multiple areas, you’ll only learn at a surface level, never gain mastery.

The success of expert-generalists throughout time shows that this is wrong. Learning across multiple fields provides an information advantage (and therefore an innovation advantage) because most people focus on just one field.

For example, if you’re in the tech industry and everyone else is just reading tech publications, but you also know a lot about biology, you have the ability to come up with ideas that almost no one else could. Vice versa: if you’re in biology but you also understand artificial intelligence, you have an information advantage over everyone else who stays siloed.

Despite this basic insight, few people actually learn beyond their industry.

Each new field we learn that is unfamiliar to others in our field gives us the ability to make combinations that they can’t. This is the expert-generalist advantage.

https://medium.com/@michaeldsimmons/c7c753266993