Why is this?
You might’ve heard an explanation that goes like this: whenever you drop an egg, or melt an ice cube, or shatter a wine glass, you’ve increased the entropy of the world. You might also have heard the phrase, “entropy always increases”. In other words, things are only allowed to happen in one direction — the direction in which entropy increases.
But this doesn’t answer the question, it just replaces it with a new set of questions.
What is entropy, really? Why does it always keep increasing? Why don’t eggshells uncrack, or wine glasses unshatter? In this piece, my goal is to give you the tools to answer these questions.
Going down this road leads us to some of the biggest unanswered questions about the cosmos: how did our universe begin, how will it end, and why is our past different from our future?
In 2012 I co-founded Better, a startup building a new type of enterprise e-learning platform. Our goal was to make it faster and cheaper for large companies to develop, deliver, and analyse adaptive, cross-platform, multi-language, online courses.
From day one, we decided to use Haskell as our main language, and it remained the only language we used on the back-end as our team grew to 10 developers.
After a period of experimentation and development, Better grew from $0 to $500k+ in annual recurring revenue in the space of a few months, with companies including American Express and Swissport as customers. However, the distribution model proved challenging to grow beyond that and we eventually sold to GRC Solutions, an Australian compliance company.
Though interest in Haskell appears to be growing steadily, its use in production is still rare. Some have the mistaken impression it’s an academic language and nothing more. In this post I’ll give my perspective on what it’s like to use Haskell in a startup. Can you get stuff done? Does it hold up in practice? Can you hire developers? Should more companies use it?
The short answer to those questions is yes. It’s not a fit for all problems or for all teams, but it is worth serious consideration. For building server-side software, Haskell might be the closest thing to a secret weapon you’ll find today.
This article was posted on Mashape by Chris Ismael. Chris is a developer evangelist with experience in technical evangelism, business development, and software development in the mobile industry. Wikipedia defines Machine Learning as “a branch of artificial intelligence that deals with the construction and study of systems that can learn from data.”
The most common question students have about mathematics is “when will I ever use this?” Many math teachers would probably struggle to give a coherent answer beyond the fact that math makes you very good at following precise directions. They will say “critical thinking” but not much else concrete. Meanwhile, the same teachers must, with a straight face, tell their students that the derivative of arccosine is important. (It goes beyond calculus, in case you were wondering.)
Algorithms play a big part in our day-to-day lives. From search engines to architecture, explore how these formulas affect the way we view and interact with the world around us.
Redis is often used for caching, in a setup where a fixed maximum amount of memory to use is specified. When new data arrives, we need to make space by removing old data. The efficiency of Redis as a cache is related to the quality of the decisions it makes about which data to evict: deleting data that is going to be needed soon is a poor strategy, while deleting data that is unlikely to be requested again is a good one.
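In practice, such a cache-style deployment is configured with a memory cap and an eviction policy. A minimal illustrative `redis.conf` fragment (the values here are examples, not recommendations):

```
# Cap Redis memory usage; once reached, eviction kicks in.
maxmemory 100mb

# Evict the least recently used key across the whole keyspace.
maxmemory-policy allkeys-lru
```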
In other words, every cache has a hit/miss ratio, which is, in qualitative terms, just the percentage of read queries that the cache is able to serve. In most workloads, accesses to the keys of a cache are not distributed evenly across the data set. Often a small percentage of keys receive a very large percentage of all the accesses. Moreover, the access pattern often changes over time, which means that as time passes, certain keys that were heavily requested may no longer be accessed often, and conversely, keys that once were unpopular may turn into the most accessed keys.
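To make the hit/miss ratio concrete, here is a toy illustration (the keys and values are made up): reads that find their key in the cache are hits, the rest are misses, and the ratio is hits over total reads.

```python
# Toy cache: a plain dict standing in for Redis.
cache = {"user:1": "alice", "user:2": "bob"}

# A stream of read queries; note the skew toward "user:1".
reads = ["user:1", "user:2", "user:1", "user:3", "user:1"]

hits = sum(1 for key in reads if key in cache)
misses = len(reads) - hits
hit_ratio = hits / len(reads)

print(hit_ratio)  # 4 of the 5 reads hit the cache -> 0.8
```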
So in general, what a cache should try to do is retain the keys that have the highest probability of being accessed in the future. From the point of view of an eviction policy (the policy used to make space so that new data can enter), this translates into the converse: the key with the lowest probability of being accessed in the future should be removed from the data set. There is only one problem: Redis and other caches are not able to predict the future.
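Because the future cannot be predicted, practical eviction policies approximate it with heuristics; a common one is LRU (least recently used), which treats recency of access as a proxy for the probability of future access. The following is a minimal sketch of the idea in Python, not Redis's actual implementation (which uses an approximated, sampled LRU):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: evict the least recently used key
    when capacity is exceeded, using recency as a proxy for future
    access probability."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # ordered from least to most recently used

    def get(self, key):
        if key not in self.data:
            return None  # cache miss
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            # Evict the least recently used key to make space.
            self.data.popitem(last=False)

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes the most recently used key
cache.put("c", 3)      # over capacity: "b" is evicted, not "a"
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

Real caches refine this further (Redis also offers LFU and TTL-based policies), but the core trade-off is the same: cheap bookkeeping in exchange for a reasonable guess about future accesses.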
Some time ago I posted two comments in the Perl group about the “speed” topic: The Need for Speed: use 5.024; and The Quest for Speed: What not to do. The first one is a simple recommendation to use Perl 5.24, as it gives real-world applications a speedup of roughly 25% overall – sometimes more (up to 200% for regular expressions), sometimes less. The second was basically a negative report on my compile-optimization endeavors and their lack of significant success. On a side note, I debunked the performance claims of cperl – something I am truly sorry about, because some day I really would like to see cperl actually be where it claims to be today.
I also promised to write a thing or two about what you really should do when you’re looking for performance (i.e., execution speed) in your Perl programs. If your goal is not a 25% improvement, nor even 200% – nothing short of “10 times as fast as the original code” – then this article is for you.
Tutorials on the scientific Python ecosystem: a quick introduction to central tools and techniques. Each chapter corresponds to a one- to two-hour course, with increasing levels of expertise from beginner to expert.