Finding a path to enlightenment in Programming Language Theory can be tough, particularly for programming practitioners who didn’t learn it at school. This resource is here to help. Please feel free to ping me or send pull requests if you have ideas for improvement.
Note that I’ve attempted to order the books from most to least “tackleable”, so the idea is to read them from top to bottom. As always, it depends on your background and inclinations. It would be nice to provide multiple paths through this material for folks with different backgrounds and even different goals. However, for now, it is what it is.
pandas is a Python library for doing data analysis. It’s really fast and lets you do exploratory work incredibly quickly.
The goal of this cookbook is to give you some concrete examples for getting started with pandas. The docs are really comprehensive. However, I’ve often had people tell me that they have some trouble getting started, so these are examples with real-world data, and all the bugs and weirdness that entails.
I’m working with three datasets right now:
- 311 calls in New York
- How many people were on Montréal’s bike paths in 2012
- Montreal’s weather for 2012, hourly
It comes with batteries (data) included, so you can try out all the examples right away.
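To give a flavor of the kind of exploratory step the cookbook walks through, here is a minimal sketch using a tiny hand-built DataFrame (the cookbook itself uses the bundled CSV datasets; the column names below are illustrative, not taken from the real 311 data):

```python
import pandas as pd

# A toy stand-in for the 311 calls dataset: pick a column and count
# how often each value occurs, a typical first exploratory step.
complaints = pd.DataFrame({
    "Complaint Type": ["Noise", "Heating", "Noise", "Blocked Driveway"],
    "Borough": ["BROOKLYN", "BRONX", "QUEENS", "BROOKLYN"],
})

counts = complaints["Complaint Type"].value_counts()
print(counts)
```

With the real data, the same two lines answer questions like “what do New Yorkers complain about most?” in seconds.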
For the sake of simplicity, let’s assume your everyday framework is React. If you use something else, just substitute your framework for React; this will still apply to you too.
John Graham-Cumming is a computer programmer and author. He studied mathematics and computation at Oxford and stayed for a doctorate in computer security. As a programmer he has worked in Silicon Valley and New York, the UK, Germany, and France. His open source POPFile program won a Jolt Productivity Award in 2004. John is the author of a travel book for scientists published in 2009 called The Geek Atlas, and has written articles for The Times, The Guardian, The Sunday Times, The San Francisco Chronicle, New Scientist, and other publications.
He can be found on the web at jgc.org and on Twitter as @jgrahamc.
If you’ve heard of him at all, it’s likely because in 2009 he successfully petitioned the British Government to apologize for the mistreatment of British mathematician Alan Turing.
Xamarin brings open source .NET to mobile development, enabling every developer to build truly native apps for any device in C# and F#. We’re excited for your contributions in continuing our mission to make it fast, easy, and fun to build great mobile apps.
Asynchronous programming is becoming more common in modern C++. However, the callback-based model, whether through the use of explicit callbacks or through futures with completion callbacks, can result in code that is difficult to write and read, especially when trying to call an asynchronous function in a loop.
There have been several attempts to mitigate this.
However, there are issues that prevent universal adoption of these solutions:
- Stackful coroutines – Require linking to a separate library. In addition, there are concerns about how much stack space a coroutine needs: too much stack space wastes memory, while too little risks stack overflow. Segmented stacks can help with this, but may not be supported on all compilers/platforms.
- Stackless coroutines – So far only implemented in Visual C++. Lack of consensus at the Standards meeting caused them to be put into a TS rather than into the standard.
- Macro based libraries – Macro magic 😦
This library is another alternative that uses C++14 generic lambdas to implement stackless coroutines without macros. The library is a single header file and has no dependencies other than C++14.
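For readers unfamiliar with the stackless model itself (setting the C++14 lambda machinery aside), Python generators make a compact conceptual illustration: at each suspension point, only a small resume position and the local variables are saved, with no dedicated stack per coroutine:

```python
def countdown(n):
    # Each `yield` is a suspension point; the generator object stores
    # the resume position and locals, so no per-coroutine stack is
    # needed. This is the essence of the stackless approach.
    while n > 0:
        yield n
        n -= 1

steps = list(countdown(3))
print(steps)  # [3, 2, 1]
```

This is an analogy only; the library described above achieves the equivalent suspension/resumption in C++14 without language-level coroutine support.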
In this post, I describe why Erlang is different from most other language runtimes. I also describe why it often forgoes throughput for lower latency.
TL;DR – Erlang is different from most other language runtimes in that it targets different values. This describes why it often seems to perform worse if you have few processes, but well if you have many.
From time to time the question of Erlang scheduling gets asked by different people. While this is an abridged version of the real thing, it can act as a way to describe how Erlang operates its processes. Do note that I am taking Erlang R15 as the base point here. If you are a reader from the future, things might have changed quite a lot—though it is usually fair to assume things only got better, in Erlang and other systems.
Towards the operating system, Erlang usually runs one thread per core in the machine. Each of these threads runs what is known as a scheduler. This is to make sure all cores of the machine can potentially do work for the Erlang system. The cores may be bound to schedulers through the +sbt flag, which means the schedulers will not “jump around” between cores. It only works on modern operating systems, so OSX can’t do it, naturally. It means that the Erlang system knows about the processor layout and the associated affinities, which is important due to caches, migration times, and so on. The +sbt flag can often speed up your system, and at times by quite a lot…
I’ve been wanting to add more scripting capabilities to NGINX for a long time. Scripting lets people do more in NGINX without having to write C modules, for example. Lua is a good tool in this area, but it’s not as widely known as some other languages.
This is another milestone in the development of NGINX open source software and NGINX Plus. I want to take the opportunity to explain what nginScript is, describe why it’s needed, share some examples, and talk about the future.
Launching nginScript and Looking Ahead
Agera (Swedish for “to act”) is a super lightweight Android library that helps prepare data for consumption by the Android application components (such as Activities), or objects therein (such as Views), that have life-cycles in one form or another. It introduces a flavor of functional reactive programming, facilitates clear separation of the when, where and what factors of a data processing flow, and enables describing such a complex and asynchronous flow with a single expression, in near natural language.
In this post I’d like to test the limits of Python’s aiohttp and check its performance in terms of requests per minute. Everyone knows that asynchronous code performs better when applied to network operations, but it’s still interesting to check this assumption and understand how exactly it is better and why. I’m going to check it by trying to make 1 million requests with the aiohttp client. How many requests per minute will aiohttp make? What kinds of exceptions and crashes can you expect when you try to make such a volume of requests with very primitive scripts? What are the main gotchas you need to think about when trying to make such a volume of requests?
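The shape of such a benchmark is usually a large `asyncio.gather` with a semaphore capping in-flight requests. The sketch below shows that pattern using only the standard library; `fetch` stands in for a real `aiohttp.ClientSession.get` call, and the names and limits are illustrative, not taken from the post:

```python
import asyncio

async def fetch(i, sem):
    # Placeholder for an aiohttp request; the semaphore bounds how many
    # requests are in flight at once, which is the main defense against
    # file-descriptor exhaustion when firing off huge batches.
    async with sem:
        await asyncio.sleep(0)  # the real network call would go here
        return i

async def run(n, limit):
    sem = asyncio.Semaphore(limit)
    # gather preserves the order of its arguments in the result list
    return await asyncio.gather(*(fetch(i, sem) for i in range(n)))

results = asyncio.run(run(1000, 100))
print(len(results))
```

Scaling `n` toward a million is where the interesting failures (timeouts, connection resets, OS limits) start to appear.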