“Parallelism and concurrency are both very fashionable notions. Lots of languages and tools are advertised as good at these things – often at both things.
I believe that concurrency and parallelism call for very different tools, and each tool can be really good at either one or the other. To oversimplify: …”
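As a minimal sketch of the concurrency side of that distinction, here is a small program using only `base`'s `forkIO` and `MVar`: a worker thread runs as an independent thread of control and hands a result back. (Parallelism for speed would reach for different tools, e.g. `par` from the `parallel` package; the helper name `waitForWorker` is ours, not from the quoted post.)

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Concurrency: forkIO starts an independent thread of control,
-- and an MVar is the synchronisation cell that hands its result back.
waitForWorker :: IO String
waitForWorker = do
  done <- newEmptyMVar                           -- empty cell
  _ <- forkIO (putMVar done "worker finished")   -- worker fills it
  takeMVar done                                  -- block until it does

main :: IO ()
main = waitForWorker >>= putStrLn
```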
“I’ve been having a lot of fun learning Haskell these past few months, but getting started isn’t quite as straightforward as it could be. I had the good fortune to work at the right place at the right time and was able to take Bryan O’Sullivan’s Haskell class at Facebook, but it’s definitely possible to get started on your own. While you can play a bit with Haskell at Try Haskell, you’ll eventually want to get GHC installed on your own machine…”
“It’s often claimed that learning Haskell will make you a better programmer in other languages. I like the idea that there’s no such thing as a good programmer, just a programmer who follows good practices. As soon as we stop following good practices we suck again. So, Haskell must introduce and indoctrinate better practices that we carry back to our other languages. Right? I think it’s true but it’s not obvious, so I’ve written this article to outline some of the habits and practices that I think changed after I used Haskell for a while…”
“These are the course materials for the Stanford Computer Science class CS240h, Functional Systems in Haskell, which we are teaching for the first time during the Fall term of 2011…”
“After the last post about high performance, high level programming, Slava Pestov, of Factor fame, wondered whether it was generally true that “if you want good performance you have to write C in your language”. It’s a good question to ask of a high level language.
In this post I want to show how, often, we can answer “No”: by working at a higher level of abstraction, we can get the same low-level performance, exploiting the fact that the compiler knows a lot more about what our code does. We can teach the compiler to better understand the problem domain, and in doing so, enable it to optimise the code all the way down to the raw assembly we want.
Specifically, we’ll exploit stream fusion — a new compiler optimisation that removes the intermediate data structures produced in function pipelines, yielding very good code while encouraging a very high-level coding style. The best of high-level programming, combined with the best of raw low-level throughput…”
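To illustrate the pipeline style the post has in mind, here is a tiny sketch (using plain lists, where GHC applies the related foldr/build fusion; the `vector` library applies stream fusion to the same style). The function name is ours, chosen for illustration:

```haskell
-- On paper this is three list traversals (filter, map, sum), each
-- producing an intermediate list; fusion compiles the whole chain
-- into a single loop with no intermediate structures.
sumSqEvens :: Int -> Int
sumSqEvens n = sum (map (^ 2) (filter even [1 .. n]))
```

The point is that the programmer writes the composable, high-level version and the compiler produces the tight loop.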
“Ever wondered how functional programmers think? I aim to give you a glimpse into the programming style and mindset of experienced functional programmers, so you can see why we are so passionate about what we do. We’ll also discuss some wider ideas about programming, such as making our languages fit the problem and not the other way round, and how this affects language design…”
“What’s better than programming a GPU with a high-level, Haskell-embedded DSL (domain-specific-language)? Well, perhaps writing portable CPU/GPU programs that utilize both pieces of silicon—with dynamic load-balancing between them—would fit the bill.
This is one of the heterogeneous programming scenarios supported by our new meta-par packages. A draft paper can be found here, which explains the mechanism for building parallel schedulers out of “mix-in” components. In this post, however, we will skip over that and take a look at CPU/GPU programming specifically.
This post assumes familiarity with the monad-par parallel programming library, which is described in this paper…”
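For readers without that familiarity, a minimal sketch of `monad-par` style (assuming the `monad-par` package's `Control.Monad.Par` API; the helper name `parSumTwo` is ours):

```haskell
import Control.Monad.Par (runPar, spawnP, get)

-- Evaluate two sums as parallel tasks, then combine the results.
-- spawnP forks a pure computation; get blocks until its IVar is filled.
parSumTwo :: [Int] -> [Int] -> Int
parSumTwo xs ys = runPar $ do
  a <- spawnP (sum xs)
  b <- spawnP (sum ys)
  x <- get a
  y <- get b
  return (x + y)
```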
“Many books and articles about Haskell start by introducing some esoteric example (quicksort, Fibonacci, etc.). I will do the exact opposite. At first I won’t show you any Haskell superpower; I will start with the similarities between Haskell and other programming languages. Let’s jump into the obligatory “Hello World”…”
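The obligatory first program the excerpt leads into, for completeness:

```haskell
-- main is the program's entry point; putStrLn writes a line to stdout.
main :: IO ()
main = putStrLn "Hello, world!"
```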