There have been several good talks about using Haskell in industry lately, and several people have asked me to write about my personal experiences. Although I can’t give specific details, I will speak broadly about some things I’ve learned and experienced.
The myths are true. Haskell code tends to be much more reliable, performant, and easy to refactor, and it is easier to integrate with coworkers’ code without too much thinking. It’s also just enjoyable to write.
The myths are sometimes truisms. Haskell code tends to be of high quality by construction, but for several reasons that are merely correlated with, not caused by, the technical merits of Haskell. Just by virtue of the language being esoteric and having a relatively high barrier to entry, we end up working with developers who would write above-average code in any language. That said, the language actively encourages thoughtful consideration of abstractions and a “brutal” (as John Carmack noted) level of discipline that high-quality code in other languages would require voluntarily, but that Haskell enforces.
Prefer to import libraries qualified. This is typically considered good practice for business-logic libraries, as it makes it easier to locate the source of symbol definitions. The only point of ambiguity I’ve seen is disagreement amongst developers on which core libraries are common enough to import unqualified and how to handle clashing symbols. This ranges the full spectrum from fully qualifying everything (Control.Monad.>>=), to qualifying common things like (Data.Maybe.maybe), to just disambiguating names that would otherwise clash.
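That spectrum can be sketched in a single import list. This is only an illustrative example (the function name lookupOrZero is made up for the demonstration), showing the middle-ground style most teams settle on: alias the clash-heavy libraries, and import only the specific names you need from everything else.

```haskell
-- Fully qualified: every use site spells out the module.
import qualified Control.Monad

-- Qualified under a short alias: the usual choice for libraries such as
-- containers, whose names (lookup, filter, map) clash with the Prelude.
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

-- Unqualified, importing only the handful of names actually used.
import Data.Maybe (fromMaybe)

-- A hypothetical helper: the "Map.lookup" call site makes its origin obvious.
lookupOrZero :: String -> Map String Int -> Int
lookupOrZero k m = fromMaybe 0 (Map.lookup k m)

-- A quick sanity check of the helper above.
check :: Bool
check = lookupOrZero "a" (Map.fromList [("a", 1)]) == 1
     && lookupOrZero "b" (Map.fromList [("a", 1)]) == 0
```

The alias style pays off at the call site: Map.lookup is unambiguous at a glance, while a bare lookup forces the reader to scan the import list.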
Consider rolling an internal prelude. As we’ve all learned the hard way, the Prelude is not your friend. The historical consensus has favored the “Small Prelude Assumption,” which presupposes that tools get pushed out into third-party modules, even the core tools that are necessary to do anything (text, bytestring, vector, etc.). This makes life easier for library authors at the cost of some struggle for downstream users…
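A minimal sketch of what such an internal prelude can look like, assuming a hypothetical module name MyPrelude: re-export the parts of base you actually want, pull in the de-facto-core types the paragraph above mentions, and replace a partial function with a total one. Application modules then start with a single import MyPrelude instead of the default Prelude.

```haskell
{-# LANGUAGE NoImplicitPrelude #-}
-- "MyPrelude" is a hypothetical name; pick whatever fits your project.
module MyPrelude
  ( module Prelude   -- re-export base's Prelude, minus what we hide below
  , Text
  , ByteString
  , Map
  , headMay
  ) where

-- Hide the partial function we replace with a total variant.
import Prelude hiding (head)

-- The de-facto-core types that the small Prelude pushes out to libraries.
import Data.Text (Text)
import Data.ByteString (ByteString)
import Data.Map.Strict (Map)

-- A total replacement for the partial Prelude head.
headMay :: [a] -> Maybe a
headMay []      = Nothing
headMay (x : _) = Just x
```

Downstream modules would enable NoImplicitPrelude project-wide (e.g. via default-extensions in the cabal file) and write import MyPrelude at the top, so the whole team agrees on one vocabulary by default.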