I once knew a skilled carpenter. He had a saying about screwdrivers and drills and such:
It’s not the tool, it’s the fool.
What he meant is that a master carpenter with poor-quality tools is better than a novice carpenter with high-quality tools. His particular example was to say, “A master mechanic can take apart and repair the entire engine room on a ship with a flathead screwdriver and an adjustable crescent wrench. If you’re blaming your tools, you should be blaming the fool who uses them instead.”
The other day I was at a Meetup for the Rust programming language, run by the founders of a Rust-focused tech consulting startup. I really enjoyed it and I learned a lot, but it got me thinking about what Uncle Bob has called the Type Wars.
Mark Seemann wrote a fantastic articulation of the strict-language argument. He essentially argues that, out of the set of all possible programs, most are very bad and obviously wrong. Therefore, languages with strict rules produce better programs, because fewer of the really bad programs are allowed to exist at all.
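To make that concrete, here is a minimal Rust sketch of the idea (the example itself is mine, not Seemann’s). The program below is obviously wrong, and a strict compiler refuses to accept it, so it never even enters the set of programs you could run:

```rust
fn main() {
    // An "obviously wrong" program: the declared type and the value
    // disagree. rustc rejects this at compile time ("mismatched types:
    // expected `u32`, found `&str`"), so this program is excluded from
    // the search space before it can ever run.
    let age: u32 = "thirty-two";
    println!("{}", age);
}
```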
Anyway, this got me thinking:
Can I make an analogy between writing programs and searching for something in a finite space? In other words, if we do a thought experiment where the programmer is searching for the right program in the set of all programs that his language will allow, can we gain any useful insight into programming language design? I like this approach because speeding up a search algorithm is much easier to reason about than the entire process of creating software, and the analogy is not that terrible: programmers often consider and reject many alternatives before they are happy with a piece of code, and that is a little bit like a search algorithm.
Mark Seemann and others want to optimize the search for good programs by shrinking the search space. In other words, they argue that if there are fewer valid-but-flawed programs, the search for valid, non-flawed programs will go faster. Based on our analogy, they’re right. If you’re looking for a specific book somewhere in a library, good luck. But if you know something about the book, such as the author or the call number the library gave it, you might be able to find it quickly. Shrinking the legal search space is a smart optimization for search algorithms.
If we really push this analogy, the question becomes this: does shrinking the search space help more than the picky rules that slow a programmer down hurt? I don’t know the answer to that question, but it is definitely worth thinking about.
My personal preference is usually for a rigid language. Eric Lippert says that one benefit of these rigid languages (my term) is that the language
“… is a tool for producing a partial proof that the program is correct, or at least not broken in some obvious, detectable way.”
I find this partial proof comforting. It makes the number of things I have to worry about smaller.
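As a small illustration of what that partial proof buys you, here is a hedged Rust sketch (the `first_word` function is my own invented example, not Lippert’s): the compiler refuses any version of the `match` that forgets the `None` case, so one whole class of broken programs is proved away before the code ever runs.

```rust
// A hypothetical helper, invented purely for illustration.
fn first_word(text: &str) -> Option<&str> {
    text.split_whitespace().next()
}

fn main() {
    // The compiler demands that both cases be handled; a `match`
    // that omits `None` is rejected at compile time, so the
    // "forgot the empty case" bug is impossible in this program.
    match first_word("   ") {
        Some(word) => println!("first word: {}", word),
        None => println!("no words found"),
    }
}
```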
I also think that a specific subset of the rigid languages, namely the functional languages, has a distinct advantage here: they are better at dealing with concurrency.
Basically, concurrency is the set of programming problems that arise when more than one process is acting at the same time. Because a pure function’s output depends only on its inputs, and not on the outside state of the world, pure functions are less risky than other constructs in situations where many processes are acting at once. I think we might reach the point where functional programs, which rely far less on stored, mutable state than other programs do, are just better for dealing with the kinds of problems we have.
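Here is a minimal Rust sketch of that point (the `square` function is invented for illustration): because the function’s result depends only on its argument, several threads can call it at the same time with no locks and no shared state to corrupt.

```rust
use std::thread;

// A pure function: its output depends only on its input, never on
// outside state, so concurrent calls cannot interfere with each other.
fn square(n: u64) -> u64 {
    n * n
}

fn main() {
    // Spawn four threads, each computing independently.
    let handles: Vec<_> = (1u64..=4)
        .map(|n| thread::spawn(move || square(n)))
        .collect();

    for handle in handles {
        // No mutex, no shared mutable state: each result was
        // computed from the thread's own input alone.
        println!("{}", handle.join().unwrap());
    }
}
```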
Still, I think the size and shape of the search space depend more on the problem domain than on anything else. Perhaps the type wars end in stalemate, and it’s really about the fools writing the code.
Till next week, happy learning!