Revisiting The Phoenix Project

In one of my first posts on this blog, I reviewed a wonderful book called The Phoenix Project. The key lesson I got from the book then was that work in progress is a killer. Work in progress, that is, anything to which you have given time or effort, but which has yet to give you anything back, kills productivity. Another way to think of this is that productivity is hurt by starting some things before we finish others. (Look at it this way: If we always start a new task, and never work on an old task, we will finish nothing. If we always work on our nearest-to-completion task, and empty the queue before starting new things, we will finish everything, but may start new things only slowly. The best way to manage time is in between, but certainly closer to the second.)

The idea that we should finish things is hardly shocking. Joel Spolsky has written about it. It was the topic of a recent episode of .NET Rocks, and John Sonmez recently tweeted:

“Of all the habits you have developed in life, which is most beneficial? Mine is finishing things.”

For this post I am going to take it as given that finishing old tasks before starting new ones is better than starting new ones all the time without finishing anything. What I want to look at instead is why it’s hard. Why is it hard for me to finish things I start? What should I even finish? What should I even start?

MPJ recently did an awesome video about the difficulty of focus. The hard thing people never want to admit is that focusing on something means not focusing on something else. If we want to focus on making the user interface prettier, we are going to spend less time making data retrieval faster. Time allocations are a zero-sum game. To give time you must take time. So what should we give time to, and what should we take time from? That question seems hopelessly broad, so I’m going to try narrowing it to a more manageable focus: What should I start?

In The Phoenix Project, the IT department learns to start meeting deadlines by limiting the release of work. That is, they start to limit the rate at which they take on new projects, and this keeps them from blowing deadlines on projects they’ve already committed to. I seem to already have 43 side projects in flight, so maybe I shouldn’t take on any new ones for a bit. Maybe to start a project, I need to reconcile myself to abandoning an existing project. That would be hard, but the alternative seems to be always having 43 partially completed, messy codebases on my hard drive, which doesn’t really help anyone.

This post has turned into a long-winded and sort of depressing way of reaching this message: Finishing things is important, and it requires not starting new things till the old things are done. Queues are better than stacks for time management.
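
To make the queue-versus-stack point concrete, here is a toy C# sketch (the task names are made up): a queue always finishes the oldest task next, while a stack always grabs whatever was pushed most recently, so old work can starve.

    using System;
    using System.Collections.Generic;

    class TaskOrderDemo
    {
        static void Main()
        {
            // Queue (FIFO): the oldest unfinished task is always the next one done.
            var queue = new Queue<string>(new[] { "old side project", "blog post", "shiny new idea" });
            Console.WriteLine($"Queue finishes first: {queue.Dequeue()}"); // old side project

            // Stack (LIFO): every new arrival jumps the line, so old work waits.
            var stack = new Stack<string>(new[] { "old side project", "blog post", "shiny new idea" });
            Console.WriteLine($"Stack finishes first: {stack.Pop()}");     // shiny new idea
        }
    }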

(Speaking of time management, @SteveAlbers and @jsonmez recommended KanbanFlow. It’s awesome. Check it out.)

Starting things is easier than finishing them, but finishing them is more important. I guess that’s a hard lesson.

Till next time, happy learning

-Will

Motivational Books are Motivating

Earlier this year I read Soft Skills by John Sonmez. It’s a book about the non-technical (i.e. “soft”) skills that contribute to one’s career as a software developer. The subtitle, “The Software Developer’s Life Manual,” is no exaggeration: the book covers finance, fitness, nutrition, job interviews, salary negotiation, and so on.

Reading Soft Skills was a fantastically motivating experience. I found myself more interested in my future and motivated to improve it than I ever have been. There is a lot of advice in the book. Some of it, such as particular tips on job interviews and salary negotiations, I haven’t tested out yet because they simply haven’t come up. Others, like trying out audiobooks and starting a blog, I have tried. I’ll limit this review to commenting on things I’ve actually attempted, rather than guessing about the things I haven’t tried. Test everything and keep the good, as they say.

Recommendation 1: motivational books are motivating.


Three Facts about ASP.NET Core

Last Tuesday, Jeff Fritz (@csharpfritz), a program manager on the ASP.NET team at Microsoft, spoke to the Pittsburgh .NET User Group about ASP.NET Core. After the meetup, I had a chance to speak to Jeff, and I asked him what he wished people knew about ASP.NET Core. These are my words, based on the notes I took during our conversation. Here are three things Jeff Fritz wants to get out there about ASP.NET Core:

1. It’s Built on Stuff You Know

The first thing I learned from Jeff’s talk is that my early interactions with ASP.NET Core had a little friction because of missing tools, not missing ideas. In other words, I was trying to do a familiar thing (write some ASP.NET web applications) in an unfamiliar way. That unfamiliarity creates the illusion of difficulty; the cure is to focus on what we do know and use the asp.net docs website to fill in the gaps.
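
For example, here is a minimal sketch of my own (the HomeController below is an illustration, not something from Jeff’s talk): an ASP.NET Core MVC controller reads almost exactly like the ASP.NET MVC controllers we already write.

    using Microsoft.AspNetCore.Mvc;

    // Same controller/action shape as classic ASP.NET MVC; the namespace and the
    // IActionResult return type are the most visible differences.
    public class HomeController : Controller
    {
        public IActionResult Index()
        {
            return View();
        }
    }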

MVC is still the basic pattern. C# syntax is the same (and getting better all the time). We still have Json.Net. In short, the fact that I had trouble finding my way around the new project layout in Visual Studio is a difficulty that’s both superficial and temporary. The tooling will come, and when it does I’ll learn it. And speaking of tooling:

2. Visual Studio 15 Will Help a Lot

Visual Studio 2015 (the current version) shipped before ASP.NET Core was final, but Visual Studio 15 (the new version) is being designed with ASP.NET Core in mind. (Also, VS15 is available in preview.) I think that having well-thought-out menus and templates in the new Visual Studio will eliminate a lot of friction. Visual Studio 15 Preview 5 already makes ASP.NET Core easier to work with than the previous tools I’d used, and I look forward to seeing the experience improve. (Also, an aside: Microsoft has been very responsive to feedback about VS15. I asked a question on Twitter about the licensing and received a response within hours; I opened a small bug report and someone started investigating it within days. Thanks, guys!)

[Screenshot: dotnet new]

(This is a screenshot of using the command-line tool, dotnet.exe, to start a blank .NET Core application.)

3. Core is Converging with the Framework

This means that the difference in available functionality between different implementations of .NET will go down. Right now, if I want to write some C# that can be used in 2 places, say in a Xamarin Android app and a Windows 10 desktop app, one option is a portable class library. The limitation is that a portable class library is always a subset – the intersection of the features available in each of the targeted runtimes. My portable class library will only have features that are available on both Xamarin Android and Windows 10. The pressure of diverging .NET implementations tends to reduce compatibility between the runtimes.

But the .NET Standard inverts this pressure – it defines a standard that any .NET runtime should meet, and then the implementers of the runtime and libraries on each platform can push to meet that standard. In other words, the runtimes will be converging on a standard, and developers can target the standard, instead of targeting the intersection of diverse and divergent feature sets.
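
To put that in code terms (my own toy example, not from the talk): a class library that targets .NET Standard can contain ordinary C# like this, and a Xamarin Android app and a Windows desktop app can both reference the same compiled library, because each runtime promises to implement at least the standard.

    using System.Linq;

    // Lives in a class library that targets .NET Standard (instead of a portable
    // class library profile), so any runtime that implements the standard
    // (Xamarin Android, the desktop framework, .NET Core) can reference it.
    public static class NameFormatter
    {
        // NameFormatter.ToInitials("ada lovelace") == "AL"
        public static string ToInitials(string fullName) =>
            string.Concat(fullName.Split(' ')
                                  .Where(word => word.Length > 0)
                                  .Select(word => char.ToUpper(word[0])));
    }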

After this talk, I am excited. I can’t wait to start using ASP.NET Core.

Till next time, happy learning!

-Will

Right Little Thing: Second 90%

The “Right Little Thing” is a category of posts I use to share quotes that have really helped me, and to explore why. (They’re called the right little thing because they remind me of the moment when a teacher says the right little thing, and suddenly the student gets it. My first post in this category has more info.)

Today I was listening to Developer on Fire, which is a fantastic podcast that I recently discovered, and I heard an interview with Daniel Moore, the creator of HyperDev, which is an awesome product I recently discovered.

Amid all this awesome, Daniel said something in his interview that really struck me. He said:

When you’re 90% done, then it’s time to start working on the second 90%.

Daniel uses this to capture the fact that the innovative, interesting, solving-cool-problems-with-code part of software is not enough to make a shippable product. If you want to make something people will pay you for, you have to be prepared to put in the “second 90%.”

I’ve recently been contemplating all the half-abandoned side projects I have, and thinking about finishing them up, and I think part of what happened with these projects is that I didn’t budget for the second 90%.

This saying can also help a lot of project managers. I know I’ve tripped myself up at work by telling a boss I was almost done with something, when I was really almost done only with the problems that I knew about. I could have saved myself a lot of estimate-revising and a little face by saying, “I’m about to start the second 90% of the project.”

This principle reminds me of Hofstadter’s Law:

It always takes longer than you expect, even when you take into account Hofstadter’s Law.

I think that’s the whole lesson this week: in work and in life, remember to account for the second 90%; when you’re 90% done with what you think a project will need, roll up your sleeves and get ready to do the second 90% of the project.

Till next time, happy learning,

-Will

Those Prophetic Compiler Advocates

The other day I was reading the Internet, and I stumbled across a few posts advocating that one should write a compiler as a side project, because it will teach a person so much computer science. Despite already having 43 side projects, I decided those posts were really, really convincing, and started fiddling with a compiler right away. (If it ever gets into a state where I’d want other people to see it, I’ll put it up on GitHub and put a link here.)

In particular, Steve Yegge advocates writing compilers as a way of learning some of the really important things in computer science.

After I read that post, I started thinking about all the weird things I’ve worked on that are really a lot like compilers. For example, I worked on a project that allowed users to make tagged email templates, so that they could write things like “Dear {contact:firstname}” and it would come out as “Dear Frederick” or whatever when they sent the email. To get this working, I needed to compute the smallest set of queries that would let us populate all the little variable fields in the template. To do this, we split the string into tokens, build a tree out of the tokens (where the contact table is the root, intermediate nodes are tables we’re joining to, and leaves are columns we’re fetching data from), and emit a different string, composed by arranging the tree’s nodes. The new string is a query we can send off to the database. It’s a weird-proprietary-email-language to weird-proprietary-query-language transpiler, but at the end of the day it still tokenizes, parses, builds a tree, and emits code for consumption by some other program. For some value of “compiler,” it’s a compiler.
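
Here is a heavily simplified sketch of that tokenize-parse-emit pipeline. The names and the flat grouping-by-table are my own invention for illustration; the real project built a proper join tree rooted at the contact table.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text.RegularExpressions;

    static class TemplateCompiler
    {
        // "Tokenize": find tags of the form {table:column} in the template text.
        static readonly Regex Tag = new Regex(@"\{(\w+):(\w+)\}");

        // "Parse" and "emit": group the fields by table and produce one query per
        // table, fetching only the columns the template actually uses.
        public static IEnumerable<string> EmitQueries(string template)
        {
            var fields = Tag.Matches(template)
                            .Cast<Match>()
                            .Select(m => (Table: m.Groups[1].Value, Column: m.Groups[2].Value));

            foreach (var group in fields.GroupBy(f => f.Table))
            {
                var columns = string.Join(", ", group.Select(f => f.Column).Distinct());
                yield return $"SELECT {columns} FROM {group.Key}";
            }
        }
    }

    class TemplateDemo
    {
        static void Main()
        {
            var template = "Dear {contact:firstname} {contact:lastname}, your {order:total} is due.";
            foreach (var query in TemplateCompiler.EmitQueries(template))
                Console.WriteLine(query);
            // SELECT firstname, lastname FROM contact
            // SELECT total FROM order
        }
    }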

I was working on a similar project the other day. Basically, some queries should be done through a REST API, and some queries can be done by calling SQL Server directly. The API is easier to call, because it’s part of an existing SDK and someone else already wrote the complicated code, but calling SQL is a lot faster, so we want to call SQL when the query is simple enough that we’re confident of getting the SQL right. So we need to, guess what, convert the filter expressions and column sets and whatnot from the query object into some kind of tree, walk the tree, decide whether we can emit SQL, and if so emit SQL. Again, this is a lot like a compiler.
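
And here is a hedged sketch of the second project, with invented filter types: walk a small filter tree and return SQL only when every node is something we are confident we can translate; anything unrecognized makes the whole query fall back to the REST API.

    using System;

    // Invented, minimal filter tree for illustration.
    abstract class Filter { }
    sealed class EqualsFilter : Filter { public string Column { get; set; } public string Value { get; set; } }
    sealed class AndFilter : Filter { public Filter Left { get; set; } public Filter Right { get; set; } }
    sealed class CustomFilter : Filter { }  // something only the REST API understands

    static class QueryPlanner
    {
        // Returns a WHERE clause if the whole tree is translatable, otherwise null
        // (meaning: route this query through the API instead).
        public static string TryEmitSql(Filter filter)
        {
            switch (filter)
            {
                case EqualsFilter eq:
                    return $"{eq.Column} = '{eq.Value}'";   // real code would parameterize
                case AndFilter node:
                    var left = TryEmitSql(node.Left);
                    var right = TryEmitSql(node.Right);
                    return (left != null && right != null) ? $"({left}) AND ({right})" : null;
                default:
                    return null;
            }
        }
    }

    class PlannerDemo
    {
        static void Main()
        {
            var filter = new AndFilter
            {
                Left = new EqualsFilter { Column = "Status", Value = "Open" },
                Right = new EqualsFilter { Column = "Owner", Value = "Will" }
            };
            Console.WriteLine(QueryPlanner.TryEmitSql(filter) ?? "fall back to the REST API");
            // (Status = 'Open') AND (Owner = 'Will')
        }
    }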

I noticed both of these projects’ similarities to compiler building after reading Steve Yegge’s post. It’s weird how slowly computer science changes, in some ways. I mean, compilers have been on computer science curricula pretty much forever (at least, forever in Internet years), and it’s 2016.

But there you have it. Being able to build a simple compiler that takes in strings and emits instructions is still super important. Turns out those prophetic compiler advocates were onto something.

So, if you really want to do something with computer science that will drive your skills forward, consider building a compiler. I hear the “dragon book” (so named for its cover art) is the one to read, and it’s definitely on my reading list for the future. When I get through it, I’ll make sure to post a review.

Till next week, happy learning,

-Will

Why is There Churn?

 

Uncle Bob recently wrote a post about what he calls “The Churn”. By “The Churn” Uncle Bob means, I think, pointless innovation. Or rather, the belief that innovation puts us on a monotonically increasing trajectory called “progress.” In other words, the churn is the belief that “the next big thing” will solve all our problems and be better than the current thing.

I think Uncle Bob has a good point: adopting functional programming, or agile, or a better git workflow, or whatever, will not automatically make us successful or solve all our problems. After all, writing software is hard; at the end of the day, we have to exhaustively specify the behavior of a complex system, and that specification must be complex or it won’t be exhaustive. (This point is also from Uncle Bob, in Clean Code.)

So given that there is churn – people continually invent and learn new techniques in the face of diminishing returns – and that at least some of this time is wasted, why do people do it? Whence this temptation to innovation rather than mastery?

I think the first reason that people are tempted to innovate rather than master something is that it’s frankly easier to start things than to finish them. Here’s an example: The other day I installed a Haskell interpreter and wrote a “hello world” and a program that printed out the first N square integers, or something equally useless and first-project-y. And it was easy! I don’t think it took an hour. “But,” you might say, “Haskell is hard!” And you would be right, but starting Haskell is easy. Haskell doesn’t get hard until you want to be good at it.

Now, I have some unfinished side projects, but I spent time learning three Haskell commands because it was easier. I think that is the first reason for the churn. I know C# syntax pretty well at this point. If I want to get better at C#, I need to do some serious thinking and reading; I need to solve hard problems in the language and then refactor those solutions to be more readable. And that would be a lot of work. I’ve already picked a lot of the low-hanging fruit, and what’s left are gross, small apples at awkward heights. But with Haskell, heck, I don’t even know how to read files off the disk! If I spent a weekend playing with Haskell, I would get 300% better. If I spent the same weekend playing with C#, I would only get 3 or 4% better. So Haskell is more attractive, because starting out in Haskell is easier than mastering C#.

So perhaps reason 1 for the churn is that innovation is easier than mastery. I think another reason that people churn is that in languages we know well, we know where all the annoying problems are, but in languages we don’t know, we don’t know where the problems are. For example, in C# I’ve seen libraries where someone’s use of inheritance made a simple change really difficult to implement, and the C# regex API is pretty annoying. Also, you can write concurrency bugs in C# pretty easily if you’re not paying attention. I don’t even know whether Haskell has regular expressions. I hear it’s better at concurrency because of immutable state and lazy evaluation, but I couldn’t swear to it because I’ve never done anything non-trivial in Haskell.

I think then the second reason for the churn is that an unknown quantity of unknown problems seems smaller than a known quantity of known problems. Of course, the unknown problems might be smaller, but that’s a guess. By definition, we don’t know whether the known problems are smaller than the unknown problems. So, like Hamlet, rather than face our current confusing regex APIs, we “fly to others that we know not of.”

And I think the third reason is peer pressure. Look at the programmers on Twitter sometime. Everyone (rather, everyone who wants to share) is writing some sort of functional microservice that runs in Docker containers on scalable cloud infrastructure. And who wants to be the guy who writes line-of-business apps in a language that’s ten years old! It’s not even containerized, duh. No one wants to feel left out of a great party, so we tend to follow trends; the people who set the trends are the people who brag on Twitter about what they’re doing, and the people who brag on Twitter about what they’re doing are doing new things. So we do new things.

I think maybe I’ve answered the question: Programmers churn because we’re afraid of being left out of cool new technologies, we prefer (perhaps mistakenly) solving unknown problems to dealing with known problems, and we find it easier to race through the simple parts of learning a new language or framework than to slog through the mire of learning the edge cases and idiosyncrasies of a language and framework we already know.

Hmm. Maybe the next big thing won’t solve all our problems after all.

Till next time, happy learning.

-Will

 

What is Practice, Anyway?

Grit: The Power of Passion and Perseverance by Angela Duckworth asserts that in life, effort counts twice. The way she illustrated this point was with a pair of mathematical expressions:

  1. skill = talent * effort
  2. performance = skill * effort
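
Substituting the first expression into the second makes the “counts twice” claim explicit:

  performance = skill * effort = (talent * effort) * effort = talent * effort²

Talent shows up once, but effort shows up twice.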

In other words, part of her thesis is that because practice uses effort to build skill, and then performance uses effort to employ skill, effort is a better predictor of success than talent is. She also argues that grit, a term which in her book means a combination of focus and tenacity, is a predictor of sustained effort, and therefore a predictor of success. I recommend the book; her argument convinced me.

So I sat down at my computer to exercise some grit. But I immediately came up against a difficulty in applying Duckworth’s formulas: When am I practicing, and when am I performing? Some of her examples in the book were athletes or musicians, people in two occupations with the sharpest possible distinction between practice and performance. But I don’t have a programming equivalent of running laps or playing scales. I mean, it’s really not clear to me what practice would be for a programmer.

Thinking of scales reminds me of my first piano teacher. “What are we doing when we play scales?” she would ask, “We’re thinking about what we should think about. When you’re playing scales, you should try to have perfect posture, you should try to have perfect tempo, you should try to have perfect dynamics.” In other words, she was telling me, “while you’re playing scales, you should focus on some aspect of your musicianship critically. Did that C and that D have the right volume, or did I accidentally play the C too loud? When I get to the top of the scale, where my index finger through my little finger go down in order, did I accidentally speed up?”

Duckworth recommends this sort of practice. One of the things she talks about in her book is the mindfulness of deliberate practice. In other words, great sprinters don’t just run around the track over and over again; they pick some aspect of their stride and try to improve it each time they run. They practice starting. They set concrete goals and measure their progress against them.

Athletics and music are somewhat dissimilar to programming in that they have clearly delineated times of practice and performance. I don’t spend hours a day refining my technique for writing software, and then write software for one glorious hour once a month; I write software all day, most days.

But there’s an important similarity: They both benefit from deliberate practice. That is, they both benefit from choosing a goal and exercising towards that goal. And I think it’s important that the goal be specific. After all, for piano, playing a scale “well” is not some abstract thing. It means, specifically, that you played with dynamic control (you had some deliberate shape to the loudness or quietness of the notes), that the tempo was even, that your posture was good, that your fingerings were correct. Do your ring finger and little finger strike the keys as firmly as your stronger fingers? Is your off hand keeping up with your dominant hand?

So this week, instead of trying to write code that is “good,” which is hard to measure and therefore impossible to practice, I’m going to try to write code that has clear variable and method names. I’ll still get the other parts of my job done, still get features written and bugs fixed, still test my changes and check in to version control, but I will add to my usual duties deliberate practice in naming things. I will try to ask myself, “Will anyone have to go to the definition of this field or property to figure out what it is? If someone sees only the signature of this method, will they be able to decide whether to call it?”
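
For instance, here is a made-up C# example (not from my actual codebase) of the kind of rename I have in mind:

    public class Order { public decimal Total { get; set; } }

    public class BillingService
    {
        // Before: the reader has to open the body to learn what "Process" does
        // and what the bool controls.
        public bool Process(Order o, bool flag) => ApplyDiscount(o, flag);

        // After: the signature alone is enough to decide whether to call it.
        public bool TryApplyDiscount(Order order, bool recalculateTax) => ApplyDiscount(order, recalculateTax);

        private bool ApplyDiscount(Order order, bool recalculateTax)
        {
            // (the real discount logic would live here)
            return order.Total > 0;
        }
    }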

After I focus on naming things for a few weeks, I’ll shift focus to something else. I hope to take a small aspect of the code I write, and try to improve on it.

Do you have any suggestions for what to focus on while you code? Please leave a comment!

Till next time, happy learning,

 

-Will