It’s impossible to measure developer productivity. At least, that’s what the experts say. Martin Fowler came to that conclusion over 10 years ago in a classic article, and the consensus hasn’t changed since then. My favorite recent article on the subject is by Jim Bird, a development manager and CTO.

So let’s take that as a given: Measuring developer productivity reliably and objectively is a hard problem, maybe an impossible one. But rather than rehash the standard arguments, I’m going to change the rules a bit.

## Why It’s a Hard Problem

There are two related challenges that come up in discussions of measuring developer productivity from a manager’s perspective:

• You have to choose your metrics carefully, and base them on what you want people to focus on. For example, if you measure hours worked, people may decide to work longer days (at least in the short term). Is that really what you want?

• Eventually people will use your metrics as an opportunity to game the system. One classic example is measuring the number of bugs fixed. Guess what: developers can add bugs as well as remove them.

The root cause of these challenges is that the person doing the measuring has different goals from the people doing the work. The person doing the measuring is a manager, who may want to use measurements for good reasons like creating accurate schedules and improving team practices, and not so good reasons like evaluating people. The person doing the work is a developer, whose goals may include working on interesting projects, learning new technologies, and leaving work at a reasonable time. At best, developers will see measurements as an annoyance, the price to pay for working for the Man. At worst, they will actively thwart the goals of the measurement process in order to advance their own goals.

Rather than addressing the topic of developer productivity in general, I’m going to cover a more specific subject: measuring your own productivity. This has the same benefits as measuring other people’s productivity: you can use it for scheduling, improving work processes, and evaluating your progress. But most of the drawbacks are no longer relevant.

For example, let’s say you’re measuring your productivity by counting how many lines of code you write per day. Lines of code is an infamous productivity metric (see this question with 100+ answers on Quora). But much of its infamy has to do with managers using the numbers blindly and developers gaming the system. If you’re measuring your own productivity, you don’t have to worry much about these problems. If you spend a day writing 0 lines of code because you were designing something on paper, or -10,000 lines because you deleted a few redundant modules, or +10,000 lines because you imported a few useful modules, you can interpret those numbers however meets your needs. You don’t have to average them in if that doesn’t make sense. Since you only have one person’s results to analyze, you’re more likely to come up with a useful measure of (one type of) productivity.
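One low-effort way to collect this number for yourself is to pull it from version control rather than track it by hand. The sketch below is my own illustration, not anything from the articles cited above: it parses the output of `git log --date=short --pretty=%ad --numstat` into a net lines-per-day tally, and the sample input is hypothetical.

```python
from collections import defaultdict

def net_lines_per_day(numstat_output: str) -> dict:
    """Parse `git log --date=short --pretty=%ad --numstat` output
    into {date: lines added minus lines removed}."""
    totals = defaultdict(int)
    current_date = None
    for line in numstat_output.splitlines():
        line = line.strip()
        if not line:
            continue
        parts = line.split("\t")
        if len(parts) == 3:
            added, removed, _path = parts
            # Binary files show "-" instead of counts; skip them.
            if added.isdigit() and removed.isdigit() and current_date:
                totals[current_date] += int(added) - int(removed)
        else:
            current_date = line  # a date line produced by --pretty=%ad
    return dict(totals)

# Hypothetical log output: one productive day, one big deletion day.
sample = """2024-05-01
12\t3\tsrc/main.py
0\t250\tsrc/legacy.py

2024-05-02
40\t5\tsrc/feature.py
"""

print(net_lines_per_day(sample))
# → {'2024-05-01': -241, '2024-05-02': 35}
```

Because you control the interpretation, a day like 2024-05-01 above (net -241 lines) can count as a win rather than dragging down an average.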

## So What Do You Want to Improve?

What you want to improve is up to you, but some examples may help you decide what would work best. The article by Jim Bird describes the value of measuring lines of code, function points, profitability, customer adoption, speed of development, turnaround time, bug count, and various other metrics. He finds negative aspects to all of these metrics, but a metric that doesn’t work when applied by a manager may be fine for self-measurement.

One section from the article that I found a bit surprising is called “We’re making (or saving) more money, so we must be working better.” In this section, Bird evaluates using profitability as a way to evaluate developers in a software company. You might expect this to be a good metric: Businesses need to be profitable, individual developers aren’t in a position to game the metric, and profitability is probably already being measured, so measuring it doesn’t add work for anyone. But Bird finds several problems with it:

• While developers have some influence on business results, decisions made by other people in the business also have an effect. So at best, business results are an indirect measure of developer productivity.
• Measuring profitability may cause managers to avoid hiring staff even if they actually need them, which puts pressure on the remaining developers, and could reduce team morale.
• It takes time for actions taken by the development organization to have an effect on business results. This makes it hard to use this metric as a way to experiment with an engineering process change.

This example shows how difficult it is to find a good metric. Let’s make things easier by considering a one-person software business. If you’re a developer who is monetizing your own work in some way, then you’re the only one making decisions that affect profitability, and there’s no pressure to increase or reduce staff size. Time lag may still be a consideration, but smaller businesses should see less of a lag between a decision and its financial impact. Is profitability a good metric for this type of business?

## Decide What Kind of Job You Want

One of the sections from last week’s post is called “Professional programmers don’t spend much time writing code.” This claim about programmers often comes up in discussions of the benefits of competitive programming. I cited an example on Quora from a developer who estimates that he spends only 4% of his work time writing new code.

For someone with that type of job, measuring the amount of new code written wouldn’t be a very useful metric, regardless of what one thinks about the validity of that metric for developers in general. In the best case, this person would only be optimizing 4% of their time.

For the example of the solo entrepreneur, optimizing profitability may or may not be the right metric to focus on. People often choose to work for themselves so that they have more control over what they work on. Someone who chose that type of work might really want to optimize the number of hours worked per day, or even the percentage of time spent writing new code.

A manager in a large business has to choose metrics based on the goals of the company or department. But individual developers can switch things around. They can choose a job based on what they want to measure and improve. If coding expertise is their goal, they can choose a job that mainly requires writing new code, and they can use metrics to ensure that they’re doing the right amount of that type of work. If they’re more interested in design or architecture, they can come up with metrics that match those goals.

## Hitting the High Notes

In Hitting the High Notes, a classic Joel on Software post, Joel analyzes a dataset that a Yale Computer Science professor collected from his students. The dataset contains the number of hours that each student spent working on each programming assignment in a particular class. In other words, it’s a somewhat controlled experiment involving a group of developers working on exactly the same problem. This is rarely found in real-world programming, since companies don’t like to pay multiple people or groups to do redundant work.

Joel’s analysis of the data shows that there is no correlation between the amount of time a randomly selected student from the class spends on an assignment and the score that they get on it. Fast students could do well on the assignments despite spending much less time than their slower peers.
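Joel’s “no correlation” claim is a statistical one, and you can run the same check on your own time-and-score data. Here is a minimal sketch of Pearson’s r, with made-up hours-vs-score numbers standing in for the Yale dataset:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: hours spent vs. assignment score for five students.
hours = [4, 10, 25, 8, 16]
scores = [92, 71, 88, 95, 80]

print(round(pearson_r(hours, scores), 2))
# → -0.16  (near zero: more hours didn't mean higher scores here)
```

A value near zero, as in this invented sample, is what “no correlation” looks like; values near +1 or -1 would indicate that time spent strongly predicts the score.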

Joel uses the data as evidence for the 10X developer conjecture: the claim that the best developers are ten times as productive as the worst developers (or the average developer). But the main point of the rest of the article is that this productivity difference matters. At least, it matters for software companies — that is, companies whose “success or failure is directly a result of the quality of their code.” Developers who can produce good code quickly, and developers who can produce better code than their peers regardless of how much time they spend, can make a big difference in the success of a software business.

## Coding is Underrated

When the topic of coding productivity comes up in online discussions, the response often goes like this:

• It’s impossible to measure, and
• In the real world, it isn’t very important anyway compared to other aspects of software engineering.

I won’t argue against the assertion that it’s hard for managers to objectively measure the productivity of developers on their team. (Though subjective measurement is another story.) But measuring your own productivity doesn’t have to be hard, since you can avoid many of the pitfalls of measuring your employees.

As for the second point, it has never made much sense to me. As I wrote at the beginning of this year: Coding is Underrated. When people make the claim that coding isn’t important enough to focus on, I think they’re either assuming that the average developer’s coding ability is better than it actually is, or they’re targeting a different type of job (like the 4% example from Quora).

So I’m continuing to explore ways to reliably practice coding. Having a reliable measure of personal coding productivity can only help.

(Image credit: Post Memes (formerly on Flickr))