top | item 6296772

Cannot Measure Productivity

127 points | alexfarran | 12 years ago | martinfowler.com | reply

74 comments

[+] lifeisstillgood|12 years ago|reply
I am going to get my drum out and bang on it again.

Software is a form of literacy - and we measure literacy completely differently. In fact we measure it like we measure science - you are not a scientist unless other scientists agree you are, and you are not a coder unless other coders say you are.

What Fowler wants to measure is not the top echelons of productivity but the lower bounds - presumably to winnow out the unproductive ones.

But that is not how we conduct ourselves in literacy or science. We educate and train people for a very long time, so that the lower bound of productivity is still going to add value to human society - and the upper bounds are limitless.

What Fowler is asking for is a profession.

[+] foobarbazqux|12 years ago|reply
> you are not a scientist unless other scientists agree you are

Science requires one thing: making and testing falsifiable hypotheses. A priest is able to determine whether or not you are doing that. If anything, it's philosophers who decide what science is, e.g. Karl Popper.

[+] joshyeager|12 years ago|reply
Your last sentence is exactly right. There are significant benefits to building a profession that is predictable and well-defined. Ensuring a reasonable lower bound reduces the risk of failure significantly.

The open questions are 1) whether it's possible to do that, and 2) whether it's possible to do that without inadvertently imposing an upper bound. Common wisdom among software developers is that it is not possible, and that even if it were it would impose a very tight upper bound. But as far as I know, that common wisdom is just guesswork. No one has found a good solution yet, but there's also no proof that there isn't one. That's why so many people continue to try.

[+] skrebbel|12 years ago|reply
I agree with what you're saying, but I feel that it's exactly what Fowler is saying too. The article is called "cannot measure productivity", after all.

What parts of what he writes do you disagree with?

[+] __--__|12 years ago|reply
> you are not a coder unless other coders say you are

Not everybody needs the approval of an external entity. I couldn't care less what other coders think of me. What I ship speaks for itself. What I write speaks for itself. What studies scientists publish speak for themselves. The rest is politics.

[+] gdubs|12 years ago|reply
I agree with the sentiment of your last paragraph, though in fairness Fowler seemed to end his piece saying that measurement was seductive and likely to make things worse.

Still, I can't help cringing at some of the ideas posited -- for one, the idea that more features == a better product.

Or that profit is a measure of engineering productivity. As if it's that uncommon for great products to be badly marketed.

[+] rabino|12 years ago|reply
This. A million times this.
[+] ChuckMcM|12 years ago|reply
It has been more than 10 years; in fact it has been at least 50, since there were already moans about productivity back in the early 60s.

Feynman had some interesting thoughts on minimal computation that sort of paralleled Shannon's information complexity. As you know, Shannon was interested in absolute limits on the amount of information in a channel, and Feynman was more about the amount of computation per joule of energy. But the essence is the same: programs are processes that use energy either to transform information or to comprehend and act on it, so 'efficiency' at one level is the amount of transformation/action you get per kW, and "productivity" is the first derivative - how long it takes to go from need to production.

It has been clear for years that you can produce inefficient code quickly and, conversely, efficient code more slowly, so from a business-value perspective there is another factor: the cost of running your process versus the value of running your process. Sort of the 'business efficiency' of the result.

Consider a goods-economy comparison of the assembly line versus the craftsman. An assembly line uses more people but produces goods faster, and that is orthogonal to the quality of the goods produced. So the variables are the quantity of goods over time (this gives a cost of goods), the quality of the goods (which has some influence on the retail price), and the ability to change what sort of goods you make (which deals with the 'fashion' aspect of goods).

So what is productivity? Is it goods produced per capita? Or goods produced per $-GDP? Or $-GDP per goods produced? It's a bit of all three. Programmer productivity is just as intermixed.

[+] hcarvalhoalves|12 years ago|reply
> It has been clear for years that you can produce inefficient code quickly, and conversely efficient code more slowly (...)

That's not only false; often the opposite is true.

Symptom number one of an inexperienced programmer is wasting development hours reinventing the (square) wheel, while a good programmer is lazy (they already know which solution works best, and will probably just import it from a tested library).

So an experienced programmer not only doesn't waste computation power, but also doesn't waste hours of the development cycle.

I agree with everything else you pointed out.

[+] lifeisstillgood|12 years ago|reply
I love Feynman's idea of minimal computational energy - it suggests that we could actually measure elegance.

And to be fair, we do - the great works of art, craft, and even science are almost always elegant. And it is something we strive for. Some buggers have it and the rest of us wade around with feet of clay - but it is conceivably measurable.

Going to be easier to go with the editor's taste I think though.

[+] integraton|12 years ago|reply
The sad part is that even after decades of technologists debating this, the reality is that most non-technologists working in the industry don't know, don't care, and really just want their pet features. The real measure of productivity in organizations with non-technical stakeholders therefore becomes whether or not a stakeholder feels like they are getting what they want. Attempts to measure productivity, whether via lines of code or "velocity," are often little more than a way for everyone to pretend their opinion is backed by something quantitative. In especially bad cases with non-technical management, they'll just keep swapping out processes until they either get what they want or have something with numbers and graphs that makes it look like they should.

While I could be accused of excessive cynicism, I do believe this is common enough that it should be addressed. There's a pervasive delusion that decisions are made by rational, informed actors, when that is rarely the case.

[+] lifeisstillgood|12 years ago|reply
> becomes whether or not a stakeholder feels like they are getting what they want.

If the stakeholder you choose is a customer, then that is a valid measure of business productivity.

Which I guess is kind of the point - we are trying to measure on a granularity beyond what we can validly do.

Which indicates to me that a world of smaller organisations, made up of software literate people will be one where rewards will follow talent. That may not be a world we want to live in - and my cynicism sees your cynicism and raises :-)

[+] sergioschuler|12 years ago|reply
Speaking for the non-technologists here: I used to measure productivity by cross-checking what people accomplished in a certain time against what they said they would accomplish in that time. Well, it is not really productivity, but it at least estimates the amount of stuff you can get done. So it is more like goals of development.
[+] mathattack|12 years ago|reply
There are two types of productivity:

1) Are you doing the right things?

2) Are you doing things right?

They can be imprecisely measured, but every metric has problems and can be gamed. Combining the measurements is extremely difficult.

Let's start with 1 - doing the right things. Someone who chooses to have their team work on 3 high-value tasks, and stops them early on 6 low-value tasks, is by one definition more productive than someone who forces their team to do all 9 things. Or at the very least they are more effective. This is what Fowler is getting at.

On point 2... Let's assume that the appropriateness of what you are doing is immaterial. How fast are you doing it? This can be somewhat approximated. You can say "Speed versus function points" or "Speed versus budget" or "Speed versus other teams achieving the same output" and then bake rework into the speed. All of these metrics are doable. Lines of code isn't a good base, though.

The real question is, "What are you going to do with all of this productivity data?" If the answer is systemic improvement, you're on the right track. If you try to turn it into personal performance (or salary) then people wind up gaming the metrics.

[+] seiji|12 years ago|reply
Is measuring productivity isomorphic to the hiring problem?

Everybody says there's a "shortage of developers," but I know good developers who keep getting shitcanned after a few interviews where nothing seemingly went wrong.

We can't tell who's going to be productive. Since we can't tell, we come up with ten foot high marble walls to scale. Our sterile interview problems make us feel "well, at least the candidate can do our Arbitrary Task, and since we decided what Arbitrary Task would be, they must be good, because they did what we wanted them to do."

Productivity is pretty much the same. There's "just get it done" versus "solving the entire class of problems." Is it being productive if you do 50 copies of "just get it done" when it's really one case of a general problem? I'm sure doing 50 copies of nearly the same thing makes you look very busy and generates great results, but solving the general problem could take 1/20th the time - and leave you sitting less fully utilized afterward (see: automating yourself out of a job).

[+] kasey_junk|12 years ago|reply
They are absolutely the same problem. Because we can't measure productivity, we can't determine relative quality in an objective way. If we could, it would make the hiring process much simpler.

The question I have is: how is this much different from any other profession? How do we measure doctor productivity? What keeps me up at night is that the 90/10 crap-to-good ratio among software developers is very likely the same as the ratio among surgeons.

[+] pnathan|12 years ago|reply
This is a good insight. Both problems relate to the ability to socially interact with other people, determine what they want, and then to technically produce product to satisfy the other people.
[+] RogerL|12 years ago|reply
A very wise man said "there is no silver bullet". Yet we keep trying all these schemes to automagically solve what are hard optimization problems only amenable to heuristics and deliberate, intelligent introspection. Very simply, you cannot run some tool to measure the information density of a large project. Graphical programming isn't going to turn a bunch of marketers into programmers. Doing user stories and forcing people to stand up as they talk isn't going to remove all the need for planning and tracking. And so on.

You know how I figure out if something can be improved? I dig in, understand it, and then look for ways to improve it. If I don't find anything, of course it doesn't mean there is no room, but I'm a pretty bright guy and my results are about as good as any other bright guy/woman.

I was subjected to endless amounts of this because I did military work for 17 years. You'd have some really tiny project (6 months, 2-3 developers), and they'd impose just a huge infrastructure of 'oversight'. By which I mean bean counters, rule followers, and the like - unthinking automatons trying to use rules, automatic tools, and so on. Anything to produce a simple, single number. It was all so senseless. I know that can sound like sour grapes, but every time I was in control of schedule and budget I came in on time and on or under budget. But that is because I took it day by day, looked at and understood where we were and where we needed to go, and adjusted accordingly. Others would push buttons on CASE tools and spend most of their time explaining why they were behind and over budget.

I like Fowler's conclusion - we have to admit our ignorance. It is okay to say "I don't know". Yet some people insist that you have to give an answer, even if it is trivially provable that the answer must be wrong.

[+] chromatic|12 years ago|reply
Please excuse this small rant.

If you're referring to Fred Brooks, he wrote "[T]here is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement within a decade in productivity, in reliability, in simplicity." (emphasis mine)

The surrounding context makes his comment a very specific prediction which means something different from what most people claim he meant. Much of the rest of his essay suggests techniques which address the issue of essential complexity and which, when applied together, he hoped would produce that order-of-magnitude improvement in productivity.

Perhaps there was no single such improvement in the years 1986 to 1996, but when people use the phrase "no silver bullet" to dismiss potential improvements in productivity, I believe they're doing Brooks and the rest of us a great disservice.

[+] nadam|12 years ago|reply
You can measure productivity quite well if you set a task, write tests for it, and tell two independent groups to implement it. You give them the same amount of time.

The more productive/better group is then the one that can do the task with smaller complexity.

Complexity measures gauge the size of code and the number of dependencies between blocks in different ways. But even the most simple complexity measure is quite good: just measure the number of tokens in the source code. (It is a bit more sophisticated than LOC.) You can then run competitions between groups and measure their productivity. (I am writing a book now titled 'Structure of Software' which discusses what good software structure is on a very generic/abstract level. It relates to 'Design Patterns' as abstract algebra relates to algebra.)
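A minimal sketch of that token-count measure, using Python's stdlib tokenize module (the language and tooling are my assumption - the comment doesn't name either):

```python
import io
import tokenize

def count_tokens(source: str) -> int:
    """Count meaningful tokens, ignoring comments, whitespace, and layout."""
    ignored = {
        tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
        tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER,
    }
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return sum(1 for tok in tokens if tok.type not in ignored)

# Two implementations of the same task: the terser one scores lower,
# which is the whole point of the measure.
verbose = "result = []\nfor x in xs:\n    result.append(x * 2)\n"
concise = "result = [x * 2 for x in xs]\n"
assert count_tokens(concise) < count_tokens(verbose)
```

Unlike raw LOC, this is insensitive to line wrapping and comment style, though it still says nothing about the dependency-structure measures mentioned above.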

[+] alok-g|12 years ago|reply
Genuinely asking: Why not just stop at "tell two independent groups to implement it"? That is, why constrain to the same amount of time?
[+] artumi-richard|12 years ago|reply
The book "Making Software: What Really Works, and Why We Believe It" (http://www.amazon.co.uk/Making-Software-Really-Works-Believe...) has a section on this.

Chapter 8, "Beyond Lines of Code: Do We Need More Complexity Metrics?" by Israel Herraiz and Ahmed E. Hassan.

Their short answer is that, in the case they looked at, all the suggested metrics correlated with LOC, so you may as well use LOC as it's so easy to measure.

IIRC, they believe comparing LOC between different employees is only sound if they are doing pretty much the exact same task; but since LOC correlates with code complexity, there is some measure there.

I recommend the book as one that really focuses on the science of computer science.

[+] gz5|12 years ago|reply
Heisenberg principle variant for software:

Measure it. Or optimize it. Can't do both without impacting the other.

Software is a work of art and creativity, not the work of a rules-based factory.

[+] stonemetal|12 years ago|reply
So two teams build identical databases in identical time frames. One becomes popular and has sales in the millions of dollars. The other flops, with sales in the hundreds of dollars. Sure, there is a difference in business results, but I fail to see how the two teams were not equally productive at creating software. Sure, I don't have a good definition of software development productivity, but this one is open to so many elements unrelated to software development as to be nonsensical.

Basically I see this as marketing: we may not be the fastest, but who cares about that? We have the special insight to build the hits that keep you in business.

[+] wciu|12 years ago|reply
Most performance indicators are imprecise. The P/E ratio is one of the stupidest measures of value, but it is widely used in finance. No one (at least no value investor) would invest based on the P/E ratio alone, though; there is a lot more due diligence done before investors put their money into a stock. (At least that's what you hope happens.)

The problem with productivity measures is not how they are measured but what they are used for. Most managers want to use productivity measures to evaluate individual or team performance; however, performance is tied to incentives, so you always end up with a lot of pushback from the team or someone gaming the system. (IMO, this is because of lazy managers wanting to "manage by numbers" without really understanding how to manage by numbers.)

Rather than being a performance-management tool, productivity measures, however imprecise, can be used alongside other yardsticks as signals of potential issues. For example, if the productivity measure is dropping in a particular module/subsystem and the defect rate is increasing, then one might want to find out if the code needs to be rearchitected or refactored. In these cases it is okay to be imprecise, because the data are pointers, not the end goal. When used correctly, even imprecise data can be very useful.

[+] dirtyaura|12 years ago|reply
The quest for a single measure of a hard-to-define concept like productivity is doomed. Even Fowler's article highlights the fact that we don't have a shared understanding of what the word productivity means: writing quality code, shipping useful products, or making money? All of them? It's no surprise that there is no numerical measurement that captures a badly-defined concept.

In my opinion, we should approach measurement from a different angle: can we learn something useful about our profession by combining different types of measurements? Can we, for example, easily spot a person who is doing what Fowler calls important supportive work? Can we detect problem categories that easily lead to buggy code, and allocate more time for code-quality work on those tasks and less on the ones known to be more straightforward?

[+] Jormundir|12 years ago|reply
It drives me nuts when programmers brag about their productivity, measured by how many lines of code they've written.

You end up with something like feature 1: +12,544 / -237 lines. Done in 2 weeks.

Then comes feature 2, 2 and a half months later, the stats: +5,428 / -9,845.

Look at that: you had to tear down everything they wrote because they cared about the amount of code over code quality. The more they brag, the more you think "oh s$%t, every line they add is a line I'm going to have to completely untangle and refactor."

I think software engineering productivity can be measured, though not well by today's standards. There will probably be a decent algorithm to do it in the future that takes into account the power of the code, how easy it is to build on top of, how robust it is, etc.

[+] RogerL|12 years ago|reply
> There will probably be a decent algorithm to do it in the future

How could that be possible? Consider the following, very typical scenario.

You write 2000 lines of code, implementing a feature. I write 2,200 lines of code, implementing a feature in a way that supports our vertical for the next 5 years (you just take my module and plug it in, instead of coding from scratch). Add in whatever time interval you want to make it more complicated - I took the same amount of time as you, less, or more.

Consider that this is a judgement call - did I or you do the right thing? We understand the risks and costs of premature design, but also understand the risk of coding exactly to today's requirements, with no insight into the future. No algorithm is going to tell you the right answer, and by the time we know (5 years from now) the measurement will be useless.

Or we each write a heuristic for the TSP. What algorithm could possibly decide whose work is more "productive"? I put it in scare quotes because I don't even know how to define productivity in that regard. Yours runs faster; mine took half the time to code. Yours is 70% larger than mine, which has cache-coherence implications as we continue to add to our programs. Yours is well documented (give me an algorithm to tell whether code is 'well' documented); mine is sparse. I used gotos; you used exceptions to deal with errors. You wrote it in Haskell; I did it in C++. Yours makes features X, Y, and Z easily possible; mine makes A, B, and C easy. Your heuristic performs better on some graphs; mine performs better on others.

Who is more 'productive' here? It's not even a meaningful question. Bottom line is, we both tackled a hard problem, both did fine in very different ways, and both have implications for the future of the company (assume A, B, C, X, Y, and Z are all really important in the future).

It's an N-dimensional optimization problem with endless unknowns, and no knowledge of or agreement on how to measure many of the axes, let alone their relative importance to each other.

[+] matwood|12 years ago|reply
Nothing makes me happier than removing code. If I can find ways to deliver the same functionality in less code I get excited.

Now, I do like to look at my personal lines of code because it gives me a gauge for comparing the features I implement on a relative basis. It also gives me a relative, rough measure of how much effort a particular feature took to produce.

[+] kailuowang|12 years ago|reply
The purpose of measuring productivity is to manage it. There are two categories of factors that decide the overall productivity: the factors within the developers (capability, motivation, etc) and the factors outside the developers (tools, process, support, etc).

True, it's hard to objectively measure the overall productivity using a universal standard, but it is relatively easier to measure the productivity fluctuation caused by the external factors. Velocity measurement in Agile practice is mostly for that end.

For the internal factors, the best way, and arguably the only effective way, to manage it is probably to hire good motivated developers. I think most top level software companies have learned that.

[+] lifeisstillgood|12 years ago|reply
This is true - to an extent. Scrum screams out to measure relative story points and never provide the data for "management" purposes. But even the same team estimating in succession will face external pressures - and if those pressures can be alleviated by gaming story points, they will be. This catch-22 had me: I truly think the only way is to report only an estimated finish date. Any public posting of velocity eventually filters into management-by-velocity - because that's the only metric management has. And we are back on the same old loop: we can have a measure of productivity as long as we never use it in any manner as a measure of productivity.

Add to this that I don't think scrum has been set up to take this to its logical conclusion - agile/scrum has been sold as a fairly fixed methodology, not as a means to get some relative metric out of teams and use it in a series of experiments to achieve productivity improvements. And even if it were, the major wins we know and can prove work (quiet conditions, minimal interruptions, trust, respect, time for reflection and education) are a long way from being accepted by today's enterprises.

In short there is no silver bullet, and while agile looked a magic bullet it just turned out to be plain old lead.

[+] mtdewcmu|12 years ago|reply
The article makes the point that the LOC metric is confounded by duplication:

> Copy and paste programming leads to high LOC counts and poor design because it breeds duplication.

This problem is not insurmountable. Compression tools work by finding duplication and representing copies as (more concise) references to the original.* The size of the compressed version is an estimate of the real information content of the original, with copies counted at a significantly discounted rate. The compressed size of code could be a more robust measure of the work that went into it.

* Sometimes this is done explicitly, other times it's implicit
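A rough illustration of the idea (a sketch, not a validated metric - zlib here stands in for whatever compressor you'd actually pick):

```python
import zlib

def compressed_size(source: str) -> int:
    """Compressed byte count as a crude proxy for information content."""
    return len(zlib.compress(source.encode("utf-8"), level=9))

# Copy-paste programming: 20 near-identical copies of the same block.
block = "def handler():\n    validate()\n    log()\n    save()\n"
duplicated = block * 20

# Raw size grows 20x under duplication...
assert len(duplicated) == 20 * len(block)
# ...but compressed size grows far less, because the compressor turns
# each repeat into a short back-reference to the first copy.
assert compressed_size(duplicated) < 3 * compressed_size(block)
```

So by this measure the pasted version earns only slightly more "credit" than writing the block once, which is the discounting behavior the comment describes.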

[+] alightergreen|12 years ago|reply
And what about iteration? The learning value that can come from doing things poorly?! Imagine if Microsoft had LEARNED something from what they did wrong in Windows 95. Or Windows ME! Imagine how amazing their software would be now. They couldn't have done it without having totally screwed up first. Of course, they didn't do that in the end... so...
[+] chipsy|12 years ago|reply
Productivity by any volume measure seems meaningless in the software context. That's like measuring writing productivity by word count. Nobody really likes high-volume communication, unless the goal is to write a lot of trash.

Even if you deliver a system with a lot of features and no known bugs, if they aren't the right features, it's not valuable software.

[+] AlisdairSH|12 years ago|reply
If you don't work >43 hours/week, you aren't productive. At least according to one boss I've had. :|
[+] scotty79|12 years ago|reply
I think that the one thing that enables science is that even though you cannot measure everything you want, you can still measure some things, and those measurements are useful - just not directly.