What about things like "Stuck on XYZ because no one fixes it because it doesn't show any good metrics and management doesn't care about it. So I fixed it at the expense of my own time."
itsoktocry|2 years ago
> What about things like "Stuck on XYZ because no one fixes it because it doesn't show any good metrics and management doesn't care about it."
I'm confused by your comment. How did you decide this was something worthy of fixing if it "doesn't show any good metrics"? If you can't quantify the issue in any manner, how do you determine it's worth doing?
mitthrowaway2|2 years ago
This is especially important when metrics would not be expected to be available -- for example, if you're designing a nuclear reactor, you need to think hard about ways to prevent a meltdown in advance, rather than collecting meltdown statistics and then fixing the piping problems that correlated with the most meltdowns.
This is also necessary when the true metric that matters is very hard to evaluate counterfactually. For example, perhaps your real task is "maximize profit for the company", but you can't actually evaluate how your actions have influenced that metric, even though you can see the number going up and down.
And it's necessary as well when a goal is too abstract to capture directly with metrics, resulting in bad surrogate metrics: for example, "improve user experience" is hard to measure directly, so "increase time spent interacting with the website" might be measured as a substitute, with the predictable outcome that bad UI design forces users to waste more time on a page trying to find what they came for.
All of these problems are faced by metric designers, who need to pick a directly-measurable metric B (a UX design metric) in order to maximize the metric A (long-term profits) that the shareholders actually care about. But they cannot evaluate the quality of their own metrics by a metric, for the same reason that they were not using metric A directly to begin with.
(See also the McNamara fallacy, which the parent comment is a splendid example of: https://en.m.wikipedia.org/wiki/McNamara_fallacy )
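The proxy-metric trap can be sketched with a toy example (the design names and numbers below are hypothetical, purely for illustration): if you rank designs by the measurable proxy B rather than the true goal A, the two rankings can disagree.

```python
# Toy illustration of a surrogate metric diverging from the true goal.
# Metric B (time on site) is easy to measure; metric A (tasks completed
# per visit, a stand-in for real user experience) is what actually matters.
designs = {
    # design: (avg_minutes_on_site, tasks_completed_per_visit)
    "clear_navigation": (2.0, 0.95),  # users find what they need and leave
    "cluttered_layout": (6.5, 0.60),  # users wander around, many give up
}

# Optimizing the measurable proxy picks the worse design...
best_by_proxy = max(designs, key=lambda d: designs[d][0])
# ...while the true (hard-to-measure) goal picks the other one.
best_by_goal = max(designs, key=lambda d: designs[d][1])

print(best_by_proxy)  # cluttered_layout
print(best_by_goal)   # clear_navigation
```

The point is not the numbers but the shape of the failure: any metric B chosen because A is unmeasurable can be maximized in ways that hurt A.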
michaelt|2 years ago
Let's say you've got a logging system that sometimes drops lines. This can make debugging hard, because when a log line is missing you can't tell whether the code didn't run or the line was lost.
Impact on end users? Nothing measurable. Impact on developers? Frustrating, and it slows them down, but by how much? It's impossible to say. How often does it happen? Well, it's difficult to count what isn't there. Would fixing the issue lead to a measurable increase in stories completed per week, or lines of code written, or employee retention? Probably not, as those are very noisy measures.
Nonetheless, that is not a fault I would tolerate or ignore.
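One way such silent drops happen in practice is a bounded, non-blocking log queue, a common low-latency pattern. The sketch below (an assumed setup, not any particular system) shows why the absence of a line is ambiguous: the dropped records leave no trace at all.

```python
import logging
import queue
from logging.handlers import QueueHandler

# A tiny queue capacity forces drops, exaggerating the effect.
log_queue = queue.Queue(maxsize=2)

class DroppingQueueHandler(QueueHandler):
    """Never blocks the caller; silently discards records on overflow."""
    def enqueue(self, record):
        try:
            self.queue.put_nowait(record)
        except queue.Full:
            pass  # the record vanishes with no trace in the output

logger = logging.getLogger("app")
logger.addHandler(DroppingQueueHandler(log_queue))

for i in range(5):
    logger.warning("step %d ran", i)  # all five steps DID run

received = [log_queue.get_nowait().getMessage()
            for _ in range(log_queue.qsize())]
print(received)  # ['step 0 ran', 'step 1 ran'] -- no hint steps 2-4 ever ran
```

Reading the output, "step 3 ran" is missing, and nothing distinguishes "the code never executed" from "the line was lost", which is exactly the debugging ambiguity described above.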
NTDF9|2 years ago
> I'm confused by your comment. How did you decide this was something worthy of fixing if it "doesn't show any good metrics"? If you can't quantify the issue in any manner, how do you determine it's worth doing?
Because sometimes work that has no metrics of its own is incidental to the thing you actually set out to do.
E.g. a large refactor that switches libraries, which is necessary for your new service that gives 10% lower latency. The latency improvement is measurable; the library refactor that enables it isn't, but it still needs to be done and it will take two months.
meerita|2 years ago
Product owners face this challenge often. Sometimes what appears to be a problem isn't actually one; conversely, a perceived issue can turn out to matter more than current circumstances or priorities suggest.