I'm one of the co-authors of the study. Your critique is valid, though by research standards for this type of study our sample is sufficient. We are planning to replicate the study at a larger scale in the future, though!
Are there any plans to figure out objective ways to measure productivity and what distinguishes “good devex” from “bad devex”?
I've worked at a lot of big tech companies that run surveys about internal tooling, and every year it's rated as a weak spot; across years and companies this seemed like a consistent trend.

And yet everyone had teams dedicated to improving various aspects of devex, so it's unclear whether these teams are improving the wrong things, or whether productivity really is improving and something else is at play (e.g. code debt grows faster than the devex improvements, people are asked to go faster than the improvements can keep up, or devex really is improving but only for smaller subsets of the engineering org, so at the scale of the survey not enough people feel it).

That's another thing to be mindful of with large-scale versus small-scale surveys - the latter might be sampling the specific teams adopting the tool, whereas the former might find there's no way to make everyone happy and it all turns into a wash.
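To make that concrete, here's a toy illustration (all numbers invented) of how an improvement that is a big win for the adopting teams can nearly vanish in an org-wide average:

```python
# Hypothetical illustration: a tooling improvement raises satisfaction
# sharply in the teams that adopted it, but barely moves the org-wide mean.

def mean(xs):
    return sum(xs) / len(xs)

# Made-up 1-5 satisfaction scores before and after an improvement.
adopters_before = [2.0] * 50     # 50 engineers on adopting teams
adopters_after  = [4.0] * 50     # a big win for them (+2.0)
rest_before     = [3.0] * 950    # 950 engineers unaffected
rest_after      = [3.0] * 950

org_before = mean(adopters_before + rest_before)  # 2.95
org_after  = mean(adopters_after + rest_after)    # 3.05

print(f"adopting teams: {mean(adopters_before):.2f} -> {mean(adopters_after):.2f}")
print(f"org-wide:       {org_before:.2f} -> {org_after:.2f}")  # +0.10: nearly a wash
```

A survey scoped to the adopting teams would show a jump of two full points; the org-wide survey shows a tenth of a point, which is indistinguishable from noise.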
"Are there any plans to figure out objective ways to measure productivity"
You can't measure developer productivity objectively - assuming you're referring to metrics like lines of code, number of pull requests, or velocity points, which are notoriously easy to game. There's broad agreement on this both within the research community and among practitioners at leading tech companies.
Asking people whether the developer experience is good or bad is not the most effective approach: it's ultimately asking about a mood. But when teams are asked what they spend more time on than they think they should, you can at least see where the heavier pain points are. That doesn't help if your developer experience budget is zero, but it can at least rank the useful alternatives.
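As a sketch of why that framing is actionable, here's a hypothetical tally of answers to a "what do you spend too much time on?" question, already bucketed into categories; the categories and counts are invented:

```python
from collections import Counter

# Hypothetical, pre-bucketed free-text answers to:
# "What do you spend more time on than you think you should?"
responses = [
    "flaky CI", "local env setup", "flaky CI", "code review wait",
    "flaky CI", "local env setup", "code review wait", "flaky CI",
]

# Ranking by frequency surfaces the heaviest pain points first,
# which is what makes this style of question easy to act on.
for pain_point, votes in Counter(responses).most_common():
    print(f"{votes:2d}  {pain_point}")
```

Unlike a 1-5 "how happy are you" score, the output is directly a prioritized backlog.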
In most places I've worked, a survey asking for specific pain points gets great results, because the worst time sinks stick out like a sore thumb - especially if you have people who have previously worked in high-quality organizations.
Having been on teams that improve developer experience, I'd say the problem is that one hand gives while the other takes away. I can address every complaint a developer has about the company platform, but requirements change at the same time: as a company grows, it starts caring more about security and about firewalling different data and services off from each other, which makes developing harder and more annoying.
For your first question about good devex, there are definitely some objective ways to measure it.
* The time it takes for completed code to be deployed to production
* The number of manual interventions needed to get the code deployed
* The number of other people who have to get involved in a given deploy
* How long it takes a new employee to set up their dev environment, and how many manual steps are involved
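A minimal sketch of computing the first two of these from per-change records; the record format, field names, and numbers are all made up:

```python
from datetime import datetime

# Hypothetical per-change records: when the code was completed (merged),
# when it reached production, and how many manual steps were needed.
deploys = [
    {"merged": "2024-05-01T10:00", "live": "2024-05-01T14:00", "manual_steps": 1},
    {"merged": "2024-05-02T09:00", "live": "2024-05-03T09:00", "manual_steps": 3},
    {"merged": "2024-05-03T12:00", "live": "2024-05-03T13:00", "manual_steps": 0},
]

def hours_between(a, b):
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).total_seconds() / 3600

lead_times = [hours_between(d["merged"], d["live"]) for d in deploys]
avg_lead   = sum(lead_times) / len(lead_times)          # (4 + 24 + 1) / 3 hours
avg_manual = sum(d["manual_steps"] for d in deploys) / len(deploys)

print(f"avg merge-to-production lead time:   {avg_lead:.1f} h")
print(f"avg manual interventions per deploy: {avg_manual:.2f}")
```

In practice these timestamps would come from the VCS and the deploy system rather than a hand-written list, but the arithmetic is the same.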
Having done internal developer & analyst tooling work (and used DX), this type of survey is great for internal prioritization when you have dedicated capacity for improvements.
I'd be curious to see more about organizational outcomes, as that's the piece of DevOps/DevEx data that I feel is weakest and requires the most faith. DORA did some research here, but it's still not always enough to convince leadership.
For the uninitiated among us - can you share more context on those research standards and the reasoning behind them? I'm interested, and would like this to inform some decisions ahead of me, but I'd like to understand the confidence here :).
Here are some examples: https://newsletter.pragmaticengineer.com/p/measuring-develop...
"... and what distinguishes 'good devex' from 'bad devex'"
Yes to this. This is an ongoing effort - we have two previous journal papers that touch on this which may be of interest to you:
- https://getdx.com/research/devex-what-actually-drives-produc...
- https://getdx.com/research/conceptual-framework-for-develope...