A variant of this that has driven me to quit more than one job is having a non-technical manager look at a UI prototype and consider it 90% of the solution. "The UI guys had this page ready two months ago! Why doesn't this work yet?" It's even worse when you present a working prototype. They simply don't understand that the backend functionality is what's doing the bulk of the work, and just because you can see something, that doesn't mean it's secure, performant, scalable, or even functional beyond demoing with dummy data.
Oh, this reminds me of a question that went from one of the most infuriating things investors would ask me to one I almost wanted to bait them into asking...
"Why/how is this worth X dollars/time? I know someone who says they can do it in a week." To which, I eventually learned to reply: "Wow, well... In that case, let me shoot you an article on how to build a Twitter clone in 15 minutes. [awkward pause while I smile at them] There's a lot more than just literal lines of code that goes into building a successful software product."
It reminds me of a PyCon talk [0] which, while I don't agree with the whole thing, has the message "we ship features, not code". That also reminds me of "when a measure becomes a target, it ceases to be a good measure".
[0] https://m.youtube.com/watch?v=o9pEzgHorH0&t=1235s
LOC is a decent measure, but features are our targets
An alternative take to this article would be that this person wasted two days because he was reluctant to ask more questions from the person who filed the bug report.
How often do you actually receive quality bug reports at work? My experience is that external or internal users almost never provide sufficient information and you as a coder are always expected to drill down on what they reported with a barrage of questions.
i.e. if you are not doing https://en.wikipedia.org/wiki/Five_whys then you might be doing it wrong and wasting time because of it.
I'm referring to this:
> Some developers would have immediately gone back to the person reporting the problem and required more information before investigating. I try and do as much as I can with the information provided.
Which seems like being stubborn and making a mistake because of it.
A couple of other parts also seem a bit like overdoing it:
> Because I investigated if there were other ways of getting to the same problem, not just the reported reproduction steps.
> Because I took the time to verify if there were other parts of the code that might be affected in similar ways.
These seem like taking a gamble. Maybe something comes up, but is it more probable that this work should be minimised until there is more proof of "other ways of getting to the same problem"? Developer time is expensive, is this really the best way of using it? Would it make sense to just fix the issue at hand and only put in more time if more bug reports come in after the fix or if there is some other indication that this part of the code might be more broken?
> How often do you actually receive quality bug reports at work?
Very, very often. I work as a QA engineer whose main responsibility is to go through the bugfixing queue and add needed info where necessary. And I have to spend a lot of time every day doing this. Sometimes it gets so bad I have to assign the ticket back to the reporter to add more info, because even I don't know where to look without it.
Interestingly enough, it's always the more senior people at our company who are guilty of writing crap bug reports.
Almost all the developers I work with would never start working on a bug until I gave them the right steps to replicate it - even if the bug was reported by a user. A very stupid example I can think of is this: the developer designed a login form on web and mobile with the password field as expected, but forgot to disable auto-capitalization of the first letter of the password entry. So if your password is abcdef, on a mobile keyboard (unless you are careful) it would be entered as 'Abcdef' and would not work. The issue was reported three times and he said that there was no error, and did not fix it. Only later did it strike me (I tried it in Safari while he tried the mobile responsive version in Chrome) that this was the issue. I'm not saying the developer should not start working right away, but the expectation is that if they had actually paid attention to what was reported, it would not have taken this much time to figure out what the issue was. There needs to be a middle ground, which varies from org to org depending on their workload.
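That anecdote boils down to a one-character difference. A hypothetical helper (the name and behavior are mine, not from the thread) that a login flow could use to tell "wrong password" apart from "mobile keyboard auto-capitalized the first letter":

```typescript
// Hypothetical diagnostic helper: detect whether a failed login would have
// succeeded if the first character hadn't been auto-capitalized by a mobile
// keyboard. Never lowercase the stored password itself; this only helps
// report a better error. The plain comparison below stands in for a real
// credential check.
function failedOnlyFromAutocapitalize(entered: string, expected: string): boolean {
  if (entered.length === 0 || entered === expected) return false;
  const decapitalized = entered.charAt(0).toLowerCase() + entered.slice(1);
  return decapitalized === expected;
}

console.log(failedOnlyFromAutocapitalize("Abcdef", "abcdef")); // true: keyboard interference
console.log(failedOnlyFromAutocapitalize("Xbcdef", "abcdef")); // false: genuinely wrong
```

On the prevention side, mobile browsers respect an `autocapitalize` attribute on the input (e.g. `autocapitalize="none"`); the helper above is only a sketch of how to diagnose the failure after the fact.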
> is it more probable that this work should be minimised
I guess this is where automated tests can come in. You fix something and see if it passes the unit tests. But then everyone has their own approach. For him, fixing a similar bug twice is worse than finding all possible mistakes at once.
Do people actually have fights like this with management at their companies? Not trying to knock the author, but I'm just surprised anyone would actually hear this kind of comment in 2020. I'd think by now any and all metrics tying lines of code to productivity would be long dead.
Yes. The industry moves at a snail's pace, and is very different from what you'd read on HN. A huge % of dev jobs are still using old software/processes, with managers that haven't written software in 20 years, if at all.
In 2014 I worked at a company that switched to Git and then started measuring LoC to assess performance/involvement. Engineers took to committing/removing things like node_modules directories to make the data meaningless.
It still happens, even today, quite a bit.
I know someone that worked at some skeezy company in Menlo Park that got passed up for a raise after spending months navigating the bureaucracy to save the company millions on their operating costs because they didn't write enough code. This was in the last four years.
Edit: And they quit right afterwards.
$JOB-2, admittedly about 4 years ago now, the good manager with a background in software left for a better opportunity and was replaced by someone whose background was management. With no insight into the subject matter, they fell back on whatever they thought they could quantify.
We got numerous things like that. Though my team lead and our project manager did a great job of shielding the team from that crap, we still occasionally hear it come up in group meetings and the like.
I even got a task handed directly to me, bypassing everyone above me, to "estimate how much it would cost to migrate all those Linux apps your team has to Windows. They'll run better there". Just the Windows licenses alone would have cost us about half the existing server costs, since we were using AWS instances. I also included a line item for recruitment costs for a new developer, and verbally informed him that it would likely involve hiring a new team, as the existing team was hired specifically as Linux developers.
The smart thing to do is to regularly keep your manager updated on what you're doing, especially if they don't come by regularly and ask you. Especially if you are WFH.
So far I've worked in one place which didn't use version control in 2007, one place which had reluctantly started using version control just before I started in 2010, several places where automated tests were considered pure waste, where everybody had full access to production, where backups were untested, where one or two people held crucial knowledge which was not shared with anybody else, etc. The real world moves a hell of a lot slower than best practice.
I'm a maintainer of an OSS project with other contributors. I have fights like this all the time because others are constantly just wanting to fix the symptoms. This is for a project that is a framework/API, where the saying "the best programmers remove more code than they add" should be even more true.
You would hope so, but I've seen an Engineering Director at an otherwise well run software company use number of Github commits as the central reason to put someone on a performance improvement plan as recently as 2019. Granted, this was someone who was a professional manager and hadn't been an engineer for the last 90% of their career, and many people were shocked by it, but it happened.
I regularly hear a friend complain about BS like this and variants. "Why did adding a button take a week, it's just a button?!" is also very popular. Not sure what's worse, that or the recurring "that element should be 1px to the left, drop everything you're working on and fix it asap"...
They don't always make the comment out loud, but you can tell they're thinking it.
They absolutely use lines of code metric at my company. I don't miss any chance to tell my manager it's complete bullshit. His answer: "Engineers are supposed to write code, just like construction workers are supposed to build houses."
At most of the big companies I worked at (over 500 employees) there is a steering committee doing prioritization work, continuous integration test suites, and an elaborate change control committee and process, such that fixing a spelling error will take much more than two days.
What's nice about the proceduralism is that you can document that the steering committee only meets once a week, on Tuesday afternoons, and change control meets on Thursday. And everyone knows the automated test suite on DEV takes about half a working day. So if a change can't be worked into the schedule in less than a day, it'll never pass CI testing before the change control meeting, so it'll take more than a week.
What's bad is that management would like you to complete multiple changes, perhaps at the same time, which always complicates the change control process, especially if change #7 failed last week, so company policy is to roll everything back and now we have 13 changes, two weeks' worth, to complete next weekend. Also what's bad is that, knowing it's a corporate nightmare to make any change, you wonder why you made the mistake to begin with of having the buttons swapped or a misssspelling or whatever.
I find the big metric nowadays is backlog. Let's see the number of request tickets decrease this week instead of increase. That leads to intense pressure to roll multiple problems into one ticket.
There was a time when I thought this video was funny: https://www.youtube.com/watch?v=BKorP55Aqvg
One would wish. Personally, I don't think I've ever worked under a manager who understood software development. It tends to be all about what they can see (GUI) or about nearly meaningless metrics on a dashboard (LOC, tickets closed, etc.). Again, just in my personal, limited experience.
A lot of software development still happens at businesses that are not 'software companies'. The experience is very different, with a much different culture around software.
I have never had anyone indicate to me that this is a problem. However, every time I spend days on a problem that ends up being a trivial number of LOC, I get a feeling of anxiety that I am going to be seen as incompetent. That's been the case my whole career even though I know it's unfounded.
I probably can't tell this story correctly because it is 2nd hand but ...
I was on a browser team. A fellow co-worker decided to add the Fullscreen API to it, which meant not just adding the API but first discussing it in the relevant standards committees.
I'm pretty sure he thought, and so did management, this would be a 2-3 month project at most. IIRC though it was like 18 months, maybe longer.
Some problems that weren't obvious at first:
* What is fullscreen mode? Is it a mode for the page, a mode for an individual element? What?
They eventually decided it was for an individual element.
* What happens to the CSS above that element when none of its parents are being rendered?
I'm sure that took a while to argue over. Like if it was position: relative or absolute and suddenly its parent is no longer displayed. What if the parent has CSS transforms? Okay, you say we ignore those and consider it the root element. Okay, so does that mean none of the other styles from the parents apply, like color or font-family? If some do and some don't, we now have to go through every CSS property and decide if it does or does not continue to inherit. I don't actually know the answer to this.
* You have a DOM A->B->C->D->E. C asked to go fullscreen. While there E asks to go fullscreen. User presses ESC or whatever to exit fullscreen. Should it pop back to C or A? Does it matter if E is a video element and they clicked fullscreen? What if C is an iframe does it matter?
* Testing across all devices that support it requires all new testing infrastructure because going fullscreen is effectively something that happens outside the page not inside so testing that it actually happened, that a user can actually exit it correctly, requires entirely new test systems that were not there for previous APIs. Then multiply by 5 at least (Windows, MacOS, Linux, Android, ChromeOS, ...)
And so even though I'm sure everyone ended up understanding that, it turned out to be way more work than anyone expected. Yet, in the back of their minds it was arguably always "this is taking way longer than it should, goals not met" or at least that's how it seemed to be perceived.
Oh, very much so. I don't think it's a matter of how "old" the industry is - there will always be people who have never worked with software folks or aren't familiar enough with the ecosystem to avoid the assumptions stated in the article.
This made me realize that working in a purely engineering team can sometimes be a perk. Not because technical people are "better", but because it leads to fewer frustrations like this one.
"My point today is that, if we wish to count lines of code, we should not regard them as “lines produced” but as “lines spent”: the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger."
~ Edsger Dijkstra
This is the Achilles' heel of non-tech managers. You can link as many articles like that as you want; it will still be a problem. They may be more aware of it, but it will still be a daily struggle. A non-tech manager should be paired with a tech lead in a high-trust relationship for this not to be a problem.
This example is just one case/symptom of a much larger problem.
Non-tech managers will only see fast progress on a combination of:
- poor developer
- doing fast progress
- with shitty code
- on good quality codebase
They will see the tech lead/good coder as an asshole in general, with poor performance in general, who for some unknown reason is respected for their code and sometimes magically ticks off hard problems quickly, which "must be a coincidence, overestimated work to begin with, or something like that", on a combination of:
- person who actually cares about the project
- who repairs shitty code/tech debt
- who thinks more deeply about the problem
- and as a bigger picture issue, not just ticking off ticket with the lowest resistance possible
- writes good quality code
- if the problem breaks current abstractions, refactors the abstraction itself
- who cares about readers of the code
People who don't know how to write software have to use "boss" heuristics when determining who is a superstar. Qualities that take prominence over quickly writing high-quality software:
1. "Can do" attitude
2. Never backs down from a challenge
3. Being the glue that holds the company together
4. ...
As you can see these are all the kind of things you can't put your finger on but you "know it when you see it". When they see it, it nearly always looks like their reflection.
This rings very true to me. Unfortunately the subtleties of development like code quality aren't well represented if you only look at cards moving on a board.
In a lot of ways what you are describing is how it should work, that the tech lead works on the hard problems and big-picture problems, like abstractions and architectural issues, as well as mentoring junior devs.
In my opinion any manager (especially a non-technical one) should only measure the team as a unit. This is particularly important when evaluating the performance of a tech lead.
One thing I like to look for is natural lines of conflict in a situation that can arise when different individuals are working towards different goals, and question the underlying reasons why a manager may be acting the way they are. In a lot of cases you can get to a win-win situation if everyone is willing to play ball. Of course if the conflict arises from a fundamental organisational flaw, like poor management methods, or poor company culture, then it is time to move on.
The one that drives me up the wall is the schedule/cost anchoring question, "this should be easy to do, right?", every time they ask for a new feature. It's meant to manipulate you into lowering the schedule or the cost. If you say it's not easy, they question your competency. If you say it's easy, they say, well then you can get it done this week.
It always gives me pause, and then I double the schedule.
The assumption that asking for more information to recreate the bug is a lazy tactic to get out of the bug fix is itself a terrible one, and it discredits the opinion, IMO.
Oftentimes asking for more information can speed things up and lead to a quicker resolution.
My most challenging bug was fixing a memory overwrite of a COM reference counter (of all things!) that would only happen under very rare conditions - when a 3rd party C library that was compiled with different calling convention would be called with a certain amount of parameters. It took me a month to chase down - frankly, it would have been impossible for me to figure out if Visual Studio did not have data breakpoints implemented for C++. It took a month... and the fix was also a two-liner. Still proud of that fix 17 years later!
It's like the old story of the engineer who charged 10k to fix a loose screw [0]. It's not just the obvious effort, it's the effort behind the experience and know-how to recognize and find the most appropriate fix for the problem.
[0] https://www.snopes.com/fact-check/know-where-man/
One solution to the OP's problem is to continuously document the activity leading up to those two lines of code. That way you can point to the notes and say "here's why". I've found that this quite often justifies the time and shows you've not been goofing off. Furthermore, it also helps someone who'll have to look at those two lines later on to grab some context and understand why they're there.
This could be as simple as notes logged against an issue about experiments done, etc.
There's no paradise anywhere. I've been self-employed for 13 years in a rather niche area, as an involuntarily solipsistic army of one for lack of ability to grow my business. Most of my customers are highly technical, but don't understand what I do quite enough to have a "lines of code" resolution on it.
I wish they _would_ ask me why those two lines took two days (in my case, it might be simple burnout; been coding for too long, and longer than a more conventional career track would have prescribed). Instead, nobody much cares whether I write 2 lines of code or 2000; same difference to them, and boils down to the all-important "delivering".
There's some intellectual and creative freedom in that which I suppose folks don't have with a code-involved boss who scrutinises their commits. But the opposite--nobody scrutinising your commits--isn't all it's cracked up to be, either. I almost never have to explain why I did something a certain way to anyone, not because I'm so important and command so much distinction or recognition of my expertise, but because nobody gives a crap. :-)
Oh, that reminded me of my "two lines" moment. Years ago I was writing software for some university project, and one feature took me a few days to figure out and implement, and when I checked the diff I realized that "all" it took was removing some 20 lines of code. I literally added a feature by removing some constraints that I had previously introduced.
If someone is measuring productivity in code, based on the amount of lines of code written, then they have never written code. Anyone with a tiny understanding of how programming works would totally get why something so small could take so long.
I rather enjoy fixing bugs, particularly really hard ones. They can be fun logic puzzles that take some sleuthing to figure out and offer multiple pay outs... First time reproducing it. Figuring out the problem. Figuring out the best fix. Test case fail -> test case pass.
> I know some developers don't like having to fix bugs, and so do whatever they can to get out of it. Claiming there isn't enough information is a great way to look like you're trying to help but not have to do anything.
God, this behavior has annoyed me so much at times. I've worked with a few developers that were not bad overall, but would use the slightest excuse to punt on fixing an issue they were tasked with but didn't want to track down. Regularly weaseling out of tasks like this wastes the time of multiple people and either ends up back with the original dev or gets dumped on a more responsible worker.
> Because I took the time to verify if there were other parts of the code that might be affected in similar ways.
Not looking for other places in the code that are very likely to be affected by the same issue is bafflingly common, in my experience. Although I would say that managers are much more often to blame for this behavior than the devs. Any workplace that puts less weight on fixing an issue well than on artificial metrics like number of tickets closed is incentivizing exactly this type of behavior. Why bother getting criticized for spending all day fixing a simple bug the right way when you can fix 5 different iterations of that same bug and close 5 tickets in the same amount of time?