The core issue here is reviewer saying “if you change this, you also have to fix other pending issues in codebase”.
That should be the point of push back: “Great feedback, and I appreciate the direction to improve code quality. However, the implications of changing Y mean we need X, Y, Z approvals, and it will take many days. I am making a tech-debt task with what you described, and it will be done in a follow-up PR based on priority and bandwidth.
Let’s focus on what it would take to get this surgical PR out.”
The biggest lesson I learned is to make focused PRs, and to push back when a reviewer asks for scope creep. Mostly, other engineers have been pragmatic.
It’s unrelated to the number of lines: you can reformat the whole codebase without changing anything logically, or change some feature flags, which could have a huge impact. But ONE FOCUSED CHANGE AT A TIME.
> The core issue here is reviewer saying “if you change this, you also have to fix other pending issues in codebase”.
I don't agree. IMO, the worst issue being displayed here is that of the "6 days to change 1 line of code," almost half of them elapsed before an engineer ever looked at the issue. If this is such a high priority issue that the company would "need" (scare quotes intentional) to do a layoff if it's not implemented soon, those 2-3 days before anyone looked at it should never have happened. And that is apparently what passes for "the fast track" in this development process.
And then there's the 2 days at the end of this 6 day period where it looks like nothing happened, because the test plan was deemed insufficient.
"If you change this, you also have to fix other pending issues" only took up 2 hours here. There are at least 2 or 3 other things I could fill in here that I'd flag as core issues with this process before I'd even think about dealing with this part of the process.
Usually I avoid making any improvement that is not directly related to the task at hand. Even adding a missing semicolon risks bringing this to the attention of an overzealous reviewer who will then make me follow a rabbit hole of legacy fixes.
I’d rather create issues silently so I don’t forget instead of even adding a FIXME or TODO. I think this part of reviews is broken. Resolving tech debt should not be a requirement for completing a task, it should be planned.
Scope creepers have no idea the architectural damage they cause. When you get overinvested in one block of code, people will try to route around it. Then you get layers of these, and pretty soon your code is the moral equivalent of Atlanta, GA, which is notorious for the number of ring roads it has.
An even better solution, imho, is that rules should be automated.
New rules are added, and the automation adds rule override comments to every existing violation - which also lets you track them. When you need to rush code out that needs to violate a rule, you add an override comment as well, with your name assigned to fix it.
Then over time, you build a culture that wants to fix these rule violations separate from feature development.
“I am very sorry to hear but my job depends on making sure we follow our procedures. Please follow outlined procedures before your change is promoted to production. Thank you for your cooperation. Sincerely, your IT department.”
That is absolutely true. Most companies have a code review process that is totally full of nitpicking and trivial comments. I once suggested replacing that with static analysis tools to remove such comments and make feedback faster in our org, and was told that such code review is necessary for everyone. It helps people get promoted, makes people feel they prevented code issues, and keeps the code review metrics looking good for the top-level overlords who sit and monitor the number of reviewer comments.
> replacing that with static analysis tools to remove such comments
I dislike excessive usage of these kinds of tools because, not infrequently, you end up making your code worse just to satisfy some dumb tool.
The real solution is just accepting that not all code needs to be like you would have written it yourself, and asking yourself "does this comment address anything that's objectively wrong with the code?" A lot of times the answer to that will be "no".
Sometimes there's a real prisoner dilemma thing going on here too. As a senior reviewing a junior's PR there are things that could be better but sometimes it's just not important. Are their variable names a bit too wordy? Did they have uneven amounts of whitespace between methods? In an ideal world these are more a "feedback for next time if it becomes a pattern" type thing. But as a reviewer, someone is looking at your comments per PR as an indicator for how much guidance you're providing, or going "oh who let that get merged" so you make the comment anyway. And then as the reviewee, they're concerned that leaving comments unaddressed looks bad for their level of responsiveness to feedback, or they think the reviewer might give them bad feedback for pushing back, so they'll then go and make the changes. But now they need someone to approve the updated version and the latency cycle starts again.
Nitpicking is definitely a real thing. Perhaps it's the feeling that they need to find something wrong with the code.
But there are also very real issues that one person thinks are nitpicks but really aren't. Perhaps they can't see the issue with their own eyes, they don't understand it, or they lack the ability to set feelings aside and rethink what they've written.
We've all been attached to code we've written. We probably think it's the most elegant code ever written. But sometimes you've just gotta admit you were wrong, it's not readable or it's flawed and it will be worse off for the codebase.
I've also been called a nitpicker by someone more senior than me for pointing out some very real and potentially problematic race conditions in their code. To me, a race condition is a fundamental issue with what has been written and should be fixed. But to them it was acceptable because they hadn't seen it break naturally yet.
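For what it's worth, the classic check-then-act race is easy to demonstrate even if it has "never broken naturally". This toy sketch (names invented) widens the race window with a sleep so the bug shows up reliably; in production the window is narrower, but it is just as real:

```python
import threading
import time

slot = {"owner": None}
claims = []

def claim(name):
    # Check-then-act without a lock: the race the reviewer was flagging.
    if slot["owner"] is None:      # check
        time.sleep(0.01)           # window where the other thread also checks
        slot["owner"] = name       # act
        claims.append(name)

t1 = threading.Thread(target=claim, args=("t1",))
t2 = threading.Thread(target=claim, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
# Both threads passed the check before either wrote, so BOTH claimed the slot.
```

Wrapping the check and the write in a single `threading.Lock` block removes the race; "it hasn't broken yet" only means the window hasn't been hit yet.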
Static analysis tools and peer reviews can catch different types of issues, though. Just like a statically compiled language can catch some bugs that a dynamic language would not; but not everything. I'm a big fan of peer reviews, and I tend to focus my comments on:
- This won't do what you appear to be expecting it to
- This will prevent <some other feature we're likely to need> from being implemented; at least without a much higher cost
- This will work, but is very hard to understand; it will negatively impact maintenance. Consider doing it this other way, or adding an explanation about what's going on
- This code is ok, but might read/work better <this other way>. I'm not failing the review because of it, but it's worth keeping in mind for future code.
This is what I implemented prior to my team growing beyond 1 person. If linting passes, it’s valid style. If you think the style is problematic, that’s an issue with the linting and formatting and not the PR. In practice this hasn’t been a problem. We’ve made 2 or 3 linting PRs over the years.
Many code bases that I've worked on suffered from too much, and badly engineered, code - not from not having enough code.
Yes, you can go to extremes, and some people are annoyingly nitpicky when it doesn't matter. But generally, code is often a liability, stuff that is hard to understand (or even potentially subtly wrong) will cause issues down the line, and so on. Code review helps mitigate these issues, something which I think is one of the few empirical results we actually have. Plus, it also spreads knowledge about the code base.
The submission is in any case not pointing out that code review itself is bad (or even that having to adapt to a new code style is the problem), but rather the whole bureaucracy around it. Code reviews are good when the turnaround is swift.
I've worked in spaces where static analysis tools were run automatically on every new PR. Trust me, it's not as good as it sounds. Static analyzers are fully incapable of detecting the more nuanced mistakes, so a human touch will still be necessary. Nearly all of the "bugs" found by the static analyzer won't actually be bugs, but you'll have to "fix" them anyway because the reviewer (again, you'll still need one) will demand that all the warnings be cleared before approving your code.
Build a culture that prefers succinct, non-nitpicky code reviews. Static analysis tools only give reviewers more crap to nitpick.
It depends on your team. If you have good juniors, they will relish the feedback. Of course it needs to be substantive. Often junior code will be far more complex and difficult to read than senior code, and code reviews are a chance to get that feedback across.
That said, often teams don't give credit for code reviews, or take into account the (sometimes considerable) time required to write them. To some extent, tools are also to blame - I think all teams would do well to pick a system that integrates with the dominant IDE. Doing code reviews in the same space you write code just makes sense; requiring that you use a friggin' webapp is HORRIBLE.
I _kind of_ understand in the sense that - it can feel bad/like wasted time to do code review when you don't leave a single comment.
But of course, it's very very important to ignore that feeling and do the sensible thing, which is be happy that the code review is smooth and the code can be merged. I can't imagine the existential dread of realizing that half your job is pointing out the same common errors every single time. Tools like clang-tidy, clang-format, etc are so vital for staying sane.
Hah, I have a very different experience where I work. At my org every time I submit a ticket requesting access for somebody to such-and-such a system it typically gets done within a couple minutes regardless of what priority I give. I’ve sometimes wondered if help desk staff snap them up as soon as they come in because they’re fast tickets to close and lets them improve their personal metrics.
It sucks when you say it like the title: "6 days to change 1 line of code",
But... The system improved in a few other ways:
1. The setting became configurable via the parameters table instead of being hard coded.
2. Auditing was introduced to track changes to that setting.
I'm not trying to defend bureaucracy, because I truly hate that aspect of large organizations, but just point out that additional value was created during those 6 days beyond what was originally set out to be achieved.
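A rough sketch of what those two improvements amount to, using an in-memory stand-in for the parameters table and audit log (the class, keys, and fields here are illustrative, not from the article):

```python
import datetime

class ParameterStore:
    """Stand-in for a DB-backed parameters table with an audit trail."""

    def __init__(self):
        self._params = {}
        self._audit = []   # stand-in for an audit table

    def set(self, key, value, changed_by):
        # Record who changed what, from which old value, and when.
        old = self._params.get(key)
        self._params[key] = value
        self._audit.append({
            "key": key, "old": old, "new": value, "by": changed_by,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

    def get(self, key, default=None):
        return self._params.get(key, default)

store = ParameterStore()
store.set("backlog_months_threshold", 3, changed_by="ed")   # previously hard-coded
store.set("backlog_months_threshold", 4, changed_by="ed")   # the "1-line" change
```

The trade-off the thread is circling: future changes become a data update instead of a deploy, but they also leave source control, which is exactly the point of contention below.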
Incidentally, this is why you have to bake in a set amount of overhead into your estimates, and need to take this red tape into consideration if you do story pointing.
The only reason the parameters table would be useful is because making changes to the code had so many things blocking it. Similarly, auditing for this sounds unnecessary because it used to be in code which means that you had source control as your audit trail.
So your two wins were basically a "win" of not dealing with extra ceremony around code changes, followed by a "win" of recovering the functionality loss of the first "win" because future changes to this wouldn't go in the code.
Yes, but it also did something that is potentially way more dangerous than the original ask. I would argue that moving a hard-coded value out into a parameter during an immediate outage or real production concern is stupid. There are way more potential pitfalls there. IMO, this should have been handled with:
OP: Please accept the 1-char PR, this is urgent. I just created a ticket to track the requested enhancements. Let's address the production concern first, then I will get to the rest after.
Reviewer: LGTM!
This is actually where seniority pays: if most of your engineering base can't navigate between rules and guidelines, the organization is crazy.
Step 1: determine true priority. How long can this take before it affects jobs? Everyone needs that in mind.
If a week won’t affect anyone’s jobs, follow the process or minimally alter it. If people are on furlough because of IT, you have everyone needed in the same room (physical or virtual) until the problem is solved.
We don’t have that context here.
But if Ed and the entire chain don't have that context, that's a system failure. The senior would likely just insist on a second ticket to fix it right afterward if he knew someone's rent check was on the line. (And if not? That's also a management issue to solve.)
Auditing requirements could have been covered by version history for the file containing that hardcoded value, I think? They would have other problems if they didn't have version control.
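As a sketch of that point, version control already gives you an audit trail for a hardcoded value: who changed it, when, from what, and why. The repo, file name, and constant below are invented so the commands are runnable:

```shell
set -e
# Build a toy repo standing in for the real codebase.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com
git config user.name dev

echo '#define BACKLOG_MONTHS_THRESHOLD 3' > config.c
git add config.c && git commit -qm 'initial backlog threshold'
echo '#define BACKLOG_MONTHS_THRESHOLD 4' > config.c
git commit -qam 'raise threshold to 4 months'

# The "audit": full history of the line holding that value...
git log -L '/BACKLOG_MONTHS_THRESHOLD/,+1:config.c'
# ...and who last touched it.
git blame -L 1,1 config.c
```

With commits tied to reviewed PRs, this is arguably a stronger audit trail than a hand-rolled audit table, which is the parent's point.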
This is a story where a one line change of a hardcoded value actually went well.
I could imagine a scenario where somebody stored the number of months of backlog as a 2-bit value: 0, 1, 2 or 3, you know, to be smart and clever. This may not appear as a problem during testing because it may be hidden many layers down, in some downstream service that is untested. Maybe in some low-code automation service....
Changing it to 4 would mean the backlog is 0. Who knows what the consequences might be. Would that service go and cancel all jobs in the production queue?
Would it email all customers mentioning their stuff is cancelled?
I get that this is a seemingly easy change, but if a change of policy is expressed to the software team as an urgent problem, it seems like the management team needs better planning, rather than randomly trying to prioritize issues.....
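The wrap-around described above fits in a few lines (the field name and the packing layer are hypothetical, just to make the failure mode concrete):

```python
# A "clever" packing layer deep in some hypothetical downstream service:
FIELD_BITS = 2
MASK = (1 << FIELD_BITS) - 1   # 0b11: the field can hold only 0..3

def pack_backlog_months(months):
    # Truncates silently instead of rejecting out-of-range values.
    return months & MASK

print(pack_backlog_months(3))  # 3 -- fits, so every existing test looks fine
print(pack_backlog_months(4))  # 0 -- the urgent one-line change wraps to "no backlog"
```

This is the kind of bug no amount of review of the one-line diff itself will catch, which is an argument for the test-plan scrutiny the thread complains about.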
Code review starts out with good intentions. But some gatekeeper inevitably sets up shop and starts rejecting everything for trivial reasons. Says they're interested in keeping up 'code quality'. But nothing is worse than keeping buggy code around long after the fix is ready, or delaying features so nobody can try them.
I'd recommend a process where comments are allowed but reviewers cannot block commits. Each developer is trusted to be careful, make such changes as are pertinent to the task. Use CI as well and depending on the team it can all go very well.
My comment is more a meta commentary on factory workers vs software developers.
This company leader is willing to lay off factory workers because of 10% underutilization. He can tweak some variables to increase productivity, but ultimately the choice is either full utilization or unemployment. He can do this largely because, I assume, these workers are replaceable and can be rehired during the busy season, and because the profit generated per employee doesn't allow for any inefficiency.
I work as a software developer. Our underutilization has to get well over 90% before anyone even considers getting rid of us. Many of us are putting in four-hour work weeks. No one manages our minutes, our bathroom breaks, etc.
We are in a period of major capitalization of software. It won't last forever. Eventually the major infrastructure of the IT world will be built and the industry will shift to a maintenance mode. Most of us won't be needed, we will become replaceable and the profit we generate in maintenance mode will be insignificant to what we see today.
Factory workers are usually fired within minutes or hours of detection of low productivity in an individual worker. In our lifetime, I believe this will start happening to software developers.
Anecdotal experience: after some years on a team with formal code reviews, I moved to a team/company without code reviews, where everyone is free to commit/merge to any branch. When joining the company, I had somewhat mixed feelings about this. As it turned out, it felt so refreshing and empowering that I became productive within a few days.
Using your code review process to hold changes ransom until they meet some lofty, ever-changing team ideals is dysfunctional.
"Upgrade as you go" policies leave long tails of half-completed conversions that make the codebase harder to ramp new devs on. You'll also never finish the conversion because there's no guarantee that product focus will walk every part of the codebase with any regularity. Some product areas get left alone for years and years and years.
It's either important enough to cut over to the new policy in one concerted project, or it's not important.
Reading this as a problem with code review is just wrong. The problem is that a company let its processes, which are made up internal barriers, get in the way of its principles.
Every process needs an escape hatch. Changing something which will prevent layoffs should trigger every escape hatch.
A few years ago, working on a comments team, I created a function to do an A/B test using the A/B-test team's system. Someone else did the implementation, not me. Someone didn't like the function, as it didn't use the company's cache system (the A/B team had previously explained all the reasons why the cache is not used, etc).
I started to participate in several meetings to explain that it does not use the cache system because bla bla bla.
A week or two after a few meetings (and a lot of money spent), I got it deployed.
It's the same company that created a new meeting to discuss wasted time in all the other meetings....
At the time I thought: This is abnormal, but it is actually an extremely common scenario in many companies.
Shirley's review was accurate for a non-urgent MR, but it was bogging down an urgent ticket. Ignoring those comments would not have increased risk, but the policy is important for managing technical debt. A simple resolution in this case is to move the comments into technical-debt tickets, assign them to Ed for later, and approve the MR.
Yet during interviews you're expected to write 50 or 100 lines of code with tricky algorithms, under pressure, in 30-45 minutes, from design to implementation to unit tests and corner-case considerations. Also, you must keep talking while coding with your interviewer, so you behave like a team player who is collaborating with your future teammates on the fly.
All of this has nothing to do with real-life coding; it's more like a coding-monkey stress test to me. You can only perform well by doing daily leetcode for months, and it's geared towards young people who have the time to practice.
I recall spending two weeks to add 50 lines of code to the Linux kernel; 95% of the time went to figuring out how that kernel subsystem works and where to hook the code. But that matters zero to today's FAANG interview process; the 5% of time for the 50 lines of code is all that matters.
I'm still unclear how people get bitten by this kind of thing.
The way I've always operated is that nothing in a code review can stop a merge unless it's a rule written down before the review, or it's a blatantly obvious miss of requirements or functionality.
I love seeing people's comments in reviews about how things can be improved, and learning new approaches or context, but if there's no reference to a written rule I should have known about beforehand, I'm not changing it if I don't agree with it.
There's another failure mode not described here: "product management doesn't understand the change".
I'm in the (perhaps enviable, perhaps unenviable) position of getting a small subset of my marching orders directly from our CTO. Engineering at my company is organized in such a way that, while engineers report to the CTO, product management (part of engineering) does not. When the CTO asks me to priority-override a new feature, or even a small change, I've sometimes gotten into arguments with product management over how this wasn't planned for, and how they need time to adjust schedules and collect requirements before I can begin work. Unfortunately, I can't use the "talk to your boss" excuse, due to that quirk of the reporting structure.
One memorable example of this was a feature request which came in hot in mid-June, which I had code complete later that day. We actually shipped it a few days ago. To product management's credit, they _did_ come up with requirements that were not met by my initial change, but I question whether releasing fast and iterating, or releasing correct after four weeks, was better for the customer.
Not sure I see the point; rules are in place for a reason, and it's not like changing 2 lines would have taken 12 days.
This seems like it changes something with potentially super-high impact, 6 days from request by the CEO to release to production is a pretty good release time.
Yep. Nine months to change a single buggy line of cache-line invalidation in the translation lookaside buffer handling of the IMUL opcode in QEMU 4.0.
imul eax, [ecx+0x41], 0x10
That entailed hundreds of hours of single-stepping through that opcode in the Linux kernel, using an indirect operand pointing toward its own opcode (self-modifying code).
Even the extraordinary Fabrice Bellard (author of QEMU) admitted that it was broken and did a total rewrite, which fixed tons of other issues.
This conflates two kinds of slowdowns. Post-implementation stuff like thorough peer reviews and automated testing followed by manual QA is usually a net gain on a long enough timeline. The damning and soul-draining part is pre-implementation -- how many people need to agree that the line of code will be changed and how many people must opine on the change before it can be shot over the Dev | QA wall.
The worst place I ever worked at had mandatory "ticket refinement" - a feature or a bug couldn't be worked on until it had been through a "refinement session", in which everyone had to chime in, and the ticket wasn't considered refined until everyone on the team fully agreed with the implementation. Naturally over time, this devolved first into writing pseudo-code in Jira tickets and then into near-production code in a google doc overflowing with comments. Leaving that company I told my manager "this is a team of 4 'senior' engineers, I'd suggest you get one proper senior engineer and 3 typists". Nowadays, you could probably shove minutes from those meetings into copilot and call it a day.
Of course, the flipside of this was that comp was about on par with the market, and the perks were top-tier. If you're okay with doing the bare minimum while spending ~20-30 hours a week on total nonsense, it's probably the dream job.
foogazi | 2 years ago:
Whenever that happens, just add a TODO ticket. Unblock production, and the system is not worse for it.
jpollock | 2 years ago:
1) O(N^2) in the hottest code we have.
2) O(N^3) from someone else in the same area.
3) Filling a cache on first access - this causes the system to page (high latency!).
4) Hunting for data themselves and creating null pointer errors.
5) Hunting for data themselves and missing all the fun edge cases.
6) Duplicating a feature that already existed.
I'm guessing (4) could have been caught by static analysis?
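Item (1) is usually the accidental kind: a list membership test inside a loop. A sketch of the pattern and its O(N) fix (function names invented for illustration):

```python
def dedupe_slow(items):
    """Accidentally O(N^2): 'x not in seen' scans a list per element."""
    seen = []
    out = []
    for x in items:
        if x not in seen:      # O(N) scan per element -> O(N^2) overall
            seen.append(x)
            out.append(x)
    return out

def dedupe_fast(items):
    """Same logic, O(N): a set gives O(1) average membership lookup."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

Both return identical results, which is why this kind of issue sails through tests and only a reviewer who knows the code path is hot will flag it.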
politelemon | 2 years ago:
This is completely unrealistic, security teams would never respond so quickly.
pif | 2 years ago:
> It sucks when you say it like the title: "6 days to change 1 line of code"
Which just states the fact.
> The system improved in a few other ways
Which were not required.
[+] [-] omoikane|2 years ago|reply
[+] [-] cjalmeida|2 years ago|reply
[+] [-] hmaarrfk|2 years ago|reply
I could image a scenerio where somebody stored the number of months of backlog as a 2 bit value. 0, 1, 2 or 3, you know, to be smart and clever. This may not appear as a problem during testing because it may be hidden many layers down, in some downstream service that is untested. Maybe in some low code automation service....
Changing it to 4 would mean the backlog is 0. Who knows what the consequences might be. Would that service go and cancel all jobs in the production queue? Would it email all customers mentioning their stuff is cancelled?
I get that this is a seem gly easy change, but if a change of policy is expressed to the software team as an urgent problem, this seems like the management team needs better planning, and not randomly try to prioritize issues.....
JoeAltmaier|2 years ago
I'd recommend a process where comments are allowed but reviewers cannot block commits. Each developer is trusted to be careful and to make only the changes pertinent to the task. Use CI as well, and depending on the team, it can all go very well.
hermannj314|2 years ago
This company leader is willing to lay off factory workers because of 10% underutilization; he can tweak some variables to increase productivity, but ultimately the choice is either full utilization or unemployment. He can do this largely because, I assume, these workers are replaceable and can be rehired during the busy season, and because the profit generated per employee doesn't allow for any inefficiency.
I work as a software developer. Our underutilization has to get well over 90% before anyone even considers getting rid of us. Many of us are putting in four-hour work weeks. No one manages our minutes, our bathroom breaks, etc.
We are in a period of major capitalization of software. It won't last forever. Eventually the major infrastructure of the IT world will be built and the industry will shift to a maintenance mode. Most of us won't be needed, we will become replaceable and the profit we generate in maintenance mode will be insignificant to what we see today.
Factory workers are usually fired within minutes or hours of the detection of low productivity in an individual worker. In our lifetime, I believe this will start happening to software developers.
a13o|2 years ago
"Upgrade as you go" policies leave long tails of half-completed conversions that make the codebase harder to ramp new devs on. You'll also never finish the conversion because there's no guarantee that product focus will walk every part of the codebase with any regularity. Some product areas get left alone for years and years and years.
It's either important enough to cut over to the new policy in one concerted project, or it's not important.
habosa|2 years ago
Every process needs an escape hatch. Changing something which will prevent layoffs should trigger every escape hatch.
zac23or|2 years ago
I started to participate in several meetings to explain that it does not use the cache system because bla bla bla. A week or two after a few meetings (and a lot of money spent), I got it deployed.
It's the same company that created a new meeting to discuss wasted time in all the other meetings....
At the time I thought: This is abnormal, but it is actually an extremely common scenario in many companies.
synergy20|2 years ago
All of these have nothing to do with real-life coding; it's more like a coding-monkey stress test to me. You can only perform well by doing daily leetcode for months, and it's geared towards young people who have the time to practice.
I recall spending two weeks to add 50 lines of code to the Linux kernel; 95% of the time was spent figuring out how that kernel subsystem works and where to hook the code. But that matters zero to today's FAANG interview process: the 5% of the time spent on the 50 lines of code is all that matters.
arcbyte|2 years ago
The way I've always operated is that nothing in a code review can stop a merge unless it's a rule written down before the review, or it's just a blatantly obvious miss of requirements or functionality.
I love seeing people's comments in reviews about how things can be improved, and learning new approaches or context, but if there's no reference to a written rule I should have known about beforehand, I'm not changing it if I don't agree with it.
don-code|2 years ago
I'm in the (perhaps enviable, perhaps unenviable) position of getting a small subset of my marching orders directly from our CTO. Engineering at my company is organized in such a way that, while engineers report to the CTO, product management (part of engineering) does not. When the CTO asks me to priority-override a new feature, or even a small change, I've sometimes gotten into arguments with product management over how this wasn't planned for, and how they need time to adjust schedules and collect requirements before I can begin work. Unfortunately, I can't use the "talk to your boss" excuse, due to that quirk of the reporting structure.
One memorable example of this was a feature request which came in hot in mid-June, which I had code complete later that day. We actually shipped it a few days ago. To product management's credit, they _did_ come up with requirements that were not met by my initial change, but I question whether releasing fast and iterating, or releasing correct after four weeks, was better for the customer.
iLoveOncall|2 years ago
This seems like it changes something with potentially super-high impact; 6 days from request by the CEO to release to production is a pretty good release time.
egberts1|2 years ago
Even the extraordinaire Fabrice Bellard (author of QEMU) admitted that it was broken and did a total rewrite, which fixed tons of other issues.
https://github.com/unicorn-engine/unicorn/issues/364
activitypea|2 years ago
The worst place I ever worked at had mandatory "ticket refinement" - a feature or a bug couldn't be worked on until it had been through a "refinement session", in which everyone had to chime in, and the ticket wasn't considered refined until everyone on the team fully agreed with the implementation. Naturally over time, this devolved first into writing pseudo-code in Jira tickets and then into near-production code in a google doc overflowing with comments. Leaving that company I told my manager "this is a team of 4 'senior' engineers, I'd suggest you get one proper senior engineer and 3 typists". Nowadays, you could probably shove minutes from those meetings into copilot and call it a day.
Of course, the flipside of this was that comp was about on par with the market, and the perks were top-tier. If you're okay with doing the bare minimum while spending ~20-30 hours a week on total nonsense, it's probably the dream job.
short_throw|2 years ago
But actually this is a story about the process working.