In case anybody wants an answer to that, here's mine:
Cost estimates are only valuable if a) the amount of time involved is relatively large, b) you expect to learn nothing during that time, and c) the expected-value estimates are precise enough that you can calculate a narrow ROI range.
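To make point (c) concrete, here is a minimal sketch of why imprecise estimates produce a useless ROI range (all numbers are hypothetical):

```python
# With wide error bars on both cost and expected value, the ROI range
# spans from "clear loss" to "huge win" and supports no decision.
# All numbers below are hypothetical.

def roi_range(cost_low, cost_high, value_low, value_high):
    """Best case: high value delivered at low cost; worst case: the reverse."""
    best = (value_high - cost_low) / cost_low
    worst = (value_low - cost_high) / cost_high
    return worst, best

# A "3 to 9 person-weeks" estimate against a payoff worth 4 to 30:
worst, best = roi_range(cost_low=3, cost_high=9, value_low=4, value_high=30)
print(f"ROI somewhere between {worst:+.0%} and {best:+.0%}")
```

A range that wide can't tell you whether the work is worth doing at all, which is exactly when the estimate stops earning its keep.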
Instead, the way I prefer to work is with continuous delivery and small units of work. If you are releasing daily or more often in a context where you can learn a lot via experiments and user tests, then estimates cease to be valuable.
Another way to look at this is in terms of push systems versus pull systems. Scrum was made for push systems, where executives push large idea lumps through a plan/design/build/release process. In that world, estimates are valuable, if only because executives like to appear in control and need signals to tell them when to yell at people.
But in a pull system, you have small, cross-functional teams observing user behavior, developing problem and solution hypotheses. There, delivery is used mainly to test hypotheses, which are then revised based on new data. In this context, batch estimates are generally wasteful because new data and new hypotheses change the basis for estimation too often. There's rarely sense in estimating a month of work when every day you're learning things that change what you'll do and how you do it.
Author here. Sorry you think that. I see the article title has been changed, although I wrote this about 3 weeks ago so I'm not sure why it's been labelled as (2019).
This article focuses on the Timebox issue. I'll be posting the follow up in a couple of weeks, will post on HN... although there's plenty of other content on this site that covers where I'll be going with that solution.
Apologies if it didn't cover everything you wanted - but I find that long articles tend to lose the readers too easily, and I'm trying to keep to a single main point-per-article in my writing as a matter of discipline.
I'll look forward to your thoughts on the next one.
I’ve become extremely apathetic towards Agile. Whatever, this is how a company needs to manage it, fine. I’m not wasting time wrapping my head around it. T-shirt sizing or point-poker stuff or ‘As a developer, I would like to’ stories, like, seriously, whatever. It’s not going anywhere and doesn’t always make sense, so I just suck it up and truck along.
Here are my stand up updates: I worked on the same shit as yesterday mostly, I’ll let whoever needs to know something know something. No roadblocks.
Source: I am a developer like you that has the exact same instinctual attitude about it. Here is why it's wrong.
Project managers need to build schedules so they can coordinate dependent features across multiple teams.
Marketing needs to orchestrate launch announcements and fixed activities that are not quite as inflexible as printing millions of CD-ROMs, but still require dates.
And managers, ultimately, are held responsible for project delivery, so they need to track its progress, understand which things are ahead (lol) or behind (yup), and which areas need help from more people, more seniority, or less scope.
All of this needs to come together for any successful project of any reasonable complexity.
None of this is new, or news to you, of course. Here's the part you're missing: The way things USED to work was that PMs, Marketing, Sales, and Managers would come up with processes that worked for them, and forced it on development teams, regardless of how it made the devs feel.
Agile was an attempt for developers to take ownership of their process bottom-up. Planning poker, standups, sprints: all these ideas were invented by developers to make their lives EASIER, not harder. And none of it was supposed to be a panacea - the one and only way to do things. Each team is supposed to figure out the right balance of what works for them.
So don't be apathetic. You are an active contributor to this process. Take pride in the fact that you have a lot of autonomy about how to define your project structure in an agile environment. Change what you don't like, and figure out what works the best for your team.*
* Giant fucking asterisk: I understand plenty of companies are extremely top-down and inflexible about "AGILE" development processes. They paid some consultants millions of dollars to define a process, and they're going to force their development teams to follow it regardless of what the front-line devs want. If that is what is happening at your org, I'm sorry, that is truly unfortunate. But by and large, I find that even at companies where the devs have plenty of autonomy to define their process, they still grump about the very idea of being forced to provide status updates, work in atomic chunks of effort, and keep their software constantly releasable.
Jira is hell. There are now multiple boards, issues cloned from one board to another, and tons of lanes. We'll probably add more because documenting every detail is now the goal. We are a small team that is spending many hours per week having discussions on process. We even hired a consultant to come in and evaluate things.
Jira seems to help stakeholders and product owners conceptualize the work and timelines but I see zero productivity gains from continuously evaluating and tweaking Jira.
> Here are my stand up updates: I worked on the same shit as yesterday mostly, I’ll let whoever needs to know something know something. No roadblocks.
Working with you might be as painful for others as working with agile is for you...
All these things (scrum, agile, etc.) are just mechanisms for working effectively as a group. Like all tools, some work better in certain situations than others, and they should be adopted deliberately with an understanding of the tradeoffs. This is traditionally where "management" falls short. It doesn't have to be this way. Accept that there are no silver bullets or always-correct methodologies, just a hodgepodge of techniques for helping people work better together.
The issue with Scrum is that everything revolves around 2-week sprints. That doesn’t allow for much in terms of variability, longer-term scheduling, or planning dependencies across multiple teams. Plus, in order to make the commitment/forecast accurate, you have to invest a hefty amount of planning time every 2 weeks.
The Scaled Agile (SAFe) approach is dramatically better here. You plan 8-12 weeks at a time across all teams and invest heavily in planning for 2-3 days of that entire time period. The planning time is synced up for all dev teams so they can communicate and work out cross dependencies.
The other perk here is that your goal is to commit to the 12 weeks, not to every 2 weeks. That allows for more ups and downs, and there’s a built-in 1.5-week buffer at the end specifically to account for overrun.
You end up with a clear picture of what’s coming in the next quarter or so, without flavor-of-the-week course changes outside of a significant emergency.
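A rough sketch of that cadence (the 12 weeks, planning days, and 1.5-week buffer come from the description above; the exact split into two-week sprints is a hypothetical illustration):

```python
# A 12-week planning increment: synced planning days up front, two-week
# development sprints, and a built-in buffer at the end to absorb overrun.

PI_WEEKS = 12
PLANNING_DAYS = 3       # cross-team planning, synced across all dev teams
BUFFER_WEEKS = 1.5      # slack at the end, specifically for overrun
SPRINT_WEEKS = 2

dev_weeks = PI_WEEKS - BUFFER_WEEKS - PLANNING_DAYS / 5
full_sprints = int(dev_weeks // SPRINT_WEEKS)
print(f"{dev_weeks:.1f} development weeks -> {full_sprints} full two-week "
      f"sprints, plus a {BUFFER_WEEKS}-week buffer")
```

The point of the arithmetic is that the commitment lives at the increment level, so an individual sprint running over just eats buffer rather than breaking the plan.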
To be honest, the first time I saw the diagram for Scaled Agile (https://www.scaledagileframework.com/), my BS meter went off big time :). It definitely doesn’t deserve the word “agile” in it. It feels more like the management pipe dream of a perfectly predictable machine for software development.
> The issue with Scrum is everything revolving around 2 week sprints. That doesn’t allow for much in terms of variability, longer term scheduling, or planning dependencies across multiple teams.
That assertion is clearly false. Just because a Sprint is x weeks does not mean that planning needs to focus exclusively on what you can do in x weeks. The x weeks are only relevant to establish a target for deliverables, but naturally those deliverables can be, and more often than not are, aligned with multi-sprint goals.
Heck, even in the Agile buzzword universe there is the concept of epics and spikes.
Agile does not mean the development process needs to be a 2 week circus where planners have the memory of a goldfish. It's a framework to get all stakeholders on the same page and establish operations to accommodate changes.
I’ve been thinking: what if we did six-week cycles, like back in grade school? Five weeks for dev work, one light week for sprint retro and sprint planning. I could sneak it in because it’s a multiple of 2.
ended with a cliffhanger! OP eloquently picked apart the predominant system without offering a clear alternative :)
mostly agree with the premise though, and I'll add that "sprint" is insane nomenclature for something that you do continuously, week in and week out.
my question for the author (and y'all) is how to reconcile the time-waste of estimating with the fact that you do indeed need some estimate of how long something is going to take in order to decide whether you should prioritize doing it (we like RICE [0]). as the designer on a startup team, i'm not going to push for designing / building some crazy VR UI no matter how much we hear a customer asking for it, but i'll definitely design some 3d button transform hover states or other small finesse if the front-end eng says it's easy to implement. i'm sure i can think of a less extreme example, but not today.
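For anyone unfamiliar with RICE [0], it ranks candidate work by (Reach x Impact x Confidence) / Effort. A quick sketch, with the VR-UI vs. hover-state example above filled in with hypothetical numbers:

```python
# RICE score = (Reach * Impact * Confidence) / Effort.
# Reach: users affected per period; Impact: 0.25 (minimal) to 3 (massive);
# Confidence: 0 to 1; Effort: person-months. All numbers here are made up.

def rice_score(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

candidates = {
    "crazy VR UI":            rice_score(200, 3.0, 0.5, 12),
    "3d button hover polish": rice_score(5000, 0.25, 0.8, 0.25),
}

# Highest score first; low-effort polish can beat a high-impact moonshot.
for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.0f}")
```

Even a crude score like this captures the front-end eng's "it's easy to implement" as a small Effort denominator, which is what justifies doing the finesse work.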
My most recent team ran a version of XP, and I found IPM (iteration planning meeting) to be one of the most effective and important things we did. We had a very good back and forth with our Product Manager, with conversations usually going something like this:
PM: Let's go through our prioritized features. First we would like Feature A.
Team: That's probably easy, we can do it in a day or so.
PM: Great. Next we need Feature B. I know this one looks a little more complicated.
Team: Yeah, that's more like a week.
PM: That probably isn't worth it to me. We can probably push this feature back a couple releases, and maybe take out part of it.
Team: Would it help if we just did Part X? That seems like it would give you most of the value and it would be much easier.
And so on. The constant cost-benefit analysis, reprioritization, and rescoping was a great lever and made the team very productive.
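That back-and-forth is essentially informal cost-benefit ranking. A sketch of the same logic, reusing the feature names from the dialogue with hypothetical values and day-costs:

```python
# Rank candidate work by estimated value per day of effort, so the PM
# can descope or defer anything below the cut line. Numbers are made up.

features = [
    ("Feature A",         10, 1),   # "easy, a day or so"
    ("Feature B (full)",  12, 5),   # "more like a week"
    ("Feature B, Part X",  9, 2),   # most of the value, much easier
]

ranked = sorted(features, key=lambda f: f[1] / f[2], reverse=True)
for name, value, days in ranked:
    print(f"{name}: {value / days:.1f} value/day")
```

Rescoping Feature B into Part X is exactly the move in the dialogue: it roughly doubles the value-per-day of the same idea.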
The other 4 things, all related I think, that made us effective were:
• Everyone sitting together. I could go ask the designer or PM a clarifying question at any time.
• TDD. I would constantly start to write a test and realize that something didn't make sense—some assumptions were conflicting, or we needed to do another story before the one I was working, etc.
• Pairing. Many times I'd have written the test, had cleared my plan with the PM, and my pairing partner would point out a different perspective or catch one of the things I described above.
• Frequent releases, about quarterly at first and down to bi-weekly at one point.
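The TDD bullet deserves a concrete sketch: the point is that writing the test first surfaces conflicting assumptions before any implementation exists. The domain, names, and rules below are entirely hypothetical:

```python
import unittest

# Spec assumption 1: orders under a minimum are rejected.
# Spec assumption 2: some customers get a 50% discount.
# Writing the test first is what surfaces the question: does the
# minimum-order check apply before or after the discount?

def order_total(subtotal, discount_rate, minimum=10):
    if subtotal < minimum:
        raise ValueError("below minimum order")
    return subtotal * (1 - discount_rate)

class OrderTotalTest(unittest.TestCase):
    def test_discount_on_a_small_order(self):
        # A 12-unit order discounted to 6 is below the minimum, but the
        # check above ran on the pre-discount subtotal. Is that right?
        # That's the question to take back to the PM before coding on.
        self.assertEqual(order_total(12, 0.5), 6.0)
```

The test passes either way; what matters is that typing it out forced the conflicting assumptions into the open while they were still cheap to resolve.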
In all cases, what we were really doing was optimizing the value of NOT doing things. We were helping the PM understand her costs, but we were also helping her eliminate huge chunks of wasteful work, e.g. building things users didn't want, things that didn't integrate well, and so on. Estimating was a piece of it, but what really made us successful was the buy-in from the business: accepting our estimates and, more importantly, trusting us as partners in a fluid scoping and planning process.
> ended with a cliffhanger! OP eloquently picked apart the predominant system without a clear alternative :)
Author here. Yes, another comment made the same point. But I'm glad you feel what I covered was eloquent. Often, diagnosing exactly what the problem is takes you a long way towards solving it.
As I said in the other reply, I will post the follow-up to this article in a few weeks which will outline where I am going with this: I'm building a multi-part series looking at estimating in software development, so Sprints are an important aspect.
> 'How can we, as software developers, minimise the chance of building the wrong thing?'
Make sure you're hiring the right product people?
I've never "built the wrong thing"; I -have- been told to solve the wrong problem, or been given incorrect information, or been left with unresolvable ambiguities from product that I've just had to arbitrarily pick between.
That may seem like a picky distinction, but it matters. I've never gone back and said "Huh, yeah, you defined the problem really well, and were super responsive when I followed up with you, but I somehow wrote code that did something completely different to what we specced out together". Invariably it's been product not knowing what they actually want, or being hellbent on building something stupid (massive amounts of effort for no revenue impact, etc.).
> “How can we, as software developers, minimise the chance of building the wrong thing?”
I had an immediate histamine reaction to this statement. With a site title like Risk First, it’s like he’s standing right next to the solution but looking the other way. I suppose we’ll see in the next article if he’s being coy setting up a scenario so he can make a dramatic turn or not. I’m worried he’s instead being sincere.
But that question? It’s so much the wrong question. We know from surveys that couching questions the wrong way invites certain answers and precludes others. So I have some questions of my own, that are biased toward my agenda.
Who cares if you built the wrong thing? Or more informatively, why do you care that you built the wrong thing? Is it because you were wrong? Oh boo boo. This is a trap intellectuals set for themselves all the time.
You want to know the stuff I recall from school? It’s often the stuff I got wrong on the test. I get it. It’s a bad feeling. But bad feelings aren’t automatically bad. There are worse things in life and software than bad feelings. Like hollowness. A grey heckscape of empty mediocrity teaches you almost nothing and doesn’t even leave you with a story to tell. Nothing happened at all, and it was horrible.
How do we, as software developers, minimise our investment (time, energy, mindshare) in building the wrong thing? That’s a much better question. And sometimes it means steering toward the stupid, or the crazy, instead of just being averse. Steering past it, not away from it.
Sounds like you're saying that the wrong thing has indeed been built in a project where you were doing the building, but that you're in a great position to blame someone else for any bad products that resulted, since they asked for the wrong thing or were vague.
Personally, I’d say that I don’t care whose fault it is if the product ends up bad; no one wins anything from playing the blame game. Either you fix the process or you repeat the mistakes. This is why I want fast release cycles validating assumptions with end users. Of course there are going to be problems in specs; don’t sit back and claim that you never had any bugs in your code ever! But the question is how you push forward knowing for sure that some assumptions are wrong and some are unneeded. Well, it’s not a huge surprise: you test! Release features incrementally with minimal work put in, then iterate on that. Your reviews are unit tests for stakeholders, and releases are integration tests with end users.
Doing a big waterfall product only to then stand back from a huge tire fire saying “yeah, but the root problem was that the users asked for the wrong thing in the requirements”? That’s just not me.
The triangle relationship of Design <-> Product <-> Engineering seems so hard to get right and maintain. You need experts who can straddle these lines and make sure each branch is on the same page and its concerns are appropriately attended to, given the needs of the business.
I hate blaming others; it's hard to enact productive solutions when battling against people you need to work with. I prefer to blame a lack of process.
How do we as engineers guide the most efficient solutions to the business problems, and in turn make our product development effective, nimble, and less wasteful?
I for one have really focused on developing process around design and engineering communications.
It seems harder to make an impact between dev and product, or design and product; often there is an imbalance of power where one branch is the executive and can order some really dumb and wasteful decisions if it doesn't properly consult with and respect the other branches.
never have i ever been involved with a _successful_ software project where product decisions were made entirely by product people and handed off to design/eng.
in my limited experience, good product builders exploit leverage points in order to solve customer problems / test assumptions quickly. for example, usually the product people don't know that flow X is really easy to restructure like Y and solve the same problem. kind of related, but pushing all the speccing responsibility to product is also kind of boring for me as an IC.
Honestly, Project/Product/Program staff, BAs, and Testers perform an important function when properly aligned and cooperating with development staff. When done well, they keep pesky business-side owners out of the way so the technical workers can get on with development work.
The only way you can fix Scrum is by burning it at the stake. One of the core principles of Agile is to recognize that the best work emerges from self-organized teams. However, most Scrum methodologies completely ignore this and put control back in the hands of senior management by introducing heavy-handed processes (daily stand-ups, retrospectives, bi-weekly planning sessions, scrum masters, etc.). You know you are in trouble when a team introduces a definition of done, and you are in deep trouble when senior management starts discussing burn-down charts of completely made-up story points.
> One of the core principles of Agile is to recognize that the best work emerges from self-organized teams.
Do you have a source for that? I personally have never experienced it. In contrast, I have experienced that the best work emerges when true trust is ingrained in the organization/culture, meaning a) mutual respect on all levels and b) a willingness to criticize the work of others. These two points are not tied to any particular level. Senior management can deliver great value to their teams. It very much depends on how they are involved in the work.
Also note that no team can exist outside of the organization that funds it. A dev team of 5-7 devs + PM + UX will cost a company roughly 500-700k EUR/USD per year depending on location. If you want to work on your own ideas with your team and no management interference, then go and found a startup.
I've done scrum for years and have very rarely had a manager involved in any of those processes, and I have certainly never had senior management involved.
Those processes exist for the benefit of the team and for the team to 'run' itself. There is no reason for management to be there. Stop doing that.
Terretta:
> If the thesis that “90% of everything is waste” then Planning Poker is also a waste, and we should devise a planning process to avoid this.
> In the next article we’ll look at how we might do that.
29athrowaway:
They have something in common:
- Full-time software engineering managers? 0%
- Full-time product managers? 0%
- Full-time project managers? 0%
- Committed engineers focused on implementation, working on flexible time schedules: 100%
The true inefficiencies that managers spend their careers looking for are right there in the mirror.
https://youtu.be/rQKis2Cfpeo?t=114
[0]: https://www.intercom.com/blog/rice-simple-prioritization-for...
mobjack:
Software is an iterative process and it takes some trial and error to get it in a good state.
I always expect product to be wrong about something so it is my job to help identify issues early and plan for the specs to change.
andrekandre:
said slightly more succinctly:
managers deciding that you will do daily stand ups, retrospectives and 2 week sprints is the EXACT OPPOSITE of team self-management