> “At first,” he says, “the numbers were not encouraging. Even the low-scoring managers were doing pretty well. How could we find evidence that better management mattered when all managers seemed so similar?” The solution came from applying sophisticated multivariate statistical techniques, which showed that even “the smallest incremental increases in manager quality were quite powerful.”
As someone without a strong statistical background, this really sounds like "we got data that didn't agree with the point we were trying to make, so we tried a bunch of different ways to look at it until we found the one that matched our hypothesis".
Can someone explain to me why my reaction is wrong? I'm sure it probably is.
Data often contains structural features that aren't observed at first, and without them the data appears to support a hypothesis only weakly. By exploiting that structure, we can see things much more clearly.
An example: say we want to know how a drug affects cognition. We give a simple test to a bunch of people on and off it, blinded, etc. The control group's average score is 74, and the test group's average score is 72. We can use a t test to see if there's a statistical difference, and find there isn't. We can't conclude anything about the drug.
Now imagine we have exactly that same data, but we were careful to give two tests to each person (in a random order, and different tests, of course). We take another look at the data and find out that every single participant scored lower when they were on the drug. With even a fairly small sample size this provides strong evidence that the drug impairs cognition, and probably tells us quite a bit about how much it does.
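The gain from pairing can be sketched with toy numbers (everything below is invented for illustration): each subject has a large individual baseline, and the drug subtracts a small, consistent amount.

```python
import random

random.seed(0)

# Invented numbers for illustration: each subject has a large individual
# baseline, and the drug consistently costs them about 2 points.
n = 12
baselines = [random.gauss(73, 8) for _ in range(n)]
off_drug = baselines                       # control condition
on_drug = [b - 2 for b in baselines]       # drug condition

# Unpaired view: the 2-point gap between group means is swamped by the
# ~8-point spread between individuals, so a two-sample t test finds nothing.
mean_off = sum(off_drug) / n
mean_on = sum(on_drug) / n
print(round(mean_off - mean_on, 1))        # small difference in averages

# Paired view: every subject's own difference points the same way.
diffs = [on - off for on, off in zip(on_drug, off_drug)]
print(all(d < 0 for d in diffs))           # True: all 12 scored lower on the drug
```

Under the null hypothesis of no effect, each subject is equally likely to score higher or lower on the drug, so twelve-for-twelve in the same direction already gives a sign-test p-value of about 2 × (1/2)¹² ≈ 0.0005 two-sided.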
The article is probably talking about multivariate regression; the more important number comes a few sentences later---"retention was related more strongly to manager quality than to seniority, performance, tenure, or promotions". So presumably, they did the same sort of analysis, carefully pairing people who were similar in as many ways as possible, and found out that good managers are more important than seniority in terms of employee retention. The more variables you have, the more even large differences can hide in raw group averages.
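A toy sketch of that matching idea (all numbers invented): compare high- and low-scoring managers within pairs of employees at the same seniority, instead of comparing raw group averages.

```python
# Invented data: (seniority_years, manager_quality, months_retained).
employees = [
    (1, "high", 30), (1, "low", 26),
    (3, "high", 38), (3, "low", 33),
    (5, "high", 45), (5, "low", 41),
    (8, "high", 55), (8, "low", 50),
]

# Group employees into pairs that share the same seniority, so the
# within-pair comparison holds seniority fixed.
by_seniority = {}
for seniority, quality, months in employees:
    by_seniority.setdefault(seniority, {})[quality] = months

# Within every pair, the high-quality manager's report stays longer.
diffs = [pair["high"] - pair["low"] for pair in by_seniority.values()]
print(all(d > 0 for d in diffs))     # True
print(sum(diffs) / len(diffs))       # 4.5-month average retention gap
```

A full multivariate regression does the same thing in one shot, holding several covariates fixed at once rather than pairing on them one at a time.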
As I understand it, and bearing in mind I'm not super statistics girl:
'For example, in 2008, the high-scoring managers saw less turnover on their teams than the others did—and retention was related more strongly to manager quality than to seniority, performance, tenure, or promotions. The data also showed a tight connection between managers’ quality and workers’ happiness: Employees with high-scoring bosses consistently reported greater satisfaction in multiple areas, including innovation, work-life balance, and career development.'
If their scores predicted those things, then they were measuring something real, regardless of whether they went looking for what they wanted or not. The question then becomes one of whether altering those scores alters the dependent variable or whether you've just created a correlation by doing evil to your numbers.
Which... they did look at their results down the line, and I'd imagine they'd have looked at turnover; it'd seem really odd not to, considering the other things they looked at and the obvious business case for doing so.
I thought the whole article was pretty interesting, but I responded to that part exactly the way you did.
One of the big things I retained from my stats classes is the idea that, once you deviate from a pre-specified analysis technique, the strength of your conclusion is strongly diminished. Also, sophisticated statistical techniques are often less robust than simple ones. Maybe some other ideas apply that I can't think of off the top of my head.
On the other hand, the author may not have appreciated the statistical iffiness of that phrasing, and perhaps misrepresented the rigor of the actual analysis.
A model is often a tool to ask the right questions, intended to be tweaked until it does what you want (or you give up thinking you can get what you want if the data won't allow it).
That's actually how one performs experiments and develops theories in the social (and managerial) sciences, such as economics: given data and a hypothesis, one tries to develop a model that fits both and offers demonstrable predictive power for future circumstances and datasets.
Of course, one's math might be wrong, and the model may still be falsified by future data. But that's what makes social science interesting.
It depends entirely on how the analysis is carried out, and whether they can convincingly explain a causal link between the variable and the effect they are measuring. I'm guessing Google's engineers are smart enough to see through statistical BS, if that's what it is.
I keep reading these Google stories that appear flawless examples of exceptional thinking and execution. But heck, at some point, the stories just wash over without effect. Do they have no problem employees? Management feudal battles? Studies and organizational change movements that produce nothing appreciable?
I understand that Google is basically a giant marketing company run by engineers who own the web, so I get the fact that there's a lot of creative spin that goes with anything you're going to see published on the web about them. But hell, it sure would be interesting to poke around behind the scenes and see how things really work.
I'm not saying that to disparage this article or work, it's actually quite impressive. I simply wanted to point out that at some point, a company that does no evil and always is inventing things along the lines of time machines and faster-than-light travel every month starts to be a bit much for a reader to consume. Surely with 40K+ employees and that much money there is some other story here.
Do you recognize that GOOG is primarily a PR firm based on surveillance (revenues, ads, etc.)? Their business is telling stories to influence people (a euphemism for fooling and misdirecting them) using surveillance.
----8<----
Upon this layer of commercial surveillance activity two things, then, happen with respect to government: the complicity and the thievery.
The data-mining companies believed, by and large, with respect to the United States and other governments around the world with whom they deal, that they were merely in a situation of complicity.
Having created unsafe technological structures that mined you, they thought they were merely engaged in quiet—that is to say, undisclosed—bargaining with power over how much of what they had on you they should deliver to others.
This was, of course, a mingled game of greed and fear.
But what the US data-mining giants of the West Coast basically believed, until Mr. Snowden woke them, was that by complicity they had gained immunity from actual thievery.
What sent both Facebook and Google into orbit since we were all last together—or rather, what had come out two weeks ago on the Wednesday that we were last together—was the news that their complicity had bought them nothing.
[...]
So the problem is that, for the data-miners, the situation is not controllable, just as for the American listeners it is no longer controllable.
And it will only be controllable for Us if we bend our attention closely to the environmental nature of the problem that we face because environmental problems—like climate change, or water pollution, or slavery—are not solved transactionally by individuals.

http://snowdenandthefuture.info/PartIII.html
There are problem employees and management feudal battles, but measured as a percentage of my time that I have to spend dealing with them, they are less of a problem in 35,000-person Google than in the 10-20 person startups I've worked in.
We do have problem employees and management feudal battles -- although I have only personally witnessed the latter. I have only worked here, not counting internships before I graduated, but I hear from colleagues who worked elsewhere that these two problems are of a lesser degree here.
There was a story about Google and sex at the workplace, recently. So we are seeing both positive and negative articles about Google. While Google would like to portray itself as something akin to what Toyota was, naysayers would try to do the opposite.
> I keep reading these Google stories that appear flawless examples of exceptional thinking and execution.
I also can't help but add that they seem to go the long way round to figuring out what everybody else already knows in the end.
I can't wait till they do this kind of intense analysis on why their social network efforts seem to grind so many people the wrong way and then arrive at the same conclusion that's already been talked to death out in the world at large.
It's a case study of managers by managers for managers, of course everything went well.
It would take an epically shitty manager to not be able to sweep that kind of failure under the rug and not manage to blame some programmers / QA for their failure.
One thing I would recommend though is to spread out their 360-degree annual reviews into a daily 1 degree review which will net Google an additional 5 degrees of review every year.
I believe management feudal battles only expose another type of problem employee. With an adequate model that can measure and predict future problems, it may just be possible to get rid of the problem employees (and managers who engage in feudal disputes are just that too) or even fix them by removing the factors that created the behavior in the first place.
If anyone can turn management into a real science, it's Google.
"And as the company grew, the founders soon realized that managers contributed in many other, important ways—for instance, by communicating strategy, helping employees prioritize projects, facilitating collaboration, supporting career development, and ensuring that processes and systems aligned with company goals"
That's right, but this describes a "support" function. If they were truly filling the roles in the quote, managers would not be "bosses", but simply "colleagues". A better term than "manager" would be "facilitator". I don't think anyone would argue that "facilitators" are useless. What engineers tend to disagree with is the notion of "managers" as "bosses", with significantly higher salaries, etc.
I'm not sure at Google that managers have significantly higher salaries than all of their reports. Sure, they're expected to have experience, and so are making more than entry-level new grad engineers. But are they making more than individual contributors who have the same level? It's even likely there are managers making less than their direct reports, because one of their reports is considered a top-performing individual contributor.
1. Ask what makes a good manager (the behaviors)
2. Correlate this with an outcome (turnover).
3. Move the needle on other managers.
4. See result.
This is a decent enough model, but you're working on the premise that #1 actually builds a comprehensive model.
These things may be important, but the chance of them being confounded with other variables is high. For example, you'll find that people that floss are healthier. Getting people to floss, however, isn't the solution. It's that people that care enough to floss embody numerous healthy behaviors (and future/intended behaviors).
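The flossing confounder can be made concrete with a toy dataset (all numbers invented): a "conscientiousness" trait drives both flossing and health, while flossing itself does nothing.

```python
# Invented data: (conscientious, flosses, health_score). Conscientiousness
# drives both flossing and health; flossing has no direct effect here.
people = [
    (True, True, 9), (True, True, 9), (True, False, 9),
    (False, False, 5), (False, False, 5), (False, True, 5),
]

def avg(scores):
    scores = list(scores)
    return sum(scores) / len(scores)

# Naive comparison: flossers look healthier...
flossers = avg(h for _, flosses, h in people if flosses)
non_flossers = avg(h for _, flosses, h in people if not flosses)
print(round(flossers - non_flossers, 2))       # 1.33-point gap

# ...but holding conscientiousness fixed, the gap vanishes entirely.
def stratum_gap(trait):
    yes = avg(h for c, f, h in people if c == trait and f)
    no = avg(h for c, f, h in people if c == trait and not f)
    return yes - no

print(stratum_gap(True), stratum_gap(False))   # 0.0 0.0
```

The same trap applies to manager behaviors: a behavior can predict turnover in the raw data yet move nothing when you mandate it, because it was a marker of good managers rather than a cause of retention.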
This is not to detract from what was done, but an exercise like this needs multiple loops to normalize. I'd be interested to compare this approach with just rolling out sound management training that is based on the last 10-20 years of literature.
We've seen a lot of articles on Google about the product side. This is the first I've seen in a while on their internal management. This is important for them to nail to avoid becoming the next IBM or Microsoft. (Both did well in their own ways, but Google aspires to be neither.)
Are there any GOOG alums who would like to comment on the article?
My impression is that data-driven HR is a good start, but that it can leave outliers behind. I have heard anecdotal stories that things are generally well run, but you have to please a lot of people to be promoted as a result of the 360-degree feedback loop and generally flat structure. Again, purely anecdotally, I've heard that there is a lot of email communication required of management, but this is true of most large tech firms (Oracle, IBM, and HP, to name a few).
As an outsider, my impression is that it's a great place to hire engineers from because they've learned the right things. The salespeople aren't as reliably solid, as they may not have needed to be scrappy.
The study contains "examples and descriptions of best practices...details [that] make overarching principles, such as "empowers the team and does not micromanage," more concrete..."
Their "concrete" example is given by an employee who says, "Early on in my role, [my manager] asked me to pull together a cross-functional team to develop a goal-setting process."
I can't think of anything more illustrative of why engineers tend to be dismissive of management than strings of abstract buzzwords that convey little information, such as "Pull together a cross-functional team to develop a goal-setting process."
Every profession has jargon. I'd translate that into plain English as "stick a product manager and a designer in the same room with a couple of engineers, and make sure that priorities are clearly managed at all times (as opposed to trying to enforce rigid deadlines from above)."
'Project Oxygen colead Neal Patel recalls, “We knew the team had to be careful. Google has high standards of proof, even for what, at other places, might be considered obvious truths. Simple correlations weren’t going to be enough. So we actually ended up trying to prove the opposite case—that managers don’t matter. Luckily, we failed.”'
This almost made me laugh out loud. So, what he's saying is, "We knew we could not prove managers matter, so we went ahead and tried to prove managers don't matter, knowing we would fail."
I'm sure there's more detailed data behind Google's implementation, but largely this reads to me as HR showing management how effective or ineffective they are. I didn't see a part where there was large-scale opposition to management by engineers and then a reversal of that thought, proven with data.