I deal with this on a regular basis in the research world: there is a really toxic and dangerous bias toward constructing the positive, i.e., making things seem or appear to affirm desired outcomes or other group biases, and even outright fabricating such outcomes.
It is not research, let alone science, when you are striking a positive tone; it’s fraud.
This is a really important point. Executive-land is all about image and framing, so I can totally see how execs think it's not a big deal to "just make the case" or frame things less negatively.
But while that's ok in a slide deck, it's _not_ ok in a research paper. Researchers aren't (shouldn't be) accountable to business outcomes, but rather to the truth of what they argue.
It's tough to see this play out in real time.
From what the article mentions, it looks like the sensitive subjects they worry about are those which can get a company burned at the stake on social media these days. It's not great, but it's an unsurprising effect of the current culture wars.
That isn’t just true in the research world, it’s even worse in the corporate world. Something about human nature where naysayers don’t get much acceptance.
A bit of a slippery slope you've slid down there: from "strike a positive tone" to "fabricate outcomes" (data?). The latter is clearly fraud, and qualitatively different from the Google manager's suggestion.
Every specialty has taboos, and learning what they are and how to avoid them is part of being successful at the job. As a physics graduate student I made an impassioned case to a high energy researcher that the money spent on high energy research was low yield and should be pushed to the biological sciences ... suffice it to say he did not take my points on their merits. Upton Sinclair’s famous line, “It is difficult to get a man to understand something, when his salary depends on his not understanding it,” applies in the sciences.
This seems very different to me from hiring researchers doing work in fields that directly impact your business and telling them their work will not be censored. And then censoring them from a position of power, since you now provide their paycheck.
Keep in mind that this is a new, recently instituted process. During the interview process when these researchers were hired, this policy did not exist. Perhaps they were naive to believe that such a policy wouldn't be instituted later, but I think that's a different discussion.
I am a physicist, and I would say that, perhaps, you don't understand why we "waste" money on high energy physics. It's very similar to the reason why rich people buy useless status symbols, when they could instead buy useful things. It's essentially telling everyone, "I am so rich I don't care about money anymore and can afford to waste some of it on this useless thing".
In science, we as a society throw money at something like high energy physics because we are telling ourselves, "We love knowledge so much that we will throw money at high energy physics, which has little economic benefit to us" [1].
In other words, we can't prove that we love knowledge for its own sake if we only invest in fields which have economic benefits. We have to invest in fields with no benefit to prove to ourselves that we still care. That has a deep and lasting philosophical impact on society.
[1] If you read grant funding agencies' internal documents, you will learn that the primary economic benefit of fields like high energy physics is training people on hard problems, who then go and apply those skills and raise the bar in other fields.
How did you know, as a graduate student no less, that research money spent on high energy research was low yield compared to the biological sciences? I think that's a pretty bold and naive statement to make.
For example, research on quantum-mechanical light-matter interaction was considered not very practically relevant in the 80s and 90s. In the 2000s the results were picked up by the superconducting qubit research community and used to great effect to build a viable approach to quantum computing.
The outcome of research is inherently hard to predict, and telling people that their field of study is "low yield" because it's not the hot topic of the day seems quite naive and arrogant to me.
Reminds me of an interview a cosmologist from my university gave to the CBC around the time the Higgs boson was confirmed at CERN. When asked about the real-world consequences of the discovery, he answered "None."
There are many comments here that identify the work of corporate-sponsored researchers as fraud and not research.
I have the unpopular opinion, and I vehemently disagree.
There is no altruistic, sponsored research, no matter the sponsor, be it corporate, government, non-profit, or the Holy See.
There is always a specific outcome the sponsor hopes for. The question is how much pressure the sponsor puts on the researcher not to publish failed or unfavorable outcomes. And we all know the majority of failed research never gets published.
Let's presume that we accept sponsorship. What is an acceptable level of pressure? I think this is very subjective.
To be clear, falsifying by modification, omission or addition of data is fraud. Drawing the wrong conclusion is not.
The only time I have seen altruistic, "clean" research, it was performed by individuals as a hobby. Once there is more than one person involved, there will be some preferred outcome. Nature of humanity.
Can you name a single occurrence where a publicly funded university in the US or Europe forced a researcher to retract a paper because it conflicted with the university's commercial interests?
From my time in academia I also don't recall the university or research institute asking to check a manuscript before it was submitted to a journal; I'd even go as far as saying that this was unheard of, as it would go against the spirit of research.
And in terms of research outcomes, the funding bodies usually want publications in high-impact journals, public acknowledgment, or results that can be commercialized. There are disciplines where commercial funding plays a larger role (e.g. pharma, medicine), but for most publicly funded research it's a rather bold assumption to claim that the funding is conditioned on achieving a particular result rather than on the general impact of the results.
This is nonsense. Publicly funded research does not have the same issues as commercially funded research, full stop. A properly designed experiment (one capable of receiving public funds from the DOE, for example) will produce useful data regardless of the outcome. Not observing something vs. observing something, with a well-quantified uncertainty in either case, are equally important results.
If a certain experimental result means the experiment is less worthy of publication, then it should not have been funded in the first place. Publicly funded research must meet this standard, and in fact funds from all the governmental sources I'm familiar with come with reporting requirements attached.
Good research is interesting regardless of the result. This is the core concept the corporate world fails to grasp.
I think there is a useful distinction to be made between the impact that financial support has just by existing, and the leverage that can give the sponsor if they choose to use it.
If I offer a $1MM grant to study the use of drug X on disease Y, I am being opinionated about where time should be spent and which diseases are "important". This is not neutral, and can be very political.
However, this is in a different universe than if I offer the grant with strings attached like editorial control on the paper(s), or a veto on publishing.
As you note, most failed research doesn't get published, but vanishingly small amounts of that are due to outside pressure; it's mostly for more mundane reasons.
Research is sponsored by, and done by, people, hence it is flawed. But the best of it places value on the rigor of the research regardless of outcome, and certainly does not attempt to silence results that are not favourable to the sponsor's goals. Commercially sponsored work should aspire to this in the same way that public research does: the specific outcome the sponsor hopes for is progress in our understanding. In both cases, sometimes the results are damaging to other projects you have (business goals, policy programs, etc.), which is the real test.
> The only time I have seen altruistic, "clean" research was performed by individuals as hobby. Once there are more than one person involved, there will be some preferred outcome. Nature of humanity.
I also think it's worth noting that even though "clean" research does exist coming from individuals, there's also a significant chunk of individuals whose research is even _more_ predetermined for a specific outcome.
I've definitely seen this in academia. Some institutions will bias in favor of supporting high-visibility research over "meat and potatoes" base-line research (such as reproduction of other experimenters' results, or chasing down un-sexy problems of elaboration or exploration of an information space). Their incentives are to chase grant money and get published, even if the published results aren't thorough or rigorous but will capture column inches (leaving someone else to do the un-sexy work of verifying the claims from a single experiment run).
It leaves gaps in the robustness of human knowledge, a bit of Swiss cheese at the edges that wouldn't be necessary but for the way we reward people for cutting the edge instead of filling in behind it.
This is just not true, and it sounds like someone talking about a field they have no idea about. The NSF, for example, mostly doesn't give a shit about your results; all that matters to them is that you publish something (productivity).
I have worked in both academia and national labs and never heard about funders even caring about my results. Far more prevalent is that funders want your publications to be in high IF journals.
I published several papers while working at Google on Chrome. There is a publication review process that at that time was not onerous at all. In fact, all of my publication submissions were reviewed post facto and basically rubber-stamped. All, with the exception of one, our publication on Spectre, which was carefully vetted to not disclose any sensitive information about further vulnerabilities or violate NDAs with partners. So my experience for publishing at Google, admittedly in a non-controversial area of Programming Languages, was positive.
That said, the corporate culture is bending authoritarian and Orwellian. The new changes are just a further evolution in that arc. This is part of the reason I left in 2019.
Public institutions tend to observe the same policies, even when there's no direct commercial interest.
For example, if you join a Math department and start publishing papers that cast the department's research interests in a negative light, that's often a good way to get fired.
I mean, seriously, go ask just about any professor of any topic at any university if their field is undervalued or overvalued. They'll almost universally tell you that their field is undervalued, underappreciated, and more important than people think. They deserve more funding, and you should definitely consider majoring in their field.
> The manager added, “This doesn’t mean we should hide from the real challenges” posed by the software.
If I ever plucked a single sentence out of a long post and called attention to it context-free, I could probably make whoever posted it look fairly bad. But in the context of news articles, it becomes the headline? Was the context "Try to strike a positive tone... or else you're fired", or "Try to strike a positive tone... you've shown that these problems are solvable"?
If you want independence, go to academia instead (though it is not easy to obtain there either).
Jeff Dean’s letter with details: https://docs.google.com/document/u/0/d/1f2kYWDXwhzYnq8ebVtuk...
This whole saga is such a PR “own goal” by Google. The vast majority of corporate research labs just don’t fund research doubting their own ethics in the first place. Company leadership should choose an ethical stance and defend it. It’s an abdication of responsibility to say, well the ethics are an open question, we’re going to fund some research into it. And now Google is being punished for their abdication of responsibility. If they had simply established a corporate policy that said, we believe facial recognition is good when XYZ conditions are met, and defended it, they wouldn’t be in this situation.
A lot of the stuff really is an open question though.
For example if you’re training models on historical data to determine credit worthiness, you can end up with models that penalize people based on race, or zip code (as a proxy for race), or names (same thing again).
How to correct for this is not obvious; removing the fields from the data usually just has the bias reappear in some other way that’s correlated with race.
My understanding of the current approach is to leave the fields in and then check how out of whack things are; there are a few approaches to try to correct for this (also an open question). A rough sketch of such a check follows.
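As a hedged illustration (entirely synthetic data and hypothetical column names, not any lender's actual pipeline): even with the protected attribute excluded from the features, a correlated proxy like zip code can reintroduce the skew, which you can measure by holding the attribute out-of-band and comparing approval rates per group.

    # Minimal sketch (synthetic data): train a credit model WITHOUT the
    # protected attribute, then use that attribute out-of-band to measure
    # how skewed the model's approvals are across groups.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)             # protected attribute (0/1)
    zip_code = group + rng.normal(0, 0.5, n)  # proxy: correlated with group
    income = rng.normal(50 + 10 * group, 15, n)  # historically skewed by group
    repaid = (income + rng.normal(0, 10, n) > 55).astype(int)

    X = np.column_stack([zip_code, income])   # 'group' deliberately excluded
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, repaid, group, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    approved = model.predict(X_te)

    # Demographic-parity check: approval rate per group, and their ratio.
    rates = [approved[g_te == g].mean() for g in (0, 1)]
    print(f"approval rates: {rates[0]:.2f} vs {rates[1]:.2f}")
    print(f"disparate impact ratio: {min(rates) / max(rates):.2f}")

The 0.8 cutoff often applied to that last ratio echoes the "four-fifths rule" from US employment law; it's a heuristic red flag, not a definition of fairness.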
Modeling risk on race is unfair because, while the color of your skin may be strongly correlated with being a bad (or good) credit risk due to a whole host of complex historical reasons that bias the data, it’s not because of someone’s skin color that they should be considered a bad (or good) credit risk. You want the models to judge based on more causal details, not on data that’s biased by historical favoritism or exclusion.
Doing this poorly can further cement the pattern in the data and make it worse over time, in addition to just being less accurate.
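Continuing the sketch above, one family of corrections (post-processing per-group thresholds; only one of several approaches, not Google's or any lender's actual method, and done naively it can misfire in exactly the way just described) equalizes approval rates after the fact:

    # Hypothetical post-processing correction: per-group score cutoffs chosen
    # so each group's approval rate matches the overall rate. Reuses `model`,
    # `X_te`, and `g_te` from the sketch above.
    scores = model.predict_proba(X_te)[:, 1]
    target = (scores >= 0.5).mean()           # overall approval rate to match

    adjusted = np.zeros(len(scores), dtype=int)
    for g in (0, 1):
        mask = g_te == g
        cutoff = np.quantile(scores[mask], 1 - target)
        adjusted[mask] = (scores[mask] >= cutoff).astype(int)

    rates = [adjusted[g_te == g].mean() for g in (0, 1)]
    print(f"post-adjustment approval rates: {rates[0]:.2f} vs {rates[1]:.2f}")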
That said, you want people working on this looking for pragmatic solutions and not just promoting their own agenda while spinning things to seem worse than they are with emotionally loaded language targeting the company. Making demands and then going public on Twitter about your failed ultimatum doesn’t inspire much confidence.
We already know that "self-regulation" doesn't work. It made sense to me that when I was working at Google I didn't speak badly about the company I was getting my paycheck from.
Alphabet is still a great company, but that doesn't mean that it doesn't need checks and balances from outside.
The directive to strike a positive tone could be nefarious or benign; we can’t tell from the info in the article. Every research paper has a tone that depends on the personality of the author - I’ve seen it even in my dry field of quantum optics. Every result has many implications, and how you discuss them, which implications you emphasize, and the language you use will affect the tone, even if you are not explicitly furthering an agenda. A given article could have a range of legitimate tones, depending on who wrote it. Asking a researcher to be on the positive end of the legitimate spectrum is benign. It can of course go too far.
Consider the historical parallel to the tobacco industry’s research departments. And asbestos. And lead. And talc. And etc. The researchers found that the products were harmful, but their results were squelched. “Do no harm,” indeed.
It seems to be an unfortunate reality of doing research for anything other than the sake of research.
My physics teacher worked for some kind of government air pollution modelling group in the past, and effectively said that they would often be told "Here's the result, find us data to make it work".
Wasn’t the requirement to add a mention of the things Google has done to improve the situation, rather than a squelching?
It’s like if an asbestos company did a bunch of research on healthy insulators, then found that the original products were in fact harmful. Before publishing it, the bosses demand mention of the prior work towards healthy insulators.
tl;dr - if we don't create it, control it, and make sure everyone knows about it, someone else is going to make all those decisions for us. It's coming regardless.
These are not equivalent comparisons to me. Tobacco was always harmful (inhaling smoke in general is just bad), and the others could be harmful through improper use but safe otherwise. And none of what you mentioned will have the impact AI will.
AI's potential is equally good and equally bad, and any of the bad is purely because someone or some group employs it that way.
The real danger of AI isn't the obvious tinpot dictators using it to drive war, but politicians and government officials hiding behind the term as an excuse for their bad decisions. The more you demand of your government to give you stuff or services, the more likely you are to be subject to this tech.
I say this each time the subject of AI abuse, facial recognition or similar comes up.
This is not something that cancel culture will stop. Nothing is going to stop it from coming. So the best alternative for technically and morally (hate using that word) driven people is to hit it head on from all directions. This means working to make sure it does what it is supposed to do, but also that everyone knows what it really cannot do; working the legal end to ensure there are laws and consequences for employing the technology incorrectly; and working the information end, keeping the public fully aware of who is using it, how they use it, and how to legally protect yourself from it.
You may well be right that this is Google’s current thinking, but it doesn’t look like a winning strategy to me. Once more, they are on the defensive trying to offer flailing justifications for an opportunistic initial move. Even if you don’t care about the ethics, or are on their side, I think you’d have to question their execution.
Previously, ethics was good PR; now, it’s a risk in antitrust cases. This will continue to ramp up.
It will also ramp up even more as growth dries up. The internet can only get so big, especially on a quarterly basis, and these companies are already very big. They're public, and the pressure to grow is huge.
It's going to be a rough ride throughout the next decade.
No. More like: researchers can do as they please, but if they're going to attack Google or be negative about something related to Google, then there's going to be a review and words will have to be chosen carefully. There's a big difference.
It's important to draw a distinction between positive tonality and censoring negative results.
For example, the recent COVID-19 vaccines: they're commonly reported to be 95% effective and have some risk of side-effects. This can be reported factually, or spun in a positive or negative light. For example, a sensationalist might try to stir up interest by spinning a narrative that focuses on the vaccines being unreliable and dangerous, as proven by science that Big Pharma doesn't want you to know about!
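For what it's worth, that headline "95% effective" is itself a worked example of how a plain ratio gets framed; using the case counts Pfizer published for its Phase 3 interim analysis:

    # Vaccine efficacy as reported: 1 - (cases in vaccine arm / cases in
    # placebo arm), with arms of roughly equal size. Counts are from Pfizer's
    # published Phase 3 analysis (8 vs 162 confirmed cases).
    cases_vaccine, cases_placebo = 8, 162
    efficacy = 1 - cases_vaccine / cases_placebo
    print(f"efficacy = {efficacy:.0%}")  # ~95%

The same 0.95 can be narrated as "near-total protection" or as "8 vaccinated people still got sick", which is exactly the tone question at issue.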
And as we've seen, sensationalists can get cult followings. However bad their ploy may be for society overall, the sensationalist themself may stand to profit.
So as reprehensible as censoring legitimate scientific research would be, a directive to "strike a positive tone" sounds like it could be a very different, far more sensible thing.
Hah, reminds me of this literature teacher I had in school. She asked us to write a report on the positive aspects of Hamlet. I remember sitting at home and thinking "Well... It could have been worse - more people could have died, that's positive thinking, right?"
The “sensitive topics” extra review seems completely fair and obviously necessary. Anyone acting like that amounts to censorship in a private company is totally off their rocker.
This article seems really weak, and of course there’s the obligatory attempt to tie it in with Timnit Gebru’s resignation and her very inappropriate behavior, even though for all of Google’s flaws, Google clearly and obviously did the right thing both to disapprove Gebru’s weak paper and accept her resignation based on her ultimatum.