There is a major error made by the research group. It starts and ends here: "we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for the hypocrite patches."
I am a Red Teamer and work with companies to understand how their detective/preventative/recovery controls and processes are working. Here's how you resolve this:
You work with maintainers to get their coordination on the research. You work out a mechanism to prevent submitted patches from being merged (e.g. maintainers are notified before bad patches accepted by code review processes are merged).
You do not tell them when the patches are coming. You do not tell them which identities are going to be used for the patches (e.g. from which email addresses). You do not tell them which area of code will be targeted. You set rules and time bounds for the study.
You wait some amount of time before submitting such patches (weeks to months). Realistically this is all that's needed. If hypersensitive, set this up earlier and let it bake longer.
At this point, you submit patches from a variety of addresses (probably not associated with your university - it is easy to create many such identities). You also can coordinate with other researchers, universities, and companies to submit patches under identities as needed. You also study submitting from yandex, gmail, .cn and other email addresses (because isn't that interesting to know?).
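To make the fail-safe concrete, here is a minimal sketch (Python, with the study identities and revision range entirely made up for illustration) of the kind of check a designated maintainer contact could run before cutting a release: it flags any study patch that slipped through review so it can be reverted in time. This is only an illustration of the coordination mechanism described above, not a description of any existing kernel tooling.

    #!/usr/bin/env python3
    """Hypothetical fail-safe for a coordinated patch-review study.

    Assumes the researchers and one trusted maintainer contact share a private
    registry of the study's submission identities. All names below are made up.
    """
    import subprocess
    import sys

    # Assumption: the aliases used for the study, known only to the researchers
    # and the designated maintainer contact.
    STUDY_IDENTITIES = {
        "example-alias-1@example.org",
        "example-alias-2@example.net",
    }

    def merged_study_commits(rev_range):
        """Return (sha, author email) for merged commits authored by study identities."""
        out = subprocess.run(
            ["git", "log", "--format=%H %ae", rev_range],
            capture_output=True, text=True, check=True,
        ).stdout
        hits = []
        for line in out.splitlines():
            sha, _, email = line.partition(" ")
            if email.strip().lower() in STUDY_IDENTITIES:
                hits.append((sha, email.strip()))
        return hits

    if __name__ == "__main__":
        rev_range = sys.argv[1] if len(sys.argv) > 1 else "v5.11..HEAD"
        hits = merged_study_commits(rev_range)
        for sha, email in hits:
            print(f"STUDY PATCH MERGED: {sha[:12]} from {email} -- revert before release")
        sys.exit(1 if hits else 0)

The point is that the identity registry stays private to the researchers and one or two trusted maintainers, so the reviewers being studied are still tested blind.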
The premise that there's some ultimatum between working with the community and performing the research is on its face incorrect. This is either ignorance or laziness on the part of the researchers. Clearly, they hadn't taken the time to work with the community to work out an approach that could be mutually acceptable.
I also empathize with the plight of the researchers — Linux is a bit different from normal Red Team engagements, in that a normal organization has a bunch of administrative / management layers who typically do not participate in the operations of the system being tested. A VP of engineering at a medium to large company is unlikely to be committing code, much less maintaining the build pipeline etc.
This is not the case with Linux. The people “at the top” are also reviewers, and so it’s pretty likely that notifying them will result in a change of behavior.
I wonder if there is some way to build some sort of Red Team consent “blind trust” organization, such that willing open source projects could agree to responsible attacks, and the attackers could register their work (including disclosure / mitigation plans) with the blind trust ahead of time.
This is how it should've been done. Like a war game (at least, as depicted in movies like Periscope Down), the organizers of the study and the project maintainers need to agree on (and be aware of!) the bounds of the study, without necessarily knowing all of the details.
> You work out a mechanism to prevent submitted patches from being merged (e.g. maintainers are notified before bad patches accepted by code review processes are merged).
It is my understanding that this happened and that no bad patches were actually merged.
In your experience, do you create a "fail safe"?
So for this study, some way for the researchers to prevent any of their patches from ever being released.
OK, how do you test SocEng vectors? Do you obtain consent and coordinate with every employee that might be targeted to receive your e-mail?
>You also study submitting from yandex, gmail, .cn and other email addresses
No, the point was submitting from a known and respectable entity, which might affect the level of scrutiny. They weren't testing a whole patching process, but a specific human component of it.
"We sincerely apologize for any harm..."
While there are other requirements, a sincere apology cannot in any way entertain doubt about the fact that there WAS harm.
Truly acknowledging the harm done is foundational to a real apology, and most of us (myself included) end up sneaking in weasel words or phrases like this.
Psychologically, it's nice for the apologizer, since it allows one to think "I'm being good by apologizing, but maybe I didn't do anything bad after all?".
But from the apologizee standpoint, these phrases are often devastating and can make it clear that the apologizer has no real recognition or care of what happened.
Personally I've worked pretty hard to try to remove these sorts of phrases from my apologies. It's not easy. It makes you feel much more vulnerable and you really have to let whatever you did sit with you in a very uncomfortable way. But it's worth it.
I really appreciate the apology and as they stated, its unconditional nature. Good.
However, I find something very problematic. This quote shows it:
"We have learned some important lessons about research with
the open source community from this incident."
This is something I don't like. This is not something about "research with the open source community". If anything, they should have learned something about treating human beings as persons and not as involuntary guinea pigs. They should have learned something about not breaking anyone's (not only any open source community) good faith and trust, and respecting that.
They behaved like jerks and they still cannot see that.
I think there is something to be applauded about people who genuinely apologize even though they can't see things from the other person's point of view.
They didn't decide to conduct this research on a whim. They had full approval of their university ethics board as well. They published a paper and had it peer reviewed without (as far as I know) anyone immediately calling for their heads.
They can't immediately do a heel-face turn and believe their actions are wrong, always were wrong, and the wrongness should have been obvious to them.
Yet, despite this, they recognize the hurt they've caused, are genuinely apologetic, and will never do it again. Asking them to dismantle their world view in a week is a bit much; even criminals are given a few years of quiet contemplation before being asked to tell a parole board they have changed their hearts and minds.
"If anything, they should have learned something about not treating human beings as persons and not as involuntary guinea pigs."
Perhaps the entire "tech" industry needs to learn that lesson. Non-technical end users should be entitled to that same level of trust as nerds. I can download free open source code, extract a tarball and build the software without worrying too much about scanning through all the files first for phone home/telemetry/OriginTrials nonsense.^1 However non-technical end users who use programs compiled for them by "tech" companies with ads and surveillance as their "business model" are not entitled to the same trust. I cannot think of any justification for the difference.
> we are very sorry that the method used in the “hypocrite commits” paper was inappropriate
This reads more as "sorry you were offended" than "It was inappropriate and we are sorry".
> As many observers have pointed out to us, we made a mistake by not finding a way to consult with the community and obtain permission before running this study; we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for the hypocrite patches.
Bringing up why you did something in an apology is very shaky. Like in this case, it can sound more like justifying / excusing.
>They behaved like jerks and they still cannot see that.
They just come off as disconnected academics. Same thing with the algorithm experiments at Facebook. They are so wrapped up in what they are doing, they no longer see the "users" as human beings. There are times when this is almost required to survive, like being an ER doctor: if you lose that many people, it could be debilitating unless you were able to disconnect. That's far and away different from these robots.
Imagine if instead they did this research with the closed source community. You know, get hired under false pretenses, sneak some vulns into some commercial product, write a paper about how easy it was. Pretty sure if they tried that they would be in jail right now.
This falls under behavioral research. I believe psychologists have solved the problem of how to do research that is both blind and in line with ethical standards. All they had to do was ask one.
After all, before the ban there were complaints made regarding this paper by other researchers and community members.
Are we not allowed to discuss the possibility that someone hacked a university email address and then submitted a patch?
The thing I’m still missing is a detailed explanation of what the heck was going on with the recent bogus commit that triggered the banning. Supposedly the “hypocrite commit” research was all done in 2020 and is now in the past. So what was going on with this latest bad commit? The student who submitted it claimed it was generated by a static analysis tool, which kernel maintainers have plausibly called bullshit on. Was that student lying? If so, what were they actually doing? If not, it seems absolutely necessary at this point to prove that they were telling the truth by publishing how that commit was produced, including the tool’s source code and how it was invoked. Even if the campaign to intentionally introduce security flaws was over in 2020, the more recent issue is that the same research group submitted an apparently intentionally incorrect patch and then seems to have lied about its provenance and why it was submitted. Until that’s cleared up any kind of apology feels premature and impossible to evaluate. How can anyone decide if an apology is sincere without understanding what was done?
I'd like to give them the benefit of the doubt, but this is written like an apology they know they must write. It does not come across as apologetic. It comes across as rationalization veiled as an apology, and it doesn't sit well with me. I hope I'm just being overly sensitive here.
Indeed. This letter is an attempt to justify and rationalise their actions. Essentially, it amounts to saying "we're sorry you were offended and felt hurt by our legitimate work but we had no choice but to lie to you and unethically experiment on you without your consent or we wouldn't have been able to do it". Their statement is not an actual apology, even if it is phrased in the language of apology, and it is an excellent example of what not to write if you are seeking forgiveness.
The core issue here is that the system under which they were working, and the researchers themselves, did not consider this work to amount to unethical human experimentation - even though the work directly involved infiltrating and exploiting humans and human social systems. There is no mention of this nor of any substantial desire to understand or address this systemic failing and, until there is an unconditional acceptance of responsibility for the harm caused and a real actionable change for the better with accountability, I don't see how we can consider this matter to have been apologised for.
I agree. There are weasel-words throughout, starting with apologizing for "any harm". When you've been told what the harm was, don't apologize for "any harm" you might have caused. You should start an apology by demonstrating that you have listened and understood exactly what harm you caused.
Listing the harm you caused is also an important part of the public record, so that anyone else considering a similar scheme in the future can see what consequences were agreed to have happened the last time.
Yeah, it's not a great apology. As far as the justifications go, though, I think that's the level of depth expected from academics; you shouldn't overthink it.
As far as giving the benefit of the doubt, it was likely not done out of malice in the first place. It was just a combination of poor reasoning, and the letter doesn't exactly clear up whether they even considered alternatives to control their variables. Unfortunately, it seems like they were already given the benefit of the doubt after their first offense, but did not take any lessons away from that, so it would be understandable if the maintainers kept their ban.
To me, it's always a distinct sign that an apology isn't sincere when it hedges apologizing for "any harm" that they might have caused. In my mind, a sincere apology clearly acknowledges harm that has been caused and apologizes for that.
The kernel community isn't a spurned lover or anything; the apology just has to admit fault, promise not to do that again and be truthful about it.
There is no need for the researchers to be in any specific state of mind or even apologetic as long as they don't try to submit bad faith patches again.
Really the only reason there is even a need for a public apology is to signal how much pressure they are under so other people think twice before doing the same silly thing. Otherwise they could just apologise to the maintainers directly.
I agree, but let's give them a little extra benefit of the doubt. I thought as I was reading it that it seemed stilted and forced, then I wondered if the author(s) don't speak / write English as a primary language.
I'll be interested to see how they react to feedback / responses.
I was just about to post: "This is a great apology."
Context is everything, of course. I think you're right that it's tainted by the fact that they absolutely did not have a choice, and coming from people who've deceived the same tribes they're now trying to apologize to.
Social media insanity has greatly reduced our capacity to accept public apologies at face value. There is always a seething mob that is ready to question the sincerity of the apology once there are no more demands left to be made.
There are some relatively minor issues with this apology that appear to already have ample discussion here, and I'll not repeat it. I want something more: I want to hear from the sponsoring faculty, research ethics board, and editors of the journal that published the article. There appear to be some systemic issues in addition to the investigators' ill-considered project.
How was it that this research, which is clearly unethical, ended up being published? I've sat on an IRB: this study would not even have been a close call. Did sponsoring faculty even send it for IRB approval? Did they disclose that they were misleading their subjects? If so, did the IRB approve it? Did it make any recommendations? When submitted for publication, did they state they had IRB approval? Did they disclose that they were misleading experimental subjects? Did any reviewers express ethical concerns? Were the ethics of this study discussed by editors?
Because there are either some serious systemic flaws in the review and publication process for this paper, or the investigators engaged in serious misconduct. If the latter, an apology (while absolutely required) is far from sufficient to address the issue. If the former, then there are several other apologies due, along with confirmation that the process will be reviewed and corrected.
Another query about IRB and human subjects: In the case of software which is not only written/maintained by a community of humans, but used by a (vast) community of humans, do the latter also become "human subjects" as well in such an experiment?
What are your thoughts about this article[0]? My reading of the article is that the author fails in a similar way (to some extent) as the researchers, and there's an IRB discussion in regards to the ethics of such research.
[0] https://dave-dittrich.medium.com/security-research-ethics-re...
Not pictured: any attempt to argue that experimenting on the team that reviews kernel patches is a legitimate thing to do. They discuss it as if they were researching something naturally occurring, like the animals living in a tidepool.
I can imagine research on a "process" made up of humans as having some value. For example, testing EMTs on whether they correctly diagnose injuries, or testing TAs on whether they catch exam cheating. But I would expect the researchers to ask someone (in those cases, maybe the manager of the EMTs or the TAs, in this case I suppose Linus) whether this research is wanted, and for guidance on how to go about it. For example they could submit the questionable patches during slow times. Does anyone know if that happened? I assume if it had, they would've mentioned it.
With that kind of permission, I wouldn't have a problem with this research and I don't think most people here would either. Without, this is the academic equivalent of those "just a prank, bro" youtube videos.
> As many observers have pointed out to us, we made a mistake by not finding a way to consult with the community and obtain permission before running this study; we did that because we knew we could not ask the maintainers of Linux for permission, or they would be on the lookout for the hypocrite patches.
This shows one thing to me clearly: they still don't understand how such studies are conducted. Unauthorized penetration testing is, to a very large degree, illegal.
For example, as far as I understand, and as many have pointed out: you get permission from members for a study without specifying the time or any details of the study, then conduct the study six months later.
-they apologize for the three patches in 2020
-they claim asking permission would have defeated their research
-they claim the other 200 were legitimate patch attempts (a sample of which ranged, from my personal reading, from 'innocuous but useless' to 'slightly harmful').
Hard to believe, but plausible.
If we don't somehow make it socially acceptable to let any security researchers conduct the required social engineering to test supply chain attack susceptibility of OSS maintainers without tipping them off in advance, dangerous state actors -will- continue to do it and -not- tell anyone when they are successful.
Punishing these researchers this harshly about bad manners is creating a chilling effect that will scare researchers away from evaluating potentially one of the single biggest vulnerabilities in our industry.
To be honest if anyone went around anonymously trying to merge security exploits all across well used open source supply chains and always told everyone when they were successful immediately after and helped everyone become more vigilant in code review, I would call them a public servant even if they were almost universally hated for it.
I don't think most people realize just how easy supply chain attacks are and how widely they are being exploited by very dangerous organizations.
If something does not change fast to dramatically increase the level of scrutiny we give to code contributions, it is going to get much worse.
I have gone very far in pentests. Planting malicious USB cables, modifying keyboard firmware, straight up taking unlocked laptops and walking off, sniping recently expired domains to do XSS attacks, obtaining password reset links for the email accounts of maintainers of highly depended on third party dependencies.
Even with consent from high levels at orgs it still upsets unknowing people that are tricked at lower levels in the org. I don't apologize for this because it is my job.
I have seen real and successful social engineering attacks by state actors up close, and you would way prefer a security researcher with bad manners to break you of your survivorship bias over being hit by the real thing.
The reality is big companies can afford to pay people like me. Open source projects by random solo maintainers that the security of almost everyone on the internet relies on... can't.
We should be very thankful for people that risk public rebuke to do research like this in open source at minimal cost to the receiving organization.
> If we don't somehow make it socially acceptable to let any security researchers conduct the required social engineering to test supply chain attack susceptibility of OSS maintainers without tipping them off in advance, dangerous state actors -will- continue to do it and -not- tell anyone when they are successful.
What's the result of that? Lots of known bad patches being sent in, wasting everyone's time, because now every researcher finds "do they spot my intentional bugs" as an easy way to get something published?
There's no reason why you can't run these studies with their consent. Ask the maintainers. "But it will tip them off", yeah, in a "they would be aware that somebody might send faulty code at some point in the future" kind of way, which they certainly already are. To get consent, you don't need to tell them exactly when you will be sending it, who will be sending it, or what exactly it would be.
If it did put them into permanent hyper-vigilance mode, then ... goal achieved?
FWIW I completely agree with you but I see no way out.
The Linux kernel project needs and deserves the best security methods but (a) likely can't afford the people and (b) bringing more money aboard might draw unmotivated people who are only after the paycheck, which could objectively deteriorate the project's code quality.
Dual cryptographic signoff still doesn't address any social issues (or I'm grossly misunderstanding it if it does). For example, a state actor can threaten or bribe two, five or thirty people into merging a harmful patch.
So what can actually be done?
What are your suggestions on how OSS maintainers can fend off state-actor attacks? Always needing two people to sign off on a patch?
I strongly agree that supply chain attacks are a huge deal and that worse attacks will come to light eventually, but what should be done at the level of OSS maintainers?
Like you said, I feel like open source projects by random solo maintainers, which the security of almost everyone on the internet relies on... can't do a lot of things.
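For what it's worth, the "two people must sign off" idea is at least mechanically checkable, even if it does nothing about the social attack described above where both signers are coerced or bribed. A rough sketch follows (again Python; the trailer names and the two-reviewer threshold are assumptions for illustration, not how the kernel actually enforces review) of a pre-merge check that counts distinct Signed-off-by / Reviewed-by identities on each commit:

    #!/usr/bin/env python3
    """Hypothetical pre-merge gate: require two distinct sign-off/review identities."""
    import subprocess
    import sys

    REQUIRED_DISTINCT_SIGNERS = 2  # assumption: two independent people per patch

    def signers_for(commit):
        """Collect Signed-off-by / Reviewed-by identities from a commit message."""
        body = subprocess.run(
            ["git", "log", "-1", "--format=%B", commit],
            capture_output=True, text=True, check=True,
        ).stdout
        people = set()
        for line in body.splitlines():
            line = line.strip()
            if line.startswith(("Signed-off-by:", "Reviewed-by:")):
                people.add(line.split(":", 1)[1].strip().lower())
        return people

    def main(rev_range):
        commits = subprocess.run(
            ["git", "rev-list", rev_range],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        failures = [c for c in commits if len(signers_for(c)) < REQUIRED_DISTINCT_SIGNERS]
        for c in failures:
            print(f"commit {c[:12]} has fewer than {REQUIRED_DISTINCT_SIGNERS} distinct sign-offs")
        return 1 if failures else 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "origin/master..HEAD"))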
Wasn’t it the case that in their last exchange with the maintainer (that email that caused all the stir) they were accused of submitting patches that did nothing at all and wasting time in doing so? In this letter they claim these patches were real fixes.
Having got their university banned, and having overturned the effort it took to write the 190 previous patches from others which have now been reverted, it seems likely to me that they are under internal pressure. In all, it makes me doubt the sincerity of any of this, especially given the tone of the last email with the maintainer. A lot of sudden learning seems to have taken place between then and now.
Since when is it OK to acquire data through unethical means and then publish it in a journal?
> We just want you to know that we would never intentionally hurt the Linux kernel community and never introduce security vulnerabilities. Our work was conducted with the best of intentions and is all about finding and fixing security vulnerabilities.
Uhh.. but that was the exact intent of the paper. And they did it successfully. So.. mission failed successfully?
What a bizarre attempt to save-face. This is academic misconduct and they're trying to save their asses.
Obviously the intent was never to introduce security vulnerabilities, no matter how naive and badly thought-out their methodology was. The intent was to show it could be done. None of the proposed vulnerabilities ever got in the kernel; whenever they were at risk of being accepted, the maintainer was warned and the process was aborted.
https://sfconservancy.org/blog/2021/apr/20/how-to-apologize/
People here don’t seem to be convinced. There’s a good amount of defense in this letter, so I get it, and it could have been framed better, but ultimately it seems like they learned a valuable lesson and have value to add, so why not let people learn from mistakes and move on? They’ve already been publicly shamed…
Reading this thread I am saddened to see hn apparently conform to the law of maximum offence. We see the least charitable explanation for everything said in the apology because outrage gets upvotes.
A BS non-apology is not enough. These assholes should at the very least be reprimanded by the University of Minnesota for unethical research practises (psychological experiments on humans without their consent) and bringing the whole institution into disrepute.
This doesn't admit to harm done, and sounds like an apology that doesn't admit guilt.