This is a great statement: they confirm they're aware of the issue, they acknowledge the concerns, and they set out their intention to gather the full facts while suspending the research in the meantime. They also acknowledge the need to deal with this systematically.
I hope their follow-up is as thorough, but I want to applaud this; it's a good approach.
This statement rings true because it is the wordsmith'd version of exactly what the department head probably said when he first heard, which probably went something like "what the f*k did you do, who the f*k thought this was a good idea, and who the f*k told you you could do this?"
It is a good statement, and I believe they'll follow through, but it's missing something important that is often missing from otherwise professional communication.
The last line is "We will report our findings back to the community as soon as practical." It should be followed by "and we will provide an update in no more than 30 days".
Without an explicit time frame, holding them publicly accountable becomes trickier. At any point they can just say "we're still investigating" until enough time has passed that people aren't paying attention any more.
Note that the companies that are best at ongoing incident updates on their status sites all do this -- "We will provide an update by 10:30pm PST".
I agree, and given that they have only just started to look into it, I think it shows an appropriate amount of concern and urgency. They'll at least want to talk to the researchers and get their point of view before committing any further. This is about the best you could expect at this point; they'll want to proceed methodically.
There was one thing that I found lacking from their statement: they never said that what they had done was wrong. The university already knows what the researchers did and is aware of the paper those same researchers wrote about the subject. [1]
> I do work in Social Computing, and this situation is directly analogous to a number of incidents on Wikipedia quite awhile ago that led to that community and researchers reaching an understanding on research methods that are and are not acceptable.
and https://twitter.com/lorenterveen/status/1384966202301337603
> Yes! This was an IRB 'failure'. Again, in my area of Social Computing, it's been clear for awhile that IRBs often do not understand the issues and the potential risks/harms of doing research in online communities / social media.
An interesting question, IMHO, is who people actually tried to contact in the first round of this getting publicity (and how many did so at all), and whether leadership should thus have been aware earlier that there was a controversy worth looking into.
Basically, now they can do a meta-study on the flaws in the IRB review process.
As shown, if you intentionally try to bypass the IRB, apparently you can. It's even reproducible.
Oh, so interesting! At my French uni, psych teachers were all doing Wikipedia research to prove it was unreliable by timing the correction delay of purposely introduced mistakes.
They were so proud of their discovery. They didn't think to time how long that would take if they went to a printer and changed the text there before a book is printed.
Terveen was always honest about what works and what doesn't work. It's good to see them acknowledge that this is something they need to dive into and fix.
>it's been clear for awhile that IRBs often do not understand the issues and the potential risks/harms of doing research in online communities / social media.
Too bad he's not in a position of power to implement that additional review for CS department research.
The first time, they initially skipped IRB review for sending malicious patches to the mailing list, which people do install (so IRB exemption should not apply). A top security conference allowed a paper with a broken IRB process, and the UMN IRB, when a later IRB-exemption request was filed, explicitly allowed this. Bad, bad, and bad.
The latest Linux incident seems like a repeat by the same CS dept, advisor, student, and, presumably, UMN IRB. No naivete excuse anymore; this is now business as usual.
The bigger fail to me is the UMN dept head + IRB, and the security conference review process around IRB norms. Especially damning that it's a repeat of the same IRB mess. IRB exemption matters a lot for practicing scientists, and leadership tolerating this stuff is what will get it taken away for everyone else.
Fool me once..
> looks like the 2nd time they're doing the same thing
It's not clear that they're doing the same thing--we don't know that these recent patches are deliberately bogus. See this comment with clarifications from the other post: https://news.ycombinator.com/item?id=26890583
Yeah I agree. To me, the biggest problem is with Oakland, who accepted the paper. A single faculty member doing something really stupid is one thing. But the field's top conference accepting the work? Christ.
I don't know why, but somehow I am not too bothered by the research itself. Sure, in retrospect, it does not sound like it was the right thing to do (or the right way to do it). But, you know, stuff happens.
Instead, what bothered me immensely is the way the PhD student handled that interaction: immediately claiming "bias" and "slander", playing "victim", etc. I don't know if he learned to communicate that way from his professors, other students at the CS&E department, or the UMN environment in general, but it's very bothersome to me. Communicating that way precludes constructive (and perhaps heated) exchange of ideas. I think people like that should be nowhere near important CS/EE projects.
It's entirely possible that Aditya is actually just working on a static analysis tool, it is buggy, and he wasn't aware of the other research his advisor does. If that is the case, I can kind of understand his response. I would be pretty upset if I knew I was just trying to submit some honest (if buggy) patches, but was accused of being a scoundrel because of something that didn't even involve me.
Of course, it's also possible that he is just gaslighting Greg and actually was doing the same kind of "research" as other people did before.
I think that UMN will get to the bottom of it - it will be pretty clear to them what kind of research he was doing, and whether he represented himself honestly.
I think this can be an interesting topic in itself, how those trigger words and victim playing can get you through code reviews faster. It's certainly true in my company...
Here is some background for those who are not familiar with the rules of academic research in the US. Usually, any experiment involving human subjects is subject to University Institutional Review Board (IRB) [1] approval. The IRB should evaluate both ethical and safety concerns. I recall having to jump through quite a few hoops just to do a simple touch-screen gesture recognition test on a dozen willing fellow students. Apparently, the IRB approval step was skipped, or the IRB failed to do its job in this case.
This wasn't a direct test on the maintainers, so it is easy to see how the IRB would miss that. Still not excusable to miss that there are humans involved.
I do some maintenance work for the Linux kernel dvb and infrared subsystems. I reviewed and accepted some patches from umn.edu addresses. They looked fine to me; however, they're all around error handling, which can get pretty tricky with long error paths. What else can I do than revert the lot?
gregkh sent a 190-patch series to revert all of the "easy" UMN reverts, pending review. People are now looking at the patches and saying things like, "that one's OK, don't revert".
There are another 68 commits which did not revert cleanly, in some cases because they were later fixed up, already reverted, or some other patch has touched those lines of code. This will require further manual work.
We are basically at this point assuming bad faith for all UMN patches and reviewing them all before allowing them to stay in. (Or, if they get reverted by default, someone else can manually apply them after they go through strict review.)
Fool me once, shame on you; fool me twice....
I think everybody is missing the point. If one grad student was able to do this, imagine what a team of dozens of well-paid, well-equipped, and highly experienced security experts could do.
In other news, we just learned that any half-decent security agency has already injected their own vulnerabilities and back-doors in OSS.
There are so many security flaws in critical software that you really don't need to inject vulnerabilities. You just need your engineers to find, catalog, and script exploits for them - ready to use whenever needed.
If you do inject vulnerabilities, you need to assume your adversaries will find, catalog, and script an exploit for them. And you risk reputational damage if you get caught. So I'm sure it has happened, but I bet not that often.
The other reason this is a hot-button issue is that it is related to the general problem of abusing universities as a protected platform for subversion for its own sake.
It's not innovation or research, or even progress; it's actively destabilizing and subverting a targeted community whose integrity a huge part (even a majority) of the economy depends on. This is a hawkish view, but in a challenging cultural moment, their activities are very difficult to be charitable about.
The research approach is distasteful and dangerous. No one should introduce bugs or malicious code intentionally, even for research purposes. The results are also a bit trivial: it is easy to imagine that this type of code injection would be possible. So let this be an example of the consequences of intentional malicious code injection, even in the name of research.
I wish I could be on the researchers' side, as I am both Chinese and an alum from UMN. But No - wrong is wrong.
I guess the question I have is "Did any *previous* research done by UMN successfully introduce bugs into the Linux Kernel git commit log?"
There are weasel words in this statement that make it unclear, and the researchers have already been really dishonest. But! If it's true that their research never made it out of email chains, then the reaction does seem a bit disproportionate to the damages here.
https://lore.kernel.org/lkml/[email protected]...
e: Not getting pulled into a maintainer tree isn't enough to be safe about what was posted to a kernel mailing list. People can (and testing scripts blindly do) grab and apply patches from the mailing list.
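That risk is mechanical, not theoretical: a patch posted to a list is an mbox file that `git am` applies directly, which is exactly what automated test rigs do. A sketch with throwaway repos and made-up names, not the actual kernel trees involved:

```shell
# Sketch: an emailed patch is just an mbox file that any script can
# apply with `git am`. Throwaway repos; hypothetical names throughout.
set -e
work=$(mktemp -d)
cd "$work"
git init -q upstream
cd upstream
git config user.name dev
git config user.email [email protected]
echo ok > f.c
git add f.c
git commit -qm "base"
echo change >> f.c
git commit -qam "fix: tweak error path"

# This file is, byte for byte, the email a contributor would send
# to the mailing list.
git format-patch -1 -o "$work/outbox" >/dev/null

# A test box clones the tree from before the patch and blindly
# applies whatever showed up on the list.
cd "$work"
git clone -q upstream tester
cd tester
git config user.name ci
git config user.email [email protected]
git reset -q --hard HEAD~1
git am -q "$work"/outbox/0001-*.patch
```

So even a patch that was never pulled by a maintainer has already run on someone's machine.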
Much focus is on the 3 patches from the paper last year, but others have been submitted before and since by the same group, and some that have been found to be malicious did make it into the Stable branch: https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...
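For anyone auditing this against a tree of their own, "did commit X reach a stable branch" is answerable with `git merge-base --is-ancestor`. A sketch with a throwaway repo and a made-up commit; on a real kernel tree you would use the suspect SHA from the lore thread and an actual stable branch name:

```shell
# Sketch: test whether a suspect commit is an ancestor of a stable
# branch. Throwaway repo; branch and commit names are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name a
git config user.email [email protected]
echo base > f.c
git add f.c
git commit -qm "base"
echo sus >> f.c
git commit -qam "suspect commit"
suspect=$(git rev-parse HEAD)

# Cut a stable branch that happens to contain the suspect commit.
git branch stable

# Exit status 0 means the commit made it into stable.
if git merge-base --is-ancestor "$suspect" stable; then
  echo "commit is in stable"
fi
```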
That was quick. I'm glad they will look into this and I hope we don't see this kind of research in the future.
This story reminded me of Facebook experimenting on its users to control their emotional state (but they said users were volunteers because "terms of service"...).
This is a statement announcing an investigation. Expecting a mea culpa right at this time is a bit premature considering they still need to figure out where they went wrong.
It's not just the Linux kernel. You can imagine this having some fallout for other open-source projects, where they all come out and say they won't accept contributions from the university. A huge PR nightmare.
You should notice that the professor involved is listed as being on the Program Committee for IEEE S&P 2022 already, meaning he must be well connected with various higher-ups at the conference. This is probably why they are reluctant to remove the paper. There were similar issues with the ISCA '19 peer-review fraud case. IEEE/ACM are having some major issues these days.
It seems concerning that the investigation is being conducted by the CS&E department itself, rather than an independent third party. There's a risk that the results of the investigation won't be seen as impartial, since the CS&E faculty obviously have an interest in protecting the department's reputation.
“They introduce kernel bugs on purpose” - https://news.ycombinator.com/item?id=26887670 - April 2021 (1562 comments and counting)
Their ethics committee approved the research, and yet I see no acknowledgement of their responsibility.
The way their statement stands, they can investigate themselves and determine they did nothing wrong; we'll have to see what they say down the road.
[1] On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits -- https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
[1] https://en.wikipedia.org/wiki/Institutional_review_board
No word from DHS Cybersecurity yet.
There's value in setting down the pitchforks.