top | item 43491968

nukem222 | 11 months ago

Eh, finger pointing does nobody any good, emphatically including this comment. Finger pointing towards someone who actually found a vulnerability is just bleak. I would not willingly associate with anyone who engaged in such behavior.

Maintaining software is hard, but this does not imply a right to be babied. People should simply lower their expectations of security to match reality. Vulnerabilities happen and only extremely rarely do they indicate personal flaws that should be held against the person who introduced it. But it's your job to fix them. Stop complaining.

gruez|11 months ago

>Finger pointing towards someone who actually found a vulnerability is just bleak. I would not willingly associate with anyone who engaged in such behavior.

Nobody is pointing fingers at Rachel for the vulnerability. They're calling her out for how she communicated it, which I feel is totally justified. For instance, if someone found a critical RCE but the report was a barely coherent stream of consciousness, it's totally fine to call the latter out. That's not "finger pointing".

>But it's your job to fix them. Stop complaining.

It's the developer's job to respond to bug reports that take the form of vaguely written blog posts?

cenamus|11 months ago

Yeah, shame on the people irresponsibly publishing the vulnerability, but the people putting them in? Who cares.

nukem222|11 months ago

[deleted]

amiga386|11 months ago

Fingerpointing is bad, but we have to have an honest conversation.

One person posted the vague post. They clearly did not expect the reaction it got, though they could have anticipated some of it, since they know their blog is widely read. Their reaction was commendable: they quickly posted a followup appealing for calm and sharing some details, to quell the problems caused by the intense vagueness.

What people from HN did, because of the vagueness, was assume this was a super-secret-squirrel mega-vulnerability and that Rachel was gagged by NDAs or the CIA or whatever... and they've gone off and harassed the developers of atop while trying to find the issue.

Imagine a person of note saying "the people at 29 Acacia Road are suspicious", then a mob breaks down the door and starts rifling through all the stuff there, muttering to themselves "hmm, this lamp looks suspicious... this fork looks suspicious"... absolute clowns, all of them.

For example, this asshole who went straight in there with bad-faith assumptions on the first thing they saw: https://github.com/Atoptool/atop/issues/330#issuecomment-275...

No, you dummies, it's not going to be in the latest commit, or easily greppable.

This is exactly why CVEs, coordinated disclosure, and general security reporting practices exist: so that every single issue doesn't result in mindless panic and speculation.

There's now even a CVE based purely on the vaguepost, assigned to a reporter who clearly knows fuck all about what the problem is: https://www.cve.org/CVERecord?id=CVE-2025-31160 - versions "0" through "2.11.0" vulnerable, eh? That would be all versions, and the reporter chose that range because they don't know which versions are vulnerable, and they don't know what it's vulnerable to either. But somehow "don't know", the absence of information, has become a concrete "versions 0 through 2.11.0 inclusive"... just spreading the panic.

I don't know why Rachel is vagueposting, but I can only hope she has reported this correctly, which is to:

1. Contact the security team of the distro you're using. E.g. if you're using atop on Debian, email security@debian.org with the details.

2. Allow them to help coordinate a response with the packager, the upstream maintainer(s), and other distros, as appropriate. They have done this hundreds of times before. If it's critically important, it can be fixed and published within days, and your worries about people being vulnerable because you know something they don't can be relieved all the sooner.
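Step 1 might look something like the sketch below. The address security@debian.org is real, but the file name and report fields here are my own illustration, not a format Debian prescribes.

```shell
# Draft a report for the distro security team (illustrative fields only).
cat > atop-report.txt <<'EOF'
Package: atop
Version: 2.11.0 (exact affected range unknown)
Summary: one-line description of the issue
Details: reproduction steps, impact, and any suggested fix
EOF

# Sensitive reports can be encrypted to the team's published PGP key
# before mailing, e.g. (commented out; requires importing their key):
#   gpg --encrypt --armor --recipient security@debian.org atop-report.txt
```

The point is simply that the report goes privately to people equipped to coordinate a fix, rather than into a public blog post.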

freeopinion|11 months ago

I commend you for writing what you think should be done and not just complaining about what was done. It is more helpful to express the correct procedure than to only label things as the wrong procedure.

whatnow37373|11 months ago

I never quite understood why computing is so different from literally every other branch of reality. Systems need to be secure, I get it. But if we have a bunch of folks dedicating their lives to breaking your shit, I don't get how that is in any way acceptable, or why the weight of responsibility lies solely with the people responsible for security.

We apparently have a society/world that normalizes breaking everyone's shit. That's not normal, IMO.

If I break into a factory or laboratory of some kind and just walk out again, I have not found a "vulnerability", and I certainly won't be remunerated or awarded status or prestige in any way, shape, or form. I will be prosecuted. Anyone can break into stuff. It's not that stuff is unbreakable; it's that you just don't do that, because the consequences are enormous (besides the obvious issues with morality). Again, breaking stuff is the easy part.

I am certainly completely ignorant and should be drawn and quartered for it, but for me it is hard to put my finger where I'm so wrong.

I can see how the immaterial nature of software systems changes the nature of the defense, but I don't see how it immediately follows that breaking stuff you're not allowed to break is suddenly the norm and nothing can be done about it. We just have to shrug and accept our fate?

hyperpape|11 months ago

Leaving aside the ethics of vulnerability research in server-side software, you're neglecting the fact that atop runs on your own machine.

So it's not like breaking into a factory. It's like noticing that your dishwasher makes the deadbolts in your house stop working (yes, a weird analogy; there are ways software isn't like physical appliances).

Surely you have the right to explore the behavior of your own house's appliances and locks, and the manufacturer does not have the right to complain.

As for server side software, I think the argument is a simple consequentialist one. The system where vulnerability researchers find vulnerabilities and report them quietly (perhaps for a bounty, perhaps not) works better than the one where we leave it up to organized crime to find and exploit those issues. It generates more secure systems, and less harm to businesses and users.

dcminter|11 months ago

I find your view bizarre.

If I buy a physical product, take it home, and then publish the various issues I find with it, then... nobody has a problem with that.

I'm as sad as the next guy that the safe and trusting internet of academia is long gone, but the generally accepted view nowadays is that it's absolutely full to the gills with opportunistic criminals. Letting people know that their software is insecure so they don't get absolutely screwed by that ravening horde is a public service and should be appreciated as such.

Pen testing third party systems is a grey area. Pen testing publicly available software in your own environment and warning of the issues is not, particularly when the disclosure is done with care.

dagss|11 months ago

Well, in the real world too, if you look at history, people DID exploit the neighbouring tribe with impunity if it could not defend itself ("what idiots don't keep a guard at night"), which is why people built fortresses with 3-metre stone walls.

When living under those conditions, people probably did put the responsibility for safety on the victim.

We have been able to remove this waste thanks to the introduction of the nation state, laws, the "monopoly on violence", police...

It is THOSE things that allow the factory in your analogy to not spend resources on a 3-metre stone wall and armed guards 24/7.

Now, on the internet, the police, at least relative to the physical world, almost completely lack the ability to investigate or enforce anything. They may use what tools they can, but those don't get them far in the digital world compared to the physical one.

If we want the internet to be like the real world in this respect, we would have to develop ways to let the police see a lot more and enforce a lot more, like they can in the physical world.

cturner|11 months ago

"If I break into a factory or laboratory of some kind and just walk out" This is a weak analogy. In the situation you describe, right-and-wrong is easily understood by the layman, there is a common legal framework, there is muscle to enforce the legal framework.

In the computing space, if someone breaks the rules, only a bunch of us understand what rule was broken, and even then we are likely to argue over the details. The people doing the breaks are often anonymous. There is no shared legal framework, enforcement, or courts. The consequences of a break are usually weak. Consider the lack of jail time for anyone involved with Superfish, even though many of those people were located in the developed world.

The computing world often resembles the lawlessness of earlier eras - where only locally-run fortifications separated civilian farmers from barbarian horsemen. A breach in this wall leads to catastrophe. It needs to be unbreakable. People who maintain fortifications shoulder a heavy responsibility.

immibis|11 months ago

We can lock down the Internet so hard that every IP packet is associated with a physical address, then go and arrest people who allow bad packets to be sent from their address. This is what many governments are persistently trying to do. Is it a good idea?

frontfor|11 months ago

I second this. The pompous, holier-than-thou, I-know-better attitude of some members of the computer security community has always rubbed me the wrong way. This complaining is a manifestation of the typical putting down and dismissing of anyone who isn't part of the tribe.