If software development were a true profession, then I firmly believe that many developers would be struck off for extreme negligence or incompetence.
I’ve found and reported serious security vulnerabilities to many companies that I’ve worked with, and become very disillusioned with some of the responses. Companies that operate in fields which materially affect people's lives (such as healthcare, finance and telecoms) will deploy software that is so badly designed that there is often no need to break any technical aspect to get access to private and sensitive data.
Yet, when I report a breach, the same people who deployed software with broken (or sometimes nonexistent) authorisation models, access control, etc., are suddenly competent enough to investigate their own failure. Invariably, they have perfect logging and reporting that could not possibly have been evaded, and which proves that no breach occurred and no data was exfiltrated before the vulnerability was reported.
If another professional, say an engineer, lawyer, or doctor, had demonstrated the incompetence or negligence in their field that I’ve seen some software developers display (sometimes wilfully - “It’s a feature”), they would never be allowed to work again. Software is now so important that I believe that some of the developers and technical leaders that I have dealt with in resolving security vulnerabilities should never again be allowed to work with software that interacts with personal or sensitive data (or, more generally, with software that could affect human life, safety, or privacy).
The stack is too large, complicated, and abstracted to put the blame on a single engineer.
Vulnerability in struts? Go after the open source engineers.
CPU vulnerability? Go after the engineers at AMD and Intel.
Bad firmware? Go after the network engineer who set up the box.
At a time when even the people at the top of companies are basically untouchable (see Lehman Brothers), you want to start going after the engineers?
This isn't an engineering failure, but a failure of management. Non-technical management has no clue how expensive it is to properly maintain a system and design it for security when all they can see is the output of a widget. In almost every case, it is non-technical management who decide when work stops, not the engineer tasked with building it.
With that being said, the only way change will come is either through government intervention (but they barely understand the internet, so good luck) or through organized labor movements that then codify it into law. However, there is a large anti-union bloc within technology, so that has its own challenges.
Realistically, nothing will happen within our lifetime unless a crisis changes the norms or a particularly likable person makes it their life's mission.
I agree emphatically, and it's why I'm a member of the BCS (British Computer Society). It's absurd that we have people building essential public infrastructure with close to no repercussions when their failures screw people over.
I agree. I simply think that if people want to use the "Engineer" moniker, then they should be required to abide by the profession's code of ethics. I really want our profession to have a set of standards that people can trust. I have my P.Eng in software engineering (Canadian); people tell me that it is "useless", but I want to be ahead of the professional curve. I think we will see demand for traditional engineering rigor in software. I already know my clients take safety extremely seriously (industrial automation), and being able to say I belong to our provincial body of engineers does mean something (I think).
I think this rests on the architect / technical lead to force their minions to use TDD or something similar. You can't expect a noob out of college to be responsible, that's just asking for trouble.
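For what it's worth, the TDD loop mentioned above is just "write a failing test first, then the smallest code that passes it." A minimal sketch in Python (`slugify` is a made-up example function, not anything from the thread):

```python
# Step 1 (red): specify the behavior in a test before any implementation exists.
# Running this first, before slugify is written, fails, which is the point of TDD.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Dell  XPS ") == "dell-xps"

# Step 2 (green): write the smallest implementation that makes the test pass.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

test_slugify()  # passes; now refactor freely with the test as a safety net
```

The discipline isn't the test syntax; it's that the requirement is pinned down in executable form before the code exists, which is exactly the kind of accountability a lead can enforce on a team.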
We know how to build secure software. It's just very time consuming and expensive. For the most part nobody wants to pay for this so we have a constant stream of hacks instead.
The one piece of spam I got on a brand-new email account arrived about a day after I ordered a brand-new XPS. It was a fake tracking-code email about my Dell order with correct details like the laptop model, account name, and price. I contacted Dell and only managed to find out my order wasn't even in the post yet. They weren't interested in anything.
And I never got any more than that specific piece of spam.
It's insane that companies are allowed to say "yes there was a security hole, but no we don't have logs, therefore nothing was stolen, so stop asking."
Their refusal to give the number of exposed accounts makes it seem like it's pretty bad.
What is a “hashed password”?
Hashing is a cryptographic security mechanism, similar to encryption, that scrambles customers’ passwords into an unreadable format. Dell ‘hashes’ all Dell.com customer account passwords prior to storing them in our database using a hashing algorithm that has been tested and validated by an expert third-party firm. This security measure limits the risk of customers’ passwords being revealed if a hashed version of their password were to ever be taken.
Bleh. Maybe it's too much to hope for a company like that to give any specifics, but that's pretty empty by itself. I mean, great, they didn't use plain text(!), but "MD5 with no salt" would fit that blurb just fine too. I really hope Dell was properly using an adaptive hash, but usually when companies do a good job there, they want to tout it, because it does in some small way show they care despite the breach. Even if it should be the norm, saying "we used bcrypt with 65k+ rounds" or whatever is legitimately reasonable to put in there.
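To illustrate why Dell's blurb is so empty: both of the schemes below "scramble passwords into an unreadable format," but they offer wildly different protection. A sketch using only the Python standard library (the password and iteration count are illustrative, not anything Dell disclosed):

```python
import hashlib
import os

password = b"hunter2"  # illustrative password, not real data

# Unsalted MD5: fast, and every user with this password gets the same
# digest, so precomputed lookup tables crack it almost instantly.
weak = hashlib.md5(password).hexdigest()

# Salted, adaptive PBKDF2-HMAC-SHA256: a random per-user salt defeats
# precomputed tables, and the iteration count makes each guess
# deliberately expensive for an attacker with the stolen database.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

# The same password with a different salt yields a different digest,
# so attackers can't crack all users' hashes with one pass.
other = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 600_000)
assert strong != other
```

Dell's statement is equally true of both approaches, which is exactly why "we hash passwords" tells customers almost nothing without the algorithm and parameters.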
Not just hashed passwords; they also claim your account details are safe.