(no title)
nixos|9 years ago
Immensely so.
On a scale of engineering "hardness" (meaning we can predict all the side effects of an action), software engineering is closer to medicine than to, say, civil engineering.
We know stresses, materials, and how they interact. We can predict what will happen, and how to avoid edge cases.
Software? Is there any commonly used secure software? Forget about Windows and Linux. What about OpenBSD?
Did it ever have a security hole?
And that's just the OS. What about software?
There are just too many variables.
So what will happen?
"Best practices" will be enshrined in law. Most will be security theater, most will erode our rights, and most will actually make things less safe.
Right now, the number one problem of IoT security is fragmentation. Samsung puts out an S6, three years later stops updating it, a hole is found, too bad. Game over.
The problem is that "locking down the firmware" is common security theater, so if there's ever a legal security requirement for IoT, it will demand a locked bootloader and firmware.
And you can't make a requirement to "keep the code secure", because then the question becomes: for how long? Five years? Ten?
stonogo|9 years ago
This level of hubris is pretty revolting. Software engineering is easy. Writing secure software is easy. The difference between civil engineering or medicine and software engineering is that practitioners of the former are held responsible for their work, and software engineers are not and never have been.
Nothing will improve until there are consequences for failure. It's that simple.
TeMPOraL|9 years ago
I agree lack of consequences is a big part of the problem. But this only hints at a solution strategy; it doesn't describe the problem itself. The problem is that software is so internally complex that it's beyond the comprehension of a human mind. To ultimately solve it and turn programming into a profession[0], we'd need to rein in the complexity - and that would involve actually developing detailed "industry best practices"[1] and sticking to them. This would require seriously dumbing down the whole discipline.
--
[0] - which I'm not sure I want; I like that I can do whatever the fuck I want with my general-purpose computer, and I would hate it if my children couldn't play with a Turing-complete language before they graduate with an engineering degree.
[1] - which we basically don't have now.
estefan|9 years ago
Of course it's not that simple. Clearly you've never written much, if any, real software.
You want to make an SSL connection to another web site in your backend. You use a library. If that library is found to contain a vulnerability that allows your site to be used in a DDoS, where do the "consequences for failure" lie? You used a library.
Do you think people will write free libraries if the "consequences" fall back on them? If not, have you even the slightest understanding of how much less secure, less interoperable and more expensive things will be if every developer needs to implement every line themselves to cover their backs? Say goodbye to anyone except MegaCorps being able to write any software.
Where does this end? Would we need to each write our own OSes to cover ourselves against these "consequences", our own languages?
thelambentonion|9 years ago
The seL4 project has produced a formally verified microkernel, open sourced along with end-to-end proofs of correctness [0].
On the web front, Project Everest [1] is attempting to produce a full, verified HTTPS stack. The miTLS sub-project has made good headway in providing development and reference implementations of 'safe' TLS [2].
These are only a few projects, but imo they're a huge step in the right direction for producing software solutions that have a higher level of engineering rigor.
[0] https://wiki.sel4.systems/FrequentlyAskedQuestions
[1] https://project-everest.github.io
[2] n.b. I'm not crypto-savvy, so I can't comment on what is or isn't 'safe' as any more than an interested layperson.
elihu|9 years ago
We need to ruthlessly eradicate undefined behavior at all levels of our software stacks. That means we need new operating systems. We need new programming languages. We need well-thought-out programming models for concurrency that don't allow the programmer to introduce race conditions accidentally. We need carefully designed APIs that are hard or impossible to mis-use.
Rust is promising. It's not the final word when it comes to safety, but it's a good start.
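The race-condition point is easy to make concrete. A minimal Python sketch (the class names here are invented for illustration) of the classic read-modify-write race, and the lock-protected version that an API could enforce so the race is impossible to introduce by accident:

```python
import threading

class UnsafeCounter:
    """Increment is a read-modify-write: two threads can interleave."""
    def __init__(self):
        self.value = 0

    def increment(self):
        current = self.value      # read
        self.value = current + 1  # write -- another thread may have
                                  # incremented in between; its update is lost

class SafeCounter:
    """The lock makes the read-modify-write atomic."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self.value += 1

def hammer(counter, n_threads=8, n_increments=10_000):
    """Run many increments concurrently and return the final count."""
    def work():
        for _ in range(n_increments):
            counter.increment()
    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value

print(hammer(UnsafeCounter()))  # may be less than 80000 (lost updates)
print(hammer(SafeCounter()))    # 80000
```

The safer design is to not expose `value` for direct mutation at all - then the misuse isn't merely discouraged, it's unrepresentable.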
An interesting thought experiment is what we would have left if we threw out all the C and C++ code and tried to build a usable system without those languages. For me, it's hard to imagine. It eliminates most of the tools I use every day. Maybe those aren't all security critical and don't all need to be rewritten, but many of them do if we want our systems to be trustworthy and secure. That's a huge undertaking, and there's not a lot of money in that kind of work, so I don't know how it's going to get done.
nixos|9 years ago
It depends on the CPU.
The problem is that C was designed to be as close as possible to the hardware, and in some places (an RTOS? the kernel?) speed is critical.
taeric|9 years ago
However, in the future where software can do everything, there is no such thing as "limited trust." If you trust someone to operate on your car, you are trusting them with everything the car interacts with. Which... quickly explodes to everything.
nixos|9 years ago
The opposite. When the field was in its infancy, one could keep the whole stack in one's head.
How complicated were CPUs in the 1960s?
How many lines of assembler were in the LM?
How many lines are the Linux or FreeBSD kernels? Now add libc.
Now you have a 1970s C compiler.
Now take into account all the optimizations any modern C compiler does. Now make sure there's no bugs _there_.
Now add a Python stack.
Now you can have decent, "safe" code. Most hacks don't target this part; the low-hanging fruit hangs lower elsewhere.
You need a math library. OK, import that. You need some other library. OK, import that.
Oops, there's a bug in one module. Or the admin setup wasn't done right. Or something blew.
Bam. You have the keys to the kingdom.
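That "keys to the kingdom" step is worth making concrete: in most languages, importing a library runs its top-level code with all of your privileges. A Python sketch (the `handymath` module is invented for this illustration):

```python
import os
import sys
import tempfile
import textwrap

# Write a "library" to disk whose import side effect reads a secret.
lib_dir = tempfile.mkdtemp()
with open(os.path.join(lib_dir, "handymath.py"), "w") as f:
    f.write(textwrap.dedent("""
        # Looks like an innocent math helper...
        def add(a, b):
            return a + b

        # ...but top-level code runs at import time, as *you*:
        import os
        stolen = os.environ.get("API_KEY", "<nothing>")
    """))

os.environ["API_KEY"] = "s3cret"
sys.path.insert(0, lib_dir)
import handymath                 # importing executes the module body

print(handymath.add(2, 2))       # 4
print(handymath.stolen)          # s3cret -- the library saw your secret
```

There's no sandbox between you and your dependencies: one buggy (or malicious) module in the chain has everything you have.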
And this is all deterministic. Someone _could_ verify that there are no bugs here.
But what about Neural Networks? The whole point of training is that the programmers _can't_ write a deterministic algorithm to self drive, and have to have a huge NN do the heavy lifting.
And that's not verifiable.
_This_ is what's going to be running your self-driving car.
That's why I compared software engineering to medicine, where we "test" a lot, hope for the best, and have it blow up in our face a generation later.
cm2187|9 years ago
New SQL injection vulnerabilities are being introduced every day. Passwords stored as unsalted MD5. Array bounds taken straight from client data. There are perhaps 5 to 10 coding errors that generate most of the vulnerabilities.
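Two of those recurring mistakes are easy to show side by side. A Python sketch using only the standard library's sqlite3 and hashlib (the table and column names are invented for illustration):

```python
import hashlib
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x')")

attacker_input = "' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
query = "SELECT name FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(query).fetchall())   # [('alice',)] -- matched every row

# Fixed: a parameterized query treats the input as data, not SQL.
rows = conn.execute("SELECT name FROM users WHERE name = ?",
                    (attacker_input,)).fetchall()
print(rows)                             # [] -- no user with that literal name

# Likewise, unsalted MD5 is the wrong tool for passwords; a salted,
# deliberately slow KDF such as hashlib.scrypt is the stdlib option:
salt = os.urandom(16)
pw_hash = hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1)
```

Both fixes are one-liners; the point is that they have to be applied every single time, by every developer, forever.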
That's not the only problem. We also need to trust the users, who are either careless or malicious. But I'd like at the very least to be able to trust our systems.