I'm not sure how many others share my view but I think that regulation is worth the benefit to security. I have always been very skeptical of the "but it'll hurt innovation" claim. Won't it promote innovation in new approaches for securing low-cost devices? It sure seems nebulous to me, but I am willing to be convinced otherwise.
tptacek|10 years ago
Another problem with regulating software security is that it will inevitably involve licensing software security assessors (it's hard to meaningfully require audits without doing that). The history of licensed security auditors is not reassuring; the economics predict a race-to-the-bottom, and that's what you get (see: PCI).
tzs|10 years ago
I too am not in favor (at this point) of requiring licensed assessors to approve software after it is complete, at least for most products. Embedded medical devices, vehicle control systems, and things like that probably should have an outside assessment.
I'd be happy for now just having some rules to try to make it so IoT device breaches are mostly due to bugs in the implementation of a good design, rather than due to the producers not having a clue about security.
I think we are fast approaching (if we have not already passed) the point where good security practices are something that almost every programmer and software architect should know and practice. There should be basic coverage of this in the standard computer science/software engineering curriculum, and there should be more extensive coverage as an optional part of the curriculum. If you take these optional courses, your degree is "B.S. in Computer Science and Computer Security" (BS CSCS). (There should also be a way to get this training outside of college, and get some sort of certificate that you have had this training.)
Those making products that reach the thresholds for regulation should have to have someone with a BS CSCS (or a certification of equivalent security training) who signed off on the architecture, development standards, and testing process used for the product.
My expectation is that as everything (for better or worse) gets connected, the vast majority of CS students will go for the CSCS option and so people with a BS CSCS will not be significantly harder to find or more expensive to hire than people with just a BS CS, and so even small new companies should be able to afford them once they get past the point of the founders doing all the work and start hiring employees.
firebones|10 years ago
There is good regulation and bad regulation, but regulation can accelerate innovation. With or without regulation, addressing this problem with more standardization of secure software and hardware infrastructure would reduce the need for human assessors (or at the very least, push what they're worrying about higher in the stack). Addressing it with licensing and more humans is probably not the kind of regulation I'd look for. So could there be bar-raising regulation that encouraged infrastructural solutions that benefited the industry as a whole?
I'd hate to inject insurers into this world, but one way might be to require IoT manufacturers to carry some sort of indemnification against potential consumer damages, and the insurers drive the security quality. In the 1990s, it was insurers, tired of anesthesia-related malpractice losses, who created back-pressure on the profession to put better clinical standards in place, and errors related to anesthesia-related causes dropped, as did premiums for practitioners following the guidelines. Everyone benefited--especially the patients.
But in the IoT world today, there are no meaningful incentives around securing devices, and consumers have little influence.
manyxcxi|10 years ago
How do you define reasonable security practices? If there's PII, what's reasonable then? What's reasonable today (OAuth, tokens, 2FA) was over-the-top crazy/impractical/expensive/impossible in 2001. You think there's going to be a committee evolving this crap every month in perpetuity?
On top of that, if actual harm comes to users of these devices as a result of these devices then we already have plenty of consumer laws protecting them. Granted, they're going to have to come up with ways to apply it sometimes and you're going to have to prove it was that device that allowed the harm, but we have it.
I will say this though: I'm mostly okay with laws (whether they exist yet or not) that say that if your negligence or stupidity was the root cause, as a manufacturer of these goods, you are on the hook for a multiplier of damages. There are a lot of companies out there that know they are pushing shit to market in a race to the bottom and then just claim security is hard and they tried their best when clearly, they knew about an 8 year old bug and shipped anyway. In that case, I'm okay with hitting them hard.
cstross|10 years ago
Such laws won't work, however, without a regulatory framework that ensures that -- for example -- click-through EULAs aren't used to lock customers into sleazy "binding arbitration" agreements that sacrifice their rights in return for permission to use an appliance they bought in good faith.
It may be difficult for regulators to keep up with specific technologies, but much tougher consumer rights protection is essential in order to hold negligent manufacturers responsible, because it's cheaper for the cowboy manufacturers to hire a lawyer to draft some dodgy contract boilerplate than it is for them to hire security experts and ship a safe product.
Silhouette|10 years ago
The trouble with this is that "actual harm" in a legal context tends to mean something that can be proven in some specific context and have some specific monetary value attached to it.
Personally, I think harm is also done if someone knows their financial details might have leaked and then worries about their credit record and future financial security, or if someone discovers that a creep somewhere in another country has been watching their baby sleeping, or if a "smart" TV has been transmitting personal conversations of whatever nature from the living room to someone else. However, if we're only talking "actual damages", how do you decide what financial compensation is appropriate in such cases?
In reality, the most damaging violations probably aren't the ones with tangible financial losses attached, because financial losses can at least be made good after the fact. You can't make up for lost time, though maybe you can at least assign some nominal value to compensate for time spent on things like updating credentials after a breach. No amount of money can make up for the kind of distress caused to a teenager if a compromised device leaks something like their diary or an intimate video of them getting changed and the results go all around their school.
If security and privacy implications for the Internet of Things are to be taken seriously, I suspect the laws will need updating so that (a) there is a presumption of harm in cases where personal information leaks to an unintended party, and (b) there is a punitive value attached to leaks that cause non-monetary damage, with that value being very high for leaks that cause severe and/or ongoing distress.
I don't think this needs regulation. All it needs is a scale of meaningful penalties, leading up to company-destroying fines and/or jail time for executives for the most serious infringements caused by gross negligence or malice.
JoBrad|10 years ago
> On top of that, if actual harm comes to users of these devices as a result of these devices then we already have plenty of consumer laws protecting them.
When was the last time a software company was held liable for their software not working correctly, and exposing users unnecessarily? Most of them EULA their way out of any lawsuit to begin with.
fiatmoney|10 years ago
So the regulation ends up being "go through the security process" (take something like PCI compliance as your model). This always ends up being a crappy fit because the guy doing the process can typically only throw out a list of "best practices" that may or may not make any sense for any particular application, and in any event aren't comprehensive enough. It's also wildly expensive, since the process is embedded in a regulatory-certified person who charges N$ / hour.
Empirically the best you can do absent some specific industrial setup is a series of bright-line rules like "don't store passwords in the clear", but that's far from sufficient.
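A bright-line rule like that one can at least be made concrete in a few lines. Here's a minimal sketch using only Python's standard library; the iteration count and salt length are illustrative assumptions, not a recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor, tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; store (salt, digest), never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The point of a bright-line rule is exactly that it's checkable without a $N/hour assessor: either the database contains plaintext passwords or it doesn't.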
acdha|10 years ago
The one which would make the most sense to me is something like a souped-up CERT: researchers report vulnerabilities to them, staff grades the severity, and a company has increasingly strict penalties if the fix isn't shipped within certain timeframes. Imagine if e.g. Samsung, Lenovo, etc. executives knew that their personal assets would be frozen in the U.S. if they continued not to support all of the millions of vulnerable Android devices?
The main thing I'd hope an approach like this could avoid would be the PCI bureaucracy you mentioned where a company might choose to avoid riskier areas rather than being required to expensively audit a process.
noonespecial|10 years ago
The real innovators will then not be able to come to market because they don't have $750k extra laying around for 6 months of burn waiting for/obtaining certifications, bonds, insurance, etc.
Edit: In theory, I agree with OP, but in practice, these things almost always end up being more about permission than proficiency so we end up with corruption instead of competence.
pdkl95|10 years ago
If a product leaks pictures of your kids to the internet when it is used normally, the product is defective. If the problem was caused by a bad design[1], then the manufacturer should be liable for their negligence.
Yes, this would make entire categories of currently-used software unusable. It would probably require recalling many current and upcoming products. Adding complex network features (or any network connectivity at all) would also add liability risk, so this would also discourage (but not ban) throwing internet connectivity on everything.
As Dan Geer recommended[2], when the product is Free Software (including the build environment), the end user has the ability to defend themselves, so liability can probably be limited to a refund. However, with proprietary software or embedded devices where changing the software is not practical, the manufacturer should be liable for any damage their products cause.
I'm sure there will be a lot of resistance to this idea, as many products currently rely on bad design (smart TVs, Nest), but allowing a security-free internet of things to happen would be yet another Sword of Damocles hanging over our heads. Liability may be bad, but the problems that will happen if we connect everything to the internet without serious security would be much worse.
[1] "bad design" would not include things outside o f the manufacturer's control, such as new way to weaken crypto or a completely new attack method. Buffer overflows, protocol design problems, incorrect configuration or permissions, unauthenticated updates or other downloads, and sending plaintext over a network should count.
[2] http://geer.tinho.net/geer.blackhat.6viii14.txt
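To make "unauthenticated updates" from [1] concrete: the fix is cheap relative to the liability. A minimal sketch in Python, assuming a symmetric per-device key for brevity (the key name and functions are hypothetical; a real device would verify an asymmetric signature over the image instead):

```python
import hashlib
import hmac

# Assumption for illustration: a secret provisioned onto the device at
# manufacture. Real firmware pipelines verify a vendor signature instead.
SHARED_KEY = b"device-provisioned-secret"

def is_authentic(image: bytes, tag: bytes) -> bool:
    """Accept a firmware image only if its HMAC tag matches.

    The negligent design is the absence of any such check: installing
    whatever bytes arrive from the update server.
    """
    expected = hmac.new(SHARED_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A device that skips this check will happily flash an attacker's image from a spoofed update server, which is squarely in the "bad design" column above.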
daveguy|10 years ago
That would be simple if companies weren't able to lawyer up and weasel out of any and all liability that doesn't come with explicit standards required by ... regulation. What separates "defective" from "not defective" when determining liability is regulation. Regulations don't have to be "fine X will be levied if Y"; they can be "Y is required for product Z". That is regulation, and it is how we define liability in the legal system. What you are proposing -- establishing what counts as bad design -- is the basic definition of regulation.
cmurf|10 years ago
So maybe you're talking about changing the law, but good luck with that.
ctulek|10 years ago
Regulations on hardware devices do not stop innovation in hardware. One can say there are far fewer hardware startups than software startups, but I don't think regulations are the main reason for this difference.
golergka|10 years ago
Which will not turn away a dedicated geek, but will give a hint to an average soccer mom.
zanny|10 years ago
The only answer that makes any sense at all is fundamental legislation that any product where the primary product is the physical article, and not the software, must publish the source to included software. That way, even if IoT devices are abandoned or become insecure, we can update our own hardware.
Most people would not be able to maintain their own devices, but we can easily end up with OpenWRT / DDWRT style products for each class of IoT device if they are required to be freedom respecting. Then techies will naturally instruct their peers to use supported devices, and the natural progression should get us most of the way to where we are today on routers - the liberated ones are recommended and can be supported by the community even if the OEM abandons them, and the ones that are not are a red flag to avoid. The only problem today is that since there is no compulsion to liberate routers a lot of them are sold to ignorant consumers who do not realize the mistake they are making.
So maybe that should be a regulation? Like with how cigarettes must inform consumers of how dangerous they are, proprietary IoT devices must have an FCC general warning their security is out of the users control.
adventured|10 years ago
Federal regulations have gone from 20,000 pages in 1970 to 80,000 this year, with a 60% increase just since 1990. The US loves regulation. And that's just at the Federal level; there's an entire government system nearly the size of the Federal Government at the State level.