item 10958965

miander | 10 years ago

I'm not sure how many others share my view, but I think the benefit to security is worth the regulation. I have always been very skeptical of the "but it'll hurt innovation" claim. Won't it promote innovation in new approaches to securing low-cost devices? The claim sure seems nebulous to me, but I am willing to be convinced otherwise.


tptacek|10 years ago

If you believe any part of innovation comes from new products launched by small new companies, then regulation will hurt security, because to a first approximation none of those kinds of companies have any coherent plan for software security. None of them can afford market rates for this kind of work.

Another problem with regulating software security is that it will inevitably involve licensing software security assessors (it's hard to meaningfully require audits without doing that). The history of licensed security auditors is not reassuring; the economics predict a race-to-the-bottom, and that's what you get (see: PCI).

tzs|10 years ago

The concerns in the first paragraph could perhaps be addressed by having volume thresholds before regulation kicks in. If your internet-connected special-purpose device has more than N unit sales, more than $M in sales, or is offered in more than K brick-and-mortar stores, then it is subject to security regulation.
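A hypothetical sketch of such a trigger. The threshold values below (N, M, K) are invented for illustration; the comment deliberately leaves them unspecified.

```python
# Hypothetical regulation trigger based on the volume thresholds
# described above. N, M, and K are made-up placeholder values.

def regulation_applies(units_sold: int, dollar_sales: float,
                       retail_stores: int) -> bool:
    """Return True if any volume threshold is exceeded."""
    UNIT_THRESHOLD = 100_000        # N: unit sales
    SALES_THRESHOLD = 5_000_000.0   # M: dollar sales
    STORE_THRESHOLD = 50            # K: brick-and-mortar stores
    return (units_sold > UNIT_THRESHOLD
            or dollar_sales > SALES_THRESHOLD
            or retail_stores > STORE_THRESHOLD)

print(regulation_applies(200_000, 1_000_000, 10))  # True: exceeds unit threshold
print(regulation_applies(1_000, 50_000, 2))        # False: small startup, exempt
```

The "or" logic matters: a company clearing any one threshold is covered, so a low-unit-count, high-revenue product can't slip through.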

I too am not in favor (at this point) of requiring licensed assessors to approve software after it is complete, at least for most products. Embedded medical devices, vehicle control systems, and things like that probably should have an outside assessment.

I'd be happy for now just having some rules to try to make it so IoT device breaches are mostly due to bugs in the implementation of a good design, rather than due to the producers not having a clue about security.

I think we are fast approaching (if we have not already passed) the point where good security practices are something that almost every programmer and software architect should know and practice. There should be basic coverage of this in the standard computer science/software engineering curriculum, and there should be more extensive coverage as an optional part of the curriculum. If you take these optional courses, your degree is "B.S. in Computer Science and Computer Security" (BS CSCS). (There should also be a way to get this training outside of college, and get some sort of certificate that you have had this training.)

Those making products that reach the thresholds for regulation should have to have someone with a BS CSCS (or a certification of equivalent security training) who signed off on the architecture, development standards, and testing process used for the product.

My expectation is that as everything (for better or worse) gets connected, the vast majority of CS students will go for the CSCS option and so people with a BS CSCS will not be significantly harder to find or more expensive to hire than people with just a BS CS, and so even small new companies should be able to afford them once they get past the point of the founders doing all the work and start hiring employees.

firebones|10 years ago

I'm confused by your first sentence. As written, I think you mean that regulation will hurt innovation in the IoT space, not security in the IoT space, because you've already said that none of these companies have coherent plans for software security today. Regulation won't hurt security; it may help it by creating an incentive for standardization around secure infrastructure, chipping away at the "market rates for this kind of work" which "none of them can afford".

There is good regulation and bad regulation, but regulation can accelerate innovation. With or without regulation, addressing this problem with more standardization of secure software and hardware infrastructure would reduce the need for human assessors (or at the very least, push what they're worrying about higher in the stack). Addressing it with licensing and more humans is probably not the kind of regulation I'd look for. So could there be bar-raising regulation that encouraged infrastructural solutions that benefited the industry as a whole?

I'd hate to inject insurers into this world, but one way might be to require IoT manufacturers to carry some sort of indemnification against potential consumer damages, and let the insurers drive security quality. In the 1990s, it was insurers, tired of anesthesia-related malpractice losses, who created back-pressure on the profession to put better clinical standards in place; anesthesia-related errors dropped, as did premiums for practitioners following the guidelines. Everyone benefited--especially the patients.

But in the IoT world today, there are no meaningful incentives around securing devices, and consumers have little influence.

manyxcxi|10 years ago

I wholeheartedly disagree about letting regulators have anything to do with technology. It moves too fast and has too many interpretations to be codified into common-sense law, leaving just the deep-pocketed corps to write the regulations, just like they've already done everywhere else.

How do you define reasonable security practices? If there's PII, what's reasonable then? What's reasonable today (OAuth, tokens, 2FA) was over-the-top crazy/impractical/expensive/impossible in 2001. You think there's going to be a committee evolving this crap every month in perpetuity?

On top of that, if actual harm comes to users of these devices as a result of these devices then we already have plenty of consumer laws protecting them. Granted, courts are going to have to come up with ways to apply those laws sometimes, and you're going to have to prove it was that device that allowed the harm, but we have them.

I will say this though: I'm mostly okay with laws (whether they exist or not yet) that say that if your negligence or stupidity was the root cause, as a manufacturer of these goods, you are on the hook for a multiplier of damages. There are a lot of companies out there that know they are pushing shit to market in a race to the bottom, and then just claim security is hard and they tried their best when clearly they knew about an 8-year-old bug and shipped anyway. In that case, I'm okay with hitting them hard.

cstross|10 years ago

> I'm mostly okay with laws (whether they exist or not yet) that say that if your negligence or stupidity was the root cause, as a manufacturer of these goods, you are on the hook for a multiplier of damages.

Such laws won't work, however, without a regulatory framework that ensures that -- for example -- click-through EULAs aren't used to lock customers into sleazy "binding arbitration" agreements that sacrifice their rights in return for permission to use an appliance they bought in good faith.

It may be difficult for regulators to keep up with specific technologies, but much tougher consumer rights protection is essential in order to hold negligent manufacturers responsible, because it's cheaper for the cowboy manufacturers to hire a lawyer to draft some dodgy contract boilerplate than it is for them to hire security experts and ship a safe product.

Silhouette|10 years ago

> On top of that, if actual harm comes to users of these devices as a result of these devices then we already have plenty of consumer laws protecting them.

The trouble with this is that "actual harm" in a legal context tends to mean something that can be proven in some specific context and have some specific monetary value attached to it.

Personally, I think harm is also done if someone knows their financial details might have leaked and then worries about their credit record and future financial security, or if someone discovers that a creep somewhere in another country has been watching their baby sleeping, or if a "smart" TV has been transmitting personal conversations of whatever nature from the living room to someone else. However, if we're only talking "actual damages", how do you decide what financial compensation is appropriate in such cases?

In reality, the most damaging violations probably aren't the ones with tangible financial losses attached, because financial losses can at least be made good after the fact. You can't make up for lost time, though maybe you can at least assign some nominal value to compensate for time spent on things like updating credentials after a breach. No amount of money can make up for the kind of distress caused to a teenager if a compromised device leaks something like their diary or an intimate video of them getting changed and the results go all around their school.

If security and privacy implications for the Internet of Things are to be taken seriously, I suspect the laws will need updating so that (a) there is a presumption of harm in cases where personal information leaks to an unintended party, and (b) there is a punitive value attached to leaks that cause non-monetary damage, with that value being very high for leaks that cause severe and/or ongoing distress.

I don't think this needs regulation. All it needs is a scale of meaningful penalties, leading up to company-destroying fines and/or jail time for executives for the most serious infringements caused by gross negligence or malice.

JoBrad|10 years ago

You can't hold someone liable for a standard that isn't legally defined. So, defining regulation that sets a reasonable expectation of security that every IoT manufacturer has to adhere to is not a bad idea, and helps everyone.

> On top of that, if actual harm comes to users of these devices as a result of these devices then we already have plenty of consumer laws protecting them.

When was the last time a software company was held liable for their software not working correctly, and exposing users unnecessarily? Most of them EULA their way out of any lawsuit to begin with.

fiatmoney|10 years ago

The problem is that a regulation like "be secure" is impractical, because to a close approximation no system is actually 100% secure - it's just a matter of the effort required to hack it.

So the regulation ends up being "go through the security process" (take something like PCI compliance as your model). This always ends up being a crappy fit, because the guy running the process can typically only throw out a list of "best practices" that may or may not make sense for any particular application, and in any event aren't comprehensive enough. It's also wildly expensive, since the process is embedded in a regulatory-certified person who charges $N/hour.

Empirically the best you can do absent some specific industrial setup is a series of bright-line rules like "don't store passwords in the clear", but that's far from sufficient.
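The "don't store passwords in the clear" bright-line rule at least has a well-understood implementation: store only a salted, slow hash and compare in constant time. A minimal sketch using only the Python standard library; the iteration count and salt size here are illustrative choices, not anything prescribed in the thread.

```python
# Sketch of the bright-line rule: never store the password itself,
# only a salted PBKDF2 hash, and verify with a constant-time compare.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) suitable for storage; the password is discarded."""
    salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare without leaking timing information."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

Even a rule this crisp shows the auditing problem: a checklist can confirm that *some* hashing happens, but not that the iteration count, salt handling, or comparison are adequate for a given application.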

acdha|10 years ago

What if the regulation largely ignored the technology and instead focused on responsibility: prohibiting license terms that require arbitration or restrict class-action cases, setting minimum warranty terms that treat software support as a primary requirement (no selling a washing machine with a 10+ year hardware lifetime but ending software support 6 months after release), and restricting liability disclaimers so a company can't completely shirk responsibility the way everyone does now?

The one which would make the most sense to me is something like a souped-up CERT: researchers report vulnerabilities to them, staff grades the severity, and a company has increasingly strict penalties if the fix isn't shipped within certain timeframes. Imagine if e.g. Samsung, Lenovo, etc. executives knew that their personal assets would be frozen in the U.S. if they continued not to support all of the millions of vulnerable Android devices?
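A toy sketch of how the "increasingly strict penalties" in that souped-up CERT idea might be computed. The severity tiers, deadlines, and base fine below are all invented for illustration; the comment proposes the mechanism, not the numbers.

```python
# Hypothetical escalating-penalty schedule for unfixed vulnerabilities:
# nothing is owed inside the graded deadline, and the fine doubles for
# each further deadline-length period that elapses without a fix.

FIX_DEADLINES_DAYS = {"critical": 30, "high": 90, "moderate": 180}

def penalty(severity: str, days_unfixed: int, base_fine: int = 10_000) -> int:
    """Return the fine owed for a vulnerability open for days_unfixed days."""
    deadline = FIX_DEADLINES_DAYS[severity]
    if days_unfixed <= deadline:
        return 0
    # Number of complete-or-partial deadline periods past the due date.
    periods_overdue = (days_unfixed - deadline + deadline - 1) // deadline
    return base_fine * 2 ** (periods_overdue - 1)

print(penalty("critical", 20))  # 0: still inside the 30-day window
print(penalty("critical", 45))  # 10000: one period overdue
print(penalty("critical", 95))  # 40000: three periods overdue
```

The doubling is what gives the scheme teeth: ignoring a vulnerability gets exponentially more expensive than fixing it, which is the incentive the comment is after.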

The main thing I'd hope an approach like this could avoid would be the PCI bureaucracy you mentioned where a company might choose to avoid riskier areas rather than being required to expensively audit a process.

noonespecial|10 years ago

I think it will hurt real innovation. The big companies will just build the same crap they always do, but have an army of clerks and lawyers to shepherd that crap through the kafkaesque paperwork scheme that will certainly develop.

The real innovators will then not be able to come to market because they don't have an extra $750k lying around for 6 months of burn while waiting for/obtaining certifications, bonds, insurance, etc.

Edit: In theory, I agree with OP, but in practice, these things almost always end up being more about permission than proficiency so we end up with corruption instead of competence.

pdkl95|10 years ago

Specific security regulation is not necessary, because the solution is simple: liability.

If a product leaks pictures of your kids to the internet when it is used normally, the product is defective. If the problem was caused by a bad design[1], then the manufacturer should be liable for their negligence.

Yes, this would make entire categories of currently-used software unusable. It would probably require recalling many current and upcoming products. Adding complex network features (or any network connectivity at all) would also add liability risk, so this would also discourage (but not ban) throwing internet connectivity on everything.

As Dan Geer recommended[2], when the product is Free Software (including the build environment), the end user has the ability to defend themselves, so liability can probably be limited to a refund. However, with proprietary software, or embedded devices where changing the software is not practical, the manufacturer should be liable for any damage their products cause.

I'm sure there will be a lot of resistance to this idea, as many products currently rely on bad design (smart TVs, Nest), but allowing a security-free Internet of Things to happen would be yet another Sword of Damocles hanging over our heads. Liability may be bad, but the problems that will happen if we connect everything to the internet without serious security would be much worse.

[1] "Bad design" would not include things outside of the manufacturer's control, such as a new way to weaken crypto or a completely new attack method. Buffer overflows, protocol design problems, incorrect configuration or permissions, unauthenticated updates or other downloads, and sending plaintext over a network should count.

[2] http://geer.tinho.net/geer.blackhat.6viii14.txt

daveguy|10 years ago

> Specific security regulation is not necessary, because the solution is simple: liability

That would be simple if companies weren't able to lawyer up and weasel out of any and all liability that doesn't come with explicit standards required by ... regulation. What defines "defective" vs. "not defective" in determining liability is regulation. Regulations don't have to be "fine X will be levied if Y"; they can be "Y is required for product Z". That is regulation, and it is how we define liability in the legal system. What you are proposing -- establishing what counts as bad design -- is the basic definition of regulation.

cmurf|10 years ago

What's the definition of "defective"? The company defines this, and federal regulation can supersede that, but between those two things the "leaks personal data" definition of defective must actually be present; it isn't legally defective just because you (and most any reasonable person) say it is. If it's explicitly excluded from the warranty (or EULA) and inclusion isn't required by federal law, then you're SOL, because you've tacitly agreed to be bound by that warranty and EULA by buying the product and not returning it. EULAs allow companies to get away with even known bad-design bugs in software that cause data loss; there's nothing you can do about this liability-wise.

So maybe you're talking about changing the law, but good luck with that.

ctulek|10 years ago

+1

Regulations on hardware devices do not stop innovation in hardware. One can say there are far fewer hardware startups than software startups, but I don't think regulations are the main reason for this difference.

golergka|10 years ago

Maybe instead of prohibiting insecure products we could just put labels on them? Want to buy a shiny new thing from Kickstarter? Be my guest, but it'll have a huge SECURITY NOT CERTIFIED label on the side, or something like that.

That will not turn away a dedicated geek, but it will give a hint to the average soccer mom.

JoBrad|10 years ago

Certified by whom? And how do we keep those security labels from becoming as useless as certifications like "organic", "natural", and "fat free"?

yk|10 years ago

Any suggestions for what such regulation should look like? For example, automatic updates would probably be a must-have for network-connected devices. On the other hand, I do not really want my TV to phone home. The problem is that computers are too flexible for meaningful regulation. Contrast this with the case of a boiler: the user's interest, that the boiler does not explode, is clear to both the engineer and the housewife.

Joeri|10 years ago

Security can never be perfect, but it can be sufficient. Simple, sufficient regulation could mean having a clearly documented address and process for reporting security issues, plus an obligation to provide fixes for remote vulnerabilities within a reasonable time frame after becoming aware of them, for the length of the warranty period (they are design flaws, so warranty should cover them). In other words, legally mandate that everyone do what conscientious vendors already do.

zanny|10 years ago

Just compelling the ability to update does not mean anyone will make updates.

The only answer that makes any sense at all is fundamental legislation that any product where the primary deliverable is the physical article, and not the software, must publish the source to its included software. That way, even if IoT devices are abandoned or become insecure, we can update our own hardware.

Most people would not be able to maintain their own devices, but we could easily end up with OpenWRT/DD-WRT style projects for each class of IoT device if they are required to be freedom-respecting. Then techies will naturally instruct their peers to use supported devices, and the natural progression should get us most of the way to where we are today with routers - the liberated ones are recommended and can be supported by the community even if the OEM abandons them, and the ones that are not are a red flag to avoid. The only problem today is that, since there is no compulsion to liberate routers, a lot of them are sold to ignorant consumers who do not realize the mistake they are making.

So maybe that should be a regulation? Like how cigarettes must inform consumers of how dangerous they are, proprietary IoT devices must carry an FCC general warning that their security is out of the user's control.

abrezas|10 years ago

We already have regulations for things like safety, so it all depends on how serious you think the matter is, and how well you think the government will be able to regulate it.

xbmcuser|10 years ago

I agree the US dislike of regulations is understandable because of the cost they add, but they let things go too far. The subprime mortgage crisis is a good example: they gave an inch and the banks stole miles.

merpnderp|10 years ago

The banks were told by the regulators to give out these loans, and the government even invented the scheme to cook the books. More like partners in crime than dereliction of duty.

adventured|10 years ago

The US dislikes regulations? Since when? It has one of the most regulated major economies. With banking being arguably the most regulated industry.

Federal regulations have gone from 20,000 pages in 1970 to 80,000 this year, with a 60% increase just since 1990. The US loves regulation. And that's just at the Federal level; there's an entire government system nearly the size of the Federal Government at the State level.