item 13383273

Rave Panic Button: Vulnerabilities in a Nationwide Emergency Alert System

85 points | rwestergren | 9 years ago | randywestergren.com | reply

27 comments

[+] zaroth|9 years ago|reply
So they found a hard-coded API token and no further user authentication to limit requests. This is obviously a common rookie mistake (didn't Facebook, Snapchat, and Instagram all fall for it at some point?), and security software in particular should not have such a basic vulnerability. But I do think TFA veers past security write-up into editorial when it starts critiquing the cost/pricing and calling for customers to drop the software, particularly since the vendor was responsive to the report.
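A minimal sketch of the difference being described (all token and session values here are invented for illustration, not taken from the app): a server that only checks an app-wide secret cannot tell callers apart, while per-user tokens at least scope each request to a registered account.

```python
import hmac
from typing import Optional

# Anti-pattern: one API token compiled into every copy of the client.
# Anyone who unpacks the app recovers it and can issue requests as
# anyone. (Token value is made up for illustration.)
HARDCODED_TOKEN = "app-wide-secret-123"

def check_static_token(token: str) -> bool:
    # Verifies only the shared secret -- the server has no idea *who*
    # is calling, so it cannot rate-limit or revoke individual users.
    return hmac.compare_digest(token, HARDCODED_TOKEN)

# Safer shape: issue a distinct token per registered user and resolve
# the caller on every request (the sessions table is hypothetical).
SESSIONS = {"a1b2c3": "user-42"}

def check_user_token(token: str) -> Optional[str]:
    # Returns the authenticated user ID, or None for unknown tokens.
    return SESSIONS.get(token)
```

With the static check, a token pulled out of any one install works everywhere; with per-user tokens, a leaked token identifies (and can be revoked for) exactly one account.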

It seems like maybe the vendor's communication trailed off toward the end, leaving OP with bad feelings that show up in the write-up.

Generally I like to see security research and vulnerabilities reported as a way to document the process used and experience gained, but not to skewer the company except with respect to the responsiveness of their patch.

Of course, everyone here is free to skewer the app as much as they like, I just don't like reading it in the actual write-up.

[+] caf|9 years ago|reply
Sure, but... this isn't Instagram, is it? Somehow I feel that if you're purporting to build a safety-critical service, the bar ought to be set a bit higher for you than if you're sharing pictures of people's lunch.

It's a common rookie mistake, but perhaps the takeaway is that this kind of area isn't something that rookies should be tackling.

[+] sailfast|9 years ago|reply
This discovery and write-up were an awesome read, but I do disagree with the price critique. $70K seems relatively inexpensive for deployment of an app at a state level, let alone development of an app. Proof of concept, perhaps, but what does a full security audit cost these days? If anything I would be concerned that the budget did not include those kinds of factors and should have cost a bit more :)

This article does lead me to wonder if Rave has missed an opportunity - installation of an app on Crestron / Android room automation systems that accomplish similar things. That would take the mobile component out but still provide a benefit from a facilities / awareness perspective.

[+] rwestergren|9 years ago|reply
Appreciate the feedback! My point on the price concern was that the app was not developed solely for my county, it was resold to multiple customers - at least 2,000 according to the link I posted in another comment.

I'm not sure what other customers were charged for the app, but if they were all $70K (as my County was), then that's a hefty rake.

[+] stuaxo|9 years ago|reply
The author says that $70k seems a lot to build the app.

I don't know how long it took, but depending on the size of the team etc, it sounds pretty cheap TBH - certainly not enough $ there to make something secure and supported.

[+] dsl|9 years ago|reply
I read it as they charged one customer $70k. I imagine they plan on having more customers at some point.
[+] raesene9|9 years ago|reply
Interesting technical detail, but for me the main takeaway is that we're still seeing expensive software systems ($70k per customer) procured by large numbers of organizations (~2,000 customers) for safety-critical work, and still none of them mandate security reviews as part of the procurement process (or indeed require the vendor to have had a review done).

The vulns found here are serious, but any moderately detailed app security review should/would have found them.

Until customers start requiring security reviews for the software they're buying, we'll see a load more insecure apps being sold.

[+] drc500free|9 years ago|reply
It's not $70k per customer. It's $70k in development costs. On the other hand, 2000 customers is for all of the developer's products, not just this app.
[+] ParadoxOryx|9 years ago|reply
That's definitely concerning. Makes you wonder how well secured our other emergency/critical systems really are.
[+] patcheudor|9 years ago|reply
Hey, good find but I'm a little confused. It appears that you found what could be a serious issue if no other checks are in place; however, in doing so you appear to have exceeded the access you were provided through the application to their back-end system. Did this fall under a bug bounty program where you had permission to do this or did the company give you written permission? I was looking for a bug bounty program and couldn't find one.

I ask because it looks like you were performing testing which touched their infrastructure, not just your phone, and the US Computer Fraud and Abuse Act gets pretty scary (felony scary) when it comes to such things:

https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act

I do a ton of mobile application reviews and find stuff like this quite often but back away at the point I start touching their infrastructure rather than just my phone.

[+] charonn0|9 years ago|reply
> In order to confirm this suspicion, I decided to proxy my phone’s traffic and attempt registering with the app using dummy phone values.

Am I wrong in assuming that being able to proxy the app's HTTPS traffic is evidence of another security problem, specifically that the app is not validating the server's SSL certificate?

[+] zaroth|9 years ago|reply
No, the proxy's CA certificate is trusted by the device, so it can mint a cert the app accepts as valid.

It does mean they are not pinning their cert, but most apps do not.

Even if it were pinned, you could disassemble the app, modify the pin, and still MITM it.
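The pinning idea mentioned here can be sketched in a few lines: the client bakes in the expected SHA-256 fingerprint of the server's leaf certificate and compares it on every connection (the pin below is a placeholder, not any real server's value).

```python
import hashlib
import socket
import ssl

# Placeholder pin -- in a real app this would be the SHA-256 of the
# server's actual DER-encoded leaf certificate, baked in at build time.
PINNED_SHA256 = "0" * 64

def cert_fingerprint(der_cert: bytes) -> str:
    """Hex SHA-256 digest of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_pinned(host: str, port: int = 443) -> bool:
    """Open a TLS connection and accept it only if the leaf cert
    matches the pin. An intercepting proxy presents its own cert;
    even though that cert chains to a CA the device trusts, its
    fingerprint differs, so the check fails."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            return cert_fingerprint(der) == PINNED_SHA256
```

As noted above, this only raises the bar: an attacker who can modify the app binary can swap out the pinned value and proxy the traffic anyway.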

[+] wildrhythms|9 years ago|reply
Great snooping, and an awesome writeup! As the author points out, organizations should be skeptical of security even when the developer/publisher claims the product is secure.
[+] elipsey|9 years ago|reply
The vendor was notified three weeks before this public disclosure. Is this reasonable? How should a timeline for public disclosure be determined?
[+] icebraining|9 years ago|reply
You're misreading; the vendor was notified in November.
[+] rhodrid|9 years ago|reply
Rave Panic Button: For when the drop goes too deep.