For anyone who is coming straight to the comments before reading the article: the details are even worse than the headline suggests.
Not only was a huge amount of information exposed through a public, unauthenticated MongoDB instance, and not only did CloudPets ignore multiple security researchers' attempts to alert them to the problem, but the database was actually held for ransom multiple times without customers being alerted to the breach.
This is _insane_. My daughter got a surprise CloudPets toy for her birthday from a distant relative. The app you have to use with it is also filled with ads, some of which are of an adult nature. This company is sleazy as hell. I hope they get sued out of existence.
From what I've seen, a lot of those MongoDB ransomware attacks actually just delete the data and leave a ransom note in the hope of getting free bitcoin. So in a sense they've done some good by removing it from the internet.
A guy I work with, who is big into reverse-engineering Bluetooth devices, did a presentation on this product. I can assure you the toys themselves are just as insecure as their infrastructure apparently is.
Seeing it light up and say "destroy all humans" was pretty funny, more so because there is pretty much zero authentication on them, so you could do it from anywhere with your mobile, and the mic can turn on and record without any authentication at all.
Sigh... internet of things.
Meanwhile police in a murder case are preparing to take Amazon to court for Echo records. On the privacy front, there's just no saving people, but the IoT brings the magic of invading privacy together with furnishing botnets with millions of new bots!
We're screwed coming and going, and the vast majority still look at you like a woodland hermit if you suggest that you shouldn't have anything listening to you in your home.
I'll be putting out our blog post about this first thing tomorrow (we had it ready to go for next week, but I think now's a good time to add some fuel to the fire). Essentially the toy uses Bluetooth LE very insecurely, and it has a speaker and a microphone. Guess what happens next?
Edit: Demo of the CloudPets functionality using Web Bluetooth: https://github.com/pdjstone/cloudpets-web-bluetooth/
Reading and fully comprehending the full contents and implications of https://twitter.com/internetofshit should be required for anyone who is thinking about making an IoT-type device.
I do agree that lots of IoT products have terrible security, but is having insecure Bluetooth or the like really a terrible thing for most of these types of products?
I understand that this leak is related to MongoDB, and that is terrible; I'm mostly referring to your Bluetooth example.
Take Bluetooth headphones: they are notoriously insecure, but the range at which eavesdropping could take place is pretty small, and for most of us you would just be eavesdropping on our annoying music. It seems reasonable that they skip secure transmission to save bandwidth for higher audio quality. That said, I could see an argument the other way, but I'm sure there are more examples where it doesn't seem like a big deal. It would be interesting to hear from someone who thinks I'm dead wrong.
> The Germans had a good point: kids' toys which record their voices and send the recordings up to the web pose some serious privacy risks. It's not that the risks are particularly any different to the ones you and I face every day with the volumes of data we produce and place online (and if you merely have a modern phone, that's precisely what you're doing), it's that our tolerances are very different when kids are involved
It's a bit paradoxical. There are far fewer things a kid can say that can get him in trouble than an adult. Even the most oppressive regime will not hold what a 4-year-old says against him. The need for privacy should arguably be lower for a kid than for an adult.
What it means is that violations of privacy are creepy, period. We try to rationalise it by arguing that we get something out of it, but when dealing with our kids, we stop believing our own bullshit and it just becomes purely creepy...
Yeah, I'm not worried about my kid saying things that will get him in trouble. However... he repeats literally everything that he hears, sometimes verbatim, sometimes hours or days later. To be honest, it's really creepy at times. Plus, he doesn't really have a filter, so he'll talk about everything he sees at school or on the playground, just chattering away all day to himself.
So I'm worried about my kid saying things that could get other people into trouble.
I think it's more that we can rationalize it as adults because we can make a choice to give up the privacy or not. A child hasn't developed mentally enough yet to understand that choice. That said, I agree that a child has less potential for revealing information.
When will these companies be held liable for breaches like this? The time for feigned ignorance is over; this is negligence at best, outright greedy indifference at worst. There are no more excuses.
>the average parent.. is technically literate enough to know the wifi password but not savvy enough to understand how the "magic" of daddy talking to the kids through the bear (and vice versa) actually works [or] that every one of those recordings... is stored as an audio file on the web.
If it is not considered amazingly stupid, or at least ignorant to not understand that the magic talking bear has a computer in it, and that if the computer wants the wifi password it probably uses the internet, and that if the entire purpose of the device is to make recordings available to you over the internet... then I despair. My sympathy for people who buy these sorts of products is wearing thin. But, in this particular instance...
>our tolerances are very different when kids are involved
Interesting. Why? The data is much less valuable:
>One little girl who sounded about the same age as my own 4-year old daughter left a message to her parents: "Hello mommy and daddy, I love you so much." Another one has her singing a short song, others have precisely the sorts of messages you'd expect a young child to share with her parents.
Hardly identity thief material.
> If it is not considered amazingly stupid, or at least ignorant to not understand that the magic talking bear has a computer in it, and that if the computer wants the wifi password it probably uses the internet, and that if the entire purpose of the device is to make recordings available to you over the internet... then I despair.
I think you vastly overestimate the degree to which non-technical consumers understand computers, wifi, the internet, email, web sites, apps on their phone, and the differences and boundaries between any of those.
Because while we can make an informed decision about putting our own data into such a service, weighing up the risks and benefits, a four year old cannot - a parent is making that decision for them, and when you are making such a decision on behalf of someone else it behooves you to act more conservatively than when deciding on your own behalf.
True, but potentially very dangerous material in other ways. It's not hard to imagine kidnappers piecing together stolen audio clips to create fake messages as part of a ransom attempt. Or scammers creating audio clips to scare parents and extract money. A large bank of audio clips from a child could be used against that child's family in all sorts of ways, especially if the parents don't know the clips were stolen to begin with.
How about the kids who don't leave cutesy messages and instead say disturbing or threatening things? How about the parent who sits on the thing and says something?
Voice data was once safe in its obscurity... now I have a $2 app on my phone that can do decent voice transcription. It's just one more thing to worry about.
Audio messages can be used to train a system which will then be able to mimic the voice of the child, almost indistinguishably from the original. AI of this kind will be a commodity (i.e. easily accessible to criminals) pretty soon, if not today.
Wouldn't you say that as a parent it is your obligation to protect the child's privacy? The threat model doesn't even matter; there will be one eventually. All data can be used and combined (now or in the future). Is it that hard to imagine a future where recordings of a child can be used to recreate the voice of the same person as an adult? Hardly.
I find a "where's the harm" attitude towards privacy and data collection very troubling, doubly so if you are making that decision for someone else who can't protect themselves yet. Ethically it's probably a bigger problem than having such a lax attitude about your own privacy (which is perfectly fine; freedom of choice).
And yes I also rant and rave about parents who post pictures of their children everywhere.
Someone steals the recording saying "Hello mommy and daddy, I love you so much."
They then manage to contact you, reporting that they have kidnapped your children. They play you the recording to prove the children are in their custody and demand an immediate ransom payout.
Highly prone to error, not very likely to work, incredibly evil and likely to end up with the perpetrator in jail, but, unfortunately, the sort of thing that a desperate criminal might try, and even more unfortunately, it only needs to succeed once for someone to consider it a viable tactic.
I know this is a stupidly unlikely occurrence, but extrapolate it with a bit more sophistication and you can start to see why this is actually quite nasty identity theft material.
Apart from the total disaster these kinds of incidents are, they serve a valuable purpose: material to educate my children about security. It is surprising to see how quickly my 9-year-old daughter picks up the message, especially from these kinds of stories.
My 7-year-old son is rapidly becoming far more hostile to everything from ads to privacy invasions, because it simply makes up a far bigger part of his life than it does for me.
I wonder how children learning about these things from such a young age will play out once they're grown up.
Who's the go-to "freedom/privacy marketing" organization (the EFF seems to be legal only)? This is an excellent propaganda-for-freedom opportunity: it involves a creepy invasion of privacy targeting children. It needs to be used in a massive campaign against (insecure) IoT ASAP.
I had a bear like that (not CloudPets, but looks like an exact clone). Thankfully, it was only used by my daughter with my supervision, so I know exactly what has been said. Unless the mic was enabled remotely, that is.
I assumed that the security issues might be bad, but placing the voice on unsecured Mongo facing public Internet is beyond shit.
Thankfully, I disabled the bear a long time ago. But now I worry about my NetAtmo station, which contains an always-listening microphone to "measure the noise pollution". Yeah, right.
Until executives of tech companies are convicted of criminal negligence, this will never improve. The current accepted business practices in tech are abominable and criminally reckless. If a company building housing were as negligent - hiring unqualified cheap talent, ignoring reports from engineers about needing more resources or time in deference to business goals, etc. - its executives would be tried as criminals and face time in prison.
Put some CEOs in handcuffs, lock them in a cage like an animal, and see how quickly companies actually start doing crazy things like mentioning the word 'security' in job listings for software engineers or system administrators or even doing the unthinkable, hiring experienced expensive engineers.
> As you can see by loading the image, all that's required to access the file is the path which is returned by the app every time my profile is loaded.
Put it behind the webapp's authentication and access control layer - so only logged in users with relevant connection/permission to the requested image can get it.
Well, you're correct that a valid address would be needed to access the content. But if an API is telling you to access content Z, it's easy for it to say "here is a signature that's only valid for the next X minutes" (X being something you base on your user base), instead of letting anyone who knows the file name (which they can get by dumping your db) access the file without that extra layer of protection.
Disable indexing on S3 and you need to know the direct link. Dump the table and you get those links, but then you still need to know the secret your app shares with S3 to create the signature needed to download from S3. I.e. they need to know Z and X, and X only gets issued to people who are logged in and expires after a time. Knowing Z alone gets you nothing.
I do the same to protect the S3 bucket our cache pulls from against direct access (the cache has a shared secret with S3 to pull from; the cache also has another signing process for the client to use to download from the cache). A client signature for file Z is not valid for client+1. We just use a cache in between the client and S3 because we managed to get a deal that's cheaper for us than direct access to S3, but the same could be done for direct client-to-S3 access.
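The expiring-signature scheme described above can be sketched with nothing but the standard library. This is a minimal illustration of the idea, not any particular storage provider's API; the query format and secret here are made up for the example (real S3 pre-signed URLs are generated by the SDK):

```python
import hashlib
import hmac
import time

# Illustrative server-side secret; in practice this is the secret the
# app shares with the storage layer, loaded from configuration.
SECRET_KEY = b"server-side-secret"

def sign_url(path, ttl_seconds, now=None):
    """Append an expiry timestamp and an HMAC signature to `path`."""
    expires = int((now if now is not None else time.time()) + ttl_seconds)
    message = f"{path}?expires={expires}".encode()
    sig = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify_url(url, now=None):
    """Reject the URL if the signature is wrong or the expiry has passed."""
    try:
        path, query = url.split("?", 1)
        params = dict(kv.split("=", 1) for kv in query.split("&"))
        expires = int(params["expires"])
    except (ValueError, KeyError):
        return False
    message = f"{path}?expires={expires}".encode()
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking the signature byte by byte.
    if not hmac.compare_digest(expected, params.get("sig", "")):
        return False
    return (now if now is not None else time.time()) < expires
```

Knowing the path (Z) alone gets you nothing: without the shared secret you cannot produce a valid signature, and even a leaked signed URL stops working once the expiry passes.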
Companies have to get more involved in actually encrypting their data before entering it into the database. For every web app I create, especially when sensitive information is involved, I try to encrypt as much data as possible. With all the leaks and hacks, it only makes sense to add some encryption in there.
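For credentials in particular, the standard tool is one-way hashing rather than reversible encryption, so a database dump alone doesn't reveal passwords. A minimal sketch using the standard library's scrypt (the cost parameters here are illustrative choices, not a recommendation for any specific deployment):

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Derive a one-way scrypt hash; only the (salt, digest) pair is stored."""
    salt = os.urandom(16)  # unique per user, so identical passwords differ
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check_password(password, salt, stored):
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored)
```

A leaked table of (salt, digest) pairs still forces an attacker to brute-force each password individually, which is exactly the property a breach like this one needed.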
I'm working on an idea in the security space that focuses on data breaches, attempting to identify them early. Keen to validate the idea, so if any fellow startups or businesses are interested, I'd love to talk and see what people think.
Email is in my profile.
So it could all have been avoided if they'd made it unnecessary to identify oneself and had paired app and toy via decent public-key-encrypted communication. I think the toy is a good idea; it just had a shit implementation.
mirimir | 9 years ago
Also, it's not just recordings. Once an adversary has account access, they can talk to children. I can't imagine that being a good thing.
joatmon-snoo | 9 years ago
It's the why-do-I-care-about-my-privacy argument - but it's even more personal now, because it's not just you, it's your kids.
There's always that extra creep factor when it comes to children.
Taek | 9 years ago
Internet-of-Shit will remain exactly that until neglecting security is a substantial threat to the bottom line of a company.
They ignored multiple warnings? Got hacked multiple times? This is negligence, and this company should be fined out of business.
Animats | 9 years ago
If you want one, they're now available for the low, low price of only $3.[2] Including WiFi.
[1] https://cloudpets.com/ [2] https://www.hollar.com/products/as-seen-on-tv-cloudpet-dog
tudorw | 9 years ago
'Internet of Things That Don't Work Anymore Because The Company That Made Them Has Gone Out Of Business, Oh, And Because Security'
scott_karana | 9 years ago
Anything with a microphone might always be listening, and you probably can't (easily) verify whether it's on-demand or not ;-)
Kiro | 9 years ago
How else would you do it?
FullMtlAlcoholc | 9 years ago
And now audio of children has been hacked, exposing kids voices. The future is here, and it's weird.