> TechCrunch contacted TrueDialog about the exposure, which promptly pulled the database offline. Despite reaching out several times, TrueDialog’s chief executive John Wright would not acknowledge the breach nor return several requests for comment. Wright also did not answer any of our questions — including whether the company would inform customers of the security lapse and if he plans to inform regulators, such as state attorneys general, per state data breach notification laws.
Though it doesn't mention a timeline, this does seem like a way to pour gasoline onto a PR dumpster fire.
> But the data also contained sensitive text messages, such as two-factor codes and other security messages, which may have allowed anyone viewing the data to gain access to a person’s online accounts. Many of the messages we reviewed contained codes to access online medical services to obtain, and password reset and login codes for sites including Facebook and Google accounts.
> The data also contained usernames and passwords of TrueDialog’s customers, which if used could have been used to access and impersonate their accounts.
This is why 2FA tokens and reset links should have a short validity window, and why shallow information such as an account name, an address, or a mother's maiden name should not be used for sensitive purposes.
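The short-validity point can be sketched as a hypothetical server-side check (all names and the 5-minute TTL here are illustrative, not from TrueDialog or any real service) — a code scraped from a historical database dump fails verification simply because its window has closed:

```python
import secrets
import time

CODE_TTL_SECONDS = 300  # e.g. a 5-minute window, a common choice

def issue_code(now=None):
    """Issue a 6-digit one-time code together with its expiry timestamp."""
    now = time.time() if now is None else now
    code = f"{secrets.randbelow(10**6):06d}"
    return code, now + CODE_TTL_SECONDS

def verify_code(submitted, code, expires_at, now=None):
    """Accept the code only if it matches and the validity window is still open."""
    now = time.time() if now is None else now
    if now > expires_at:
        return False  # expired: a leaked historical code is useless here
    # constant-time comparison to avoid leaking the code via timing
    return secrets.compare_digest(submitted, code)
```

Under this model, the codes sitting in TrueDialog's historical records would all fall into the `now > expires_at` branch, which is the commenters' point about the breach's practical impact.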
This article massively overhypes the breach. 2FA and password resets have a very short window of validity. The database contained historical messages and did not operate in real-time.
Like with the last leak a few days ago, I repeat my point: I also blame ES for this, as it doesn't enforce authentication. ES people keep blaming whoever set up those instances ("well, duh, it shouldn't be facing the public internet"), but after incident upon incident, as a reasonable dev you should try to do your part to make the world a better place. Yes, you weren't the one who set up the instance personally, but is it so hard to move a little in the direction of: "a lot of personal information of uninvolved people keeps getting leaked because of defaults that careless admins never change — maybe, to protect the innocent, we can make ES require some authentication method to be configured"?
Nope. No such thing, no empathy for the people affected by the leaks, all blame shifted, done.
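For what it's worth, later Elasticsearch releases (6.8 / 7.1 onward) did move basic authentication into the free tier, though it still had to be switched on explicitly. A minimal sketch of the relevant settings, assuming a simple single-node setup:

```yaml
# elasticsearch.yml -- illustrative fragment, not a complete config
# Require authentication instead of serving data anonymously:
xpack.security.enabled: true
# And don't bind to a public interface unless you mean to:
network.host: 127.0.0.1
```

Elasticsearch 8.x finally enables security by default, which is essentially the change this comment is asking for.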
IMO, mining SMS messages for data is by definition going too far in terms of intrusion into people's privacy.
On a related note, I recently came across a post on the machine learning subreddit[1] where the author claims to have a dataset of 33 million SMSs in Mexican Spanish. I half suspect the OP added the "Mexican" qualifier to head off any suspicion that the dataset was collected in Spain (in which case GDPR would apply). It was likely collected by an Android app that surreptitiously harvested messages via the "Telephony.SMS_RECEIVED" intent, and the author half confirms it[2].
Regardless of the legality of doing so, reading people's private SMSs just reeks of privacy violations. iOS in this specific case does the right thing by not letting apps read incoming text messages (except for the limited case of reading single-factor SMS login codes[3], which was introduced in iOS 12).
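For illustration only (the actual app behind that dataset is unknown, and the class name below is hypothetical), this is roughly the manifest plumbing an Android app needs to see every incoming SMS — a single permission plus a broadcast receiver for the intent mentioned above:

```xml
<!-- AndroidManifest.xml fragment; receiver class name is hypothetical -->
<uses-permission android:name="android.permission.RECEIVE_SMS" />

<application>
    <receiver android:name=".SmsListener" android:exported="true">
        <intent-filter>
            <!-- Delivered for every incoming SMS once RECEIVE_SMS is granted -->
            <action android:name="android.provider.Telephony.SMS_RECEIVED" />
        </intent-filter>
    </receiver>
</application>
```

On modern Android, RECEIVE_SMS is a runtime permission the user must grant, and since 2019 Google Play restricts SMS permissions to default SMS handlers, but apps predating those policies faced far less friction.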
> except for the limited case of reading single-factor SMS login codes
Is the app actually reading the code? I thought this was just a UI hint that made it easier for the user to select the code from the suggestion area of the keyboard.
Latin American Spanish and Castilian differ quite a bit (and there are pretty obvious differences within Latin America too), so I'm not sure this is necessarily a GDPR cop-out.
[1]: https://www.reddit.com/r/MachineLearning/comments/e0z7xs/dis...
[2]: https://www.reddit.com/r/MachineLearning/comments/e0z7xs/dis...
[3]: https://developer.apple.com/documentation/uikit/uitextconten...
Edit: I have found https://www.howtogeek.com/230683/how-to-manage-app-permissio...