This is horrible. So right now, in order to get access to a certain person's data, you just need to hack your way through a few of the services he might be using and pivot from there.
1. The data might have things like IDs (ie: Crypto exchanges).
2. You can use that data to ask for more data. If you got a copy of his passport, you can now ask for more using this new piece of ID.
3. Looks like some people still store passwords in plain text or don't mind exchanging them over email. This means some of these services might reveal a password to you.
4. With that, you can start hacking into other accounts. You also have loads of knowledge about the person, so you might be able to guess his password.
5. Now you have access to his email, dropbox, banking details and maybe even lock him out.
One of the major goals of GDPR is to discourage firms from retaining personal data in the first place. Retaining it used to cost them nothing, so they kept it regardless of whether they used it. Now that keeping it carries real risk, firms have to think twice about it.
This "cobra effect" is one more reason NOT to retain personal information in the first place.
This was actually one of the risks we identified when looking at GDPR for my own businesses last year. Given that in some cases all we hold is an online account with minimal personal details, how can we possibly verify a requester's identity to an acceptable standard if someone does send us a GDPR subject access request? If they have some sort of account with us already, with associated ID and security checks, that's one thing, but what if they don't, or they claim to have forgotten their password?
Even if someone were willing to send us "strong" ID, we don't have any special knowledge of what official government-issued ID looks like in every country where we have customers, nor the human resources or automated technology to investigate in detail whether any ID that is sent might be faked. At best, we could find an image of a passport/driving licence/whatever from that person's country and see if what they've sent us looks about right and matches any personal details we do have for the data subject.
Our disturbing conclusion was that if someone did ever send us certain types of request in connection with certain accounts, there might be no action we could safely take to resolve the situation that would definitely be lawful. If we get scammed by fake ID then we're breaking the law. If we don't accept ID that is real and comply with the subject request, we're also breaking the law.
Fortunately the affected services aren't doing anything particularly exciting or risky with personal data either, so it seems unlikely that any serious harm would come to anyone whatever happened in our case. However, the same basic issue surely affects many other data controllers/processors, and they won't necessarily be such unlikely targets as the research here shows. I haven't yet found any practical guidance from the regulators on what would be considered reasonable in this sort of situation.
This is a reflection of the fact that we have no good way for someone to digitally prove their identity. Some countries are getting close-ish - Denmark's NemID system, for example, is used by a lot of financial institutions.
However, there remains no easy way to make ad-hoc verifiable statements like 'I am John Smith and I authorise you to send this data to [email protected]'.
Governments, please solve this problem! Essentially: combine NemID with Keybase and build a UI that normal citizens can understand.
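Conceptually, the missing piece is small. Here is a minimal sketch of such an ad-hoc verifiable statement, assuming a hypothetical government PKI has already bound the signer's public key to his legal identity (using the third-party `cryptography` package; all key material here is generated on the fly purely for illustration):

```python
# Sketch: an ad-hoc verifiable statement via an Ed25519 signature.
# Assumes a hypothetical registry mapping "John Smith" -> public_key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real system this key pair would live on an ID card or in a phone's
# secure element, not be generated on the fly.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

statement = b"I am John Smith and I authorise you to send this data to [email protected]"
signature = private_key.sign(statement)

# The data controller checks the statement against John's registered key.
try:
    public_key.verify(signature, statement)
    print("statement verified")
except InvalidSignature:
    print("statement rejected")
```

The hard part isn't the crypto; it's the trustworthy binding of key to person, which is exactly what NemID-style schemes provide.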
Italy's "PEC"[1] (Posta Elettronica Certificata, or Certified Electronic Mail) comes pretty close.
There's even an RFC[2] for it.
To get one, an individual has to prove their identity via a government-issued ID (ID card or passport). That email address can henceforth be used for all official correspondence as if it were certified/verified mail. Both parties "know" the identity of the other (i.e. the company knows it was very likely me who sent it, because they trust that the people in charge of verifying my ID did their job). As an added bonus, the _contents_ of the email are also certified to have been sent from that sender to that destination address, and not to have been tampered with (a great thing to have for lawsuit purposes), unlike "standard" registered mail, which only certifies that a letter has been sent and picked up.
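A rough sketch of what such a provider receipt could attest: a signature over sender, recipient, timestamp, and a digest of the body, so neither party can later deny or alter the message. Field names and addresses here are illustrative only, not the actual PEC wire format (which is specified in RFC 6109):

```python
# Illustrative PEC-style receipt: the provider signs metadata plus a
# body digest, so the exact content and routing can be proven later.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

provider_key = Ed25519PrivateKey.generate()  # the accredited PEC provider

receipt = json.dumps({
    "from": "mittente@example.it",          # hypothetical sender
    "to": "destinatario@example.it",        # hypothetical recipient
    "sent_at": "2019-08-09T12:00:00Z",
    "body_sha256": hashlib.sha256(b"Gentile cliente, ...").hexdigest(),
}).encode()
receipt_sig = provider_key.sign(receipt)

# Later, in a dispute, anyone holding the provider's public key can check
# that exactly this body went between these parties at this time.
provider_key.public_key().verify(receipt_sig, receipt)  # raises if tampered
```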
> This is a reflection of the fact that we have no good way for someone to digitally prove their identity.
Centralized identity isn't a solution, it's the problem. Once you implement something like that, it requires everyone to track everything using it in order for it to be used to authenticate access to the information. Which means everybody has to store the ID number as a field in every database and it becomes a de facto primary key that allows all information to be correlated by every blackhat that compromises more than one data set.
Meanwhile there will be the "just make it work" people who are bad at security, who will do whatever is necessary to compromise their own security because the attacker told them to. We would then be giving attackers the capacity to take over their entire lives instead of only one relationship with one entity.
Moreover, the scope of the damage if someone were to compromise the central identity system in general rather than only for a specific person is horrifying. It would become a single point of compromise for the whole country. And the worst kind on top of that, because everything would hook into it which would cause it to become ossified and difficult to update. If the system was then publicly compromised, how long does it take for everyone everywhere to update every piece of code to use the replacement? Which thing do you do in the meantime, continue using the compromised system as all hell breaks loose, or shut down your entire country?
There is a better solution. If you have an account with someone, you make requests by authenticating in the same way you do with your account. And if you don't have an account, you should be able to request deletion of the data associated with e.g. your IP address, but not request to download it -- because there is no way to verify your identity for that. Even with centralized ID, an IP address can be used by multiple people who shouldn't be able to give consent for one another and may not be mutually distinguishable by the party receiving the request, and the same goes for most other global data (e.g. many people share full names with other people). The only way to make centralized ID work in that context is to tag everything with it to begin with, compromising all anonymity and pseudonymity -- which can't possibly be the right trade-off for what is supposed to be privacy legislation.
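The correlation risk described above fits in a few lines: if every service stores the same raw identifier, any two leaked databases join trivially on that column, whereas per-service derived pseudonyms (a hypothetical mitigation, sketched here with HMAC) remove the shared key:

```python
# Why a single national ID becomes a universal join key, and how
# per-service pseudonyms would break the correlation. Stdlib only;
# all identifiers and secrets are made up for illustration.
import hashlib
import hmac

national_id = "ID123456"  # hypothetical central identifier

# Naive scheme: every service stores the raw ID, so an attacker who
# obtains two leaked databases can join them on that column.
service_a_row = {"user": national_id, "data": "purchase history"}
service_b_row = {"user": national_id, "data": "medical records"}
assert service_a_row["user"] == service_b_row["user"]  # instant correlation

# Pairwise pseudonyms: each service derives its own opaque identifier
# with a secret only it holds, so leaked rows no longer share a key.
def pseudonym(nid, service_secret):
    return hmac.new(service_secret, nid.encode(), hashlib.sha256).hexdigest()

id_at_a = pseudonym(national_id, b"secret held by service A")
id_at_b = pseudonym(national_id, b"secret held by service B")
assert id_at_a != id_at_b  # nothing left to correlate across breaches
```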
Fun fact: the EU has passed a law that requires member states to implement PKIs so that people can digitally sign documents, and makes those signatures legally equivalent to handwritten ones.
Latvia's ID cards support e-signing of digital documents, so I could have a pdf "I authorise you to send this data to [email protected]" and sign it so that the recipient can securely verify that this was signed by Name Surname ID123.
Estonia has something quite similar; I'm not certain whether it's technically the same standard or something slightly different.
The Netherlands uses DigiD, which is effectively a federated identity provider. The problem is that it was originally intended for government use only (recently it's been expanded to include health insurance providers), and it's not accessible to commercial entities.
It was also marred by very bureaucratic policies. For example, to get information on account usage (e.g. how many times my account was used, from which IP, to access which site), you needed to file a police report first. It also supported 2FA very early (through SMS, now via a mobile app as well), but end users could not forcibly enable 2FA; only the target service could decide whether it wanted two-factor authentication. Luckily, those issues have since been fixed.
Would be nice if they supported open standards like OAuth though.
I would prefer governments to solve it with competent software engineers in the mix, and maybe other professionals from the finance and IT security industries, but never as one single large entity.
In this case, the solution is easy. The user most likely already has an account, so just ask for the account password. If the user claims they lost the password, then do a classic password recovery via email.
Of course it's tricky for organizations storing data about users without an account. (e.g. Facebook or Google; not sure how they could handle that at all, even with government IDs.)
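For the account-holder case, the classic email recovery flow mentioned above can be sketched with a signed, expiring token; the secret and addresses here are illustrative placeholders, and in practice the token would be emailed as a link:

```python
# Sketch of a password-reset token: HMAC-signed payload with an expiry,
# so the server can verify it later without storing any state.
import base64
import hashlib
import hmac
import time

SERVER_SECRET = b"example-secret"  # hypothetical; keep out of source control

def make_reset_token(email, now=None, ttl=3600):
    expires = int((now or time.time()) + ttl)
    payload = f"{email}|{expires}".encode()
    sig = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def check_reset_token(token, now=None):
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or corrupted
    email, expires = payload.decode().rsplit("|", 1)
    if (now or time.time()) > int(expires):
        return None  # expired
    return email

token = make_reset_token("user@example.com")
assert check_reset_token(token) == "user@example.com"
assert check_reset_token(token, now=time.time() + 7200) is None  # expired
```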
The only way to do what most people want here is to bake this digital identity into humans, which is beyond our present technology and also feels rather like a recipe for authoritarianism. For one thing, if people can lose it, they will: they'll destroy it on purpose, it will be stolen by crooks, they'll forget it at airports or in hotel safes. That's not a problem for baked-in things; people don't leave their hearts behind (outside of country songs) or get them stolen by crooks (outside of maybe China), but any conceivable device, card, key, or document will have this problem.
Greg Egan's "Orphanogenesis" (http://www.gregegan.net/DIASPORA/01/Orphanogenesis.html, a short story setting up the protagonist of his novel "Diaspora") describes making Polis Citizens (people who exist only as software; the other branches of humanity have either given themselves bodies suitable for long-term existence in space, the Gleisner Robots, or given up on consciousness altogether, the Dream Apes) with a cypherclerk, a component that does public-key crypto. Once the system is confident that the process of making a new citizen has succeeded and produced a conscious person, the cypherclerk is initialised and the new citizen has a unique and impossible-to-fake proof of ID. This plays no major role in the story; it's there, presumably, because Greg agrees with you that it'd sure be convenient if there were actually digital ID. But there isn't.
Here's a central conflict: I would like to be able to prove that I'm who I say I am, but without being stuck with that identity. This makes the identity disclaimable. You will find plenty of people who feel the same way, and some of them have very concrete practical reasons (e.g. people with stalkers, or who ratted on a crime boss). But for a bunch of the things people, and especially governments, want to do with a digital ID, that's no good.
A disclaimable ID can work for a driving license. Barry Shitpeas is licensed to drive an HGV: you can either prove you're Barry Shitpeas, or get a new license to drive the HGV under the identity you do want to use.
But if Barry drink-drives and we take away his license, knowing he can just get a new one as Jerry Poocabbage tomorrow, well, that's a rubbish outcome, isn't it? We want a way to _stop_ Barry from driving even if he changes identity, and we can't do that with disclaimable ID.
In my opinion, we need an open protocol for this. Every government having a separate digital identification tool is a poor solution.
The simplest solution I can imagine is to mimic the solution used to digitally identify companies via HTTPS (certificate authorities), but modified with the intent of identifying a person rather than a company.
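A toy version of that idea, with a hypothetical government CA playing the role that browser root stores play for HTTPS: the CA signs a binding of name to public key, and a company verifies the certificate before verifying the person's actual request. All keys and names are illustrative (using the third-party `cryptography` package):

```python
# Sketch: CA-style identity for a person rather than a company.
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

ca_key = Ed25519PrivateKey.generate()      # the government CA's key
person_key = Ed25519PrivateKey.generate()  # a citizen's key pair

# The CA issues a "certificate": a signed binding of name to public key.
person_pub_raw = person_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
cert_body = json.dumps({"name": "John Smith", "pubkey": person_pub_raw.hex()}).encode()
cert_sig = ca_key.sign(cert_body)

# A company first verifies the certificate against the trusted CA key...
ca_key.public_key().verify(cert_sig, cert_body)  # raises if forged

# ...then verifies the person's request against the certified key.
request = b"Please send all data you hold on me to the address on file"
request_sig = person_key.sign(request)
certified = json.loads(cert_body)
certified_key = Ed25519PublicKey.from_public_bytes(bytes.fromhex(certified["pubkey"]))
certified_key.verify(request_sig, request)  # raises if not signed by John's key
```

The open questions are exactly the ones HTTPS CAs face: who gets to be a CA, how revocation works, and how the citizen keeps the private key safe.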
> Mr Pavur says he believes he did not break the law himself while conducting the trial
This is a bit odd. I know his partner consented to this, but this doesn't seem like it should be enough to make this not identity fraud.
Obviously the research Pavur carried out is extremely valuable and the mid-sized companies failing to follow proper procedure are the real problem here, but it still seems like it would be technically illegal.
Regardless, it would then be a bad law, since almost all criminal law looks at intent and reasonable expectations of how a citizen should act.
We don't need to keep replaying the vilification-of-security-researchers game just because it involves flawed government systems imposed on technology rather than technology alone. The end goal is the same: the privacy and security of end users.
> I know his partner consented to this, but this doesn't seem like it should be enough to make this not identity fraud.
Most crimes require an intent to commit the crime, with notable exceptions (possession).
His intent was not fraud, it was security research. As demonstrated by getting the permission of the potential victim, and carefully avoiding things like forgery.
A lot of the risk of this kind of thing could be greatly reduced if the law were changed so that, for data you should already have about yourself, the company only has to tell you whether they have that data.
For example, I know my birth date. A company that also has my birth date should be able to just tell me that they have my birth date. They should not have to tell me the actual date.
Most of the data mentioned in the article is like this: credit card information, login and password information, social security number, stays in hotels, train journeys, high school grades, and maiden name.
Companies like credit reporting agencies that keep such data and share it with others would need to be an exception, so that you could check that they aren't giving out incorrect information about you.
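One way a service could implement "confirm, don't disclose" is to accept a value the requester claims to know and answer only whether it matches what's on file, e.g. via a salted-hash comparison. A sketch with hypothetical values, stdlib only:

```python
# Answer "do you hold my birth date?" without ever echoing the stored
# value back: the requester submits what they claim to know, and the
# service replies only match / no match.
import hashlib
import hmac

RECORD_SALT = b"per-record-random-salt"  # hypothetical, stored server-side
stored_birth_date = hmac.new(RECORD_SALT, b"1980-01-02", hashlib.sha256).digest()

def holds_birth_date(claimed):
    """Return True iff the claimed value matches what is on file."""
    candidate = hmac.new(RECORD_SALT, claimed.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(candidate, stored_birth_date)

assert holds_birth_date("1980-01-02") is True
assert holds_birth_date("1999-12-31") is False
```

This still leaks one bit per query, so rate limiting would matter, but it avoids handing a stranger the data itself.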
I think this is part of the reason why GDPR needs an exemption for businesses that are too small. This kind of flaw will be abused more and more, and small and medium businesses will never be able to close this gap.
It's a great thing that the hacker noticed this. It sounds like we really need more OpenID-style auth, a lot more limited-lease access to personal data, and more tools to store our own data.
> "But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."
I wonder if there is a market for selling GDPR compliance/advisory services: some company that makes sure you're doing everything right and inspects requests for validity.
I think that's exactly the problem -- small/medium companies can't really afford to hire someone to do GDPR compliance (or at least, do it well). It's textbook regulatory capture.
There definitely is. The problem right now is that a huge number of them are prohibitively expensive. I paid $2,000 for GDPR advising for a $7k/mo MRR app and was quoted $25,000 to advise/assist through the entire implementation (which I'd still have to do myself). It sounded like it's also necessary to hire someone to receive data requests, which would probably cost more (but seems super possible to provide as a service to many companies).
Aren't companies under the GDPR obliged to report [1] if they got hacked / suffered a data breach? Sounds to me as if quite a few just got hacked via social engineering...
Yes. This just shows the problem that was already there. The GDPR is fundamentally about having the responsibility to handle people's data properly. The companies that failed obviously cannot handle the responsibility of managing personal data without it getting into the wrong hands, and they shouldn't have had it in the first place.
It's much harder to test but this is a good indicator that they probably also don't have controls to manage employee access.
Can anyone here who's experienced with GDPR's provisions talk about how far a given company is allowed to go to verify the identity of someone requesting information?
I'd hate it if someone pulls a "I'd like to be forgotten" request on me without said organization absolutely verifying my identity.
It would seem that if they complied, they wouldn't be able to undo the operation...
GDPR is not a perfect law, but I think it's great leverage to use when negotiating against US companies if you're an EU country.
I mean, at some point EU countries will get tired of the US going a little too far. It's about being able to negotiate, and with the digital age and the massive ad market, it only makes sense for the EU to protect itself after Brexit and the Cambridge Analytica stories.
The main benefit of GDPR is that it forced companies to audit their policies and procedures to have slightly more hygienic data practices.
The downside is that it made social engineering easier. What this guy did was nothing new. But now those businesses were compelled to help him instead of ignoring him until he went away.
Good job government regulators! You just made the most common fraud even easier!
This is the problem with overly aggressive legislation. The big companies performed well, the small companies ignored the law, and the medium-sized companies tried to comply and failed. You can't legislate good behavior, because people will always find a way around the laws. The legal philosophy behind GDPR seems to be nothing more than to make everyone a criminal and then choose whom to prosecute, and in that case why have laws at all, rather than letting law enforcement punish whoever they choose?
At the end of the day this incentivizes big companies to comply with the laws, because they can afford the necessary legal teams and the laws provide a moat around their entrenched power. It incentivizes small companies to ignore the laws, just like the move-fast-and-break-things mentality of Silicon Valley. The people it hurts are the medium-sized, growing companies who are best positioned to challenge the big monopolies but are now being held back by well-meaning but overburdensome legislation.
The good side of GDPR is that it’s trying to advocate for consumer privacy, which overall is a good thing. The issue is that it’s not a legal problem. In the same way you can’t declare drugs to be illegal and expect the supply of drugs to instantly disappear, you can’t declare misuse of data illegal and expect the same. As long as the data has value there will be a market for it, so why not draft sensible regulations instead of trying to solve the problem with laws.
I wouldn't agree that the medium-sized companies tried to comply with the GDPR and failed. Yes, they tried to comply with that particular request, but the failures suggest that they didn't even try to be GDPR compliant in the first place. If they had, they would have assigned a data protection officer who would long ago have asked "what do we do in case of a personal information request?" and written down a reasonable process for handling such requests, possibly in consultation with the local data protection agency.
That would count as "trying", as it was their duty to have done this a year and a half ago. It's basic table stakes, a precondition to being permitted to handle personal data at all. If they started thinking about "how do we verify identities" only on the day they received the request from this researcher, then that's not trying to comply; that's being grossly negligent.
this is peanuts. one eu country government decided it needed approval under gdpr from hospital patients before treatment/surgery/etc. this denied service to a bunch of (tech) illiterate people, bringing the budget back in line.
another one: due to gdpr, violent thugs hired by the police to hurt protesters cannot be named. it would apparently trample on their rights.
gdpr is, as forecasted, a complete mess. a mess that is exploited by corrupt eu governments to great effect.
I can't believe a driver's license scan is all that's needed by many companies. That means that losing my wallet on the street effectively means someone can go and get my entire digital history.
Why not require the user to request this data while signed in to the service?
Right about time: https://en.wikipedia.org/wiki/Cobra_effect
"Overall, of the 83 firms known to have held data... 24% supplied personal information without verifying the requester's identity."
Want someone else's personal data? No need to "hack into" any systems; just ask for it!
[1]: https://en.wikipedia.org/wiki/Certified_email#Italy
[2]: https://tools.ietf.org/html/rfc6109
https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITAL/What...
> "Small companies tended to ignore me.
> "But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."
This sums up regulatory compliance across the world quite well.
I'm not aware of any willing to take on the liability of verifying identities as part of that, though.
[1] https://blog.netwrix.com/2018/04/19/gdpr-rules-of-data-breac...
The funny twist being that these bad implementations are a GDPR violation too and are punishable under the GDPR.
All you know about a user is their name. How can you verify someone's identity while still not being "overly burdensome" (which is required by the GDPR)?