So, 1) a public service, 2) with no authentication, 3) and no encryption (HTTP only??), 4) sending a token with every single response, 5) giving full admin access to every client's legal documents. This is like a law firm with an open back door, an open back window, and all the confidential legal papers sprawled out on the floor.
Imagine the potential impact. You're a single mother, fighting for custody of your kids. Your lawyer has some documentation of something that happened to you, that wasn't your fault, but would look bad if brought up in court. Suddenly you receive a phone call: a mysterious voice, demanding $10,000 or they will send the documents to the opposition. The caller doesn't know you and you don't know them; someone just found a trove of documents behind an open back door and wanted to make a quick buck.
This is exactly what a software building code would address (if we had one!). Just like you can't open a new storefront in a new building without it being inspected, you should not be able to process millions of sensitive files without having your software inspected. The safety and privacy of all of us shouldn't be optional.
but google told me everyone can vibe code apps now and software engineers should count their days... it's almost as if there's more stuff we do than just write code...
Basically what happened in the Vastaamo case in Finland [1]. Except of course it wasn't individual phone calls – it was mass extortion of 30,000 people at once via email.
HTTP-only also makes it very easy for LE to sniff, if they decide to. That would let them gather knowledge about cases; they could be scanning it with their own AI tool for all we know. In a free country with proper LE, this would neither be legal nor happening. But I am not sure the USA remains one, given the leader is a convicted felon with very dubious moral standards.
The problem here, however, is that they get away with their sloppiness as long as the security researcher who finds it is a whitehat and the regular news doesn't pick it up. Once regular media do pick this up (and the local outlets should), their name is tarnished and they may regret their sloppiness. Which is a good way to ensure they won't make the same mistake. After all, money talks.
This is HN. We understood exactly what “exposed … confidential files” meant before reading your overly dramatic scenario. As overdone as it is, it’s not even realistic. A single mother is likely small potatoes compared to deep-pocketed legal firms or large corporations.
The story is an example of the market self-correcting, but out comes this “building code” hobby horse anyway. All a software “building code” will do is ossify certain current practices, not even necessarily the best ones. It will tilt the playing field in favor of large existing players and to the disadvantage of innovative startups.
The model fails to apply in multiple ways. Building physical buildings is a much simpler, much less complex process with many fewer degrees of freedom than building software. Local city workers inspecting by the local municipality’s code at least have clear jurisdiction because the physical location is fixed. Who will write the “building code”? Who will be the inspectors?
This is HN. Of all places, I’d expect to see this presented as an opportunity for new startups, not calls for slovenly bureaucracy and more coercion. The private market is perfectly capable of performing this function. E&O and professional liability insurers, if they don’t already, will soon be motivated after seeing lawsuits to demand regular pentests.
The reported incident is a great reminder of caveat emptor.
The bigwigs at my company want to build out a document management suite. After talking to the VP of technology about requirements, I ask about security as well as the regulatory requirements, and all I get is a blank stare.
I used to think developers had to be supremely incompetent to end up with vulnerabilities like this.
But now I understand it’s not the developers who are incompetent…
I've had the same. Ask them to come up with a ToS and they're like "we'll talk about that in an upcoming meeting". It's been a few years now with nothing.
I'm always a bit surprised how long it can take to triage and fix these pretty glaring security vulnerabilities. October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed. Sure the actual bug ended up being (what I imagine to be) a <1hr fix plus the time for QA testing to make sure it didn't break anything.
Is the issue that people aren't checking their security@ email addresses? People are on holiday? These emails get so much spam it's really hard to separate the noise from the legit signal? I'm genuinely curious.
In my experience, it comes down to project management and organizational structure problems.
Companies hire a "security team" and put them behind the security@ email, then decide they'll figure out how to handle issues later.
When an issue comes in, the security team tries to forward the security issue to the team that owns the project so it can be fixed. This is where complicated org charts and difficult incentive structures can get in the way.
Determining which team actually owns the code containing the bug can be very hard, depending on the company. Many security team people I've worked with were smart, but not software developers by trade. So they start trying to navigate the org chart to figure out who can even fix the issue. This can take weeks of dead-ends and "I'm busy until Tuesday next week at 3:30PM, let's schedule a meeting then" delays.
Even when you find the right team, it can be difficult to get them to schedule the fix. In companies where roadmaps are planned 3 quarters in advance, everyone is focused on their KPIs and other acronyms, and bonuses are paid out according to your ticket velocity and on-time delivery stats (despite PMs telling you they're not), getting a team to pick up the bug and work on it is hard. Again, it can become a wall of "Our next 3 sprints are already full with urgent work from VP so-and-so, but we'll see if we can fit it in after that."
Then legal wants to be involved, too. So before you even respond to reports you have to flag the corporate counsel, who is already busy and doesn't want to hear it right now.
So half or more of the job of the security team becomes navigating corporate bureaucracy and slicing through all of the incentive structures to inject this urgent priority somewhere.
Smart companies recognize this problem and will empower security teams to prioritize urgent things. This can cause another problem where less-than-great security teams start wielding their power to force everyone to work on not-urgent issues that get spammed to the security@ email all day long demanding bug bounties, which burns everyone out. Good security teams will use good judgment, though.
A lot of the time it’s less “nobody checked the security inbox” and more “the one person who understands that part of the system is juggling twelve other fires.” Security fixes are often a one-hour patch wrapped in two weeks of internal routing, approvals, and “who even owns this code?” archaeology. Holiday schedules and spam filters don’t help, but organizational entropy is usually the real culprit.
security@ emails do get a lot of spam. It doesn't get talked about very much unless you're monitoring one yourself, but there's a fairly constant stream of people begging for bug bounty money for things like the Secure flag not being set on a cookie.
That said, in my experience this spam is still a few emails a day at the most, I don't think there's any excuse for not immediately patching something like that. I guess maybe someone's on holiday like you said.
Well, we have 600 people in the global response center I work at, and the priority issue count is currently 26,000. That means each of those is serious enough that it's been assigned to someone. There are tens of thousands of unassigned issues because the triage teams are swamped. People don't realize that as systems get more complex, issues increase. They never decrease. And the chimp troupe's response has always been a story: we can handle it.
The security@ inbox has so much junk these days, with someone reporting that if you paste alert('hacked') into devtools then the website is hacked!
I reckon only 1% of reports are valid.
LLMs can now make a plausible-looking exploit report ('there is a use-after-free bug in your server-side implementation of X library which allows shell access to your server if you time these two API calls correctly'), but the LLM has made the whole thing up. That can easily waste hours of an expert's time on a total falsehood.
I can completely see why some companies decide it'll be an office-hours-only task to go through all the reports every day.
Not every organization prioritizes being able to ship a code change at the drop of a hat. This often requires organizational dedication to heavy automated testing and CI, which small companies often aren't set up for.
> October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed
I have unfortunately seen way worse. If it will take more than an hour and the wrong people are in charge of the money, you can go a pretty long time with glaring vulnerabilities.
Another aspect to consider: when you reduce the permissions anything has (like the returned token here), you risk breaking something.
In a complex system it can be very hard to understand what will break, if anything. In a less complex system, it can still be hard to understand if the person who knows the security model very well isn't available.
> October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed
There is always the simple answer: these are lawyers, so they are probably scrambling internally to write a response that covers themselves legally, while also trying to figure out how fucked they are.
I'm a bit conflicted about what responsible disclosure should be, but in many cases it seems like these conditions hold:
1) the hack is straightforward to do;
2) it can do a lot of damage (get PII or other confidential info in most cases);
3) downtime of the service wouldn't hurt anyone, especially if we compare it to the risk of the damage.
But, instead of insisting on immediately shutting down the affected service, we give companies weeks or months to fix the issue, notifying no one in the process while they continue business as usual.
I've submitted 3 very easy exploits to 3 different companies in the past year and, thankfully, they fixed them in about a week every time. Yet the exploits were trivial (I'm not good enough to find the hard ones, I admit). Mostly IDORs, like changing id=123456 to id=1 all the way up to id=123455 and seeing a lot of medical data that doesn't belong to me. All 3 cases were medical labs, because I had to have some tests done and wanted to see how secure my data was.
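For anyone unfamiliar with the pattern, here's a minimal sketch of the missing check. All names and the data model here are hypothetical, not from the actual lab systems:

```python
# Hypothetical sketch of an IDOR and its fix: the vulnerable endpoints
# returned any record whose numeric id the caller supplied, without
# checking that the record belongs to the requesting user.

RESULTS = {
    1: {"owner": "alice", "data": "blood panel ..."},
    2: {"owner": "bob", "data": "allergy test ..."},
}

def get_result(result_id: int, requesting_user: str) -> dict:
    record = RESULTS.get(result_id)
    if record is None:
        raise KeyError("no such result")
    # This ownership check is what the vulnerable endpoints skipped;
    # without it, walking id=1 .. id=123455 enumerates everyone's results.
    if record["owner"] != requesting_user:
        raise PermissionError("result does not belong to this user")
    return record
```

Better still, use non-enumerable identifiers (random UUIDs) on top of the ownership check, so ids can't simply be walked.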
Sadly, in all 3 cases I had to send a follow-up e-mail after ~1 week, saying that I'll make the exploit public if they don't fix it ASAP. What happened was, again, in all 3 cases, the exploit was fixed within 1-2 days.
If I'd given them a month, I feel they would've fixed the issue after a month. If I'd given them a year, after a year.
And it's not like there aren't 10 different labs in my city. It's not like online access to results is critical, either. You can get a printed result or call them to write them down. Yes, it would be tedious, but more secure.
So I should've said from the beginning something like:
> I found this trivial exploit that gives me access to medical data of thousands of people. If you don't want it public, shut down your online service until you fix it, because it's highly likely someone else figured it out before me. If you don't, I'll make it public and ruin your reputation.
Now, would I make it public if they didn't fix it within a few days? Probably not, but I'm not sure. But shutting down their service until the fix is in seems important. If it were some hard-to-do hack chaining several exploits, including a 0-day, it would be likely that I was the first one to find it and that it wouldn't be found by someone else for a while afterwards. But ID enumerations? Come on.
So does the standard "responsible disclosure", at least in the scenario I've given (easy to do; not critical if the service is shut down), help the affected parties (the customers) or the businesses? Why should I care about a company worth $X losing $Y if it's their fault?
I think in the future I'll anonymously contact companies with way more strict deadlines if their customers (or others) are in serious risk. I'll lose the ability to brag with my real name, but I can live with it.
As to the other comments talking about how spammed their security@ mail is: that's the cost of doing business. It doesn't seem like a valid excuse to me. Security isn't one of hundreds of random things a business should care about. It's one of the most important ones. So just assign more people to review your mail. If you can't, why are you handling people's PII?
SOC2 is mainly to check boxes, and forces you to think about a few things. There’s no real / actual audit, and in my experience the pen tests are very much a money grab. You’re paying way too much money for some “pentesting” automated suite to run.
The auditors themselves pretty much only care that you answered all questions, they don’t really care what the answers are and absolutely aren’t going to dig any deeper.
SOC2 and most other certifications are akin to the TSA: security theater. After seeing the infosec space from the inside, I can only say it blows my mind how abhorrent it is. Prod DB creds in code? A-OK. Not using some stupid vendor's "pen testing" software on each MR? Blasphemy.
Unless I'm missing something, they replied stating they would look into it, and then it's totally vague when they patched, with Alex apparently randomly testing later and telling them in a "follow-up" that it was fixed.
I don't at all get why there's a paragraph thanking their communication if that's the case.
The time to fix isn't really important, assuming they took the system offline in the meantime... but we all know they didn't, because that would cost too much.
If they have a billion dollar valuation, this fairly basic (and irresponsible) vulnerability could have cost them a billion dollars. If someone with malice had been in your shoes, in that industry, this probably wouldn't have been recoverable. Imagine a firm's entire client communications and discovery posted online.
They could have sold this to a ransomware group or affiliate for 5-6 figures, and then the ransomware group could have exfil'd the data and attempted to extort the company for millions.
Then if they didn't pay and the ransomware group leaked the info to the public, they'd likely have to spend millions on lawsuits and fines anyway.
They should have paid this dude 5-6 figures for this find. It's scenarios like this that lead people to sell these vulns on the gray/black market instead of traditional bug bounty whitehat routes.
I work for a finance firm and everyone is wondering why we can store reams of client data with SaaS Company X, but not upload a trust document or tax return to AI SaaS Company Y.
My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.
This article demonstrates that, but it does raise the question of why you'd trust one vs the other when they both promise the same safeguards.
While the FileVine service is indeed a Legal AI tool, I don't see the connection between this particular blunder and AI itself. It sure seems like any company with an inexperienced development team and thoughtless security posture could build a system with the same issues.
Specifically, it does not appear that AI is invoked in any way at the search endpoint - it is clearly piping results from some Box API.
> My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.
The funny thing is that this exploit (from the OP) has nothing to do with AI and could be <insert any SaaS company> that integrates into another service.
Does SaaS X/Cloud offer IAM capabilities? Or going further, do they dogfood their own access via the identity and access policies? If so, and you construct your own access policy, you have relative peace of mind.
If SaaS Y just says "Give me your data and it will be secure", that's where it gets suspect.
And nobody seems to pay attention to the fact that modern copiers cache copies on a local disk; if the machines are leased and swapped out, the next party that takes possession has access to those copies if nobody bothered to wipe them.
The first thing that comes to my mind is SOC2 HIPAA and the whole security theater.
I am one of the engineers who had to suffer through countless screenshots and forms to get these, because they show that you are "compliant and safe", while the really impactful things are ignored.
SemiAnalysis made this a base requirement for being appropriately ranked in their ClusterMAX report, telling me it is akin to FAA certifications, and then got hacked themselves for not enforcing simple security controls.
You have to start somewhere though. Security theater sucks, and it's not like compliance is a silver bullet, but at least it's something. Having been through implementing standards compliance, it did help the company in some areas. Was it perfect? Definitely not. Was it driven by financial goals? Absolutely. It did tighten up some weak spots though.
If the options mainly consist of "trust me bro" vs "we can demonstrate that we put in some effort", the latter seems more preferable, even if it's not perfect.
I'm less and less sure that when a billion-dollar company screws up this bad, the right thing to do is privately disclose it and let them fix it. This kind of thing just allows companies to go on taking people's money without facing the consequences of their mistakes.
What would you suggest the right thing to do would be?
Edit: I agree with you that we shouldn't let companies like this get away with what amounts to a slap on the wrist. But everything else seems irresponsible as well.
I am at a loss for words. This wasn't a sophisticated attack.
I'd love to know who Filevine uses for penetration testing (which they do, according to their website), because holy shit, how do you miss this? I mean, they list their bug bounty program under a pentesting heading, so I guess it's just nice internet people.
This was my impression after reading the article too. I have no doubt that the team at Filevine attempted to secure their systems and has probably thwarted other attackers, but they tripped over what is an unsophisticated attack. It only takes one vulnerability in the chain to bring down the site.
Security reminds me of the Anna Karenina principle: All happy families are alike; each unhappy family is unhappy in its own way.
Old school blue chip type of companies are like this too. They’ve thrown all the process and caution they used to have to the wind so that they can… apply AI to their IT org which isn’t even their core business?
It's so great that they allowed him to publish a technical blog post. I once discovered a big vulnerability in a listed consumer tech company, exposing users' private messages and also allowing anyone to impersonate any user. The company didn't allow me to write a public blog post.
Attorneys are ethically obligated to follow very stringent rules to protect their clients' confidential information. Having been a practicing litigator for 40+ years, I can confidently state I came across very few attorneys who truly understood their obligations.
Things were easier when I first began practicing in the 1970s. There weren't too many ways the confidential materials in our files could be compromised. Leaving my open file spread out on the conference room table while I went to lunch, as attorneys arriving for a deposition on my partner's case were seated one by one in that same room. That's the kind of thing we had to keep an eye on.
But things soon got complicated. Computers. Digital copies of files that didn't disappear into offsite storage the way physical files did. Then email. What were our obligations to know what could, and could not, be intercepted while email traveled the internet?
Then, most dangerous of all: digital storage outside our physical domain. How could we know whether the cloud vendor had access to our confidential data? Where were the backups stored? How exactly was the data securely compartmentalized by the cloud vendor? Did we need our own IT experts to control the data located on the external cloud? What did the contracts with the cloud vendor say about the fact that we were a law firm, and that we, as the lawyers responsible for our clients' confidential information, needed to know that the cloud vendor understood the legal obligations and would hire lawyers to oversee the manner in which it blocked all access to the legal data located on its own servers? And so on and so forth.
I'm no longer in active practice, but these issues were a big part of my practice my last few years at a Fortune 500 insurance company that used in-house attorneys nationwide to represent insureds in litigation, and the corporation was engaged in signing onto a cloud service to hold all of the corporate data, including the legal departments' across all 50 states. It was a nightmare. I'm confident it still is.
I don't disagree with the sentiment. But let's also be honest. There is a lot of improvement to be made in security software, in terms of ease of use and overcomplicating things.
I worked at Google and then at Meta. Man, the amount of "nonsense" of the ACL system was insane. I write nonsense in quotes because for sure from a security point of view it all made a lot of sense. But there is exactly zero chance that such a system can be used in a less technical company. It took me 4 years to understand how it worked...
So I'll take this as another data point to create a startup that simplifies security... Seems a lot more complicated than AI
Through that API, the frontend is handed a high-privilege API key for Box. Not only was that a huge blunder on the backend, but it reveals what passes for architecture these days. Should our application's backend speak to the super-sensitive file store? No, we should hand the keys over to the React app, because it literally did not occur to them that there's anything that physically could be driven by the frontend but shouldn't be.
My apologies to the frontend engineers out there who know what they're doing.
Lawyers can and will send cease and desist letters to people whether or not there is any legal basis for it. Often the threat of a lawsuit, even a meritless one, is enough to keep people quiet.
FYI, a "cease and desist" carries the same legal weight as me sending a one-liner saying "Knock it off".
They are strongly worded requests from a legal point of view. The only real message they send is that the sender is serious enough about the issue to have involved a lawyer, unless of course you write it yourself, which is something that literally anyone can do.
If you want to actually force an action, you need a court order of some type.
NB for the actual lawyers: I'm oversimplifying, since they can be used in court to prove that you tried to get the other party to stop, and tried to resolve the issue outside of court.
Given the absurd number of startups I see lately that have the words "healthcare" and "AI", I'm actually incredibly concerned that in just a couple of months we're going to have multiple enormous HIPAA data disasters.
> The Filevine team was responsive, professional, and took the findings seriously throughout the disclosure process. They acknowledged the severity, worked to remediate the issues, allowed responsible disclosure, and maintained clear communication. This is another great example of how organizations should handle security disclosures.
In the same vein, I think a professional ethical hacker, or a curious fellow who is poking around with no harmful intent, shouldn't disclose the name of the company that had a security issue if they resolve it professionally.
You can write the same blog post without mentioning that it was Filevine.
If they didn't take care of the incident that's a different story...
This is a very standard part of responsible disclosure. Hacker finds bugs -> discloses them to the vendor -> (hopefully) the vendor communicates with them and remediates -> both sides publish the technical details. It also helps to demonstrate to the rest of the security world which companies will take reports seriously and which ones won’t, which is very useful information to have.
That's not how ethical disclosure works. Both parties should publish and we, the wider tech industry should see this as a good thing both for the hacker and the company that worked with them.
Eh, with something this horrendously egregious I think their customers have a right to know how carelessly their data was handled, regardless of the remediation steps taken after disclosure; that aside, who knows how many other AI SaaS vendors might stumble across this article and realize they've made a similarly boneheaded error, and save both themselves and their customers a huge amount of pain . . .
That doesn't surprise me one bit. Just think about all the confidential information people paste into their ChatGPT and Claude sessions. You could probably keep the legal system busy for the next century on a couple of days of that.
Hey I think I've just found a new marketing stunt for a new vibe-coding platform:
"Worried your vibe-coded app is about to be broadcast on the internet’s biggest billboard? Chill. ACME AI now wraps it in “NSA-grade” security armor."
I never thought there would be multiple billion-dollar AI features that fix all the monkey-patching problems that no one saw coming from the older billion-dollar AI features that fixed all the monkey-patching problems that no one saw coming from...
Man. Hopefully their remediation steps included a full audit of their Box account.
One can only imagine: if OP wasn't the first to discover it, people could've generated tons of shared links for all kinds of folders, for instance, which would remain active even after the API token was invalidated.
This might be off topic, since we're on the topic of an AI tool and on Hacker News.
I've been pondering for a long time how one builds a startup in a domain they're not familiar with, but... I just have this urge to carve out a slice of the pie in this space. For the longest time, I've had this dream of starting or building an 'AI legal tech company'. The big issue is, I don't work in the legal space at all. I did some cold outreach on law-firm-related forums, which didn't gain any traction.
I later searched around and came across the term 'case management software'. From what I know, this is fundamentally what Clio is, and it makes millions if not billions.
This was about 1.5 to two years ago, and since then I've stopped thinking about it because of this belief I have: "how can I do a startup in legal when I don't work in this domain?" But when I look around, I see people who have started companies in totally unrelated industries, from a dental tech company to, if I'm not mistaken, Hugging Face, whose founder doesn't seem to have a PhD in AI/ML and yet founded it.
Given all that, how does one start a company in an unrelated domain? Say I want to start another case management system or attempt to clone FileVine: do I first read up on what case management software is, or do I cold-reach potential law firms who would partner up to build a SaaS from scratch? Another school of thought goes, "find customers before you have a product, to validate what you want to build". How does this realistically work?
I think if you have no domain expertise or unique insight it will be quite hard to find a real pain point to solve, deliver a winning solution, and have the ability to sell it.
Not impossible, but very hard. And starting a company is hard enough as it is.
So 9/10 times the answer will be to partner with someone who understands the space and pain point, preferably one who has lived it, or find an easier problem to solve.
I think it comes down to, having some insight about the customer need and how you would solve it. Having prior experience in the same domain is helpful but is neither a guarantee nor a blocker, towards having a customer insight (lots of people might work in a domain but have no idea how to improve it; alternatively an outsider might see something that the "domain experts" have been overlooking).
I just randomly happened to read about the story of, some surgeons asking a Formula 1 team to help improve its surgical processes, with spectacular results in the long term... The F1 team had zero medical background, but they assessed the surgical processes and found huge issues with communication and lack of clarity, people reaching over each other to get to tools, or too many people jumping to fix something like a hose coming loose (when you just need 1 person to do that 1 thing). F1 teams were very good at designing hyper efficient and reliable processes to get complex pit stops done extremely quickly, and the surgeons benefitted a lot from those process engineering insights, even though it had nothing specifically to do with medical/surgical domain knowledge.
Anyways, back to your main question -- I find that it helps to start small... Are you someone who is good at using analogies to explain concepts in one domain, to a layperson outside that domain? Or even better, to use analogies that would help a domain expert from domain A, to instantly recognize an analogous situation or opportunity in domain B (of which they are not an expert)? I personally have found a lot of benefit, from both being naturally curious about learning/teaching through analogies, finding the act of making analogies to be a fun hobby just because, and also honing it professionally to help me be useful in cross-domain contexts. I think you don't need to blow this up in your head as some big grand mystery with some big secret cheat code to unlock how to be a founder in a domain you're not familiar with -- I think you can start very small, and just practice making analogies with your friends or peers, see if you can find fun ways of explaining things across domains with them (either you explain to them with an analogy, or they explain something to you and you try to analogize it from your POV).
"Companies often have a demo environment that is open" - huh?
And... Margolis allowed this open demo environment to connect to their ENTIRE Box drive of millions of super sensitive documents?
HUH???!
Before you get to the terrible security practices of the vendor, you have to place a massive amount of blame on the IT team of Margolis for allowing the above.
No amount of AI hype excuses that kind of professional misjudgement.
I don't think we have enough information to conclude exactly what happened. But my read is the researcher was looking for demo.filevine.com and found margolis.filevine.com instead. The implication is that many other customers may have been vulnerable in the same way.
I mean... in what world would you send a customer's private root key to a web browser client? Even if the user was authenticated, why would they need it? This sort of secret shouldn't even be in an environment variable or database, but stored with encryption at rest. There could easily have been a proxy service between the client and Box if the purpose is to search or download files. It's very bad, even for a prototype... this researcher deserves a bounty!
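A minimal sketch of that proxy idea, assuming Box's public search API (the `ancestor_folder_ids` scoping parameter is from Box's docs; everything else here is illustrative, not Filevine's actual code):

```python
from urllib.parse import quote
import urllib.request

def build_search_url(query: str, folder_id: str) -> str:
    # Scope the search to the one folder this user is allowed to see,
    # rather than exposing a root-level token to the browser.
    return (
        "https://api.box.com/2.0/search"
        f"?query={quote(query)}"
        f"&ancestor_folder_ids={folder_id}"
    )

def proxied_search_request(query: str, user_folder_id: str, box_token: str):
    # The Authorization header, and the token itself, never leave the
    # backend; the frontend only ever talks to this proxy endpoint.
    return urllib.request.Request(
        build_search_url(query, user_folder_id),
        headers={"Authorization": f"Bearer {box_token}"},
    )
```

The frontend then calls the backend's own authenticated endpoint, and the backend forwards only per-user, per-folder requests to Box.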
When the liability of a corporation and its owners is limited, does it benefit their business to be diligent at every step, or to keep a move-fast-and-break-things mentality?
> Who is Margolis, and are they happy that OP publicly announced accessing all their confidential files?
My thing is, even ingesting the BOK should have been done in phases, to avoid having all your virtual eggs in one basket or nest at any ONE time. Staggering tokens across these compartments would not have cost them anything at all. I always say: whatever convenience you enjoy yourself will be highly appreciated by bad actors... WHEN, not if, they get through.
For all the talk in the blog of how "super-professional" their team was (probably just a courtesy on the side of the author; I don't think he believes his own words either)... I have noticed that using AI to produce some kind of API, or to integrate a 3rd-party endpoint into a frontend, is almost guaranteed to produce code in which the frontend either exposes the API secrets directly (literally injecting them into a variable as a string), or, if you ask it for authentication, builds some half-finished lazy solution that makes no sense. So I imagine their "super-professional" team built this with AI, blindly trusting it, probably even allowing it to commit and merge changes itself, because if you are not merging 10K LoC a day with all this "great" technology, what are you even doing, right? It is not super-professional to work effectively with a blindfold on, I'd argue.
I'll be honest... I'm not at all surprised that this happened. Purely because it seems like everyone who wants to implement AI just forgot all of the institutional knowledge that cybersecurity has acquired over the last 30-40 years. When you "forget" all of that because you want to rush out something really fast, well, you know what they say: play stupid games, win stupid prizes and all that.
That comment didn't read like AI generated content to me. It made useful points and explained them well. I would not expect even the best of the current batch of LLMs to produce an argument that coherent.
Yeah, you have a point... the comment - and their other comments, on average - seem to fit quite a specific pattern. It's hard to really draw a line between policing style and actually recognising AI-written content, though.
What makes you think that? It would need some prompt engineering if so, since ChatGPT won't write like that (bad capitalization, lazy quoting) unless you ask it to.
We finally have a blog that no one (yet) has accused of being ai generated, so obviously we just have to start accusing comments of being ai. Can't read for more than 2 seconds on this site without someone yelling "ai!".
I think this class of problems can be protected against.
0xbadcafebee|2 months ago
Imagine the potential impact. You're a single mother, fighting for custody of your kids. Your lawyer has documentation of something that happened to you - something that wasn't your fault, but would look bad if brought up in court. Suddenly you receive a phone call: a mysterious voice demanding $10,000, or they will send the documents to the opposition. Neither party knows the other; someone just found a trove of documents behind an open back door and wanted to make a quick buck.
This is exactly what a software building code would address (if we had one!). Just like you can't open a new storefront in a new building without it being inspected, you should not be able to process millions of sensitive files without having your software's building inspected. The safety and privacy of all of us shouldn't be optional.
altmanaltman|2 months ago
Sharlin|2 months ago
[1] https://en.wikipedia.org/wiki/Vastaamo_data_breach
Fnoord|2 months ago
The problem here however is that they get away with their sloppiness as long as the security researcher who found this is a whitehat, and the regular news won't pick it up. Once regular media pick this news up (and the local ones should), their name is tarnished and they may regret their sloppiness. Which is a good way to ensure they won't make the same mistake. After all, money talks.
anshumankmr|2 months ago
gbacon|2 months ago
The story is an example of the market self-correcting, but out comes this “building code” hobby horse anyway. All a software “building code” will do is ossify certain current practices, not even necessarily the best ones. It will tilt the playing field in favor of large existing players and to the disadvantage of innovative startups.
The model fails to apply in multiple ways. Constructing a physical building is a much simpler, much less complex process with many fewer degrees of freedom than building software. City workers inspecting against the local municipality's code at least have clear jurisdiction, because the building has a fixed physical location. Who will write the software "building code"? Who will be the inspectors?
This is HN. Of all places, I'd expect to see this presented as an opportunity for new startups, not as a call for slovenly bureaucracy and more coercion. The private market is perfectly capable of performing this function. E&O and professional liability insurers, if they aren't already, will soon be motivated by lawsuits like these to demand regular pentests.
The reported incident is a great reminder of caveat emptor.
theoldgreybeard|2 months ago
I used to think developers had to be supremely incompetent to end up with vulnerabilities like this.
But now I understand it’s not the developers who are incompetent…
eru|2 months ago
hahn-kev|2 months ago
icyfox|2 months ago
Is the issue that people aren't checking their security@ email addresses? People are on holiday? These emails get so much spam it's really hard to separate the noise from the legit signal? I'm genuinely curious.
Aurornis|2 months ago
Companies hire a "security team" and put them behind the security@ email, then decide they'll figure out how to handle issues later.
When an issue comes in, the security team tries to forward the security issue to the team that owns the project so it can be fixed. This is where complicated org charts and difficult incentive structures can get in the way.
Determining which team actually owns the code containing the bug can be very hard, depending on the company. Many security team people I've worked with were smart, but not software developers by trade. So they start trying to navigate the org chart to figure out who can even fix the issue. This can take weeks of dead-ends and "I'm busy until Tuesday next week at 3:30PM, let's schedule a meeting then" delays.
Even when you find the right team, it can be difficult to get them to schedule the fix. In companies where roadmaps are planned 3 quarters in advance, everyone is focused on their KPIs and other acronyms, and bonuses are paid out according to your ticket velocity and on-time delivery stats (despite PMs telling you they're not), getting a team to pick up the bug and work on it is hard. Again, it can become a wall of "Our next 3 sprints are already full with urgent work from VP so-and-so, but we'll see if we can fit it in after that."
Then legal wants to be involved, too. So before you even respond to reports you have to flag the corporate counsel, who is already busy and doesn't want to hear it right now.
So half or more of the job of the security team becomes navigating corporate bureaucracy and slicing through all of the incentive structures to inject this urgent priority somewhere.
Smart companies recognize this problem and will empower security teams to prioritize urgent things. This can cause another problem where less-than-great security teams start wielding their power to force everyone to work on not-urgent issues that get spammed to the security@ email all day long demanding bug bounties, which burns everyone out. Good security teams will use good judgment, though.
Barathkanna|2 months ago
ipdashc|2 months ago
That said, in my experience this spam is still a few emails a day at most; I don't think there's any excuse for not immediately patching something like that. I guess maybe someone's on holiday, like you said.
bfxbjuf|2 months ago
londons_explore|2 months ago
I reckon only 1% of reports are valid.
LLMs can now produce a plausible-looking exploit report ("there is a use-after-free bug in your server-side implementation of X library which allows shell access to your server if you time these two API calls correctly"), but the LLM has made the whole thing up. That can easily waste hours of an expert's time on a total falsehood.
I can completely see why some companies decide it'll be an office-hours-only task to go through all the reports every day.
gwbas1c|2 months ago
Capricorn2481|2 months ago
I have unfortunately seen way worse. If it will take more than an hour and the wrong people are in charge of the money, you can go a pretty long time with glaring vulnerabilities.
perlgeek|2 months ago
In a complex system it can be very hard to understand what will break, if anything. In a less complex system, it can still be hard to understand if the person who knows the security model very well isn't available.
jofzar|2 months ago
There is always the simple answer: these are lawyers, so they are probably scrambling internally to write a response that covers themselves legally, while also trying to figure out how fucked they are.
1 week is surprisingly not that slow.
bgbntty2|2 months ago
1) the hack is straightforward to do;
2) it can do a lot of damage (get PII or other confidential info in most cases);
3) downtime of the service wouldn't hurt anyone, especially if we compare it to the risk of the damage.
But, instead of insisting on the immediate shutting down of the affected service, we give companies weeks or months to fix the issue while notifying no one in the process and continuing with business as usual.
I've submitted 3 very easy exploits to 3 different companies in the past year and, thankfully, they fixed them in about a week every time. Yet the exploits were trivial (as I'm not good enough to find the hard ones, I admit). Mostly IDORs, like changing id=123456 to id=1 all the way up to id=123455 and seeing a lot of medical data that doesn't belong to me. All 3 cases were medical labs, because I had to have some tests done and wanted to see how secure my data was.
Sadly, in all 3 cases I had to send a follow-up e-mail after ~1 week, saying that I'll make the exploit public if they don't fix it ASAP. What happened was, again, in all 3 cases, the exploit was fixed within 1-2 days.
If I'd given them a month, I feel they would've fixed the issue after a month. If I'd given them a year - after a year.
And it's not like there aren't 10 different labs in my city. It's not like online access to results is critical, either. You can get a printed result or call them to write them down. Yes, it would be tedious, but more secure.
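The missing ownership check behind such an IDOR can be sketched in a few lines (all names illustrative, not from any real lab's code):

```python
# Illustrative ownership check that defeats simple id enumeration (IDOR):
# the server verifies that the record belongs to the requester before
# returning it, instead of trusting whatever id arrives in the URL.

RECORDS = {
    123456: {"owner": "me", "data": "my lab result"},
    1:      {"owner": "someone-else", "data": "not my lab result"},
}

class Forbidden(Exception):
    pass

def get_record(record_id: int, requesting_user: str) -> str:
    record = RECORDS.get(record_id)
    # Same error for "missing" and "not yours", so the id space can't
    # be probed to learn which records exist.
    if record is None or record["owner"] != requesting_user:
        raise Forbidden
    return record["data"]

assert get_record(123456, "me") == "my lab result"
```

With this in place, walking id=1 through id=123455 yields nothing but Forbidden responses.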
So I should've said from the beginning something like:
> I found this trivial exploit that gives me access to medical data of thousands of people. If you don't want it public, shut down your online service until you fix it, because it's highly likely someone else figured it out before me. If you don't, I'll make it public and ruin your reputation.
Now, would I make it public if they don't fix it within a few days? Probably not, but I'm not sure. But shutting down their service until the fix is in seems important. If it was some hard-to-do hack chaining several exploits, including a 0-day, it would be likely that I'd be the first one to find it and it wouldn't be found for a while by someone else afterwards. But ID enumerations? Come on.
So does the standard "responsible disclosure", at least in the scenario I've given (easy to do; not critical if the service is shut down), help the affected parties (the customers) or the businesses? Why should I care about a company worth $X losing $Y if it's their fault?
I think in the future I'll anonymously contact companies with way more strict deadlines if their customers (or others) are in serious risk. I'll lose the ability to brag with my real name, but I can live with it.
As to the other comments talking about how spammed their security@ mail is - that's the cost of doing business. It doesn't seem like a valid excuse to me. Security isn't one of hundreds random things a business should care about. It's one of the most important ones. So just assign more people to review your mail. If you can't, why are you handling people's PII?
habosa|2 months ago
Also … shows you what a SOC 2 audit is worth: https://www.filevine.com/news/filevine-proves-industry-leade...
Even the most basic pentest would have caught this.
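For context, the subdomain-enumeration step such a pentest would start with is trivial. A toy sketch, with the resolver injected so it runs offline (a real probe would do DNS lookups against a large wordlist):

```python
# Toy subdomain enumerator of the kind described in the write-up.
# `resolves` stands in for a DNS lookup so the sketch is self-contained.

def enumerate_subdomains(wordlist, domain, resolves):
    """Return candidate hosts for which the resolver succeeds."""
    return [f"{sub}.{domain}"
            for sub in wordlist
            if resolves(f"{sub}.{domain}")]

# Pretend only the demo host exists:
known = {"demo.example.com"}
found = enumerate_subdomains(["demo", "staging", "dev"], "example.com",
                             known.__contains__)
print(found)  # only the host that "resolves"
```

Any first-week pentester runs exactly this kind of sweep, which is why an exposed demo subdomain never stays hidden for long.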
stingraycharles|2 months ago
The auditors themselves pretty much only care that you answered all questions, they don’t really care what the answers are and absolutely aren’t going to dig any deeper.
(I’m responsible for the SOC2 audits at our firm)
rustystump|2 months ago
technion|2 months ago
I don't at all get why there is a paragraph thanking them for their communication if that is the case.
eru|2 months ago
I wouldn't expect them to find any computer problems either to be honest.
mrweasel|2 months ago
jonny_eh|2 months ago
aitchnyu|2 months ago
kylecazar|2 months ago
They should have given you some money.
edm0nd|2 months ago
They could have sold this to a ransomware group or affiliate for 5-6 figures, and then the ransomware group could have exfiltrated the data and attempted to extort the company for millions.
Then, if they didn't pay and the ransomware group leaked the info to the public, they'd likely have to spend millions on lawsuits and fines anyway.
They should have paid this dude 5-6 figures for this find. It's scenarios like this that lead people to sell these vulns on the gray/black market instead of traditional bug bounty whitehat routes.
RagnarD|2 months ago
Tepix|2 months ago
sys32768|2 months ago
My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.
This article demonstrates that, but it does raise the question of why you'd trust one vendor over another when they both promise the same safeguards.
pr337h4m|2 months ago
hughes|2 months ago
Specifically, it does not appear that AI is invoked in any way at the search endpoint - it is clearly piping results from some Box API.
layer8|2 months ago
mbesto|2 months ago
The funny thing is that this exploit (from the OP) has nothing to do with AI and could be <insert any SaaS company> that integrates into another service.
Aperocky|2 months ago
If SaaS Y just says "Give me your data and it will be secure", that's where it gets suspect.
teej|2 months ago
whalesalad|2 months ago
pstuart|2 months ago
canopi|2 months ago
I am one of the engineers who had to suffer through countless screenshots and forms to obtain these certifications, because they show that you are compliant and safe, while the genuinely impactful things are ignored.
latchkey|2 months ago
https://jon4hotaisle.substack.com/i/180360455/anatomy-of-the...
It is crazy how this gets perpetuated in the industry as actually having security value, when in reality, it is just a pay-to-play checkbox.
chickensong|2 months ago
If the options mainly consist of "trust me bro" vs "we can demonstrate that we put in some effort", the latter seems more preferable, even if it's not perfect.
BrenBarn|2 months ago
barbazoo|2 months ago
abustamam|2 months ago
Edit: I agree with you that we shouldn't let companies like this get away with what amounts to a slap on the wrist. But everything else seems irresponsible as well.
magnetowasright|2 months ago
I'd love to know who filevine uses for penetration testing (which they do, according to their website) because holy shit, how do you miss this? I mean, they list their bug bounty program under a pentesting heading, so I guess it's just nice internet people.
It's inexcusable.
rashidujang|2 months ago
Security reminds me of the Anna Karenina principle: All happy families are alike; each unhappy family is unhappy in its own way.
GJim|2 months ago
To be fair, data security breaches seldom are.
yieldcrv|2 months ago
and otherwise well structured engineering orgs have lost their goddamn minds with move fast and break things
because they're worried that OpenAI/Google/Meta/Amazon/Anthropic will release the tool they're working on tomorrow
literally all of them are like this
trollbridge|2 months ago
deep_thinker26|2 months ago
qmr|2 months ago
Go on write your blog post. Don't let your dreams be dreams.
gessha|2 months ago
trollbridge|2 months ago
keernan|2 months ago
Things were easier when I first began practicing in the 1970s. There weren't too many ways confidential materials in our files could be compromised. Leaving my open file spread out on the conference room table while I went to lunch, as attorneys arriving for a deposition on my partner's case were seated in the conference room one by one - that's the kind of thing we had to keep an eye on.
But things soon got complicated. Computers. Digital copies of files that, unlike physical files, didn't disappear into an external storage site. Then email. What were our obligations to know what could - and could not - be intercepted while email traveled the internet?
Then, most dangerous of all: digital storage outside our physical domain. How could we know whether the cloud vendor had access to our confidential data? Where were the backups stored? How exactly was the data compartmentalized by the cloud vendor? Did we need our own IT experts to control the data located on the external cloud? What did the contracts say about the fact that we were a law firm - that we, as the lawyers responsible for our clients' confidential information, needed to know that the cloud vendor understood the legal obligations and would hire lawyers to oversee how it blocked all access to the legal data on its own servers? And so on and so forth.
I'm no longer in active practice, but these issues were a big part of my work in my last few years at a Fortune 500 insurance company that used in-house attorneys nationwide to represent insureds in litigation - and the corporation was engaged in signing onto a cloud service to hold all of the corporate data, including the legal department's across all 50 states. It was a nightmare. I'm confident it still is.
etamponi|2 months ago
I worked at Google and then at Meta. Man, the amount of "nonsense" of the ACL system was insane. I write nonsense in quotes because for sure from a security point of view it all made a lot of sense. But there is exactly zero chance that such a system can be used in a less technical company. It took me 4 years to understand how it worked...
So I'll take this as another data point to create a startup that simplifies security... Seems a lot more complicated than AI
xp84|2 months ago
My apologies to the frontend engineers out there who know what they're doing.
hbarka|2 months ago
Can that company tell you to cease and desist? How does the law work?
me_again|2 months ago
dghlsakjg|2 months ago
They are strongly worded requests from a legal point of view. The only real message they send is that the sender is serious enough about the issue to have involved a lawyer, unless of course you write it yourself, which is something that literally anyone can do.
If you want to actually force an action, you need a court order of some type.
NB for the actual lawyers: I'm oversimplifying, since they can be used in court to prove that you tried to get the other party to stop, and tried to resolve the issue outside of court.
badbird33|2 months ago
valbaca|2 months ago
Just search "healthcare" in https://news.ycombinator.com/item?id=46108941
Invictus0|2 months ago
unknown|2 months ago
[deleted]
culanuchachamim|2 months ago
In the same vein, I think that a professional ethical hacker, or a curious fellow who is poking around with no harmful intent, shouldn't disclose the name of the company that had a security issue if they resolve it professionally.
You can write the same blog post without mentioning that it was Filevine.
If they didn't take care of the incident that's a different story...
evan_a_a|2 months ago
deelowe|2 months ago
manbash|2 months ago
CBMPET2001|2 months ago
jacquesm|2 months ago
giancarlostoro|2 months ago
aperture147|2 months ago
"Worried your vibe-coded app is about to be broadcast on the internet’s biggest billboard? Chill. ACME AI now wraps it in “NSA-grade” security armor."
I never thought there would be multiple billion-dollar AI features fixing all the monkey-patching problems nobody saw coming from the older billion-dollar AI features, which fixed all the monkey-patching problems nobody saw coming from...
6thbit|2 months ago
One could only imagine that if OP wasn't the first to discover it, people could've generated tons of shared links for all kinds of folders, for instance, which would remain active even if they invalidated the API token.
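In other words, rotating the leaked token doesn't clean up artifacts minted with it. A hedged sketch of the fuller response, with every function hypothetical (none of these are real Box API calls):

```python
# Hypothetical incident-response sketch: after a token leak, revoke the
# token AND enumerate/revoke anything the token could have minted, such
# as long-lived shared links that survive token rotation.

def revoke_token(token_id, state):
    state["tokens"].discard(token_id)

def revoke_all_shared_links(state):
    revoked = sorted(state["shared_links"])
    state["shared_links"].clear()
    return revoked

def respond_to_leak(token_id, state):
    revoke_token(token_id, state)
    # The step that's easy to forget: links created with the leaked token
    # keep working even after the token itself is dead.
    return revoke_all_shared_links(state)

state = {"tokens": {"root-token"}, "shared_links": {"link-a", "link-b"}}
revoked = respond_to_leak("root-token", state)
assert state["shared_links"] == set() and "root-token" not in state["tokens"]
```

The data model here is a stand-in; the point is that remediation has to cover every capability the token could have created, not just the token itself.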
mattfrommars|2 months ago
I've been pondering for a long time how one builds a startup in a domain they are not familiar with, but... I just have this urge to carve out a piece of the pie in this space. For the longest time, I had this dream of starting or building an 'AI legal tech company' -- the big issue is, I don't work in the legal space at all. I did some cold outreach on law-firm-related forums, which did not gain any traction.
I later searched around and came across the term 'case management software'. From what I know, this is fundamentally what Clio is, and it makes millions if not billions.
This was close to 1.5 or two years ago, and since then I stopped thinking about it because of this understanding or belief I have: "how can I do a startup in legal when I don't work in this domain?" But when I look around, I see people who start companies in totally unrelated industries - from starting a dental tech company to, if I'm not mistaken, the founder of Hugging Face, who doesn't seem to have a PhD in AI/ML and yet founded Hugging Face.
Given all that, how does one start a company in an unrelated domain? Say I want to build another case management system or attempt to clone Filevine - do I first read up on what case management software is, or do I cold-reach potential law firms who would partner up to build a SaaS from scratch? The other school of thought goes, "find the customer before you have a product, to validate what you want to build" - how does that realistically work?
Apologies for the scattered thoughts...
airstrike|2 months ago
Not impossible, but very hard. And starting a company is hard enough as it is.
So 9/10 times the answer will be to partner with someone who understands the space and pain point, preferably one who has lived it, or find an easier problem to solve.
strgcmc|2 months ago
I just randomly happened to read the story of some surgeons asking a Formula 1 team to help improve their surgical processes, with spectacular long-term results... The F1 team had zero medical background, but they assessed the surgical processes and found huge issues: poor communication and lack of clarity, people reaching over each other to get to tools, or too many people jumping to fix something like a hose coming loose (when you just need 1 person to do that 1 thing). F1 teams were very good at designing hyper-efficient, reliable processes to get complex pit stops done extremely quickly, and the surgeons benefited a lot from those process engineering insights, even though they had nothing specifically to do with medical/surgical domain knowledge.
Reference: https://www.thetimes.com/sport/formula-one/article/professor...
Anyways, back to your main question -- I find that it helps to start small... Are you someone who is good at using analogies to explain concepts in one domain, to a layperson outside that domain? Or even better, to use analogies that would help a domain expert from domain A, to instantly recognize an analogous situation or opportunity in domain B (of which they are not an expert)? I personally have found a lot of benefit, from both being naturally curious about learning/teaching through analogies, finding the act of making analogies to be a fun hobby just because, and also honing it professionally to help me be useful in cross-domain contexts. I think you don't need to blow this up in your head as some big grand mystery with some big secret cheat code to unlock how to be a founder in a domain you're not familiar with -- I think you can start very small, and just practice making analogies with your friends or peers, see if you can find fun ways of explaining things across domains with them (either you explain to them with an analogy, or they explain something to you and you try to analogize it from your POV).
jimbokun|2 months ago
corry|2 months ago
And... Margolis allowed this open demo environment to connect to their ENTIRE Box drive of millions of super sensitive documents?
HUH???!
Before you get to the terrible security practices of the vendor, you have to place a massive amount of blame on the IT team of Margolis for allowing the above.
No amount of AI hype excuses that kind of professional misjudgement.
me_again|2 months ago
stanfordkid|2 months ago
1vuio0pswjnm7|2 months ago
Would there be a "pretty printer" or some other "unminifier" for this task?
If not, is minification effectively a form of obfuscation?
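Pretty printers do exist, and they reverse minification mechanically, which is why minification is formatting rather than real obfuscation. Actual tools (js-beautify, Prettier, the browser devtools' "pretty print" button) handle this well; a crude illustration of the idea, written in Python only so it's self-contained:

```python
import re

def crude_unminify(src: str) -> str:
    """Reinsert line breaks after statement and block boundaries.
    A toy: real pretty-printers parse the code properly and also
    re-indent, but this shows why minification is trivially reversible."""
    src = re.sub(r";", ";\n", src)
    src = re.sub(r"([{}])", r"\1\n", src)
    return src

minified = "function f(a){return a+1}var x=f(2);"
print(crude_unminify(minified))
```

Variable names mangled by the minifier (a, b, c...) don't come back, but the structure does, which was enough for the researcher to read the internals.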
gu5|2 months ago
testemailfordg2|2 months ago
lupire|2 months ago
Clever work by OP. Surely there is an automatic prober tool that has already hacked this product?
dghlsakjg|2 months ago
Google tells me they are a NY law firm specializing in Real Estate and Immigration law. There are other firms with Margolis in the name too. Kinda doesn't matter; see below.
I doubt that they are thrilled to have their name involved in this, but that is covered by the US constitution's protections on free press.
richwater|2 months ago
satya71|2 months ago
densone|2 months ago
canto|2 months ago
bzmrgonz|2 months ago
MangoToupe|2 months ago
People should really look this law up before they reference it
nstj|2 months ago
hansmayer|2 months ago
Fokamul|2 months ago
tonyhart7|2 months ago
unknown|2 months ago
[deleted]
ethin|2 months ago
fallinditch|2 months ago
AI tends to be good at un-minifying code.
a_victorp|2 months ago
nodesocket|2 months ago
2ndatblackrock|2 months ago
quapster|2 months ago
[deleted]
j45|2 months ago
First, as an organization, do all this cybersecurity theatre, and then create an MCP/LLM wormhole that bypasses it all.
All because non-technical folks wave their hands about AI and not understanding the most fundamental reality about LLM software being fundamentally so different than all the software before it that it becomes an unavoidable black hole.
I'm also a little pleased I used two space analogies, something I can't expect LLMs to do because they have to go large with their language or go home.
larrysanchez77|2 months ago
[deleted]
kitschman|2 months ago
[deleted]
electric_muse|2 months ago
tomhow|2 months ago
We detached this subthread from https://news.ycombinator.com/item?id=46137863 and marked it off topic.
simonw|2 months ago
This sentence in particular seems outside of what an LLM that was fed the linked article might produce:
> What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals.
snapdeficit|2 months ago
Conasg|2 months ago
snapcaster|2 months ago
samdoesnothing|2 months ago
syndacks|2 months ago
koumou92|2 months ago
vkou|2 months ago
The point you raised is a distraction, and it does not engage with the points the comment actually made.
jfindper|2 months ago
For what it's worth, even if the parent comment was directly submitted by chatgpt themselves, your comment brought significantly less value to the conversation.
chunk1000|2 months ago
observationist|2 months ago
It's become clear that the first and most important and most valuable agent, or team of agents, to build is the one that responsibly and diligently lays out the opsec framework for whatever other system you're trying to automate.
A meta-security AI framework, cursor for opsec, would be the best, most valuable general purpose AI tool any company could build, imo. Everything from journalism to law to coding would immediately benefit, and it'd provide invaluable data for post training, reducing the overall problematic behaviors in the underlying models.
Move fast and break things is a lot more valuable if you have a red team mechanism that scales with the product. Who knows how many facepalm level failures like this are out there?
croes|2 months ago
Of course, it’s called proper software development
dghlsakjg|2 months ago
This was just plain terrible web security.
imvetri|2 months ago
What does the above sound like, and what kind of professional writes like that?