This is so strange; this is exactly how you don't handle problems like this.
Write a blog post, explain what happened, explain who's affected and to what extent, explain if it can be fixed and what you're doing, and explain what you'll do to make sure it doesn't happen again.
Putting out a supposedly hidden fix in the Drive for Desktop client to see if it can recover files locally (?!), when the entire issue is files disappearing from the cloud, doesn't seem to make any sense.
Or if the problem really is solely with files that should have been uploaded but weren't, and nothing from the cloud ever actually got deleted, then explain and justify that as well -- because that's not what people are saying.
I don't understand what's going on at Google. If actual data loss occurred, trying to pretend it didn't happen is never the answer. As the saying goes, "the cover-up is worse than the crime". Why Google is not fully and transparently acknowledging this issue baffles me. The corporate playbook for these types of situations is well known, and it involves being transparent and accountable.
To preface: I do not intend to defend Google, nor do I work for or represent them.
That said, I have been in similar situations with large-scale customers. It is hard. Some percentage of customers are pathological, and even after you fix their problem they refuse to stop spreading rumors.
Once it's fixed, I want all communication to be forward-looking. Some percentage of people are flat-out insane, incompetent, or just assholes. Sometimes you have to lock the thread in order to stop a conversation about something that is already fixed.
Large scale customer bases are just a different beast. Once you experience it, you know what I mean. That doesn’t mean Google took the right path - only people with a comprehensive perspective can evaluate that, and I’m just some idiot on a forum who knows nothing about the specifics.
That's what any normal company that is used to dealing with customers would do, but Google isn't that. Google is entirely unacquainted with the concept of "customer relations". I'm half convinced that Google-the-business sees customers as convenient peasants who purchase whatever it deigns to sell. The idea of supporting customers is basically antithetical to them: look at all the stories of people trying to get support for GCP as a great case in point.
> I don't understand what's going on at Google. If actual data loss occurred, trying to pretend it didn't happen is never the answer.
They're not pretending it didn't happen though? As per the article they acknowledged it and published a help center article on it. They named the software versions affected (notifying the affected users seems impossible, since the entire problem was that the data had not been synced). Following the links in the help center article, during the incident they posted in a pinned article in the support forum (multiple times) on how to avoid triggering the bug and how to avoid making it worse.
That's pretty much what you wanted to see except for a blog post with an RCA, no?
> Or if the problem really is solely with files that should have been uploaded but weren't, and nothing from the cloud ever actually got deleted, then explain and justify that as well -- because that's not what people are saying.
So the suggestion is that in addition to the bug that they acknowledged, there's a totally different one that appeared at the same time affecting totally different functionality and with different symptoms, and that they're covering up despite not covering up the other bug? That seems like a complicated explanation when there's an obvious and simpler explanation around.
That's also the kind of thing that's pretty much impossible to prove categorically, let alone communicate the proof in a way that's understandable to the average user. What are you going to say? "We've checked really hard and can't confirm the reports"?
(I mean, I guess it's possible to do it. Collect 100 credible reports of files going missing that can reliably identify the supposedly missing file by name and creation date rather than say that it was probably a .doc file sometime in March. Then do an analysis on e.g. audit logs on what the reality is. How many files were never there at all? How many were explicitly deleted by the user? How many were accidentally uploaded to a user's work account rather than personal account? How many were still in the drive, and the user just couldn't find them? And yes, once you've exhausted all the possibilities, how many disappeared without a trace? Then publish the statistics. But while doing such an investigation privately to make sure whether there is a problem makes sense, publishing the results seems like a stunningly bad PR strategy even if no data was indeed lost.)
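As a toy illustration of the triage described above, the bucketing might look like the following sketch. The category names and counts are entirely invented, purely to show the shape of the analysis, and are not from any real investigation:

```python
from collections import Counter

# Invented report categories mirroring the questions above; the data
# is made up purely to show the shape of the analysis.
reports = [
    "never_existed",   # file was never there at all
    "user_deleted",    # explicitly deleted by the user
    "wrong_account",   # uploaded to a work account instead of personal
    "still_present",   # still in the drive, user just couldn't find it
    "user_deleted",
    "unexplained",     # disappeared without a trace
]

stats = Counter(reports)
for category, count in stats.most_common():
    print(f"{category}: {count}")
```

Only the "unexplained" bucket left over after exhausting the mundane explanations would represent actual unexplained data loss.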
Your question "why" is probably because you think Google should know better. However, I am reminded of the post from a few weeks ago by the person who left Google after decades of working there, who claims the culture has changed and attracted more incompetent corporate, political types.
EDIT: found it; not decades, but almost two. He left after working there for 18 years.
https://ln.hixie.ch/?start=1700627373&count=1
See the HN comments: https://news.ycombinator.com/item?id=38381573
For Google, either the problem is small enough that they're encouraging individuals to file small claims they'll gladly hand a check for, or it's big enough that Google doesn't want to document shooting themselves in the foot.
I also think there's a long tail of Beavises out there, and you need to lock things down to stop the rumors.
Genuinely, I would love for someone who believes in both "never talk to the cops" and "corporations should be open about their fuck-ups" to articulate how they reconcile the two concepts. For me they're the same side of the coin, but I'd enjoy being convinced otherwise.
I don't understand the point of Company hosting forums that aren't staffed by Company. Well, I do: it doesn't help users at all. The only feature is Company's censorship. It's a hostile social hack on a user base that should be using a different forum host.
When I ask people if they have out-of-cloud backups of their data they look at me as if I'm mad. The cloud can't lose data. Until it does. And then what?
I think it is generally fine to not have out-of-cloud backups of data as long as you still have the primary copy locally (as opposed to data being only in the cloud), so you're screwed only if the cloud provider loses data the same day as you happen to lose your local data.
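To make that intuition concrete, here's a back-of-the-envelope sketch with made-up failure rates (both probabilities are assumptions for illustration, not real statistics). Assuming the local and cloud copies fail independently, the chance of losing both in the same year is just the product:

```python
# Assumed, purely illustrative yearly failure probabilities.
p_local = 0.05    # chance of losing the local copy (disk death, theft, ...)
p_cloud = 0.001   # chance of the cloud provider losing your data

# If the two failures are independent, you're only truly screwed when
# both happen within the same window.
p_both = p_local * p_cloud
print(f"chance of losing both copies in a year: {p_both:.5%}")
```

The product is orders of magnitude smaller than either individual risk, which is the whole argument for keeping a primary copy locally.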
I like the expression, "Your data doesn't exist unless it exists in three places, and at least one of those places should be under your direct control."
I'm currently battling their support. Tried opening a Play account to publish an app to the app store but missed an email to verify my identity. Now the link in the email no longer works because "Google couldn't verify your identity" as I've had "too many tries" and my account is restricted meaning I can't publish the app.
Support just repeats the same things back that I've had too many tries and my account is restricted and I can't get a refund.
It's woeful how bad the support is to get such a simple thing sorted out. Don't miss that email if setting up a developer play account!
(If anyone can fix this my developer play account ID is: 7827257533299144892)
I like the text at the bottom of the page if you don't have javascript enabled:
> Hey NoScript peeps, (or other users without Javascript), I dig the way you roll. That's why this page is mostly static and doesn't generate the list dynamically. The only things you're missing are a progressive descent into darkness and a happy face who gets sicker and sicker as you go on. Oh, and there's a total at the bottom, but anyone who uses NoScript can surely count for themselves.
> Rock on!
I feel this is increasingly becoming the era of the NAS.
A lot of the time when I go looking for shows or movies, they're no longer offered on the same service, if quite literally anywhere at all. Many of my liked YouTube videos are now just [deleted].
Not to mention any data you store in the cloud when engineer(s) experience career altering events.
Most Americans don't have enough data to warrant a dedicated NAS. And even if they do, the cost is a major dealbreaker, $400+ including drives is way more than most would ever consider spending on computer hardware.
So last time I saw this discussed on HN [1], people said there were tape backups of all these files and that they would be fully recovered. Some people said they worked adjacent to this tape backup system. I'm inclined to believe they weren't lying and that the tape backup system does exist.
So why weren't they able to recover from tape? Is the tape backup more limited than people reported, and this data wasn't backed up? Was it just too difficult and expensive to scan the tapes and decide which was the canonical version of each file?
[1] https://news.ycombinator.com/item?id=38427864
Are we sure this is limited to Drive? The Ars article mentions that some users reportedly lost data without using the desktop app at all, which seems to imply that (one of?) the bugs was inside Google's infra.
I wonder if they might have suffered some invisible data corruption issue in Colossus or whatever they use now, and the effects on Drive just happen to be the most visible. Though presumably whatever broke wasn't part of GCP or we would have noticed by now, right?
> Are we sure this is limited to Drive? The Ars article mentions that some users reportedly lost data without using the desktop app at all, which seems to imply that (one of?) the bugs was inside Google's infra.
Seems much more plausible that there's something wrong with the backend code for google drive (the product).
I had a problem using it just yesterday. I uploaded an image using the phone app, then tried to find it and download it on my desktop. It wasn’t in the Recent list. I found it by searching, tried to download it and the website told me “You have selected a file that does not exist”.
I can confirm that some files were somehow deleted in March. I suspect the problem is related to the announcement that 5M files is no longer the limit [1]. I was thinking that some of our team members had messed up, but now I feel bad.
[1] https://news.ycombinator.com/item?id=35329135
Haven’t used these “cloud” storage systems to store anything critical for a looong time. I was burned a long time ago when I had college assignments stored on a university backup system (I think they outsourced it to MS?).
Lost half a semester of work because a stupid sysadmin at the university nuked the primary blob storage (instead of months-old records) in an attempt to save $$$.
The more you look at Google products, the more the illusion of their "eliteness" evaporates.
Shitty branding. Shitty UI. Shitty functional design. And most of all, shitty attitude.
I had the luxury of telling a Google recruiter thanks, but NOPE. I had friends who are highly competent programmers with FAANG resumes treated like trash by Google interviewers, and another who actually worked there who gave detailed accounts of a toxic culture that pitted peers against each other.
Google sucks. They're coasting on their entrenchment and that's that.
Whether or not Google truly 'lost data', any backup is only as good as the last verification done by the data's owner.
Simply putting varying degrees of closed-source syncing apps in place long term, then expecting them to just keep working perfectly in all circumstances, through all possible future PEBKAC (whether by the user or the corporation) -- to the extent that they're reliable as a person's long-term and only data backup -- is a false expectation that the masses have bought into as part of the whole mobile/cloud hype.
Turns out the "cloud" is someone else's computer, with the same possible issues. That said, I would be very surprised if the same ever happened on S3. Not saying it won't, but if the specs are to be believed, it seems pretty rock solid.
On a separate note, I really hope the industry either makes cloud compute cheaper or starts using better-priced competitors/on-premises soon; the big 3 clouds are crazy expensive IMO.
Interestingly, based on their own figures, S3 loses some data every year. They store 100 trillion objects and they say they are 99.999999999% durable on an annual basis. So every year they lose some objects, but the chance that these are your objects is vanishingly small.
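Plugging the comment's own figures in shows the scale: eleven nines of annual durability across 100 trillion objects still implies on the order of a thousand lost objects per year (a rough expected-value calculation, not an official AWS number):

```python
objects_stored = 100_000_000_000_000   # 100 trillion objects, per the comment
durability = 0.99999999999             # "eleven nines", annual

# Expected number of objects lost in a year at that durability level.
expected_losses = objects_stored * (1 - durability)
print(f"expected objects lost per year: {expected_losses:.0f}")
```

Against 100 trillion objects, ~1000 losses per year is why any individual customer is overwhelmingly unlikely to ever notice.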
Dropbox lost a shitload of my files, which were due to be delivered to a client. They never figured out how, and the files reappeared a week later. In the meantime, I had to sign up for Google Drive and deliver the files late... looking unprofessional.
Don't rely on "the cloud" unless you absolutely have to.
I had an issue the other day where I deleted a file through the web interface and saved a file with the same name using the Windows client. The file disappeared and was never uploaded. Would that be a manifestation of the problem, or is the problem a "files at rest are disappearing" issue?
Is it a monopolistic behaviour by the book?
Since they struggle to innovate, they use a strategy of cost cutting, which means moving development and support (if any) to low-cost countries.
Low-cost countries mean low-cost-country standards, in both code quality and in handling the disaster after the code breaks.
We used to run an ad on reddit that read something like:
"Your infra is on Amazon AWS and your backups are on Amazon AWS ... you're doing it wrong ..."
... and we had to stop because it made people angry.
They were quite irate and combative at the very notion that there was any non-zero risk whatsoever at AWS.
1) Western Digital (MyBook, I think it was called?) with a built-in HD -- notoriously buggy and unreliable pieces of junk
2) Then just... no NAS, as the cloud seemed to be the answer
3) Now, Synology NAS enclosures and their custom OS (DSM) exist, which are a dream of ease-of-use
If there is a new era of NAS, I'd say it's single-handedly being enabled by Synology, which is really quite remarkable.
If it's only in the cloud, it's not backed up, it's stored.
If it's in the cloud and on your local PC, it's not backed up, it's copied.
If it's on your local PC and on two independent clouds, or on your PC, your local NAS, and the cloud, it is backed up.
3 copies, two different media, at least one off-site. In most interpretations the cloud is considered a different medium.
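The 3-2-1 rule above can be sketched as a toy checker. The function name and the (medium, offsite) tuple encoding are invented for illustration:

```python
def satisfies_321(copies):
    """Check the 3-2-1 rule: at least 3 copies, on at least 2 distinct
    media, with at least 1 copy off-site.

    copies: list of (medium, offsite) tuples, e.g. ("nas", False).
    """
    media = {medium for medium, _ in copies}
    has_offsite = any(offsite for _, offsite in copies)
    return len(copies) >= 3 and len(media) >= 2 and has_offsite

# PC + local NAS + cloud: three copies, three media, one off-site.
print(satisfies_321([("pc", False), ("nas", False), ("cloud", True)]))  # True
# PC + one cloud: only two copies, so not a real backup.
print(satisfies_321([("pc", False), ("cloud", True)]))                  # False
```

Note that three copies all on the same medium (say, three cloud accounts at one provider) still fail the "two media" leg of the rule under this encoding.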
I don't know, it seems way too off to me. Maybe we should have other organizations step in and smack the big G around until they cry uncle.