The lack of ACLs or comparable permissions is by far the biggest thing that prevents me from recommending DO for production workloads. This kind of granular control is absolutely essential. You can't even separate dev resources from prod resources; every API key has godmode on your whole account. This is a security disaster.
For a simple example, I'm running externaldns on a kubernetes cluster. For production use, I'd want to at least have an API key that can only access the DNS domains and nothing else. As is, the key can create compute resources or delete anything (including object storage buckets!). Again with no dev/prod isolation, so a leaked dev key that looks innocuous means your prod resources are now compromised.
Some second-tier providers do better here: OVH and Scaleway both have some concept of ACLs, but DO doesn't even try.
Apologies for the slight rant but it's frustrating to see something so basic overlooked by a company the size of DO.
I agree, probably the biggest area where DigitalOcean is lacking. I think the only workaround is creating several projects? But that probably becomes annoying and hard to manage quite quickly if you want a lot of separation.
Agreed. We solved it at PhishDeck back in the day through the use of multiple projects, i.e., Platform Production, Platform Staging, Tooling Production, Tooling Staging, etc.
Still not ideal, but it minimised security risk and kept things organised.
S3 (and similar object stores) has been behind plenty of security incidents, usually because of misconfigured buckets making all contents publicly available. Given this, you would expect companies to pay a bit more attention to the security of these storage services.
However, in DigitalOcean, by design, you can't restrict keys to certain buckets. Once you issue a key, it can access all buckets within your project, with all operations (list, read, delete) on all files.
The issue has been reported and is known for a long time. It's even the top voted "idea" in the company portal.
Posting this here in the hope of bringing awareness to this issue.
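For contrast, here is a minimal sketch of the per-bucket scoping being asked for, written as an AWS-style IAM policy (DO Spaces has no equivalent; the bucket name is a placeholder, not something from this thread):

```python
# Hypothetical per-bucket policy in AWS IAM style, built as a Python dict.
# On AWS you can attach this to a user so their key touches ONLY this bucket;
# a DO Spaces key has no such scoping and reaches every bucket on the account.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-app-bucket",    # the bucket itself (for ListBucket)
            "arn:aws:s3:::my-app-bucket/*",  # the objects inside it
        ],
    }],
}
print(json.dumps(policy, indent=2))
```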
I'm afraid you broke the site guidelines by using the submission title to editorialize. Please see https://news.ycombinator.com/newsguidelines.html: "Please use the original title, unless it is misleading or linkbait; don't editorialize."
If you want to say what you think is important about a page, that's fine, but do it by adding a comment to the thread. Then your view will be on a level playing field with everyone else's: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so.... Alternatively, you could have made a text post to communicate what you think is important about the issue, and then your title would have been fine.
What's not ok on HN is to use the title field as a way to comment on an issue, because then the submitter gets a privileged position relative to other commenters. Since the title is by far the largest influence on a thread, we want to avoid that. Being the submitter of a page shouldn't confer any extra special power over the discussion.
Things like these are why I stopped using digitalocean.
They had issues with DNS PTR records for years, too. Same for initial auth tokens being global. I had hopes for them and at one time used them exclusively.
As a user, you can work around this by running a proxy that sits in front and manages ACLs.
Obviously you have to pay for the resources for that proxy, and it's probably going to want a large bandwidth allocation. Luckily, DO doesn't charge for bandwidth.
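The core of that proxy idea can be sketched in a few lines: issue your own per-service keys, check each request against an allowlist, and only then forward it using the single all-powerful DO key kept server-side. The key names and policy layout below are made up for illustration:

```python
# Hypothetical per-key allowlist for a DIY ACL proxy in front of Spaces.
# Maps a proxy-issued key to the buckets and operations it may touch.
POLICY = {
    "ci-uploader": {"buckets": {"build-artifacts"}, "ops": {"PUT"}},
    "backup-reader": {"buckets": {"db-backups"}, "ops": {"GET", "LIST"}},
}

def is_allowed(api_key, bucket, op):
    """Return True if this proxy-issued key may perform `op` on `bucket`."""
    rule = POLICY.get(api_key)
    return bool(rule) and bucket in rule["buckets"] and op in rule["ops"]

# A real proxy would verify a request signature rather than a bare key,
# then re-sign the request with the DO credential before forwarding it.
```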
DO has had a terrible attitude to security for as long as they've been around. I reported a major data leak vulnerability to them and they told me it was operating as intended, so I published it, and then they accused me of irresponsible disclosure, while simultaneously claiming that there was no security issue. This was a dozen years ago and the founders were involved in the response.
They are at the top of the list of companies with which I will never do business.
Wait till you hear that when you generate a DO container registry key, you get a key with access to all of the resources on the account, not only the registry.
I haven't used DO Spaces for a good while as I've shifted to B2 as cheaper S3-compatible storage that suits my archival workload.
However I absolutely would be using Spaces as storage alongside any apps running on DO, and would have assumed that all the usual per-bucket permissions I'm used to elsewhere were present, so thanks for the heads up!
Nope, this post is from 2021 pretty much begging that DO provide per-bucket level access keys for Spaces. Otherwise, right now, all keys created for an account can access all buckets on that account.
I unfortunately had to ditch DigitalOcean because of a number of small problems like this (not taking permissions seriously is a symptom of a larger problem). Other issues included: you can't assign a floating IP to a load balancer; the managed Kubernetes is embarrassing (to resize nodes, you must destroy and recreate the whole cluster); paltry volume IOPS; minimal DNS support; and more.
Another reason DigitalOcean should not be used for production, or as your sole provider.
I am looking into Backblaze and other providers.
Seriously, DigitalOcean has a critical flaw in their operations management: they can delete all of your long-standing droplets even with a valid credit card on file and zero issue with the conduct of the droplets or with the card.
Aside from this, their Spaces service has had multiple outages, the service seems to be pretty unreliable in general (definitely compared to S3). A quick glance of their NYC3 status suggests they may have finally improved this recently, but I don't know what use case I would have for a version of S3 that goes down multiple times a year.
Indeed - I used to use Spaces for a project but found that not just the uptime but the overall reliability and performance were, at the time at least, markedly worse than AWS.
I don't know if they've improved it, but 5 years ago I basically couldn't delete a DO Space because they had no way of doing a mass deletion. I even got technical support who basically told me I was doing it the best way possible. My delete operation took several months to run.
That is a limitation that comes along with having to be S3 compatible; the best you can do is to combine ListObjectsV2[1] + DeleteObjects[2] (1000 objects in a single API call).
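The ListObjectsV2 + DeleteObjects approach can be sketched roughly like this, assuming a boto3-style client and placeholder bucket/endpoint names:

```python
# Sketch of the fastest portable way to empty an S3-compatible bucket:
# page through keys with ListObjectsV2, then remove them with DeleteObjects,
# which accepts at most 1000 keys per call.

def batches(items, size=1000):
    """Yield chunks of at most `size` items (the DeleteObjects limit)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def empty_bucket(s3, bucket):
    """Delete every object in `bucket` using any boto3-style S3 client."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
        for chunk in batches(keys):
            s3.delete_objects(Bucket=bucket, Delete={"Objects": chunk})

# Example usage against a placeholder Spaces endpoint (untested here):
# import boto3
# s3 = boto3.client("s3", endpoint_url="https://nyc3.digitaloceanspaces.com")
# empty_bucket(s3, "my-old-space")
```

Even batched, this is one round trip per thousand objects, which is why emptying a very large bucket can still take a long time.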
I used to use DO Spaces and Backblaze, but then I found Wasabi and have since switched everything over. It has pretty much full S3 compatibility and doesn't use MinIO or Ceph. Mostly I just keep adding data and very rarely delete any.
The point is that there are no bucket-specific keys, only access keys that cover all buckets. There is no way to issue a key scoped to a single bucket. I'm not sure how the title could be misleading.
Whilst AWS is more expensive and perhaps over-engineered for our use-case, it was absolutely essential to have some kind of ACLs.
I mean, I can't exactly give my junior dev access to DO if it means he can also delete the entire production database and all backups with a misclick.
I honestly find it hard to understand why this kind of basic security feature wasn't in GA.
[1]: https://community.cloudflare.com/t/r2-token-per-bucket/38906...
https://news.ycombinator.com/item?id=22490390
https://www.openwall.com/lists/oss-security/2023/09/26/10
---
A flaw was found in Ceph RGW. An unprivileged user can write to any bucket(s) accessible by a given key if a POST's form-data contains a key called 'bucket' with a value matching the name of the bucket used to sign the request.
The result of this is that a user could actually upload to any bucket accessible by the specified access key as long as the bucket in the POST policy matches the bucket in said POST form part.
We have assigned it a CVE of CVE-2023-43040 and the patch is attached.
Credits to Lucas Henry of Digital Ocean.
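A conceptual reading of that advisory (this is illustrative Python, not the actual Ceph RGW code): the vulnerable path compared the form's 'bucket' field against the bucket the POST policy was signed for, but never against the bucket the request actually targeted.

```python
# Illustrative sketch of CVE-2023-43040 as described above, NOT real RGW code.

def vulnerable_check(target_bucket, signed_bucket, form_fields):
    # Flawed: passes whenever the attacker sets the form's 'bucket' field to
    # the signed bucket's name, regardless of where the upload really goes.
    return form_fields.get("bucket") == signed_bucket

def fixed_check(target_bucket, signed_bucket, form_fields):
    # The destination itself must match the bucket the policy was signed for.
    return target_bucket == signed_bucket
```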
[1] https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObje...
[2] https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteOb...
An access request needs to be created, validated, logged, processed, and eventually expired or revoked.
Or just validated, processed, and logged.
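That lifecycle can be sketched as a small state machine; the state names follow the comment above, and the allowed transitions are my own illustrative assumption:

```python
# Minimal sketch of an access-request lifecycle as a state machine.
from enum import Enum

class AccessRequest(Enum):
    CREATED = "created"
    VALIDATED = "validated"
    PROCESSED = "processed"
    EXPIRED = "expired"
    REVOKED = "revoked"

# Which states each state may legally move to (logging happens on every move).
ALLOWED = {
    AccessRequest.CREATED: {AccessRequest.VALIDATED},
    AccessRequest.VALIDATED: {AccessRequest.PROCESSED},
    AccessRequest.PROCESSED: {AccessRequest.EXPIRED, AccessRequest.REVOKED},
}

def can_move(current, nxt):
    """True if the transition current -> nxt is permitted."""
    return nxt in ALLOWED.get(current, set())
```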
Tailscale uses it as the main host for their repos, and it's downright annoying with no CDN and slowdowns occurring often.