It's good that Google is working to improve in this area. They're light-years behind AWS.
A couple of weeks ago I contacted their support because our newly created Google Cloud Transfer Service jobs were failing with "UNKNOWN ERROR". It turned out to be a permission issue. I pointed out that this must be a bug on their part, since the page where you create transfer jobs says "Creating a transfer grants a Cloud Storage Transfer Service account the necessary source, destination, and project permissions to complete the transfer." Oddly enough, they didn't agree that it was a bug.
Instead, they gave me a gsutil command to update the permissions. To my great surprise, there was no way to set permissions on certain prefixes of the bucket; the command had to loop over ALL items in the bucket and update each object's permissions individually. I pointed out that we have >300 million items in our buckets, so this would incur large I/O costs (changing permissions is a Class A request) and take ages to complete. Apparently there was no other way. The command has now been running for 10 days and it's not even halfway through. It has also crashed on several occasions, forcing me to restart it.
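To put rough numbers on a per-object permission loop at that scale, here's a back-of-envelope estimate. The $0.05 per 10,000 Class A operations price and the ~200 updates/second single-process throughput are assumptions for illustration, not figures quoted from Google's pricing page:

```python
# Back-of-envelope cost/time of updating ACLs on every object individually.
# ASSUMPTIONS (not from Google's docs): Class A operations billed at $0.05
# per 10,000, and a sequential gsutil loop sustaining ~200 updates/second.

OBJECTS = 300_000_000          # ">300 million items" from the comment
PRICE_PER_10K_CLASS_A = 0.05   # assumed Class A price, USD per 10,000 ops
UPDATES_PER_SECOND = 200       # assumed sequential throughput

cost_usd = OBJECTS / 10_000 * PRICE_PER_10K_CLASS_A
days = OBJECTS / UPDATES_PER_SECOND / 86_400

print(f"API cost: ${cost_usd:,.0f}")        # $1,500
print(f"Wall-clock time: {days:.1f} days")  # 17.4 days
```

At these assumed rates the job alone costs on the order of $1,500 in request fees and takes two-plus weeks sequentially, which is consistent with a run that isn't halfway done after 10 days.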
Did they at least comp the costs? When I've been in remotely similar situations with Amazon, they've said "just do a hundred million API calls and we'll drop the fees; we're sorry this is so hard."
I hate Google's approach to authentication. Amazon got it so right with two simple strings and environment variables. By contrast, Google has a mess of OAuth2 certificates, central config files, and a bunch of programs that seem to look in different places for credentials. Since AWS created aws-cli, everything just works predictably. Google really needs to catch up in this area.
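The "two simple strings" model can be sketched in a few lines. This is a deliberately minimal illustration of the environment-variable convention, not boto's actual resolution chain (which also checks config files, instance roles, etc.); the key values below are AWS's documented example credentials, not real ones:

```python
import os

def aws_credentials_from_env(env=os.environ):
    """Minimal sketch of AWS-style credential lookup: two well-known
    environment variables, nothing else."""
    key_id = env.get("AWS_ACCESS_KEY_ID")
    secret = env.get("AWS_SECRET_ACCESS_KEY")
    if not key_id or not secret:
        raise RuntimeError("AWS credentials not set in the environment")
    return key_id, secret

# Using a dict in place of the real environment, with AWS's documented
# example key pair:
creds = aws_credentials_from_env({
    "AWS_ACCESS_KEY_ID": "AKIAIOSFODNN7EXAMPLE",
    "AWS_SECRET_ACCESS_KEY": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
})
```

Every tool that honors this convention finds the same two strings in the same place, which is what makes the AWS side feel predictable.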
FWIW, I was annoyed by this at first, then I started using instance default credentials and realized I love 'em. No need to bundle/package/distribute credentials. As long as the instance is authorized to do something, your app can do it.
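Under the hood, instance default credentials on GCE boil down to asking the local metadata server for a short-lived token, so no key material ever ships with the app. A minimal sketch of building that request; the endpoint and header are Google's documented ones, but actually sending it only works from inside a GCE instance:

```python
import urllib.request

# Documented GCE metadata endpoint for the default service account's
# OAuth2 access token. Only reachable from inside a GCE instance.
METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1"
    "/instance/service-accounts/default/token"
)

def metadata_token_request() -> urllib.request.Request:
    # The Metadata-Flavor header is required; the server rejects
    # requests without it to block accidental/SSRF-style access.
    return urllib.request.Request(
        METADATA_TOKEN_URL, headers={"Metadata-Flavor": "Google"}
    )

req = metadata_token_request()
# On an instance, urllib.request.urlopen(req) returns JSON containing
# an "access_token", an "expires_in" value, and a "token_type".
```

The token is short-lived and scoped by whatever the instance's service account is allowed to do, which is why nothing needs to be bundled or rotated by hand.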
Slightly tangential, but this could be useful to someone: I use AWS IAM as the authentication mechanism for my fun projects, without implementing password management.
This definitely seems like an improvement over the existing permissions system, but it still seems to lack the resource-level granularity to do things like per-bucket permissions in GCS.
It's a little silly to have to give a machine full read access to GCS if it just needs to download some packages/binaries but doesn't need access to things like database backups.
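For contrast, per-bucket granularity on the AWS side looks roughly like this: an IAM policy (shown here as a Python dict rather than raw JSON) that lets a machine read one artifacts bucket and nothing else, so backups stay out of reach. The bucket name is made up for illustration:

```python
# Illustrative AWS-style IAM policy: read-only access to a single
# hypothetical bucket ("example-artifacts"), nothing else. A machine
# given only this policy can fetch packages/binaries from that bucket
# but cannot touch, say, a backups bucket.
read_only_artifacts_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-artifacts/*",
        }
    ],
}
```

Because the Resource field names one bucket's objects, the grant stops exactly at that bucket boundary; that's the granularity the comment is asking for in GCS.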
If a cloud feature is not in the SDK that a programmer uses, then that feature effectively does not exist.
One of the really nice things about Amazon Web Services is that when it releases new features, you can be fairly sure they will be supported in all the SDKs, if not immediately upon announcement then soon after. Does Google have a policy of supporting all cloud features in all its SDKs?
Google - if this is not supported in all your SDKs, then why is it released? It's not really finished until it's in the SDKs, is it?
I'm guessing you mean Amazon? IAM is a standard term that's been used in the dreaded-horrible enterprise for at least 15 years. IBM has Tivoli, Oracle has some IAM middleware a client of mine dropped a few hundred grand on and never used (it's called OIM, because they have to put their name on everything, but it's the same thing), and I'm pretty sure HP has one as well...
https://medium.com/@BalajiJayaraman/i-like-hacking-some-fun-...
I always wanted to practice writing a bit more but have had writer's block. After seeing this topic on HN I felt compelled to write this one. :)