Holy crap! They're actually doubling the pricing (for some important products)!
I actually followed the links and found this:
> Coldline Storage Class B operations pricing will increase from $0.05 per 10,000 operations to $0.10 per 10,000 operations.
> Coldline Storage Class A operations pricing in regions will increase from $0.10 per 10,000 operations to $0.20 per 10,000 operations.
> Coldline Storage Class A operations pricing in multi-regions and dual-regions will increase from $0.10 per 10,000 operations to $0.40 per 10,000 operations.
> For all other storage classes, Class A operations pricing in multi-regions and dual-regions will increase to be double the Class A operations pricing in regions. For example, Standard Storage Class A operations in multi-regions and dual-regions will increase from $0.05 per 10,000 operations to $0.10 per 10,000 operations.
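To put the quoted changes side by side, here's a quick back-of-the-envelope sketch (Python; the one-million-operations volume is an arbitrary example, and the prices are only the ones quoted above):

```python
# Back-of-the-envelope comparison of the quoted operation prices.
# All prices are USD per 10,000 operations, taken from the announcement above.
changes = {
    "Coldline Class B (regions)":           (0.05, 0.10),
    "Coldline Class A (regions)":           (0.10, 0.20),
    "Coldline Class A (multi/dual-region)": (0.10, 0.40),
    "Standard Class A (multi/dual-region)": (0.05, 0.10),
}

ops_per_month = 1_000_000  # arbitrary example volume

for name, (old, new) in changes.items():
    old_cost = ops_per_month / 10_000 * old
    new_cost = ops_per_month / 10_000 * new
    print(f"{name}: ${old_cost:.2f} -> ${new_cost:.2f} per month ({new / old:.0f}x)")
```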
This announcement is just eyewash to hide the fact that they're doubling prices for some products. And they claim most customers will see a cost decrease.
Sigh, I was just thinking of moving all my stuff, projects and even websites from cloud-hosted solutions to my own home server, slapping a cache like Cloudflare on top of it, and calling it a day. This is only pushing me in that direction, haha.
Reference: https://cloud.google.com/storage/pricing-announce
Google Cloud has to be the most confusing product suite known to mankind. What an unbelievable mess.
After merging two companies I had to move a bunch of stuff over to a new bank account. Three weeks later I'm still not 100% sure that I got it all; the interfaces are so opaque and the different ways in which you can get billed so confusing (never mind the bills themselves) that it is nearly impossible to get a clear picture.
This does not feel like it is an accident, and this message is very much in line with that.
I always wonder how such systems come about. The number of confusing error messages you have to deal with for pretty basic stuff is off the scale. You can name anything, except of course when it actually matters, and then only some cryptic UID is shown. Don't get me started on users and permission management, or how it is perfectly possible to orphan an entire project[1] if a person leaves your org. (GSuite and GCP may superficially appear to share a bunch of stuff, but that just sets you up for some very cute surprises, from which it can be extremely difficult to recover.)
[1] https://cloud.google.com/resource-manager/docs/project-suspe...
My intuition, backed by some experience sitting in meetings with GCP folks, is that their engineering teams don't sufficiently dogfood their own products end-to-end (e.g. including billing) on a daily basis, the way their customers have to.
The number of blank stares and "Oh..."s that happened when they were asked about relatively simple, everyone-would-need-it use cases for management, visibility, etc. was mind-boggling.
GCP feels like Google rediscovering being the Microsoft of the 1990s. If you have strong product teams but no strong overarching experience teams, your resulting system is going to be a hash of well-polished but distinct products with an extremely ugly unification layer.
A fair amount of my own confusion with GCP's offerings comes from their decision not to use proper names for their services.
AWS may have arbitrary names that don't follow any patterns, and Azure may have names that are grandiose, but at least with both of those clouds you know that they will always capitalize the names of their services/products in documentation. There's no confusion about whether they are talking about a load balancer in the abstract or their specific managed offering.
> how it is perfectly possible to orphan an entire project[1] if a person leaves your org
Maybe not the best example, since this one (unlike other IAM oddities) actually makes sense: it can only happen when you don't have a top-level org tied to a project, e.g. when you use a gmail.com account to spin up GCP resources. Inside a GSuite org this is not the default, and I can't imagine how it'd happen by accident.
If your project is not attached to an org, and all the accounts tied to it are gone, then what else do you expect?
> Gsuite and GCP may superficially appear to share a bunch of stuff but that just sets you up for some very cute surprises, from which it can be extremely difficult to recover
The way it's implemented is actually quite nice for complex scenarios/defense in depth - for instance, you can set it up such that whoever owns the GSuite org does not automatically get access to all GCP resources. Of course, any security measures good enough to restrict an org admin's privileges also have the potential of locking yourself out in a way that's semi-irrecoverable.
I can't believe how confusing all these cloud products make it to do the most basic things. It really makes me appreciate Cloudflare; they seem to do a really great job with their UIs.
I contacted support, and the first thing they asked is which browser I'm using. Brave. Turned off the shield and everything magically worked. I got a small laugh out of that one.
I'd say Azure is the worst. It's like the Windows control panel with 10 times more items, except you don't have the muscle memory from older Windows to navigate it.
No, that's AWS.
And then there was some edge case where I was charged on an account I had closed, because some subscription was still left open while I could no longer log in...
It would be nice if most of Google ran off GCP, but alas, few marquee Google properties run on GCP. Ever notice how Google Search, Gmail, etc. are usually just fine during a GCP outage?
> Will customers’ bills increase? Decrease? The impact of the pricing changes depends on customers’ use cases and usage. While some customers may see an increase in their bills, we’re also introducing new options for some services to better align with usage, which could lower some customers’ bills. In fact, many customers will be able to adapt their portfolios and usage to decrease costs. We’re working directly with customers to help them understand which changes may impact them.
There is a zero percent chance they haven't run the analysis and concluded what % of customers would see a bill increase. It's high. If it's low-to-zero, cloud companies are clear about how the prices are changing, and usually outline how many customers would be negatively impacted. If it's high, they're ambiguous about what is changing, and shift the blame onto customers: if it's still expensive for you, you're just not using it right.
More specifically: if the majority of customers were going to see lower bills from Google, even if the top N% would see higher bills, you can bet that the headline would be "New pricing structure reduces bills for most customers".
> There is a zero percent chance they haven't run the analysis and concluded what % of customers would see a bill increase.
Zero percent is correct. I'm a GCP customer, and today I received an email from Google with a table explaining precisely how my bill would change, with columns labeled, e.g., "List Price $ increase in monthly bill due to data replication", and a corresponding dollar amount. My bill will increase by 5% overall if I don't make any changes.
So, your bill is going up.
> we’re also introducing new options for some services to better align with usage, which could lower some customers’ bills. In fact, many customers will be able to adapt their portfolios and usage to decrease costs.
The increases mostly hit you if you are:
- Using multi-regional storage (used to be a 30% premium, now much more)
- Making lots of object writes (Class A) vs lots of reads (Class B)
So, if you can:
- Move to regional or dual-region (NAM4) storage
- Snowball your writes into bigger overall objects, <3 immutability (rough numbers in the sketch after this list)
- Keep your data in the same region as access
Then you can reduce the impact here.
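A toy illustration of the write-snowballing point, using the new Standard multi-region Class A price quoted above; the object counts are made up:

```python
# Class A (write) operations are billed per request, not per byte, so batching
# many small writes into fewer large objects cuts the operations bill directly.
# Price below is the new Standard multi-region Class A rate quoted earlier.
price_per_10k_ops = 0.10  # USD per 10,000 Class A operations

def write_op_cost(objects_written: int) -> float:
    """Cost of the write operations alone (storage and egress not included)."""
    return objects_written / 10_000 * price_per_10k_ops

print(write_op_cost(100_000_000))  # 100M tiny objects -> $1000.00
print(write_op_cost(1_000_000))    # same data in 1M larger objects -> $10.00
```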
They are also closing the Coldline/Nearline loophole, where you could use a bucket lifecycle policy to keep your objects in Standard (cheap access) for a few weeks or months and then move them to a cheap long-term storage tier (Nearline/Coldline). That move is itself another Class A operation, and it just got a lot more expensive. This is in line with two years ago, when they quietly moved lifecycle operation pricing from the origin tier (e.g. cheap Standard storage) to the destination tier (e.g. much more expensive Coldline), cutting down on the savings of tier jockeying.
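For reference, the kind of lifecycle rule being discussed looks roughly like this (a minimal sketch; the bucket name and the 90-day threshold are placeholder examples):

```python
import json

# Example lifecycle rule of the kind being discussed: after 90 days (an
# arbitrary threshold), objects transition to Coldline. Under the new pricing,
# that SetStorageClass transition is a Class A operation billed at the
# destination class's (Coldline) rates.
lifecycle = {
    "rule": [
        {
            "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
            "condition": {"age": 90},
        }
    ]
}

# Write it out in the JSON format accepted by:
#   gsutil lifecycle set lifecycle.json gs://YOUR_BUCKET
with open("lifecycle.json", "w") as f:
    json.dump(lifecycle, f, indent=2)
```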
They are expecting behavior to change based on the new prices, which is why they have to be vague and can't precisely predict what the final cost to customers will be.
This seems to be the biggest deal, a few links away.
> Reading data in a Cloud Storage bucket located in a multi-region from a Google Cloud service located in a region on the same continent will no longer be free; instead, such moves will be priced the same as general data moves between different locations on the same continent.
If I understand correctly (do I?), this means that storing frequently used data in a multi-region bucket is suddenly very expensive — we go from paying $0 to $0.02/GB. Reading 10TB / hour goes from $0/year to $1.75M/year.
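The arithmetic does check out, give or take rounding:

```python
tb_per_hour = 10
hours_per_year = 24 * 365        # 8,760
gb_per_tb = 1_000                # decimal TB; GCS bills per GB
price_per_gb = 0.02              # the new same-continent rate discussed above

annual_cost = tb_per_hour * hours_per_year * gb_per_tb * price_per_gb
print(f"${annual_cost:,.0f} per year")  # $1,752,000 per year
```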
We can switch to single-region buckets, but it's quite an effort to move all the data.
Who cares about DR, and having 3x copies of your data 100mi apart from each other? Small startups, or Enterprises? Enterprises can just push those costs to their DR budget.
The fire last year at OVH was an impressive demonstration that it is not a good idea to have your data in only one region. So don't do that; stick to multi-region.
I actually got excited, thinking that this would be another cut to egress pricing in order to compete with Cloudflare and AWS. AWS just significantly improved its egress pricing to compete with Cloudflare, so it seemed like an obvious next step for the other clouds to do the same.
Instead, huge price increases? That's... confusing. I honestly wonder if Google wants to kill off Cloud, given how much money they lose on it every year.
According to Google's own calculations (in the email they sent about the price changes), this will increase our GCS bill by about 400% (and our entire Google Cloud bill by about 60%).
It would seem that we have until October to move elsewhere... :(
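A quick sanity check on what those two numbers together imply about their bill mix, assuming "+400%" means the GCS line item becomes 5x and the rest of the bill is unchanged:

```python
# What "+400% on GCS" and "+60% overall" together imply about the bill mix,
# assuming the non-GCS part of the bill stays the same.
gcs_multiplier = 5.0    # "+400%" -> the GCS line item becomes 5x
total_multiplier = 1.6  # "+60%" on the whole bill

# Solve f * 5.0 + (1 - f) * 1.0 = 1.6 for f, the GCS share of the old bill:
f = (total_multiplier - 1) / (gcs_multiplier - 1)
print(f"GCS was ~{f:.0%} of the old bill")                                # ~15%
print(f"...and ~{f * gcs_multiplier / total_multiplier:.0%} of the new")  # ~47%
```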
> It would seem that we have until October to move elsewhere
The biggest fear, especially with this class of infrastructure (long-term cold storage), is that they can make it too expensive to leave at any time by upping the retrieval/egress costs. How expensive is that move going to be?
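For a rough sense of scale, using the internet egress rates cited elsewhere in this thread ($0.085/GB standard tier, $0.12/GB premium); real rates vary by volume and destination, and cold-storage retrieval fees come on top:

```python
# Very rough egress cost to move an archive out of GCS entirely.
archive_tb = 100  # hypothetical archive size

for tier, rate in [("standard", 0.085), ("premium", 0.12)]:
    cost = archive_tb * 1_000 * rate
    print(f"{tier} tier: ~${cost:,.0f} to move {archive_tb} TB")
# ~$8,500 and ~$12,000 respectively, before any Coldline retrieval fees
```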
Google as an organization seems hellbent on teaching their users not to rely on them. On the consumer side it's by rapidly abandoning products, on the cloud side it's by dramatic price increases.
I think this is the third time we've been slapped with a new charge for something that used to be free. (In this case, egress from multi-region storage to a local region.) That's not going to burn us super hard, but maybe it's only a matter of time before they add a new charge that hikes our bill by 50%.
We moved off Google Cloud Functions after they became 10x more expensive for us.
They first introduced Container Registry, which made us pay for storage (before, you only paid for invocation and egress):
> If your functions are stored in Container Registry, you'll see small charges after you deploy because Container Registry has no free tier. Container Registry's regional storage costs are currently about $0.026 per GB per month.
Recently they sent an email telling us that new functions are going to use “Artifact Registry” and prompting us to migrate our old functions:
> Cloud Functions (2nd gen) exclusively uses Artifact Registry.
Artifact Registry price: $0.10 per GB per month
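Using the two per-GB-month prices mentioned here, the registry switch alone is roughly a 3.8x increase on image storage (the 10 GB figure is a made-up example):

```python
# Monthly image-storage cost under each registry, using the two per-GB-month
# prices mentioned above.
container_registry = 0.026  # USD per GB per month
artifact_registry = 0.10    # USD per GB per month
gb_stored = 10              # made-up example size for deployed function images

print(f"Container Registry: ${gb_stored * container_registry:.2f}/month")  # $0.26
print(f"Artifact Registry:  ${gb_stored * artifact_registry:.2f}/month")   # $1.00
print(f"That's a {artifact_registry / container_registry:.1f}x increase")  # 3.8x
```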
Actually lol’d at “unlock more choice,” - if it’s truly a commodity product we’d expect basically zero margin. Clearly Azure, AWS and GCP are not zero margin, which implies oligopolistic (does Oracle even count?) price coordination for enterprise cloud. (Edited, forgot Azure)
Cloud is not a commodity product. Commodities are easily interchangeable. For the most part, a banana is a banana, a pound of corn is a pound of corn, a ton of steel is a ton of steel. There can be quality variations of course, but at any given level of quality there are still multiple suppliers, and the costs of switching between them are fairly low.
That is not true of the cloud. Every cloud is unique in its own special snowflake ways, the APIs are often fairly different, the switching costs are high, and there is a small number of suppliers.
While I agree that there's a lot of marketing speak here, I have to note that:
1) You wouldn't expect zero margin, you would expect normal margin, that is, these companies should have around the same margin as the average of the rest of the economy.
2) Commodity markets don't have to be low margin, because a commodity market with high market concentration will be a high margin market.
A little surprise hidden away in here: it is currently possible to exfiltrate data from a Cloud Storage bucket at standard tier ($0.085+/GB) instead of premium tier network rates ($0.12+/GB). This is achieved by making the bucket a backend for an external HTTP(S) load balancer [1] ($18/month).
[1] https://cloud.google.com/network-tiers/docs/overview
This announcement adds an additional $0.008+/GB for the cost of outbound data moving through the load balancer, so effectively that's a 9% increase on the standard tier bandwidth pricing.
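Roughly where that 9% comes from, using the rates quoted above:

```python
standard_tier = 0.085  # USD/GB, standard tier egress
premium_tier = 0.12    # USD/GB, premium tier egress
lb_charge = 0.008      # USD/GB, new charge for outbound data through the LB

print(f"Surcharge on standard tier: {lb_charge / standard_tier:.1%}")  # ~9.4%
print(f"Effective: ${standard_tier + lb_charge:.3f}/GB vs premium ${premium_tier:.2f}/GB")
# The trick still undercuts premium tier, just by a smaller margin.
```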
Once again this proves Google is NOT a customer-focused company. These price increases are driven by accounting and are short-term calculations.
As someone who's used AWS for most of my professional career, I've only ever seen prices being reduced to be more competitive rather than increased to 'align' with offerings of other vendors.
Newer generations of compute and storage are often cheaper and faster than previous generations, which shows they are able to invest in technology that makes things cheaper for the customer and cheaper for them to maintain. That's impressive.
I do expect AWS to capitalize on this and persuade GCP customers to switch. I have no idea why GCP thinks that their customers are sticky enough to stay with them through the price increase.
Does AWS have anything similar to Cloud Run?
Has any other large cloud provider increased prices like this? I remember using Google App Engine a while ago and switched to AWS when they increased prices. I don't understand why you wouldn't just set prices higher to begin with and eventually lower them once you get more customers. Other than BigQuery and TPUs, I'm not sure of the advantages of Google Cloud…
Not yet, but with Amazon increasing everyone's salaries, the price of everything in general going up, you're likely to see every large provider raise their prices.
It seems like only the new players will lower prices.
22% increase in some per-GB costs and 50% increase in some per-request costs for the most fungible, commodity service any cloud offers. Really no idea what to make of this. At least it seems reasonable to expect further pricing changes from other clouds in the coming weeks (and knowing AWS, maybe even an announcement in the coming day or two).
So they're raising prices.
> The impact of the pricing changes depends on customers’ use cases and usage. While some customers may see an increase in their bills, we’re also introducing new options for some services to better align with usage, which could lower some customers’ bills. In fact, many customers will be able to adapt their portfolios and usage to decrease costs. We’re working directly with customers to help them understand which changes may impact them.
> Cloud storage and multi-region replication and inter-region access are changing in pricing.
> The introduction of a lower cost option in archive snapshots for Persistent Disk pricing.
> New pricing for Load Balancing (to bring it in line with other providers. Read: very likely AWS pricing)
> A new price for Network Topology; now included in the price are Performance Dashboard and Network Intelligence Center.
All without saying what the new prices will be, so given that this spans several services with varying usage-based prices, it could be a substantial change or not much at all.
Quite a vague and unhelpful post by Google, other than giving you a heads-up not to be surprised by your bill in October.
This seems like GCP is shifting its strategy away from trying to win more market share and catch up with Azure, AWS, and Tencent. Perhaps they realised that this is futile and are now focussing on revenue, milking their existing customer base.
> In fact, many customers will be able to adapt their portfolios and usage to decrease costs.
Sounds like a sneaky way to win some quick revenue: they know a huge number of customers are not going to be able to go back and re-engineer their storage use in time, so those customers will end up out of pocket even while Google gets to pretend it is saving everyone money.
It seems particularly problematic to increase prices on anything to do with long-term cold storage. That is where customers place the most trust in their vendor, since much of this data is held for mandatory compliance reasons and retrieval costs are high enough that it is completely infeasible to migrate out.
> Storage Transfer Service will be available free-of-cost for transfers within Cloud Storage, starting April 2 until the end of the year
Is this a loophole to get free retrieval of data from cold storage that we could exploit for other reasons?
Said Amazon never.