top | item 41235500


ai4ever | 1 year ago

This is nice, but not very useful to me. What would be more useful in my use case is tooling to audit my cloud accounts periodically and reclaim garbage that isn't used, or re-optimize my usage (smaller instances or databases, depending on usage patterns), which directly saves me money.


rumno0 | 1 year ago

I think this is the next natural evolution: bring in usage information directly from the cloud accounts, then offer right-sizing suggestions and, like you say, reclaim garbage on demand.

Definitely one for the roadmap!

nathanwallace | 1 year ago

For runtime cost analysis, you could try Steampipe [1] with its Powerpipe "thrifty" [2] mods. They run dozens of automatic checks across cloud providers for waste and cost-saving opportunities.

If you want to automatically make these changes (with optional approval in Slack) you can use the Flowpipe thrifty mods, e.g. AWS [3].

It's all open source and easy to update / extend (SQL, HCL).

[1] https://github.com/turbot/steampipe
[2] https://hub.powerpipe.io/?objectives=cost
[3] https://hub.flowpipe.io/mods/turbot/aws_thrifty

aduwah | 1 year ago

Steampipe is amazing. I've been using it daily for about four months now.

Gasp0de | 1 year ago

Accurately determining whether we can safely scale down an instance is one of the hardest things we do; I can't think of a way to do it in an automated fashion.

rumno0 | 1 year ago

I agree - there will always be an element of engineering knowledge required.

It's not dissimilar to AWS urging people to use Flex or Graviton instances: only we can decide whether our workload will run appropriately!

tamiral | 1 year ago

Performance testing? Metrics analysis?
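
One hedged sketch of the metrics-analysis angle: look at a high percentile of utilization rather than the average, so that short bursts still count against downsizing. The percentile, threshold, and function names here are arbitrary assumptions for illustration, not any real tool's API.

```python
# Flag an instance as a downsizing candidate only when a high percentile
# of its utilization is low -- averages hide the bursts that make
# automated right-sizing risky. Thresholds are illustrative.

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile (p in [0, 100]) of a non-empty sample list."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

def is_oversized(samples: list[float], p: float = 99.0,
                 threshold: float = 0.3) -> bool:
    """True if even the p-th percentile utilization sits under threshold."""
    return bool(samples) and percentile(samples, p) < threshold
```

An instance idling at 10% CPU with occasional 90% bursts would not be flagged under this rule, which matches the intuition that burst capacity is often exactly what you're paying for.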