top | item 39842342


blaerk | 1 year ago

I really hope the crazy price increase of VMware products will end the use of ESXi and the rest of the vSphere suite; it is one of the worst applications and APIs I have ever had the displeasure of working with!


candiddevmike | 1 year ago

VMware has a track record of pretty great reliability across a vast array of hardware. Yes, the APIs suck, but they're a case study in tech debt: vSphere is basically the Windows of datacenter APIs. They chose the best technology at the time (2009, which meant SOAP, PowerShell, XML, etc.) and had too much inertia to rework it.

mianos | 1 year ago

Not to mention how flaky it is at scale. There is always some VMware guy who replies to me saying how good it is, but if you have thousands of VMs it is a random crapshoot. That is something you just don't see with, say, AWS or Azure at similar scale. It reeks of old age and hack upon hack over many years, and that is saying something when compared to AWS.

oneplane | 1 year ago

The VMware APIs are indeed pretty bad, even the ones on their modern products for some reason (i.e. NSX, etc.), where they did adopt more modern methods but still managed to pull a Microsoft with 'one API for you, a different API for us'.

Being pretty bad doesn't mean they don't work, of course, but when the best a product has to offer is clickops, it missed the boat about 15 years ago.

fh973 | 1 year ago

I really hope that the price increase creates a business opportunity for new technology. This space has been plagued by subpar "free" alternatives (OpenStack, Kubernetes) for a decade.

zettabomb | 1 year ago

I can't agree. VMware was the leader in virtualization technology for a long time, and honestly nothing is quite as simple to start with as ESXi if you've never used a type 1 hypervisor before. I'm not so familiar with the APIs, so perhaps you're correct in that sense.

nolok | 1 year ago

> nothing is quite as simple to start with as ESXi if you've never used a type 1 hypervisor before

Not sure where ESXi is at lately on that level, but the latest Proxmox is really, really simple to start with if you've never used a hypervisor. You boot from the USB drive, press yes a few times, open the ip:port it gives you, and then you can click "create VM", next, next, next, here is the ISO to boot from, and that's it.

Any tech user with some vague knowledge of virtual machines, or who has even just run VirtualBox on their own computer, could do it, and the more advanced functions (from proper backups and snapshots to multi-node replication and load balancing) are absurdly simple to figure out in the UI.

I can't speak to the performance or quality of one against the other, but in pure approachability, Proxmox is doing very, very well.
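For what it's worth, the "next, next, next" UI flow above can also be scripted on the node itself with Proxmox's `qm` CLI. A rough sketch; the VM ID, `local-lvm` storage name, bridge name, and ISO path are assumptions about a typical default install, not anything specific:

```shell
# Create a VM definition: 2 cores, 2 GiB RAM, a 32 GiB disk on the
# default local-lvm storage, NIC bridged onto vmbr0
qm create 100 --name demo-vm --cores 2 --memory 2048 \
  --scsi0 local-lvm:32 --net0 virtio,bridge=vmbr0

# Attach an installer ISO (path assumes it was uploaded to the
# node's "local" storage) and boot the VM
qm set 100 --cdrom local:iso/debian-12.iso
qm start 100
```

The same operations are exposed over the Proxmox REST API, which is what the web UI itself drives.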

moondev | 1 year ago

By application do you mean vCenter? It's in an entirely different league from Proxmox.

https://i0.wp.com/williamlam.com/wp-content/uploads/2023/04/...

https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....

MrDarcy | 1 year ago

It’s not in a different league. I’ve used both in production. As others have said, vSphere breaks down with thousands of VMs, and worse, the vSwitch implementation is buggy and unreliable as soon as you add more than a couple to a cluster.

kazen44 | 1 year ago

I would disagree with you there, especially because there is very little on the SDN front that matches NSX-T in terms of capabilities. This is something in which VMware has been ahead; the only other people with the same capabilities seem to be the hyperscalers.

c0l0 | 1 year ago

Take a look at Proxmox SDN features: https://pve.proxmox.com/pve-docs/chapter-pvesdn.html (some of it is still in beta, I think).

I think it comes pretty close - close enough for probably most but the very largest of users, who, I think, should probably have tried to become hyperscalers themselves, instead of betting the farm and all the land around it on VMware (by Broadcom).

oneplane | 1 year ago

NSX-T and what hyperscalers do is essentially orchestration of things that already exist anyway. The load balancing in NSX is mostly just some OpenResty and Lua, which has been around for quite a while. Classic Q-in-Q and bridging also cover practically all of the classic L2 & L3 networking that tends to be touted as 'new', and you could even do that fully orchestrated back when Puppet was the hot new thing.

Some things (that were created before NSX) may have come from internet exchanges and hyperscalers, like OpenFlow, P4, and FRR, but those were not missing parts required to do software-defined networking. If anything, the only thing you really needed for SDN was Linux, and the only real distinction between SDN and non-SDN was hardwired ASICs in the network fabric (well, not hard-hardwired, but with limited programmability or 'secret' APIs).
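As a concrete illustration of the "you only really needed Linux" point: the classic bridging and Q-in-Q stacking mentioned above is a handful of iproute2 commands. The interface names and VLAN IDs here are made up for the sketch:

```shell
# A software bridge: the basic L2 building block most SDN stacks orchestrate
ip link add br0 type bridge
ip link set br0 up

# Outer (provider) tag using 802.1ad -- the "Q-in-Q" service VLAN
ip link add link eth0 name eth0.100 type vlan protocol 802.1ad id 100

# Inner (customer) 802.1Q tag stacked on top of the outer one
ip link add link eth0.100 name eth0.100.200 type vlan id 200

# Attach the double-tagged interface to the bridge and bring everything up
ip link set eth0.100 up
ip link set eth0.100.200 up
ip link set eth0.100.200 master br0
```

An orchestrator (NSX, OVN, or even old-school Puppet) is largely automating exactly this kind of kernel plumbing across many hosts.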

SV_BubbleTime|1 year ago

We went from $66 last year to $3600 this year.

There won’t be another year.