Hetzner (a German data center provider) claims to achieve a PUE of 1.1 (https://www.golem.de/news/besuch-im-rechenzentrum-so-betreib...). Admittedly their cloud offerings are quite limited, but I think they are expanding on that front. So it doesn’t seem like only hyperscalers can reach that figure.
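For context, PUE (Power Usage Effectiveness) is just total facility energy divided by IT equipment energy, so 1.1 means roughly 10% overhead for cooling, power distribution, and so on. A quick sketch with made-up numbers (not Hetzner's actual figures):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power. Ideal is 1.0."""
    return total_facility_kw / it_load_kw

# Hypothetical facility: 1 MW of IT load plus 100 kW of cooling,
# power distribution losses, lighting, etc.
print(pue(1100.0, 1000.0))  # -> 1.1, i.e. 10% overhead on top of IT load
```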
Hetzner runs an intentionally primitive shop. Famously, one of their (historic) cheapest offerings was desktop "servers" on wooden shelves with loose, improvised cabling. So much of what you'd expect in the way of UPSes, PDUs, monitoring, managed airflow, etc. just isn't there, which keeps PUE low.
Yes, and I think this calls for a more sophisticated analysis than PUE gives us, because a shop like Hetzner puts more of the burden for reliability and availability on the customer, compared to an Amazon or Google, who internalize as much of the redundancy and replication as they can manage.
An example of where the PUE analysis really fails: say I have two facilities, one on each American coast, operating in a primary-spare arrangement. This is far, far less energy efficient than if I have 20 datacenters all over the place and am prepared to lose 2 of them at any time. In the latter architecture I am using much less energy while enjoying much better reliability. PUE does not capture this kind of architectural waste. It also fails to reflect the problem of burning megawatts because you are running your log-analysis pipeline in Perl or whatever.
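Back-of-the-envelope version of that comparison (my own illustrative arithmetic, not from any real deployment): a primary-spare pair keeps half its provisioned hardware idle, while 20 sites sized to survive the loss of any 2 keep about 90% of capacity doing useful work.

```python
def useful_fraction(n_sites: int, tolerated_failures: int) -> float:
    """Fraction of provisioned capacity doing useful work, given n sites
    of which up to `tolerated_failures` may be lost at any time (so that
    many sites' worth of capacity is held in reserve)."""
    return (n_sites - tolerated_failures) / n_sites

print(useful_fraction(2, 1))   # primary-spare pair -> 0.5
print(useful_fraction(20, 2))  # 20 sites, lose any 2 -> 0.9
```

Same survivability story, nearly twice the useful work per watt of provisioned capacity, and PUE is blind to the difference.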
c4mpute|2 years ago
jeffbee|2 years ago