pravus | 2 years ago

In a data center you have racks of computers performing all of the workloads. At this point these racks are fairly standardized in terms of sizing and ancillary features. They are built out to solve the following:

    * Physical space - The servers themselves require a certain amount of room, and depending on the workloads assigned they will need different dimensions.  These are specified in rack "units" (U) as the height dimension.  The width is fixed and depths can vary but stay within a standard limit.  A rack might have something like 44U of total vertical space, and each server generally takes anywhere from 1-4U.  Some equipment may even go up to 6U or 8U (or more).

    * Power - All rack equipment will require power, so there are generally looms or wiring schemes to run all cabling and outlets for the powered devices in the rack.  For the most part this can be run on or in the post rails and remains hidden other than the outlet receptacles and mounted power strips.  This might also include added battery and power-conditioning systems, which will eat into your total vertical U budget.  Total rack power consumption is a vital figure.

    * Cooling - Most rack equipment will require some minimum amount of airflow or a specific temperature range to operate properly.  Servers have their own fans, but there will also be a need for airflow within the rack itself, and you might have to solve unexpected issues such as temperature gradients from the floor to the ceiling of the rack.  Net heat output from workloads is a vital figure (see the budgeting sketch after this list).

    * Networking - Since most rack equipment will be networked, there are standard cabling and patching schemes built into many racks.  This includes things such as bays for switches, some of which may eat into the vertical U budget.  These switches typically aggregate all rack traffic onto a small number of higher-throughput uplinks that interconnect the rack with the broader network topology.

    * Storage - Depending on the workloads involved, storage may be a major consideration and can require significant space (vertical Us), power, and cooling.  You will also need to take into account the bus interconnects between storage devices and servers.  This may also be delegated to a SAN topology, similar to a network, where dedicated switches connect servers to external storage networks.

These are some of the major challenges with rack-mounted computing in a data center, among many others. What's not really illustrated here is that, since all of this has become so standardized, we can now fully integrate these components directly rather than buying them piecemeal and installing them in a rack.
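
To make the space, power, and cooling budgets above concrete, here is a minimal back-of-the-envelope sketch in Python. The device mix, U heights, and wattages are made-up illustrative assumptions, not figures for any particular vendor's hardware; the only hard constant is the standard watts-to-BTU/hr conversion.

    # Back-of-the-envelope rack budgeting: vertical space (U), power, and heat.
    # Device counts, U heights, and wattages are illustrative assumptions only.
    RACK_UNITS = 44              # total vertical space in this hypothetical rack
    WATTS_TO_BTU_PER_HR = 3.412  # 1 W of electrical load ends up as ~3.412 BTU/hr of heat

    # (name, quantity, height in U, typical draw in watts) - assumed values
    devices = [
        ("1U compute node",    20, 1, 450),
        ("2U storage node",     4, 2, 600),
        ("ToR switch",          2, 1, 250),
        ("UPS / power shelf",   1, 4,   0),  # conditioning gear eats U but adds little load here
    ]

    used_u      = sum(qty * height for _, qty, height, _ in devices)
    total_watts = sum(qty * watts  for _, qty, _, watts  in devices)

    print(f"Vertical space: {used_u}U used of {RACK_UNITS}U ({RACK_UNITS - used_u}U free)")
    print(f"Power budget:   {total_watts / 1000:.1f} kW total draw")
    print(f"Heat output:    {total_watts * WATTS_TO_BTU_PER_HR:,.0f} BTU/hr for cooling to remove")

Real planning would also derate for redundant power supplies and leave headroom for growth, but the basic bookkeeping is about this simple, and it is exactly the kind of arithmetic an integrated rack vendor can do for you up front.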

This is what Oxide has to offer. They have built essentially an entire rack that solves the physical space, power, cooling, networking, and storage issues by giving you a turn-key box you plant in your data center and hook power and network interconnects up to. In addition, because it is a fully integrated solution, they can capture a lot of efficiencies that would be hard or impossible with a traditional piecemeal design.

As someone with a lot of data center experience, I am very excited to see this. It is built by people with the correct attitude toward compute, imo.
