TimSchumann | 1 year ago

Adding up the bandwidth of all the different standards/feature sets a chip supports into a single aggregate number is actually a reasonable way to approximate a chip's total throughput.

Ultimately, unless the chip is oversubscribed (the interfaces collectively promise more bandwidth than the internal fabric can actually sustain), the features are all meant to be usable simultaneously, and the bits being read and written have to come from somewhere.

That somewhere is a share of the chip's total throughput.

Stated another way: people forget that there's almost always a single piece of silicon backing the total bandwidth of a modern computing device, regardless of which ‘standard’ is being used.
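
As a rough illustration, here's a minimal sketch of that aggregation in Python. The interface names, bandwidth figures, and fabric limit are hypothetical placeholders, not specs from any real chip; the fabric check models the oversubscription caveat above.

  # Back-of-the-envelope aggregation of per-interface bandwidth.
  # All figures are hypothetical placeholders, not real chip specs.

  # Peak bandwidth per supported standard/feature set, in GB/s.
  interfaces = {
      "memory_controller": 100.0,
      "pcie": 32.0,
      "usb": 5.0,
      "display": 8.0,
  }

  # Hypothetical limit of the chip's internal fabric, in GB/s.
  fabric_limit = 120.0

  # The single aggregate number: the sum of what every interface can move.
  aggregate = sum(interfaces.values())
  print(f"Aggregate interface bandwidth: {aggregate:.1f} GB/s")

  # If the interfaces collectively promise more than the fabric can carry,
  # the chip is oversubscribed and the aggregate overstates real throughput.
  if aggregate > fabric_limit:
      print(f"Oversubscribed: fabric caps sustained throughput at {fabric_limit:.1f} GB/s")
  else:
      print("Fabric can sustain all interfaces at full rate simultaneously")

  # Each interface's share of the total, per the "% of the total
  # throughput" framing above.
  for name, bw in interfaces.items():
      print(f"  {name}: {bw / aggregate:.1%} of aggregate")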
