top | item 36353161


hdkrgr | 2 years ago

I do think this will be a useful metric, and it seems obvious that the hyperscalers will have a feature helping you keep track of energy use and emissions of the resources you rented. But why demand this on the level of an individual model/product? For these foundation models, I think it's reasonable to assume they will all be trained on hyperscaler-provided gpu-clusters, so there'll likely be an off-the-shelf funcitonality by AWS/Azure/GCP to report this number, but the draft of the EU AI Act also demands tracking energy use for other 'high-risk' AI systems which companies may plausibly train and/or deploy on-prem. Good luck tracking the per-token energy use of your model that's running on some on-prem server on last-gen GPUs.



Dylan16807|2 years ago

Especially for a server GPU, looking up watts and multiplying by time per token should give you a pretty good number.
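The watts-times-time estimate Dylan16807 describes can be sketched in a few lines; the 300 W board power and 50 tokens/s figures below are hypothetical placeholders, not numbers from any specific GPU:

```python
# Back-of-envelope energy-per-token estimate (all numbers hypothetical).
def joules_per_token(gpu_power_watts: float, seconds_per_token: float) -> float:
    """Energy = power * time; 1 watt running for 1 second consumes 1 joule."""
    return gpu_power_watts * seconds_per_token

# e.g. a 300 W server GPU producing 50 tokens/s:
energy = joules_per_token(300.0, 1 / 50)  # 6 J per token
# Convert to the billing-friendly unit: 1 kWh = 3.6e6 J.
kwh_per_million_tokens = energy * 1_000_000 / 3.6e6
```

At those assumed numbers that works out to well under 2 kWh per million tokens, which gives a sense of the order of magnitude such a report would contain.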

hdkrgr|2 years ago

Sure... but maybe the GPU is sitting idle 40% of the time while still drawing 200W. Should I have to attribute that idle energy consumption to actual use (assuming the server/GPU is only used for this one model)? I guess it would make sense, but... WHO should do this, and then continually update the model documentation whenever idle rates or the hardware change?
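The amortization hdkrgr is asking about is simple arithmetic once you pick a utilization figure; this sketch just makes the accounting explicit, and every number in it (300 W active, 200 W idle, 60% utilization, 50 tokens/s) is a hypothetical assumption:

```python
# Spreading idle power draw over the tokens actually served (hypothetical numbers).
def effective_joules_per_token(active_watts: float, idle_watts: float,
                               utilization: float,
                               tokens_per_active_second: float) -> float:
    # Average power over one wall-clock second, weighted by active vs idle time.
    avg_power = utilization * active_watts + (1 - utilization) * idle_watts
    # Tokens are only produced during the active fraction of that second.
    tokens_per_second = utilization * tokens_per_active_second
    return avg_power / tokens_per_second

# 60% utilized, 300 W active, 200 W idle, 50 tokens/s while active:
j = effective_joules_per_token(300.0, 200.0, 0.6, 50.0)
```

With these assumptions the effective figure comes out noticeably higher than the active-only estimate, and it shifts whenever utilization does, which is exactly the maintenance burden the comment is pointing at.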