cshores | 5 months ago
On the energy side, Google recently estimated that an average Gemini inference consumes around 0.24 Wh, roughly the same as running a microwave for one second. Older rule-of-thumb comparisons put the figure closer to 3–6 seconds of microwave use, or about 0.8–1.7 Wh per prompt. Applying those numbers to U.S. usage (call it roughly one prompt per person per day, about 330 million prompts) gives somewhere between 79 MWh and 550 MWh per day nationally, which translates to only a few to a few dozen megawatts of continuous load. Spread across the population, that works out to between 0.09 and 0.6 kWh per person per year: pennies' worth of electricity, comparable to a few minutes of running a clothes dryer. The bigger concern for the grid is not individual prompts but the growth of AI data centers and the energy cost of training ever-larger models.
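The arithmetic can be checked with a quick back-of-envelope script. Note the prompts-per-day figure is an assumption inferred from the stated totals (79 MWh / 0.24 Wh ≈ 330 million), not something the original estimates state directly:

```python
# Back-of-envelope check of the per-prompt energy figures.
# ASSUMPTION: ~330 million U.S. prompts/day (roughly one per person per day),
# inferred from the quoted 79-550 MWh/day range, not from a published source.
PROMPTS_PER_DAY = 330e6
WH_PER_PROMPT = (0.24, 1.67)  # Google's Gemini estimate vs. older rule of thumb

for wh in WH_PER_PROMPT:
    mwh_per_day = PROMPTS_PER_DAY * wh / 1e6        # Wh -> MWh, national daily total
    continuous_mw = mwh_per_day / 24                # average continuous grid load
    kwh_person_year = wh * 365 / 1000               # one prompt/day, per person
    print(f"{wh} Wh/prompt -> {mwh_per_day:.0f} MWh/day, "
          f"{continuous_mw:.1f} MW continuous, "
          f"{kwh_person_year:.2f} kWh/person/yr")
```

The low case comes out to about 79 MWh/day, 3.3 MW continuous, and 0.09 kWh per person per year; the high case to about 551 MWh/day, 23 MW, and 0.61 kWh, matching the ranges quoted above.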