joshjob42 | 8 months ago

Well, Altman is also investing in Helion, which projects to get the price of electricity to ~$10/MWh, but for which, much like solar, wind, and actual nuclear, the cost structure is overwhelmingly dominated by capital and other fixed costs (the cost of uranium or Helion's fuel will be negligible versus capital and manpower). So there's actually a pretty good reason to think that long term, electricity will be so cheap at the margin that it isn't metered but is instead basically bought in chunks of capacity or availability.

Another way for intelligence to get too cheap to meter is for the cost to fall so low that it becomes hyperabundant. If you were to, for instance, take AI2027 as a benchmark and think we'll ultimately achieve something like the equivalent of John von Neumann in a box with a 2T-dense-equivalent-parameter model, matching such a Nobel prize winner's productivity while running inference at, say, 15 tokens a second (as fast as people can read), then in principle you only need about 60 teraflops of AI inference compute, which is roughly 2x the current Apple Neural Engine. So plausibly by the time you get to the 2030s, every laptop, smartphone, etc. will easily be able to run models as powerful as the smartest people.
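To make the arithmetic explicit, here's a back-of-envelope sketch of where the 60 teraflops comes from, assuming the standard rough estimate of ~2 FLOPs per parameter per generated token for a dense transformer (forward pass only; the exact constant is an assumption):

```python
# Back-of-envelope check of the ~60 TFLOPS inference figure.
# Assumes ~2 FLOPs per parameter per token (dense model, forward pass).
params = 2e12          # 2T dense-equivalent parameters
flops_per_param = 2    # multiply + accumulate per weight per token
tokens_per_sec = 15    # roughly as fast as people can read

flops_needed = params * flops_per_param * tokens_per_sec
print(f"{flops_needed / 1e12:.0f} TFLOPS")  # -> 60 TFLOPS
```

That lands at 6e13 FLOP/s, i.e. 60 TFLOPS, on the same order as a current phone-class neural accelerator.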

Somewhat longer term, I'm sure Altman expects the entire process to be automated and the computational efficiency to rise significantly. If you take recent estimates from various players in the reversible computing space, you'd guesstimate that you ought to be able to do 60 TFLOPS by the late 2030s using under 0.1 W, or ~1 kWh/yr, which Helion could produce for ~1¢. I do feel like getting a year of cognitive labor from the smartest person for a penny or two renders intelligence too cheap to meter out on a per-hour basis.
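The penny-per-year claim follows directly from those two inputs; a quick sketch, assuming 0.1 W continuous draw and Helion's projected $10/MWh:

```python
# Energy-cost arithmetic behind the "~1 cent per year" claim.
# Assumptions: 0.1 W continuous draw, electricity at $10/MWh.
power_w = 0.1
hours_per_year = 8760
kwh_per_year = power_w * hours_per_year / 1000   # 0.876 kWh/yr
price_per_kwh = 10 / 1000                        # $10/MWh -> $0.01/kWh
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.2f} kWh/yr -> ${cost_per_year:.4f}/yr")
```

That works out to roughly 0.88 kWh/yr and just under a cent per year of continuous operation.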
