joshjob42 | 8 months ago
Another way for intelligence to get too cheap to meter is for the cost to fall so low that it becomes hyperabundant. Take AI 2027 as a benchmark, for instance, and suppose we ultimately achieve something like John von Neumann in a box: a 2T-dense-equivalent-parameter model matching a Nobel laureate's productivity while running inference at, say, 15 tokens a second (about as fast as people can read). In principle you would then need only 60 teraflops of AI inference compute, which is roughly 2x the current Apple Neural Engine. So plausibly by the time you get to the 2030s, every laptop, smartphone, etc. will easily be able to run models as powerful as the smartest people.
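The 60-teraflop figure follows from the standard rough estimate of ~2 FLOPs per parameter per generated token for a dense transformer; a quick sketch of that arithmetic, using the comment's assumed numbers:

```python
# Back-of-envelope: inference compute for a hypothetical 2T-parameter dense model.
# Assumes ~2 FLOPs per parameter per token (common transformer inference estimate).
params = 2e12                       # 2T dense-equivalent parameters (assumed)
flops_per_token = 2 * params        # ~4 TFLOP per generated token
tokens_per_sec = 15                 # roughly human reading speed (assumed)

required_tflops = flops_per_token * tokens_per_sec / 1e12
print(required_tflops)              # -> 60.0 TFLOPS
```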
Somewhat longer term, I'm sure Altman expects the entire process to be automated and computational efficiency to rise significantly. Taking recent estimates from various players in the reversible-computing space, you'd guesstimate that by the late 2030s you ought to be able to do 60 TFLOPS at under 0.1 W, or ~1 kWh/yr, which Helion could produce for ~1¢. A year of cognitive labor from the smartest person for a penny or two does, I think, render intelligence too cheap to meter on a per-hour basis.
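The ~1 kWh/yr figure checks out from the assumed 0.1 W power draw; a minimal sketch, taking the 0.1 W and the ~1¢/kWh Helion price as given assumptions:

```python
# Annual energy and cost of running 60 TFLOPS continuously at an assumed 0.1 W
# (hypothetical late-2030s reversible-computing efficiency).
power_watts = 0.1                   # assumed device power draw
hours_per_year = 24 * 365           # 8760 hours

kwh_per_year = power_watts * hours_per_year / 1000
price_per_kwh = 0.01                # ~1 cent/kWh, assumed Helion target price

annual_cost_cents = kwh_per_year * price_per_kwh * 100
print(round(kwh_per_year, 3))       # -> 0.876 kWh/yr, i.e. ~1 kWh/yr
print(round(annual_cost_cents, 2))  # -> 0.88, i.e. roughly a penny per year
```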