nvch | 7 months ago
Perhaps AI companies don't know how to run continuous learning on their models:
* it's unrealistic to do it on one big model, because its behavior would immediately start drifting in unpredictable directions
* they can't make millions of clones of the model, run them separately, and set them free, the way it happens with humans
Mars008 | 7 months ago
It's likely that in the brain, inference is learning. If you want a technical analog, it's like a conversation with an LLM: previous tokens affect the tokens currently being generated. In other words, it's inference-time (in-context) learning, which is well known and widely used.
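The point about previous tokens shaping later predictions can be sketched with a toy model. This is a hypothetical illustration (a bigram counter built from the prompt itself, not anything like a real transformer): the "model" has no trainable weights, yet its predictions adapt as the context grows, which is the essence of inference-time learning.

```python
from collections import defaultdict, Counter

def next_token_probs(context, query):
    """Predict the token following `query` using only bigram counts
    gathered from `context`. No stored parameters are updated -- all
    adaptation comes from the context, i.e. inference-time learning."""
    counts = defaultdict(Counter)
    toks = context.split()
    for a, b in zip(toks, toks[1:]):
        counts[a][b] += 1
    total = sum(counts[query].values())
    if total == 0:
        return {}
    return {tok: c / total for tok, c in counts[query].items()}

# The same query gives different predictions as the context grows:
print(next_token_probs("the cat sat", "the"))
# -> {'cat': 1.0}
print(next_token_probs("the cat sat on the mat", "the"))
# -> {'cat': 0.5, 'mat': 0.5}
```

In a real LLM the mechanism is attention over the context window rather than explicit counting, but the behavior is analogous: nothing in the weights changes, yet the output distribution shifts with what came before.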